What are deepfakes?

Deepfakes refer to videos, images, or audio clips that use artificial intelligence to realistically fake people’s faces, voices, or actions. Powered by deep learning models, these synthetic creations can look and sound real. There are several types of deepfakes:

  • Attribute manipulation: editing elements of a face, such as age, skin, or hair, as seen in many social media filters
  • Identity swap: replacing one person’s face with another’s, another common social media filter
  • Face/body reenactment: animating another person’s face or body in a chosen picture, video, or 3D model, like an avatar or a puppet
  • Synthesized identity: a completely generated identity of a person who does not exist

How do they work?

The technology works by training an algorithm on large amounts of data, such as photos or recordings of a person, so it can learn how their face moves, how they speak, and how they express emotions. Using this information, the AI creates a digital version of the person and can realistically make them appear to say or do things they never actually did. Built on deep learning and neural networks, deepfakes often fool the eye and ear with disturbingly realistic results.

AI can generate entirely fake human faces that look real—even though the person doesn’t exist. This is done using a type of AI called a Generative Adversarial Network (GAN), which learns from millions of real photos to understand how human faces are structured. One part of the AI creates fake faces, while another part tries to detect the fakes, pushing the system to improve.

Over time, the AI becomes so good that it can generate new, realistic-looking faces by blending features from different people. These synthetic faces are often used in media and entertainment, but they can also be misused for scams or fake online identities.
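The adversarial back-and-forth described above can be sketched in a toy form. The example below shrinks a GAN to one dimension: the "real data" are just numbers near a target mean, the generator is a single shifting parameter, and the discriminator is a logistic score. All names (`real_mean`, `g_mu`, `w`, `b`) are invented for this illustration; a real face generator uses deep neural networks for both players, not scalars.

```python
# Toy sketch of adversarial (GAN-style) training, reduced to 1D.
# Illustrative only: the "data" are numbers, not images.
import math
import random

random.seed(0)

real_mean = 5.0   # the real-data distribution the generator must imitate
g_mu = 0.0        # generator parameter: G(z) = g_mu + z
w, b = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05         # learning rate for both players

def sigmoid(x):
    # Clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

for step in range(2000):
    real = real_mean + random.gauss(0, 1)  # one real sample
    fake = g_mu + random.gauss(0, 1)       # one generated sample

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator update: shift g_mu so the discriminator rates fakes as
    # real (the non-saturating generator objective: maximize log D(fake)).
    d_fake = sigmoid(w * fake + b)
    g_mu += lr * (1 - d_fake) * w

print(round(g_mu, 2))  # g_mu drifts from 0 toward real_mean
```

The key design point is the alternation: neither player trains against a fixed target, so as the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones.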

Risks

While deepfakes can have fun and creative uses (like swapping your face in a movie scene), they also pose serious risks. The technology is still developing, and deepfakes are spreading at a rapid rate. Unfortunately, they are often used to deceive people rather than for recreational purposes.

  • Misinformation: deepfakes can distort public discourse, in political contexts and beyond. Imagine a fake “doctor” spreading false information about contamination risks…
  • Fraud: fake voices or videos can trick banks, employers, or family members
  • Reputation threats: faked videos can damage public figures or private citizens
  • Breach of trust: it is getting harder to tell the real from the fake in the digital age
  • Fast spread of misinformation: social media makes it easy to spread fake news to a very wide audience

How can deepfakes be detected?

Detecting a deepfake isn’t always easy—but there are clues. Experts look at both visible signs (like unnatural facial movements, flickering, odd lighting, or improbable events) and invisible clues (such as strange noise patterns or pixel artifacts that our eyes can’t detect).
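One of the invisible clues mentioned above, unusual noise patterns, can be illustrated with a toy score. Many synthesis pipelines over-smooth fine pixel noise, so a crude high-frequency-energy measure can differ between a natural patch and an over-smoothed one. The patches below are synthetic toy data and the function name is invented for this sketch; real detectors use far more sophisticated forensics.

```python
# Toy sketch of a high-frequency "noise fingerprint" check.
# The patches are synthetic; this is not a real deepfake detector.
import random

random.seed(1)

def high_freq_energy(patch):
    """Mean absolute 4-neighbour Laplacian: a crude high-frequency score."""
    h, w = len(patch), len(patch[0])
    total, count = 0.0, 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (patch[i-1][j] + patch[i+1][j] + patch[i][j-1]
                   + patch[i][j+1] - 4 * patch[i][j])
            total += abs(lap)
            count += 1
    return total / count

# A "natural" patch: smooth brightness gradient plus fine sensor-like noise.
natural = [[i + j + random.gauss(0, 2.0) for j in range(16)] for i in range(16)]
# An "over-smoothed" patch: the same gradient with almost no fine noise.
smoothed = [[i + j + random.gauss(0, 0.1) for j in range(16)] for i in range(16)]

print(high_freq_energy(natural) > high_freq_energy(smoothed))  # prints True
```

The Laplacian cancels out the smooth gradient and responds only to fine pixel-to-pixel variation, which is exactly the kind of statistic our eyes cannot judge but software can.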

As deepfakes become more realistic, it’s important to stay critical: check the source, look out for visual or audio glitches, and if you’re unsure, try interacting—ask the person to move or obstruct their face.

When in doubt, let AI help verify whether the content is authentic.

EPFL Contributions

As a research institution, we are developing technological solutions to combat deepfakes, but we also need to raise public awareness: the technological approach alone is not enough. Here is a selection of work from our labs and centers aimed at identifying deepfakes and studying their effects:
