Deepfakes are videos, images, or audio clips that use artificial intelligence to realistically fake people’s faces, voices, or actions. Powered by deep learning models, these synthetic creations can look and sound convincingly real. They come in several forms, from swapped faces and cloned voices to entirely synthetic people.
The technology works by training an algorithm on large amounts of data, such as photos or recordings of a person, so that it learns how their face moves, how they speak, and how they express emotions. Using this learned model, the AI creates a digital version of the person and can make them appear to say or do things they never actually did, often fooling the eye and the ear with disturbingly realistic results.
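To make the training-then-generation idea concrete, here is a minimal sketch in PyTorch of the shared-encoder, two-decoder design behind classic face-swap tools: one encoder learns facial structure from both people, and swapping decoders transfers one person's appearance onto the other's expressions. All layer sizes, names, and the stand-in data below are illustrative assumptions, not any specific tool's implementation.

```python
# Sketch of the classic deepfake architecture: a shared encoder with one
# decoder per person. Sizes and names are illustrative; real tools add face
# alignment, perceptual/GAN losses, and blending back into the full frame.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (sketched): each person's faces are reconstructed through the
# shared encoder, e.g. loss_a = mse(decoder_a(encoder(faces_a)), faces_a).

# The swap: encode person A's face, decode with person B's decoder, so B's
# learned appearance is rendered with A's pose and expression.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a real, aligned face crop
fake_b = decoder_b(encoder(face_a))
```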
AI can also generate entirely fake human faces that look real, even though the person does not exist. This is done with a type of model called a Generative Adversarial Network (GAN), which learns how human faces are structured from a large dataset of real photos. One part of the network, the generator, creates fake faces, while another part, the discriminator, tries to detect the fakes; the competition pushes both to improve.
Over time, the AI becomes so good that it can generate new, realistic-looking faces by blending features from different people. These synthetic faces are often used in media and entertainment, but they can also be misused for scams or fake online identities.
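The generator-versus-detector competition can be sketched in a few lines of PyTorch. The tiny fully connected networks, learning rates, and image size below are placeholder assumptions; real face generators are far larger, but the adversarial loop is the same.

```python
# Minimal GAN sketch: a generator invents images from random noise while a
# discriminator learns to tell them from real photos. Architectures and
# hyperparameters are toy placeholders for illustration only.
import torch
import torch.nn as nn

latent_dim = 100
generator = nn.Sequential(       # noise -> flattened 64x64 RGB image
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, 3 * 64 * 64), nn.Tanh(),
)
discriminator = nn.Sequential(   # image -> probability that it is real
    nn.Linear(3 * 64 * 64, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_faces):
    """One adversarial round; real_faces is a (batch, 3*64*64) tensor of photos."""
    batch = real_faces.size(0)
    fakes = generator(torch.randn(batch, latent_dim))

    # Discriminator: push real photos toward label 1, generated faces toward 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real_faces), torch.ones(batch, 1)) + \
             bce(discriminator(fakes.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call the fakes real (label 1).
    opt_g.zero_grad()
    g_loss = bce(discriminator(fakes), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each round, the discriminator gets better at spotting fakes, which forces the generator to produce more convincing faces; after enough rounds, the generator's outputs are hard even for the discriminator to flag.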
While deepfakes have fun and creative uses (like swapping your face into a movie scene), they also pose serious risks. The technology is still developing, yet deepfakes are already spreading at an alarming rate, and they are more often used to deceive than to entertain.
Detecting a deepfake isn’t always easy—but there are clues. Experts look at both visible signs (like unnatural facial movements, flickering, odd lighting, or improbable events) and invisible clues (such as strange noise patterns or pixel artifacts that our eyes can’t detect).
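As one illustration of an "invisible" clue, the sketch below measures how much of an image's energy sits in its highest spatial frequencies, where the upsampling layers of generative models often leave periodic artifacts. The file name is hypothetical and the single score is a toy heuristic under that assumption; real detectors are trained classifiers, not a hand-set cutoff.

```python
# Toy frequency-domain check: GAN upsampling can leave artifacts in an
# image's spectrum that the eye misses. This computes one weak signal only.
import numpy as np
from PIL import Image

def high_frequency_energy(path: str) -> float:
    """Fraction of spectral energy in the outermost frequencies of a grayscale image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Distance of each frequency bin from the spectrum's center.
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)

    # Energy beyond 75% of the maximum radius, relative to total energy.
    outer = spectrum[radius > 0.75 * min(cy, cx)].sum()
    return outer / spectrum.sum()

score = high_frequency_energy("suspect_frame.png")  # hypothetical file
print(f"high-frequency energy fraction: {score:.4f}")
```

An unusually high or suspiciously regular score can hint at synthetic upsampling, but it should only ever be treated as one signal among many, never as proof.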
As deepfakes become more realistic, it’s important to stay critical: check the source, look out for visual or audio glitches, and if you’re unsure, try interacting—ask the person to move or obstruct their face.
When in doubt, let AI help verify whether the content is authentic.
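For the "check the source" step, one simple first-pass heuristic is to inspect an image's metadata, sketched below with Pillow. The file name is hypothetical, and missing EXIF data proves nothing on its own, since many platforms strip metadata from legitimate photos too.

```python
# First-pass source triage: camera photos usually carry EXIF metadata, while
# many generated or re-encoded images carry none. Inconclusive either way.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return human-readable EXIF tags, or an empty dict if none are present."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarize_exif("suspect_photo.jpg")  # hypothetical file
if not tags:
    print("No EXIF metadata: inconclusive, but worth a closer look.")
else:
    for name in ("Make", "Model", "DateTime", "Software"):
        if name in tags:
            print(f"{name}: {tags[name]}")
```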
As a research institution, we are developing technological solutions to combat deepfakes, but we also need to raise public awareness, because the technological approach alone is not enough. Here is a selection of projects in which our labs and centers are working to identify deepfakes and study their effects: