Team Rangar
Artificial intelligence (AI) excels at creating the "what" (the main face), but it consistently makes mistakes in the "how" and the "where" (the environment, physics, and fine details). Focus on these three "fingerprints" that AI leaves behind.
The AI pastes the fake face onto an original video, but it often fails to account for how the new face should interact with the surrounding physical environment (light and shadows).
The lighting on the face (the deepfake) doesn't match the light or shadows cast by the background (the original video). The face may appear unusually bright or dark compared to the surroundings. Look for small flickers or pixel distortions at hard-to-replicate edges, such as hair, glasses, or the contours of the face.
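The lighting check can even be approximated programmatically. Below is a minimal sketch in plain Python that compares the average brightness of a face region against the rest of a grayscale frame; the frame is a toy nested list, and the face coordinates are assumed to come from an upstream face detector (not implemented here). The threshold is an illustrative guess, not a calibrated value.

```python
def mean_brightness(frame, region):
    """Average pixel value inside region=(top, left, bottom, right)."""
    top, left, bottom, right = region
    pixels = [frame[y][x] for y in range(top, bottom) for x in range(left, right)]
    return sum(pixels) / len(pixels)

def lighting_mismatch(frame, face_region, threshold=60):
    """Flag a frame whose face is much brighter or darker than its background.
    face_region is assumed to come from a separate face-detection step."""
    h, w = len(frame), len(frame[0])
    top, left, bottom, right = face_region
    face_px = (bottom - top) * (right - left)
    face = mean_brightness(frame, face_region)
    total = sum(sum(row) for row in frame)
    background = (total - face * face_px) / (h * w - face_px)
    return abs(face - background) > threshold

# Toy 4x4 "frame": dark background, unnaturally bright 2x2 face patch
frame = [[30, 30, 30, 30],
         [30, 220, 220, 30],
         [30, 220, 220, 30],
         [30, 30, 30, 30]]
print(lighting_mismatch(frame, (1, 1, 3, 3)))  # True: face far brighter than scene
```

A real detector would run this per frame on actual video (e.g. via a computer-vision library) and also inspect edges for flicker, but the core idea is the same brightness comparison.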
AI models are trained on static images or predictable movements, so they struggle to replicate the complexity of the human nervous system.
Observe the subject's movement. The face and head move in a rigid, robotic, or floating way, as if disconnected from the neck and body. The blinking pattern is irregular, too slow, or absent altogether. The gaze lacks the intention and subtle movement of a real human being.
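The blink cue lends itself to a simple sanity check: resting adults typically blink around 15-20 times per minute, so a rate far outside that band over a long clip is suspicious. A minimal sketch, assuming blink timestamps are already available from an upstream eye-tracking step (not implemented here), with illustrative bounds:

```python
def blink_rate_suspicious(blink_timestamps, duration_s, low=8, high=30):
    """Flag a clip whose blink rate falls outside a plausible human range.
    blink_timestamps: seconds at which blinks were detected (assumed to come
    from a separate eye-tracking step). low/high are illustrative bounds."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    per_minute = len(blink_timestamps) * 60 / duration_s
    return per_minute < low or per_minute > high

print(blink_rate_suspicious([], 30))  # True: zero blinks in 30 s
print(blink_rate_suspicious([2, 6, 11, 15, 19, 24, 27, 29], 30))  # False: ~16/min
</antml>```

Irregular spacing between blinks would be a further cue; this sketch only checks the overall rate.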
The process of generating fake audio and synchronizing it with video can be very complex, especially in real time.
The clearest sign is the desynchronization between the audio and the lip movements. The words seem to come out before or after the mouth moves. Cloned audio can sound overly clean or robotic, lacking the emotional intonation, pauses, and natural inflections of human speech.
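Audio-video desynchronization can also be estimated numerically: cross-correlate the audio loudness envelope with a per-frame mouth-openness score and look for the lag where they best align. The sketch below uses toy per-frame lists; in practice both signals would come from separate audio and face-analysis steps (assumed here, not implemented):

```python
def best_lag(audio_env, mouth_open, max_lag=5):
    """Return the lag (in frames) at which the audio envelope best aligns with
    the mouth-openness signal; a large |lag| suggests lip-sync trouble.
    Both inputs are per-frame scores from upstream analysis (assumed)."""
    def corr(lag):
        return sum(audio_env[i] * mouth_open[i + lag]
                   for i in range(len(audio_env))
                   if 0 <= i + lag < len(mouth_open))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Toy signals: the mouth opens 3 frames after each matching audio burst
audio = [0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0]
mouth = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
print(best_lag(audio, mouth))  # 3: the audio leads the lips by 3 frames
```

A lag of zero means the sound and lips line up; a consistent nonzero lag across a clip matches the "words come out before or after the mouth moves" symptom described above.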
Don't worry, we're here to help.