
By Zac Amos
Technology provides opportunities to positively impact the world and improve lives.
It also delivers new ways to commit crimes and fraud. The U.S. Federal Bureau of Investigation (FBI) issued a public warning in June 2022 about a new kind of fraud involving remote work and deepfakes.
The making of deepfakes
The world is on track to see around 50% of workers transition to sustained, full-time telecommuting. Conducting job interviews online is here to stay, and deepfakes may be part of that new normal.
The term refers to an image, video, or audio clip in which the subject's likeness or voice has been manipulated to make it look like they said or did something they didn't.
The deepfake creator uses "synthetic media" applications powered by machine-learning algorithms. The creator trains the algorithm on two sets of videos and images: one shows the target's likeness as they move and speak in various environments, and the other shows faces in a range of situations and lighting conditions. The application encodes these faces as "low-dimensional representations" that can then be decoded back into images and video.
The result is a video of one individual convincingly overlaid with the face of another. The voice is more difficult to spoof.
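The encode-then-decode swap described above can be sketched in a few lines of NumPy. This is a toy illustration only: real deepfake tools use deep convolutional networks trained on thousands of frames, and every name and dimension below is an assumption chosen for demonstration, not part of any actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

PIXELS = 64 * 64   # a flattened 64x64 face image (illustrative size)
LATENT = 128       # the "low-dimensional representation"

# One encoder is shared across identities: conceptually, it captures
# pose, expression, and lighting. (Untrained random weights here.)
encoder = rng.normal(size=(LATENT, PIXELS)) / np.sqrt(PIXELS)

# Each identity gets its own decoder, which learns to render that
# person's face from the shared latent representation.
decoder_a = rng.normal(size=(PIXELS, LATENT)) / np.sqrt(LATENT)
decoder_b = rng.normal(size=(PIXELS, LATENT)) / np.sqrt(LATENT)

def encode(image):
    """Compress a face into its low-dimensional representation."""
    return encoder @ image

def decode(latent, decoder):
    """Reconstruct a face image from a latent representation."""
    return decoder @ latent

# The face swap happens at inference time: encode a frame of person A,
# then decode it with person B's decoder.
frame_of_person_a = rng.normal(size=PIXELS)  # stand-in for a video frame
latent = encode(frame_of_person_a)           # A's pose and expression
swapped = decode(latent, decoder_b)          # rendered with B's face

print(latent.shape, swapped.shape)  # (128,) (4096,)
```

With trained networks in place of these random matrices, repeating the swap frame by frame yields the overlaid video the article describes.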