Deepfakes as a new digital threat

14 June 2022
Since entering the vast world of online content, deepfake videos have established themselves as a widely used trend. For those who haven't heard of them yet, these are videos that use deep learning to accurately reconstruct any person's face; the term itself was coined by joining "deep learning" with "fake". Thanks to artificial intelligence, which relies on a series of precise algorithms, it is possible to faithfully reproduce the facial features of a given person, in this case with the aim of manipulating people's appearance by stealing their identity. Many public figures have fallen victim to this new threat: from Renzi to the even more famous Mark Zuckerberg, up to the recent scam that uses Elon Musk's face to convince users to sign up for an online trading platform.
Not only for VIPs
Although the preferred victims for creating fake news or, even worse, pornographic content are the so-called VIPs, the new threat is also affecting less exposed users. Attackers can in fact reproduce facial features and gestures by applying advanced artificial intelligence systems to photos, selfies and short videos easily found on social media. The number of cases is also increasing, making it practically impossible for platforms and social networks to detect such fake videos, especially when these videos go under the radar. What is more worrying is the spread of pornographic clips made using the faces of well-known figures, but also of ordinary people, without their consent. Such videos can end up on adult sites that are far harder to monitor. To complicate the issue further, anyone can now make this type of video simply by using apps available on the usual app stores.
How to recognize a deepfake video
Given this rather worrying situation, the web's new challenge is being able to tell a fake video apart from an authentic one. Not sure how? Admittedly, it is not at all simple, but there are still some elements that serve as real alarm bells:
- unnatural movements
- lighting changes between frames
- changes in skin tone or eye color
- abnormal or absent blinking
- poor audio-lip synchronization
- strange or imperfect elements in the image
- low reliability of the sources
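The checklist above can be sketched as a toy scoring function. This is purely illustrative: the indicator names, weights and threshold below are assumptions for the sake of the example, not part of any real detection tool, which would instead rely on trained models.

```python
# Toy heuristic: score a video's "suspicion level" from the alarm
# bells listed above. Weights and threshold are illustrative
# assumptions -- real deepfake detection uses trained models.

ALARM_BELLS = {
    "unnatural_movements": 2,
    "lighting_changes_between_frames": 2,
    "skin_tone_or_eye_color_changes": 2,
    "abnormal_or_absent_blinking": 3,
    "poor_lip_sync": 3,
    "imperfect_image_elements": 1,
    "unreliable_source": 3,
}

def suspicion_score(observed):
    """Sum the weights of the indicators observed in the video."""
    return sum(w for name, w in ALARM_BELLS.items() if name in observed)

def likely_deepfake(observed, threshold=5):
    """Flag the video once the combined score crosses the threshold."""
    return suspicion_score(observed) >= threshold
```

For example, a video with poor lip sync coming from an unreliable source scores 3 + 3 = 6 and would be flagged, while a single minor visual glitch alone would not.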