The first Tom Cruise video on the TikTok platform, published at the end of February, does not show the actor himself. Nevertheless, it became a viral hit and worried professionals in equal measure. The reason: it is a so-called deepfake video.
Until now, videos were considered far less susceptible to manipulation than photos, which almost anyone can edit digitally with Photoshop these days. That may be changing.
Deepfakes are videos in which a computer generates a person’s face. Software is fed as many photos as possible of the person to be depicted. As reported by stern.de, an algorithm uses these images to learn to imitate the person’s facial expressions and mouth and eye movements exactly.
The program then places the result over another person’s face like a digital mask in a video recording.
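The final step described above, laying the generated face over another person’s face "like a digital mask", is essentially alpha compositing. The sketch below is purely illustrative (real deepfake pipelines generate the face with neural networks such as autoencoders or GANs); it only shows, under simplified assumptions, how a mask blends a generated face region into a video frame. All function and variable names here are made up for the example.

```python
import numpy as np

def overlay_face(frame, generated_face, mask):
    """Blend a generated face into a video frame using a soft mask.

    frame, generated_face: float arrays of shape (H, W, 3), values in [0, 1]
    mask: float array of shape (H, W, 1); 1.0 where the generated
          face should replace the original pixels, 0.0 elsewhere.
    """
    # Per-pixel linear blend: mask picks the generated face,
    # (1 - mask) keeps the original frame.
    return mask * generated_face + (1.0 - mask) * frame

# Toy example: black 4x4 "frame", white 4x4 "face",
# mask covering only the top-left 2x2 region.
frame = np.zeros((4, 4, 3))
face = np.ones((4, 4, 3))
mask = np.zeros((4, 4, 1))
mask[:2, :2] = 1.0

result = overlay_face(frame, face, mask)
# Top-left pixels now come from the generated face; the rest
# of the frame is untouched.
```

In a real pipeline this blend runs on every frame of the video, and the mask edges are feathered so the seam between the generated face and the original footage is hard to spot.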
Since many photos of actors and politicians are publicly available, they are particularly suitable targets for such deceptions. In principle, deepfakes could be used, for example, to put statements into the mouth of a leading statesman, public figure or political activist that they never made.
While the technology was relatively immature two years ago, it is now becoming increasingly difficult to detect deepfakes.
Only on close inspection do some inconsistencies become apparent. The real Tom Cruise, at 58, looks noticeably older. There are also small image errors: when the actor takes off his sunglasses, part of the frame disappears for about a second. The voice also sounds different.
The makers of the clip claim they wanted to draw public attention to the subject. In the meantime, the videos of the user “deeptomcruise” have been deleted from the TikTok channel. Internet security experts are still concerned about the use of artificial intelligence in social media and further developments.
In the US, the FBI recently issued a warning about realistic deepfakes. The dangers range from harmless mischief to crime and outright cyber warfare. The faked videos could be used in email fraud attempts, possibly even to extract military information. Employees of a company could also be tricked into bypassing normal security precautions and divulging sensitive information.
Author Nina Schick, who has studied the topic, defines deepfakes as videos that have been manipulated or completely generated by artificial intelligence. She expects that within the next ten years our information ecosystem will be literally flooded with artificially generated media content.
It is becoming increasingly difficult for the public to recognize what is real and what has only been synthesized on the computer. On the one hand, artificial intelligence has the ability to generate something completely new, but on the other hand it also has the ability to copy the appearance of existing people down to the smallest detail.
According to Nina Schick, anyone who has a public profile on a social network is at risk of falling victim to deepfakes. They don’t necessarily have to be well-known personalities.
“On the one hand, technological progress is accelerating rapidly. The second worrying thing is that everyone will have access to the technology. Not only Hollywood studios will be able to produce deepfakes, but anyone who owns a smartphone in ten years and downloads a corresponding app,” says Schick with conviction.
She advises staying vigilant and critical without becoming cynical. At the same time, she calls for technical solutions that enable users to distinguish real videos from fake ones.
Entrepreneur and cyber security specialist Rachel Tobak is also calling on platforms such as TikTok to use software that recognizes deepfakes and labels them as such. In her view, deepfakes undermine public trust. They offer cover and plausible excuses to criminals incriminated by video recordings. The fake videos could be used to manipulate, humiliate and hurt people.
One way to prevent visual manipulation would be conspicuously marked verified accounts for celebrities, even if they are not active on the platform. Unverified accounts, such as that of “deeptomcruise”, would then automatically fall under suspicion of spreading fakes.