Man wearing smart glasses with virtual scanning technology (rawpixel/Freepik)
‘Joan Is Awful’, the pilot episode of the new season, reveals the dangers of artificial intelligence and the pitfalls of image rights on the Internet
On June 15, Netflix launched the new season of Black Mirror, one of the streaming platform’s biggest hits since its debut. Released simultaneously in several countries, including Brazil, the series typically proposes dystopias that blend current discussions about the Internet, artificial intelligence, social networks, and other technology-related topics. The pilot episode of the new season, “Joan Is Awful”, features veteran Salma Hayek and Canadian actress Annie Murphy, both playing different versions of the protagonist, Joan.
What surprised many fans is that the first episode mocks Netflix itself with a parody streaming service called “Streamberry”, a platform with abusive privacy terms that sets up the plot around Joan’s loss of rights over her own image. Using an artificial intelligence technique known as “deepfake”, Streamberry turns Joan’s life into a series dramatized in near real time, with new episodes generated daily.
Understanding “Joan Is Awful”, Episode 1 (Contains Spoilers)
Schitt’s Creek actress Annie Murphy plays Joan, an executive at a tech company, in “Joan Is Awful”. The episode follows stressful moments in Joan’s daily life: the businesswoman firing an employee (played by Ayo Edebiri, of The Bear), telling her therapist she doesn’t like her fiancé Krish’s (Avi Nash) cooking, and kissing her ex-boyfriend Mac (Rob Delaney) in a moment of weakness.
At the end of the day, she sits down to watch Streamberry, a stand-in for Netflix, and suddenly finds a new show called “Joan Is Awful”. Curious, she and Krish watch the first episode, which turns out to be a dramatization of her day, starring Salma Hayek as Joan. At the end of the episode, Hayek’s “Joan” turns on Streamberry and finds yet another version of the show, this one starring Cate Blanchett.
The way she is portrayed immediately costs Joan her job and her relationship. When she consults her lawyer about the offending service, Joan is told that she waived her rights when she agreed to Streamberry’s terms and conditions. The show is generated using artificial intelligence and deepfake technology, fed by information captured through her cell phone’s microphone.
This means that Salma Hayek herself is not involved in the production; she has only licensed Streamberry the use of her likeness. In an attempt to pressure Hayek into shutting the show down, Joan defecates in a church during a wedding, knowing the series will recreate the act with Hayek’s image and shock the actress into taking her side. Eventually, the two of them destroy the company’s quantum computer that generates the material.
But what exactly is a deepfake?
Deepfakes are computer-generated videos that combine and manipulate images to depict events, statements, or actions that never happened. The results can be quite convincing. Deepfakes differ from other types of false information in that they are very difficult to identify as false.
The technology comes out of machine learning, a subset of artificial intelligence (AI) focused on building systems that learn, or improve their performance, based on the data they consume. An algorithm is fed examples and learns to produce outputs similar to the examples it learned from.
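To make the idea of “learning from examples” concrete, here is a minimal sketch (not related to any real deepfake system): the algorithm is fed (x, y) example pairs and fits a simple rule, a line y = a·x + b, by least squares. The data values are invented for illustration.

```python
# Toy sketch of machine learning from examples (illustrative only).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]   # examples that roughly follow y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Least-squares fit of the slope a and intercept b.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - a * mean_x

# The learned rule can now produce outputs for inputs it never saw.
def predict(x):
    return a * x + b

print(predict(5.0))   # close to 10, the pattern in the examples
```

The same principle, on a vastly larger scale and with far richer models, is what lets a deepfake system learn to reproduce a person’s face from example footage.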
Deep learning is a special type of machine learning that involves “hidden layers”. Typically, deep learning is performed by a class of algorithms called neural networks, which are designed to mimic the way the human brain learns. A hidden layer is a set of nodes within the network that perform mathematical transformations to convert input signals into output signals (in the case of deepfakes, to convert real images into convincing fake ones).
The more hidden layers a neural network has, the “deeper” the network is. Neural networks, and in particular Convolutional Neural Networks (CNNs), are known to perform very well on image recognition tasks.
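A hidden layer’s “mathematical transformation” can be sketched in a few lines. This is a toy network with random weights, not a trained model: each hidden node computes a weighted sum of the inputs and passes it through a nonlinearity, converting input signals into output signals.

```python
import numpy as np

# Minimal sketch of a hidden layer (illustrative, not a real deepfake model).
rng = np.random.default_rng(42)

x = rng.normal(size=(1, 4))          # one input with 4 features
W_hidden = rng.normal(size=(4, 8))   # weights for 8 hidden nodes
W_out = rng.normal(size=(8, 2))      # weights mapping hidden -> output

hidden = np.tanh(x @ W_hidden)       # the hidden layer's transformation
output = hidden @ W_out              # final output signals

print(hidden.shape, output.shape)    # (1, 8) (1, 2)
```

Stacking many such layers, each feeding the next, is what makes the network “deep”.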
Creating a sophisticated deepfake actually involves two algorithms. One is trained to produce the best possible simulated replicas of real images. The second is trained to detect when an image is fake and when it is not. The two models iterate, each improving at its respective task. By pitting the models against each other, you end up with a generator extremely skilled at creating fake images; so skilled, in fact, that humans often can’t tell that the result is fake.
And what’s the problem with that?
Most people today get their information and form opinions based on Internet content. Therefore, anyone with the ability to create a deepfake can spread disinformation and influence the public to behave in a way that furthers the fraudster’s personal agenda in some way. Deepfake-based misinformation can wreak havoc on both a micro and a macro level.
On a smaller scale, for example, scammers can create a personalized video showing a relative asking for a large sum of money to get out of an emergency and send it to an unsuspecting victim, a scheme that can deceive Internet users on an unprecedented scale.
On a larger scale, fake videos making false claims about prominent world leaders can sway voters and political groups, and even incite violence or war.
What can be done?
Deepfakes are not yet the Internet’s biggest problem, but they are likely to grow in prevalence and quality in the coming years. This doesn’t mean you should stop trusting every image or video, but you should start training yourself to be more alert to fakes, especially when a video asks you to send money or personal information, or makes unusual claims.
Interestingly, artificial intelligence may also be the answer to detecting deepfakes: models can be trained to recognize spurious images along dimensions the human eye cannot detect.