Doctored videos, or deepfakes, have been one of the key weapons in propaganda battles for quite some time now. Donald Trump taunting Belgium for remaining in the Paris climate agreement, David Beckham speaking fluently in nine languages, Mao Zedong singing ‘I Will Survive’, Jeff Bezos and Elon Musk in a pilot episode of Star Trek… all these videos went viral despite being fake, or rather, because they were deepfakes.
Last year, Marco Rubio, the Republican senator from Florida, said deepfakes are as potent as nuclear weapons in waging wars in a democracy. “In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our Internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply,” he said.
The potential danger of deepfakes lies in the fact that the level of manipulation can be so perfect that it is sometimes seemingly impossible to distinguish them from real videos. And the harder the falsity becomes to detect, the greater the threat that it will pass as real and cause the havoc it is intended to. But with more sophisticated AI-powered tools now available to produce these videos, is it becoming more difficult to detect deepfakes?
What are deepfakes and how are they created?
Deepfakes are fake content — often in the form of videos but also other media formats such as pictures or audio — created using powerful artificial intelligence tools. They are called deepfakes because they use deep learning, a branch of machine learning that applies neural network simulation to massive data sets, to create the fake content.
The principle is that if a computer is fed enough data, it can generate fakes that behave much like a real person. For instance, an AI model can learn what a source face looks like and then transpose it onto a target to perform a face swap.
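To illustrate the idea, here is a minimal, hypothetical sketch in PyTorch of the shared-encoder, two-decoder autoencoder scheme that early face-swap tools popularised. All layer sizes and names here are illustrative placeholders, not taken from any specific tool:

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# Layer sizes are toy values; real tools use deep convolutional networks.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder learns features common to both faces (pose, lighting)
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
        )
        # A separate decoder per identity reconstructs that person's face
        self.decoder_src = self._make_decoder()
        self.decoder_tgt = self._make_decoder()

    @staticmethod
    def _make_decoder():
        return nn.Sequential(
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

    def forward(self, x, identity):
        z = self.encoder(x)
        return self.decoder_src(z) if identity == "src" else self.decoder_tgt(z)

# Training reconstructs each person's own face. The swap happens at inference:
# encode a frame of the *target* person, then decode with the *source* decoder.
model = FaceSwapAutoencoder()
frame = torch.rand(1, 3, 64, 64)        # a video frame of the target person
swapped = model(frame, identity="src")  # rendered with the source's face
```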
The application of a technology called Generative Adversarial Networks (GANs) — which pits two AI algorithms against each other, one generating the fake content and the other grading its efforts, teaching the system to get better — has helped produce more convincing deepfakes.
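As a rough illustration of that adversarial game, the sketch below shows one training step of a toy GAN in PyTorch. The tiny linear networks and batch sizes are placeholders for the deep convolutional models used in practice:

```python
# Toy GAN training step: the generator makes fakes, the discriminator grades
# them, and each update makes the other's job harder. Sizes are illustrative.
import torch
import torch.nn as nn

latent_dim = 100
generator = nn.Sequential(nn.Linear(latent_dim, 784), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real = torch.rand(32, 784)  # stand-in for a batch of real images
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

# Discriminator step: learn to score real images as 1 and generated ones as 0
fake = generator(torch.randn(32, latent_dim))
d_loss = (loss_fn(discriminator(real), ones)
          + loss_fn(discriminator(fake.detach()), zeros))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: learn to make fakes that the discriminator scores as real
fake = generator(torch.randn(32, latent_dim))
g_loss = loss_fn(discriminator(fake), ones)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```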
GANs can also generate images of fake human beings from scratch, a capability demonstrated by the website ‘This Person Does Not Exist’. This makes it virtually impossible to tell whether the videos or images we see on the Internet are real or fake.
Deepfakes can be really difficult to detect. For instance, many people fell for TikTok videos of Tom Cruise playing golf, which were later revealed to be deepfakes.
Is it becoming more difficult to detect deepfakes?
A paper presented at the Winter Conference on Applications of Computer Vision (WACV) 2021 describes a new technique that makes deepfakes more resistant to detection, rendering traditional tools unable to catch them.
“Current state-of-the-art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector,” states the paper, titled ‘Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples’.
“Adversarial inputs are slightly modified inputs such that they cause deep neural networks to make a mistake. Deep neural networks have been shown to be vulnerable to such inputs which can cause the classifier’s output to change. In our work, we show that an attacker can slightly modify each frame of a deepfake video such that it can bypass a deepfake detector and get classified as real,” said Neekhara and Hussain, two of the paper’s authors.
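A minimal sketch of the general idea — a fast-gradient-sign (FGSM-style) perturbation, not the paper’s own attack pipeline — looks like this. The `detector` here is a stand-in binary classifier, not a real deepfake detector:

```python
# FGSM-style adversarial perturbation: nudge each pixel slightly in the
# direction that pushes the detector's output toward "real" (1 = real, 0 = fake).
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))  # stand-in
frame = torch.rand(1, 3, 64, 64, requires_grad=True)  # one deepfake frame
real_label = torch.ones(1, 1)

# Gradient of the "classified as real" loss with respect to the input pixels
loss = nn.BCEWithLogitsLoss()(detector(frame), real_label)
loss.backward()

# Step the frame *against* the gradient: an imperceptible change that lowers
# the loss, i.e. makes the detector more likely to call the frame real
epsilon = 2.0 / 255.0
adversarial = (frame - epsilon * frame.grad.sign()).clamp(0, 1).detach()
```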
What are the threats posed by deepfake videos?
With the proliferation of deepfake videos, there is growing concern that they will be weaponised to run political campaigns and may be exploited by authoritarian regimes.
In 2019, a research organisation called Future Advocacy and UK artist Bill Posters created a video of UK PM Boris Johnson and Labour Party leader Jeremy Corbyn endorsing each other for the prime minister’s post. The group said the video was created to show the potential of deepfakes to undermine democracy.
Also, last year, before the Delhi Assembly polls, videos of Delhi BJP president Manoj Tiwari speaking in English and Haryanvi went viral. In these videos, Tiwari was seen criticising Arvind Kejriwal and asking people to vote for the BJP. The videos, which were shared in over 5,000 WhatsApp groups, were later revealed to be deepfakes, digital media firm Vice reported.
Deepfakes are also a cause for concern at a time when WHO has stated that the Covid-19 crisis has triggered an infodemic and there have been “deliberate attempts to disseminate wrong information to undermine the public health response and advance alternative agendas of groups or individuals”.
Moreover, doctored videos — which include manipulating content through incorrect date stamps or locations, clipping footage to change the context, omission, splicing and fabrication — are increasingly used on social media to deliberately misrepresent facts for political ends. Most of these videos are not deepfakes, but they show how easy it is to obfuscate facts and spread lies through manipulated content masquerading as hard evidence.
The other big concern about deepfake videos is the generation of nonconsensual pornographic content. In 2017, a user deployed a face-swapping algorithm to create deepfake pornographic videos of celebrities such as Scarlett Johansson, Gal Gadot, Kristen Bell and Michelle Obama, and shared them on a subreddit called “r/deepfakes”. The subreddit had nearly 90,000 subscribers by the time it was taken down in February of the following year.
Of the thousands of deepfake videos on the Internet, more than 90% are nonconsensual pornography. One of the most horrifying AI experiments last year was an app called DeepNude that “undressed” photos of women — it could take photographs and then swap women’s clothes for highly realistic nude bodies. The app was taken down after a strong backlash.
Also, as has been widely reported, deepfake videos are increasingly being used by spurned lovers to generate revenge porn to harass women.
“The threat posed by Deepfake videos is already apparent,” Neekhara and Hussain told indianexpress.com. “There are malicious users using such videos to defame famous personalities, spread disinformation, influence elections and polarise people. With more convincing and accessible deepfake video synthesis techniques, this threat has become even bigger in magnitude,” they added.
Is there a crackdown in the offing?
Most social media companies, such as Facebook and Twitter, have banned deepfake videos. They have said that as soon as a video is detected as a deepfake, it will be taken down.
Facebook has recruited researchers from Berkeley, Oxford and other institutions to build a deepfake detector. In 2019, it held the Deepfake Detection Challenge in partnership with industry leaders and academic experts, during which a unique dataset of more than 100,000 videos was created and shared.
However, not all deepfakes can be detected accurately, and it can take considerable time for them to be found and taken down. Moreover, many pornographic sites do not exercise the same level of restriction.
Neekhara and Hussain said, “To detect deepfake videos more accurately, we need adversarially robust models by incorporating an attacker while training such deepfake detection models. A long-term solution is watermarking or digitally signing the images and videos from the device they are captured. The watermark or digital signature should get disrupted if the deepfake techniques like face swaps are applied. A deepfake detector can then just verify the signature or the watermark. However, this would require establishing a watermarking standard across all cameras and mobile phones. Therefore, it can be a while until this becomes a reality.”
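As a rough illustration of the verification step they describe, the sketch below signs a frame’s bytes with an Ed25519 key using Python’s `cryptography` package. The device key and frame data are placeholders, and a production standard would have to survive video compression and re-encoding, which this byte-exact check does not:

```python
# Sign-at-capture sketch: the device signs each frame's bytes; any later edit
# (e.g. a face swap) changes the data, and the signature check fails.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()   # would live inside camera hardware
public_key = device_key.public_key()        # published so anyone can verify

frame_bytes = b"...raw frame data..."       # placeholder for captured pixels
signature = device_key.sign(frame_bytes)    # attached to the video at capture

def is_authentic(data: bytes, sig: bytes) -> bool:
    """Return True only if the frame is byte-identical to what was signed."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(frame_bytes, signature))                  # True: untouched
print(is_authentic(b"...face-swapped frame...", signature))  # False: tampered
```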