Deepfakes & Misinformation: How dangerous are deepfakes?

Ever since the first deepfake videos began circulating, many publications have portrayed deepfakes primarily as a harmful technology that could threaten society and harm individuals. But are deepfakes truly as dangerous as they are sometimes thought to be? And if so, what dangers could we expect in the future? This is the last blog of a four-part series exploring the different applications and impacts of deepfake technologies.

A demonstrative deepfake video of Obama made by Buzzfeed in 2018. Source: Buzzfeedvideo/YouTube

Comparison of the real Leonid Volkov (left) and the impostor (right). Source: Twitter.com/leonidvolkov

At the end of April 2021, shocking news made the headlines: multiple European MPs had been tricked in video calls by individuals imitating Leonid Volkov, a Russian opposition figure and ally of Alexei Navalny, allegedly with the help of deepfake technology. About a week later, however, two Russian men claimed responsibility for the calls, stating that they had not used any deepfake technology but simply looked very similar to the real Volkov. Even though deepfakes turned out not to be involved in this scandal, the mere idea of them was enough to ignite concern about potential fraud and misinformation.

Considering that deepfakes do, indeed, have the potential to blur the line between truth and fiction, several arguments can be made as to why they are considered dangerous for democratic societies. First of all, some people believe that deepfake technologies could be weaponised to weaken a democracy, for example by fabricating footage of a politician and letting that person say or do things they never did. Moreover, deepfakes allow public figures to dismiss any slightly controversial statement as fake, even when it is genuine. The reason this could be exceptionally dangerous for a democratic society is that it could increase the amount of misinformation people are exposed to online.

A second concern that has been raised is the potential danger deepfakes pose to ordinary individuals within society. Recently, for instance, a legal expert warned about the rising dangers of deepfake pornography, pointing out that anyone can now easily create such content because the algorithms behind it produce more realistic-looking footage than ever before. Deepfakes thus seem to cause considerable worry within society because of their harmful potential.

Misinformation in Dutch Society

Before diving into deepfakes specifically, let’s take a closer look at the concept of misinformation. To explore this topic, I spoke to Annique Mossou, an open-source investigator at Bellingcat who has recently been researching QAnon and conspiracy theories about Covid-19. She explained that, during her work, she noticed how some people tend to be very selective in the information they believe, looking only for information that is in line with their own ideas. “There is quite some space to do your own research nowadays, and people don’t always realise that the algorithms on social media and your own search engine terminologies will steer you towards a specific angle. Some people will just search for something so specific.” She explains that people can, in principle, find whatever information they want on platforms such as Google by using specific words in a specific order or by placing them in quotation marks. Bellingcat, for its part, tries to debunk false stories and hoaxes once they start circulating widely. “We try to either verify or disprove certain information and we’ll then write an article on it. However, we do not always want to attract too much attention to certain cases of misinformation or disinformation. Only when a specific hoax turns out to circulate on a huge scale, and we can prove that it is not a true story, will we write an article on it.”

When asked whether she thought deepfake videos could influence this phenomenon of misinformation within society, Mossou answered: “Although I don’t think that the danger of deepfakes is necessarily exaggerated, I do think that some people are just very selective in the information that they believe. If I would pick up a camera and film myself whilst very convincingly promoting a certain false idea, there will be people who believe me, so I don’t think deepfakes are needed to spread misinformation. It doesn’t matter if I say it, if you say it or if Mark Rutte allegedly said it… some people will just believe it as long as it fits with their own expectations.” Mossou also mentioned that a good deal of misinformation tends to be spread on alternative media platforms rather than on Facebook or Instagram, since those mainstream platforms are applying and developing fact-checking functions. “Die-hard hoax or conspiracy theory believers will often turn to other platforms such as Gab, or, if YouTube deletes a specific video, they will just upload it on BitChute. So even if you try to get rid of misinformation on social media, you’re pushing some people away towards alternative platforms.” She therefore stressed that it is important to always use your common sense when you encounter information online.

Deepfakes & Dutch Society

Perhaps, then, deepfakes are simply poised to become part of a larger issue of misinformation within society. If that is the case, are deepfakes truly as threatening and dangerous as they are thought to be? And what exactly should we look out for when talking about deepfake media content? To find out, I contacted two researchers who have recently been analysing the effects of deepfakes on society and individuals.

I first spoke to Pieter van Boheemen, a researcher at the Rathenau Institute, a Dutch organisation that examines emerging technologies in the Netherlands and the political questions, public interests and public rights connected to them. In a recent report, Digital Threats to Democracy: On new technology and disinformation, the institute investigated new technological developments and their possible influence on the production and spread of disinformation in the Netherlands. Deepfakes were among the technologies examined. According to Van Boheemen, the risk of deepfakes in the Netherlands will probably lie mainly in private spheres and less so in mass media communications or public debate. He pointed out that several studies found that deepfakes did not have a greater effect on people than manipulated texts or other forms of misinformation. “Some parties tend to use very powerful terms and bold statements like ‘infocalypse’ and ‘infodemic,’ as if the world will perish due to misinformation. However, I don’t think people in the Netherlands will suddenly be completely fooled by those videos, and deepfakes are likely not going to end our democratic society.” Van Boheemen did note, however, that deepfakes could have some impact on society: “In private online communication spheres, such as WhatsApp or Telegram channels, there are fewer opportunities to monitor and disprove the messages that are being spread. If you publicly show manipulated content, there is a huge chance people will find out rather quickly that the information is actually false. We, therefore, predict that deepfakes will probably have more impact within private communication channels and less so within mass communication.”

But what kinds of risks would those deepfakes bring within such private spheres? Van Boheemen thinks that the risk within Dutch society will mostly lie in damaging the reputation of ordinary individuals, rather than in the intentional spread of political or societal misinformation. “Influencing public discourse by means of deepfakes or manipulated media content will be much more difficult, especially when you create a deepfake of someone who is still alive and who can immediately react to it. If you are a public persona, you have a sort of stage that you can use to defend yourself. However, individuals in private settings are a lot more vulnerable to deepfakes. Think about slander and libel, pornographic deepfakes, intimidating or blackmailing people… damaging people in private spheres in all sorts of ways. For example, it seems as though deepfakes currently especially target women, harming their reputation with sexual deepfakes. It is much harder to defend yourself against such deepfakes as a non-public individual.”

Tom Dobber, a researcher at the Faculty of Social and Behavioural Sciences of the University of Amsterdam who recently studied deepfakes and their effects on political attitudes, expressed ideas similar to Van Boheemen’s. He stated that, in the Netherlands, there is no need to fear “political deepfakes, in which words will be put in the mouth of politicians to damage their reputation.” He explained that, since the alleged deepfake of Navalny’s chief of staff turned out not to be a deepfake at all, there have been “zero serious political deepfakes circulating” in the Netherlands. Moreover, he pointed out that it has been possible to manipulate visual material for decades: Photoshop, created back in 1987, already made it much easier for regular people to potentially damage someone’s reputation. He mentioned that, “although this could, in theory, be life-threateningly dangerous, there has been no point in time that this has led to significant incidents.”

Dobber believes that the “alarmism about deepfakes” could lead to a situation in which all information is considered suspicious in advance, which would be counterproductive. He indicated that, in the past, people who encountered disinformation would usually check it against sources that gather information systematically, such as journalistic outlets. Nowadays, however, many alternative channels have emerged that do not use any systematic methods to gather information, or that sometimes even spread disinformation themselves. According to Dobber, this change in “information context” is the real problem, not the fake news messages, photoshopped images or deepfake content increasingly found online. It would probably be best, he argued, to accept that deepfakes will emerge every now and then, and to encourage people to think about how certain information has been produced. At the same time, it is important not to push people to be suspicious of all media content. Instead, we should put trust in our journalistic sources, since they will be able to identify faulty information and rectify the misinformation that has been spread.

'You Won’t Believe What Obama Says In This Video!'

So how dangerous are deepfakes in terms of misinformation?

It seems that deepfakes are perhaps not completely disastrous for Dutch democratic society. Although they could potentially be used to spread misinformation, there are no serious indications that deepfakes will significantly increase the number of false news stories or negatively influence the political debate in the Netherlands. It does seem important, however, to be aware of the possible consequences that deepfakes might have for us as individuals. Digital fraud, online scams, phishing attacks, and cyberbullying have sadly been part of many people’s lives for some time, and deepfakes could make such cybercrime content more realistic. “You might perhaps one day get a manipulated voice message on WhatsApp that seems to be the voice of one of your family members, even though it isn’t. Those kinds of things could potentially happen in the future. But I think people will become more aware of that, they will start to understand that this is just a new development of synthetic media and a new part of life,” according to Van Boheemen. So how dangerous are deepfakes exactly? In terms of political or societal misinformation, we probably do not need to worry too much yet, at least not in the Netherlands. “However, if your sister calls you and suddenly starts blackmailing you, it might be better to ring her back to check whether you were truly speaking to her,” says Van Boheemen.

My name is Rebecca Haselhoff and I am an MA student of Media Studies: Digital Cultures at Maastricht University. I’m doing a research internship at Beeld & Geluid, focusing on deepfake technologies, the different ways in which they can be used, and the impacts they can have on society.