With generative AI, it is possible to alter people’s faces, bodies and voices very realistically, making them appear to do and say things they have never actually done or said. AI can be used to create images that look like real photographs of anything, or videos that look like real footage. When this kind of AI technology is used to mimic reality, the result is called a deepfake.
Deepfakes make use of deep neural networks, machine learning models that mimic the principles of biological neural networks. The technology has advanced in leaps and bounds in recent years, thanks in part to the GAN model introduced by US AI researcher Ian Goodfellow and his team in 2014 [i]. In this model, one neural network acts as an evaluator of another and tries to distinguish genuine material from material created by a machine. In this way, the two networks train each other: the goal is for the generating network to learn to produce fakes so good that the evaluating network can no longer tell them apart from genuine material. Ever more powerful deepfake applications based on the GAN model are still being developed, and fakes are also made with other kinds of models.
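To make the generator–evaluator idea more concrete, here is a minimal sketch of a GAN training loop in Python using the PyTorch library. The tiny network sizes, the random stand-in “training data” and all settings are illustrative assumptions, not the setup of any real deepfake system.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator (the "evaluator").
# All sizes and data here are toy assumptions for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs the probability that a sample is genuine.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)       # stand-in for genuine training data
    fake = G(torch.randn(32, latent_dim))  # samples produced by the generator

    # 1) Train the discriminator to tell genuine samples from generated ones.
    d_loss = loss(D(real), torch.ones(32, 1)) + loss(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to produce samples the discriminator labels as genuine.
    g_loss = loss(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

As training progresses, the generator’s fakes become harder and harder for the evaluating network to reject, which is exactly the dynamic that makes deepfakes so convincing.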
Deepfakes are used for a wide range of purposes, from the entertainment industry to the spread of disinformation. Deepfake technology can be used to create a digital twin of anyone, for example a deceased actor whom a studio wants to bring back to the big screen. It is also widely used in the audiobook market, where an AI voice that sounds completely real reads the book instead of a human narrator, and in advertising, where the same advertisement can easily be translated into, say, nine different languages.
Deepfake technology has also enabled a great deal of harm. It can be used to create, without a person’s consent, a real-looking sex video of anyone about whom images or video material is available. This is now widely used to shame people, especially women and girls, all over the world. Real-looking fake videos are also used in school bullying and to put pressure on journalists [ii].
Voice cloning has also been used to cheat parents out of money by calling them with their own child’s voice [iii], and to trick a company employee into transferring a large sum of money to an account it should never have gone to [iv]. Examples abound.
Today, deepfaked images, videos and audio clips circulate almost without exception during major news events. In elections, deepfakes have been used to smear politicians and undermine confidence in electoral systems. In the US, an AI voice cloned from Joe Biden called people in the run-up to the 2024 presidential election and urged them not to vote [v]. In the same election, an attempt was made to paint Tim Walz as a paedophile by publishing a deepfake video online in which a fake “former student” of Walz spoke to camera about his horrific experiences [vi]. The list of such examples is endless, and Finland has not been immune to fakes either.
For deepfakes to cause damage, they must be able to spread through messaging services and social media. Digital giants such as Meta, TikTok, Google and Microsoft have been called upon to take responsibility for curbing the spread of deepfakes.
Identifying deepfakes
Today, it is almost impossible to distinguish a cleverly made deepfake from the real thing. The same is true in reverse: it has become increasingly difficult to verify that genuine video really is genuine. As a result, perfectly authentic material has in many situations been accused of being fake. This can have serious consequences for the dissemination of accurate information and for what people ultimately believe.
There is a constant race between the development of deepfakes and their detection. The detection technology is always one step behind.
Until recently, you could spot a deepfake because the AI gave people too many or too few fingers, or struggled with details such as earrings and glasses. Gradually, however, such small errors have been ironed out, and deepfakes created with good AI applications are now hard to distinguish from the real thing. Voice fakes have also become almost flawless.
The general guidelines below will help you assess the reliability of information you encounter online, even when it is unclear whether or not the material is a deepfake:
- If an image, video or audio clip you encounter on social media evokes a strong emotional response, stop and think carefully before believing it or sharing it.
- Ask three questions: Who has posted the photo, video or audio clip? What evidence is presented to support it? What do other sources say about it?
- Use a lateral reading approach: have reliable sources reported on this? Has anyone questioned the authenticity of the material in the comments? Can you find other photos/videos/material from the same event?
- Run a screenshot of the image or video through a reverse image search (e.g. Google Lens) to see where the footage has circulated before (the sketch after this list illustrates how this kind of image matching works).
- Be sceptical, especially if a scandalous photo, video or audio clip starts circulating on social media just before an election. If necessary, contact the authorities, fact-checkers or other journalistic media.
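To show roughly how a reverse image search can recognise that two pictures show the same footage, here is a small Python sketch using the open-source Pillow and ImageHash libraries and perceptual hashing. The file names and threshold are hypothetical, and real services such as Google Lens use far more sophisticated methods; this only illustrates the basic idea.

```python
# Illustration of the idea behind reverse image search: perceptual hashing.
# Visually similar images get similar hashes even after resizing or recompression.
from PIL import Image
import imagehash

# Hypothetical example files.
original = imagehash.phash(Image.open("original_photo.jpg"))
candidate = imagehash.phash(Image.open("screenshot_from_social_media.jpg"))

# Subtracting two hashes gives the number of differing bits (Hamming distance).
distance = original - candidate
print(f"Hash distance: {distance}")
if distance <= 8:  # threshold chosen purely for illustration
    print("Probably the same or a near-duplicate image.")
else:
    print("Probably different images.")
```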
A partial solution has also been sought in detection technologies, which can be found online under the search term “deepfake detector”. However, FaktaBaari does not recommend relying on them alone. There is a constant race between the development of deepfakes and their detection, with detection technologies always one step behind [vii]. Detection tools may mistake genuine material for fake, or fake material for genuine [viii].
Algorithms that try to detect fakes are only as good as the data they are trained on. Detection technology also has its biases: it may, for example, fail to recognise fakes of people with dark skin, or be confused if a genuine image or video has been resized or blurred [ix]. The quality of detection tools varies enormously, and, as with all other AI tools, there is money to be made in selling them.
Detection tools can therefore be tried as part of the fact-checking process, but they should be treated as helpers rather than experts. It is worth finding out what material a detection tool has been trained on, who developed it and what its limitations are. Never take a detection tool’s assessment as fact; always look for information from other sources as well.
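The small Python sketch below illustrates that principle: a detector’s score is treated as just one field alongside the lateral-reading checks above, and the final judgement never rests on the score alone. All field names, scales and thresholds here are made up for illustration and do not come from any real detection tool.

```python
# Sketch: treat a deepfake detector's output as one signal among several,
# never as the final verdict. Everything here is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Assessment:
    detector_score: float                    # assumed scale: 0.0 genuine ... 1.0 fake
    corroborated_by_reliable_sources: bool   # did lateral reading confirm the event?
    original_source_identified: bool         # can the clip be traced to its origin?

def verdict(a: Assessment) -> str:
    # Independent corroboration outweighs the detector either way.
    if a.corroborated_by_reliable_sources and a.original_source_identified:
        return "likely genuine, regardless of the detector score"
    if a.detector_score > 0.9 and not a.corroborated_by_reliable_sources:
        return "treat as suspect and keep checking other sources"
    return "inconclusive: do not rely on the detector alone"

print(verdict(Assessment(0.95,
                         corroborated_by_reliable_sources=False,
                         original_source_identified=False)))
```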
As with any other technological development, it is worth keeping your knowledge of detection tools up to date.
Sources
[i] Cf. MIT Technology Review (2018). The GANfather: The man who’s given machines the gift of imagination, https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/
[ii] BBC (2024). Inside the deepfake porn crisis engulfing Korean schools https://www.bbc.com/news/articles/cpdlpj9zn9go; BBC (2024). Girl, 12, victim of ‘deepfake’ bullying porn image https://www.bbc.com/news/articles/ckvgezk74kgo; VOA (2024). Journalist ‘haunted’ by AI deepfake porn video https://www.voanews.com/a/journalist-haunted-by-ai-deepfake-porn-video/7624281.html
[iii] Guardian (2023). US mother gets call from ‘kidnapped daughter’ – but it’s really an AI scam https://www.theguardian.com/us-news/2023/jun/14/ai-kidnapping-scam-senate-hearing-jennifer-destefano
[iv] Guardian (2024). Company worker in Hong Kong pays out £20m in deepfake video call scam https://www.theguardian.com/world/2024/feb/05/hong-kong-company-deepfake-video-conference-call-scam
[v] Reuters (2024). Consultant fined $6 million for using AI to fake Biden’s voice in robocalls https://www.reuters.com/world/us/fcc-finalizes-6-million-fine-over-ai-generated-biden-robocalls-2024-09-26/
[vi] Wired (2024). Russian Propaganda Unit Appears to Be Behind Spread of False Tim Walz Sexual Abuse Claims https://www.wired.com/story/russian-propaganda-unit-storm-1516-false-tim-walz-sexual-abuse-claims/
[vii] Binh, Le, Shahroz Tariq, Sharif Abuadbba et al. (2023) Why Do Deepfake Detectors Fail?, https://www.researchgate.net/publication/368843207_Why_Do_Deepfake_Detectors_Fail
[viii] Cai, Zhixi and Ghosh, Shreya and Adatia, Aman Pankaj et al. (2024). AV-Deepfake1M: A Large-Scale LLM-Driven Audio-Visual Deepfake Dataset https://arxiv.org/pdf/2311.15308
[ix] University at Buffalo. (2024) New deepfake detector designed to be less biased, https://www.buffalo.edu/home/story-repository.host.html/content/shared/university/news/ub-reporter-articles/stories/2024/01/lyu-deepfake-bias.detail.html
Read the entire AI Guide for Teachers here.