Why we should rethink our moral intuitions about deepfakes

An AFP journalist views a deepfake video at his news desk in Washington, DC, on 25 January 2019, manipulated with artificial intelligence to potentially deceive viewers. Photo by Alexandra Robinson/AFP/Getty

by Adrienne de Ruiter


Deepfakes unsettle established categories of deception and authenticity, but that doesn’t necessarily make them unethical

It seemed very out of character for the former US president. Barack Obama looked straight at the camera and, without hesitation, called Donald Trump a ‘total and complete dipshit’. Except, of course, he didn’t – it only appeared that he did. The trick was accomplished in 2018 by a new technology that allows for the creation of fake footage that appears convincingly, and unsettlingly, real: deepfakes.

Deepfakes have been used in a number of amusing ways online, such as the interview that Kim Joo-Ha, an anchorwoman from South Korea, conducted last year with a deepfake replica of herself (a replica that has since even stood in for her a couple of times reading the news). Deepfake techniques can be used to let comedians morph into the celebrities they imitate, make Jon Snow apologise for the final season of Game of Thrones, or show what you would look like dancing like a professional. Deepfake apps also make it possible for people to insert themselves into scenes from their favourite films by uploading a selfie and ‘swapping’ their face with that of an actor.

Despite these entertaining applications, deepfakes seem genuinely weird – and for that reason concerning. It’s not easy to pinpoint what precisely makes them so unsettling, though. Is it simply that we do not like to be deceived by footage that seems real but (we know) is fake? Are we worried about the implications of a technology that allows virtually anyone to fabricate convincing footage of others to represent them as doing or saying pretty much anything the maker desires? Or is there a deeper issue about the technology itself? Deepfakes, for better or worse, are here to stay: apps that make use of this technology are widely available, and will only become more so. That means it is incumbent upon us to think through our moral intuitions about this new and dangerous technology. What is it about deepfakes that’s so odd? And, given the strangeness of the technology, what should we do about it?

Two aspects are involved in our aversion to deepfakes: the uncanny feeling of witnessing something abnormal, and the unsettling feeling of being deceived by one’s own eyes. The first aspect concerns the uneasiness we feel in noticing that something is not ‘quite right’. There is something creepy about deepfakes that represent people in ways that are slightly ‘off’. Early deepfake videos contained unnatural eye-blinking patterns, for example, which viewers would not consciously notice but which nonetheless signalled that something strange was going on. As deepfake technology improves and footage looks and sounds more like authentic material, this aspect of eeriness disappears. But another emerges: can we really trust what we’re watching?
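To make that eye-blinking cue concrete, here is a minimal, purely illustrative sketch of the sort of blink-rate heuristic early detection work relied on. Everything in it is an assumption for illustration: the function names and thresholds are invented, and we suppose some off-the-shelf face-tracking tool has already produced one ‘eye aspect ratio’ (EAR) value per video frame, a measure that drops sharply when the eye closes.

```python
# Illustrative only: a crude blink-rate check of the kind early deepfake
# detectors used. Assumes a face tracker has supplied one eye-aspect-ratio
# (EAR) value per frame; EAR falls sharply while the eye is closed.

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions of the eye."""
    blinks = 0
    eye_open = True
    for ear in ear_per_frame:
        if eye_open and ear < closed_threshold:
            blinks += 1        # the eye has just closed: one blink begins
            eye_open = False
        elif ear >= closed_threshold:
            eye_open = True    # the eye has re-opened
    return blinks

def blink_rate_is_suspicious(ear_per_frame, fps=30.0, min_per_minute=4.0):
    """Flag footage whose blink rate falls far below the human norm
    (people typically blink roughly 15-20 times per minute)."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    if minutes == 0:
        return False
    return count_blinks(ear_per_frame) / minutes < min_per_minute
```

Once generators learned to blink at natural rates, heuristics of this kind stopped working, and the worry shifted from footage that looks slightly wrong to footage that looks entirely right.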

This is the other side of the uneasy feeling that deepfakes arouse, which concerns footage that is too realistic-looking. Here deepfakes cause a sense of uneasiness because they make us distrust what we see with our own eyes. While trust in the reliability of video and images has already been undermined by the rise of Photoshop and other forms of manipulation, the potential of deepfake technology to continuously improve the convincingness of inauthentic recordings through machine learning deepens the concern over deception. Not only can deepfake technology realistically represent people’s image and voice, it also allows for impersonation in real time. We can’t assume that the person we see or hear in digital footage is who we take them to be, even if we seem to be interacting with them.

Although the link between deepfakes and deception is strong, not all deepfakes are deceptive in the sense that viewers come to hold false beliefs. If I insert my face into a scene from the movie Alien and my mother comes across the video, she wouldn’t think that I was actually on an alien-infested spaceship, or that I’d somehow been cast in the film. People want to see themselves virtually represented in ways that are strange or amusing – that is, in ways that are not necessarily unethical. If everything that upset some people were unethical for all, then few acts would be morally permissible.


But even if a particular deepfake might not directly deceive its audience, it can still have deleterious consequences that easily make one uneasy: a single deepfake seems to illustrate that we can no longer simply trust any footage, even if it seems real. Deepfakes, particularly if they appear believable, undermine people’s general ability to trust in the reliability of footage. While this could be true, let us recall that trust in footage has not always been well founded anyway, given the diverse ways in which film or video footage can be manipulated without using deepfake technology. Furthermore, the fact that this technology raises doubts about the trustworthiness of footage doesn’t suffice to discredit all deepfakes, as we do not usually dismiss technologies as wrong altogether just because they happen to have some negative side-effects.

The feeling of uneasiness that certain deepfakes evoke, then, does not seem enough for us to reject deepfakes outright. It’s important therefore to consider how deepfake technology is actually used – and these uses are certainly not all bad. For example, companies are working on deepfake techniques to ‘restore’ the voices of people who are unable to speak, due to conditions such as motor neurone disease, by synthetically reconstructing old recordings to build a ‘voice clone’. Deepfake techniques can also be used to assist people in mourning by allowing them to interact with digital replicas of their deceased loved ones. And museums and schools can use deepfakes to liven up education and exhibitions, as in the Dalí Museum in Florida, where the Spanish artist was virtually revived to speak to visitors through an interactive billboard.

Of course, the malicious uses of deepfakes cast a longer shadow, and these too must be assessed. Here, pornography leads the way: deepfake technology can insert someone’s picture into sexually explicit material without their consent. A study from 2019 found that 96 per cent of the deepfake videos available online contained nonconsensual deepfake pornography. Criminality also takes a new turn as deepfake impersonation or voice-spoofing is used to imitate CEOs and order large bank transfers. And concern about deepfakes’ impact on politics is well grounded, as the technology could be used to sway elections or cause international conflict. Not only can deepfake material mislead people, but the mere knowledge that deepfakes exist can lead to less trust and thereby make it easier for politicians to deny wrongdoing – even if caught on film.

So there are good grounds to be suspicious of this technology. Its track record is far from positive. Still, the fact that this technology can be used to do good indicates that it’s not inherently morally wrong. Deepfakes are thus morally suspect, but not necessarily unethical. If people want to use deepfake technology to place themselves in particular situations or engage in certain activities or forms of speech, this is morally acceptable, provided the deepfake is not made for malevolent ends and does not misuse source material. The moral character of deepfakes depends on the ends to which they are used, the way in which they are produced, and the willingness of the people featured in them to be portrayed in this way. Deepfakes should thus not be rejected flat-out, although we had better keep a close watch on them.

Deepfakes can also help us reconsider our moral intuitions about deception and authenticity. As we enter an era in which the boundaries between the fake and the real are increasingly prone to blur due to fast-paced developments in artificial intelligence, machine learning and digital communication, deepfakes help us consider these terms in a new light. In certain cases, deepfakes can open up opportunities for more ‘real’ forms of self-representation through artificial interventions – for example, when the voice of patients is ‘restored’ using deepfake techniques. This ‘renewed’ voice can actually be more authentic than the robotic-sounding voice of the speech-generating devices that would otherwise be used, or the silence that would be imposed if we shunned all artificial communication systems. Deepfakes thus unsettle established categories of deception and authenticity. The feeling of unease this causes should not be taken as a tell-tale sign that there is something inherently wrong with this technology, but should urge us to re-examine whether our gut feelings about what is fake and what is real match the world we live in today.


8 December 2021