The following essay is a transcript from a presentation on the harmful effects of deepfake technology:
What are deepfakes?
Hello everyone, my name is Manuel Cortez and this is my presentation for DTSC-690. Today we’re going to talk about deepfakes, a term that has been making waves across the digital landscape. But what exactly are deepfakes? Well, at its core, a deepfake is a form of synthetic media: for example, artificial images generated not with traditional means like Photoshop or video editing, but through machine learning techniques like deep learning (What the Heck Is a Deepfake?, Information Security at UVA, n.d.).
There are a number of machine learning techniques (which we’ll get into shortly) that can generate these realistic images, videos, and audio recordings. These forms of media can be imaginative, but they have ignited intense debates about truth and privacy in an increasingly digitized society (Smith, 2022).
Some might argue that deepfakes represent a convergence of cutting-edge technology and creative expression, but it is quite apparent that they come with profound ethical implications. Throughout this presentation, we’ll explore those implications, focusing in particular on their impact on the broader society and its marginalized groups.
A quick example, which I’m about to show on the next slide, is one that a team over at MIT created of Nixon. You can see how believable the video and audio are, and once you realize that neither is real, it becomes all the more daunting.
How do they work?
So how do deepfakes actually work? I mentioned the concept of deep learning, but what even is deep learning? Deep learning is a subset of machine learning and a method used in AI that can recognize complex patterns in images, sounds, and other data. The building blocks of deep learning are neural networks, something we might all be familiar with. These networks are trained on large amounts of data to learn patterns and correlations, allowing them to generate realistic images, videos, or audio that appear authentic (What Is Deepfake and How to Use It, n.d.).
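To make that concrete, here is a minimal sketch (my own illustration, not from any of the cited sources) of a neural network learning a pattern from data. It uses PyTorch, and the task, fitting a sine curve, is a hypothetical stand-in for the far larger image and audio datasets deepfake models train on.

```python
# Minimal sketch: a small neural network learning a pattern from data.
# Toy stand-in for deepfake training, which uses the same recipe at scale.
import torch
import torch.nn as nn

# Toy task: learn to map x -> sin(x) from sampled data points.
x = torch.linspace(-3, 3, 200).unsqueeze(1)   # 200 training inputs
y = torch.sin(x)                              # the pattern to learn

# A tiny feed-forward network: layers of learned, weighted connections.
model = nn.Sequential(
    nn.Linear(1, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training loop: repeatedly adjust weights to reduce prediction error.
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```

Deepfake generators follow this same recipe of adjusting weights to reduce error, just with millions of parameters and media data instead of a toy curve.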
To go a bit deeper, several techniques within this subset are used to create deepfakes, including autoencoders, generative adversarial networks (GANs), and first-order motion models. Autoencoders are a type of neural network architecture used for unsupervised learning; they consist of an encoder network and a decoder network. Generative adversarial networks (GANs) are a machine learning framework consisting of two neural networks: a generator and a discriminator. First-order motion models, meanwhile, are a class of techniques used to transfer motion between objects or scenes in videos; they generate realistic audio/video sequences by transferring the motion from a source object or scene to a target one (What Is Deepfake and How to Use It, n.d.). The result is a convincing picture of a fabricated reality that can be difficult to distinguish from the real thing.
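Since the GAN is the framework most associated with deepfakes, here is a minimal sketch of the adversarial setup just described, again my own hedged illustration in PyTorch rather than anything from the cited sources. To stay small and runnable it forges samples from a one-dimensional Gaussian instead of faces.

```python
# Minimal GAN sketch: a generator learns to forge samples that a
# discriminator cannot tell apart from "real" data (here, N(2, 0.5)).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" samples
    fake = generator(torch.randn(64, 8))    # generator's forgeries

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the discriminator call fakes "real".
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Should land near 2.0, the mean of the "real" distribution.
print("generated sample mean:", generator(torch.randn(1000, 8)).mean().item())
```

The two networks improve together: as the discriminator gets better at spotting forgeries, the generator is pushed to produce ever more convincing ones, and that dynamic is exactly what makes deepfakes so realistic.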
Why does it matter?
So why does learning about deepfakes matter? What’s the sociocultural significance of deepfakes? Firstly, deepfakes are increasing in prevalence and making a large impact across the digital landscape (e.g., social media). The proliferation of social media has followed the proliferation of technology and AI, and with the help of chatbots like ChatGPT, AI is quickly becoming a household term. As Letzing points out, deepfakes have risen by 900% within the last couple of years (Letzing, 2021). What makes this worrisome is that we live in a society that struggles with media literacy: only four out of ten adults report having had media literacy education in high school (Marsden, 2022). We have already seen the dilemmas that this sort of image and audio manipulation can create; think back to the Nixon video I showed. Because of that, deepfakes pose a broader threat to the integrity of information. They have the capability to spread misinformation and undermine trust in our institutions.
However, that’s not where it ends. Of particular concern is the rise of deepfake pornography (Conger & Yoon, 2024). In this context, that means the faces of individuals in pornography are replaced with the faces of other people, often celebrities or individuals who have not consented to being featured in it. Quite clearly, this is a breach of individuals’ privacy, something we’ll get into more deeply later. In fact, one of the biggest celebrity names recently found herself in this dilemma, and X (formerly known as Twitter) curtailed it by making her name unsearchable on the site. Celebrities have the power and resources to help themselves; other women don’t.
This threat erodes society’s ability to discern truth from fiction and, as you can see, disproportionately hurts vulnerable communities like women. It worsens existing power imbalances and contributes to their marginalization.
What makes it ethically problematic?
Let’s dive deeper into this ethical dilemma; one can already note how deepfakes present a myriad of ethical concerns. It’s clear that they don’t simply represent our movement toward greater technological advancement.
One pressing issue is the erosion of trust in media, one of our societal pillars. The ability to manipulate video and audio poses a serious threat, making it increasingly challenging to discern genuine information from false content. The earlier example of Nixon saying something he never actually said illustrates this dilemma. Regardless of what we think of the media currently (fake news or not), this most certainly further undermines the foundations of trust upon which societies rely for information. The potential for deepfakes to deceive individuals and propagate false narratives is a significant concern for everyone across the political spectrum. Whether it’s spreading misinformation, fabricating evidence, or impersonating public figures, deepfakes have the power to manipulate, and that manipulation undermines the integrity of information and threatens the stability of democratic institutions. As such, the philosopher Don Fallis argues that the mere existence of deepfakes undermines the truth value of videos as a whole (Fallis, 2021). Or to put it simply, videos are losing their credibility.
In addition, with regard to deepfake pornography, the unauthorized use of individuals’ likenesses raises serious privacy concerns. By appropriating a person’s image or voice without consent, deepfakes infringe upon individuals’ right to control their own representation. This not only causes harm but also robs individuals of their agency in shaping their own image, and it opens the door for deepfakes to be used for harassment, coercion, and exploitation, worsening the ethical dilemmas surrounding this technology. From revenge porn to cyberbullying, the malicious use of deepfakes can inflict real harm on victims, compounding harms already present in society.
Unique harm on women
Let’s narrow in on one of the communities consistently harmed by deepfakes. In a general sense, it’s no secret that women have consistently been victims of loss of autonomy and privacy worldwide (United Nations Population Fund, 2021). I’m going to read some of the stats from the UNFPA report, because it’s important to illustrate that deepfakes aren’t creating an entirely new problem; they’re accentuating an existing one.
In addition, this unfortunate trend is accentuated by technology, and not exclusively by deepfakes. In essence, women are disproportionately harmed by technology as a whole. [READ OFF SLIDES] UN Women defines technology-facilitated gender-based violence as “any act that is committed or amplified using digital tools or technologies causing physical, sexual, psychological, social, political, or economic harm to women and girls because of their gender” (Tech-Facilitated Gender-Based Violence, n.d.).
But how is all this relevant to deepfakes? Well, the point I’m trying to illustrate is that tech in and of itself has never been an innocuous enterprise. There are underlying issues that manifest in hidden ways through its usage. In other words, technology isn’t exempt from the general harm we discussed earlier; it only serves as a tool to further it. Deepfakes are one of the more overt ways these harms manifest themselves.
One of the most concerning manifestations is the rise of non-consensual pornography, where women’s faces are superimposed onto explicit imagery or videos without their consent. Deepfake technology is, in fact, underpinned by this violence: the term itself was coined from pornography, when a Reddit user named ‘deepfakes’ uploaded a pornographic deepfake in 2017 (What Is Deepfake and How to Use It, n.d.). As mentioned earlier, this violation of privacy and autonomy strips women of agency over their own bodies and images. Moreover, deepfake pornography perpetuates harmful gender stereotypes and narratives, reinforcing societal abuse and contributing to the objectification of women. It increases existing power imbalances, enabling perpetrators to manipulate, control, and silence victims. As deepfakes become increasingly sophisticated and accessible, the risks to women’s safety, privacy, and well-being escalate. Urgent action is necessary to address the root cause of this harm and protect women’s rights in the digital age.
Arguments for deepfakes
Many would argue that despite their potential for harm, deepfakes also have beneficial applications, and in some ways they are right. Deepfake technology can be a tool for creative expression, enabling artists to push the limits of storytelling and visual effects. By leveraging this technology to manipulate media, creators can unlock new ways to express themselves. From digitally resurrecting deceased individuals to reimagining historical events, deepfakes offer many opportunities for our imagination. An example of this is the recent song put out by the Beatles, “Now and Then” (Benchetrit, 2023). While not an entirely fabricated performance, it relied on AI models trained on John Lennon’s voice and recordings to extract a cleaner vocal.
In the educational realm, deepfake technology holds immense promise for enhancing learning experiences and expanding access to resources. Educators can now create immersive learning environments that cater to diverse learning styles. For example, some are using apps like MyHeritage to bring historical figures to life, which can foster a deeper connection with students (Ofgang, 2022).
In addition, language barriers have long been an obstacle to global communication and collaboration, and deepfake technology offers a potential solution. Using the same techniques, translation tools can transcribe and translate spoken languages in real time; an AI start-up named Flawless is doing just that (Vincent, 2021). Facilitating communication this way improves understanding and collaboration among different communities, which is a net positive.
In the realm of scientific research, there is potential to revolutionize data analysis and interpretation. Professor Rees, from UCL, writes that “AI can and must be used for good, to complement and augment human endeavour rather than replace it.” He describes how, by generating realistic simulations and data, scientists can gain deeper insights into complex phenomena and test hypotheses more efficiently. This creation of data points offers further opportunities for advancing our understanding of the world and moving scientific progress forward (UCL, 2019).
Possible solutions
So, in essence, any ethical dilemma comes with potential goods and potential bads. What we need are solutions that curtail the potential bads. Do we have any? What do they look like, and what can we come up with?
As of now, laws are the primary tool for addressing these issues. However, existing laws are inadequate and often ill-equipped to deal with deepfakes; in fact, laws in general tend to do a poor job of keeping up with technology. Most deepfake-related legal solutions focus on areas such as tort claims, revenge porn, and intellectual property rights, but they fall short of effectively punishing perpetrators, as Micah Kindred from the University of Cincinnati College of Law states (Kindred, 2023). The legal framework surrounding deepfake technology is fragmented and insufficient. Revenge porn laws are inconsistent across states, and many never come to fruition: as of 2022, 48 states have sought to enact such laws, but many fall short and fail to pass (Kelleher, 2023). Intellectual property laws require copyright ownership, and harassment and tort claims carry a burden of proof that is hard to meet even for offline conduct, as the lawyer Kara Kelleher states (Kelleher, 2023). As one can see, this leaves victims without any true remedy.
Within the realm of technology, advances in artificial intelligence and machine learning offer potential avenues. The same kinds of AI-based systems can leverage algorithms to identify inconsistencies and anomalies in digital media, analyzing facial expressions, voice patterns, and other indicators to distinguish genuine from manipulated content. While still evolving, AI-driven detection mechanisms represent a critical defense against the ever-growing problem of deepfakes. For example, the World Economic Forum recommends using blockchain to authenticate images, though they themselves admit it’s not enough (Cheikosman et al., 2021).
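To illustrate the authentication idea at its simplest, here is a hedged sketch of my own (not the WEF’s actual proposal): fingerprint a media file with a cryptographic hash when it is published, then re-hash it later to check for tampering. A blockchain would store those fingerprints immutably; a plain Python dictionary stands in for it here.

```python
# Sketch of hash-based media authentication. A blockchain ledger is
# simulated with a dict; everything else uses the standard library.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical ledger: filename -> hash recorded at publication time.
ledger = {}

def register(path: str) -> None:
    """Record a file's fingerprint when it is first published."""
    ledger[path] = fingerprint(path)

def is_authentic(path: str) -> bool:
    """True if the file still matches its registered fingerprint."""
    return ledger.get(path) == fingerprint(path)

# Usage: register("interview.mp4"); later, is_authentic("interview.mp4")
# returns False if even one byte of the file has been altered.
```

Note the limitation: this only proves a file hasn’t changed since registration. It says nothing about whether the registered file was authentic in the first place, which is part of why the WEF itself calls blockchain alone insufficient.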
As such, the threat of deepfakes requires collaborative efforts from government, tech, and civil society (that’s us). Initiatives focusing on policy, innovation, and public awareness are essential for combating deepfakes. By fostering collaboration and sharing resources, we can create a more resilient defense against their harmful effects!
Conclusion
In conclusion, it should now be clear that the rise of deepfakes poses a significant threat to our society. They can be used to endanger not only individuals but also the fabric of truth our communities rely on. In recognizing these dangers, we also need to acknowledge our moral obligation to take action.
We must recognize the moral responsibility at play here. As the creators and consumers of technology, we have a duty to use it ethically and responsibly.
We cannot overlook the impact that deepfakes, and technology as a whole, have on marginalized communities. From the spread of misinformation to gender-based violence, these technologies have the potential to worsen existing injustices. We have a moral imperative to enact laws and regulations that safeguard the rights of those most vulnerable to harm, like women. This includes protections against harassment and other forms of online abuse, as well as holding perpetrators accountable for their actions.
We must also recognize that technological solutions are an important part of addressing the problem deepfakes pose. While it may be impossible to completely eradicate improper usage, we can and must develop tools that help detect misuse. This includes investing in research aimed at improving the authentication of digital media, as well as empowering individuals with the knowledge and skills to combat the threat deepfakes pose.
Finally, science and technology are not innocuous. That’s as true in data science as it is in every other field. By acknowledging our moral obligation to use technology appropriately, to enact laws, and to build technological safeguards, we can curtail the harm caused by deepfakes and move science forward ethically.
References
If you are interested in reading more, or in seeing examples of deepfakes and online abuse, here are my references. Thank you.
Available in PowerPoint PDF