  • Human Rights Research Center

The Impact of Generative AI on Technology-Facilitated Gender-Based Violence: Part I

Author: Cherry Wu, MS

September 24, 2024



Introduction


Generative AI (GenAI) represents a family of deep-learning technologies capable of creating multimodal media content. With recent advancements, these tools have become increasingly capable of producing highly realistic content at a lower cost and with greater accessibility. While GenAI holds significant potential for enhancing productivity and creativity, it also poses substantial risks, particularly in the creation of harmful content, whether intentional or unintentional. Among the most concerning issues is the amplification of technology-facilitated gender-based violence (TFGBV) through these tools. This two-part series will first explore the current landscape of GenAI—what it is, how it functions, and how it impacts TFGBV. The second part will examine the legislative landscape and the key efforts addressing these issues, as well as the gaps that remain.


What is GenAI?


Generative AI (GenAI) is a type of artificial intelligence designed to create new content by learning from vast amounts of existing data. It leverages complex models such as neural networks, particularly transformers, to understand patterns within data and generate content in various forms, including text, images, audio, and video. The broad capabilities of GenAI enable it to perform tasks that range from simple text generation to creating sophisticated multimedia content.
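The core learn-then-generate loop can be illustrated with a deliberately tiny toy: a character-level model that counts which character tends to follow each short context in a training text, then samples new text from those counts. This is a vastly simplified stand-in for a transformer (real models learn from billions of tokens with neural networks, not counting tables), but the two phases — learning patterns from data, then generating new content — are the same in spirit. The corpus and parameters below are purely illustrative.

```python
from collections import defaultdict
import random

# Toy training text; a real model learns from billions of tokens.
corpus = "the cat sat on the mat. the dog sat on the log. "
order = 3  # context length in characters

# "Training": count which character follows each 3-character context,
# wrapping around so every reachable context is covered.
model = defaultdict(list)
cyclic = corpus + corpus[:order]
for i in range(len(corpus)):
    model[cyclic[i:i + order]].append(cyclic[i + order])

# "Generation": start from a seed context and repeatedly sample
# a plausible next character given the current context.
random.seed(0)
context = corpus[:order]
out = context
for _ in range(40):
    nxt = random.choice(model[context])
    out += nxt
    context = out[-order:]
print(out)
```

The generated string recombines fragments of the training text into new sequences — a miniature version of how statistical generation produces fluent but novel output.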

 

  • Text Inputs / Outputs: GenAI models, like OpenAI’s ChatGPT, can generate a wide range of text-based outputs, from simple answers to questions to complex articles, stories, or summaries. These models can perform tasks like summarization, translation, and creative writing based on the input provided by users.

  • Image Generation: Models like OpenAI’s DALL-E can create anything from simple illustrations to highly detailed and lifelike images based on the descriptions provided by users.

  • Voice Generation: These models convert text into natural-sounding speech. This technology is used in digital assistants, automated customer service systems, and more.

  • Video Generation: Models like OpenAI’s Sora can create video content from text descriptions. While this technology is still in its early stages, advancements are rapidly increasing its ability to produce realistic videos that align with the input text.
 

The open-source nature and widespread availability of many GenAI tools have significantly contributed to their rapid adoption across various fields. Open-source platforms allow developers and users to access, modify, and deploy GenAI models with relative ease, lowering the barrier to entry for utilizing this advanced technology. This democratization has expanded the use of GenAI beyond specialized AI research labs, enabling individuals and organizations with minimal technical expertise to leverage these tools for diverse applications. However, while this accessibility fosters innovation and broadens the reach of AI technologies, it also raises concerns about the potential for misuse, particularly against vulnerable groups, highlighting the importance of understanding and addressing the risks associated with Technology-Facilitated Gender-Based Violence.


What is Technology-Facilitated Gender-Based Violence?


Technology-Facilitated Gender-Based Violence (TFGBV) refers to “any act that is committed, assisted, aggravated or amplified by the use of information communication technologies or other digital tools which results in or is likely to result in physical, sexual, psychological, social, political, or economic harm or other infringements of rights and freedoms.” TFGBV affects a significant number of women, particularly those active in public spaces, such as journalists, activists, and politicians. For instance, 73% of women journalists have reported experiencing online violence, and 42% of women in politics have been subjected to image-based abuse or defamation. Additionally, 58% of young women aged 15–25 – nearly 3 out of 5 women – reported experiencing online harassment, illustrating how widespread and damaging this issue has become across different sectors and age groups.


How Does GenAI Impact TFGBV?

Advancements in GenAI open new avenues for TFGBV. Among the many possible vectors, this article highlights three dimensions of concern.


1. Enhanced Text-Based Harassment


GenAI models are capable of producing both short-form and long-form texts that are highly realistic and indistinguishable from human-generated content. This capability allows for the creation of sophisticated, targeted harassment with unprecedented efficiency and scale. The accessibility of these tools means that harmful content can be generated and disseminated rapidly, exacerbating the problem of online harassment, particularly against women. Among the various forms of TFGBV, incidents such as misinformation, defamation, cyber harassment, and hate speech are the most frequently reported. These issues are likely to be further amplified by the advanced text creation capabilities of GenAI, which can quickly flood digital spaces with malicious content.


2. Proliferation of Better Deepfakes


While GenAI encompasses a broad range of capabilities for creating content based on user input, deepfakes represent a more specific application that involves manipulating visual and audio elements—particularly faces and voices—in existing media. The deceptive nature of deepfakes makes them a powerful, yet potentially dangerous, tool when used for malicious purposes such as spreading misinformation, producing non-consensual pornography, or engaging in political propaganda.

 

A core component of GenAI is the Generative Adversarial Network (GAN), the primary technology driving the creation of deepfakes. As GenAI has rapidly advanced, so too has the sophistication of GANs, significantly enhancing their ability to produce highly realistic and convincing deepfakes that can easily deceive viewers into believing the altered content is real.
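The adversarial dynamic at the heart of a GAN — a generator producing fakes while a discriminator learns to tell them from real data, each improving against the other — can be sketched in a minimal one-dimensional toy. Here the "real data" is just numbers drawn from a bell curve around 3.0, the generator and discriminator are single-parameter-pair models, and the gradients are derived by hand; real deepfake systems use deep networks over images, but the training loop has the same shape. All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centered at 3.0.
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator g(z) = a*z + b maps random noise to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.1, 0.0

lr, batch, steps = 0.05, 64, 2000
for _ in range(steps):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * -np.mean((1 - dr) * xr - df * xf)
    c -= lr * -np.mean((1 - dr) - df)

    # Generator update (non-saturating loss): push D(fake) toward 1,
    # i.e. make fakes the discriminator can no longer reject.
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a -= lr * -np.mean((1 - df) * w * z)
    b -= lr * -np.mean((1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean ~= {np.mean(fake):.2f} (target 3.0)")
```

After training, the generator's output distribution has drifted toward the real data — the same pressure that, at vastly larger scale, pushes deepfake imagery toward photorealism.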

 

This increasing sophistication, coupled with the development of user-friendly tools, has led to a significant proliferation of deepfake content. In 2023, there were 95,820 deepfake videos online, representing an astounding 550% increase since 2019. Alarmingly, 99% of individuals targeted in deepfake pornography are women. Research from 2023 identified at least 42 user-friendly platforms dedicated to deepfake creation, collectively attracting around 10 million monthly searches. The lowered barrier to entry means that even individuals with minimal technical expertise can now create convincing deepfakes. In fact, it now takes less than 25 minutes and costs nothing to produce a 60-second deepfake pornographic video using just one clear image of a person’s face.

 

The ease with which deepfakes can be created and disseminated, particularly through platforms that require minimal technical expertise, significantly amplifies the risks to women and vulnerable groups. The ability to create realistic, damaging content in a matter of minutes, using just a single image, not only exacerbates existing forms of online harassment but also introduces new, more insidious threats. This technological shift makes it increasingly difficult for victims to protect themselves and for authorities to regulate and prevent such abuses, underscoring the urgent need for comprehensive strategies to address the growing challenge of TFGBV in the digital age.


3. Emerging Capabilities: Compositional Deepfakes


Unlike traditional deepfakes, which typically involve altering or creating a single piece of media, compositional deepfakes combine multiple fabricated media sources—such as audio, video, text, and images—that, while seemingly unrelated, corroborate each other to create a convincing and coherent synthetic history.

 

For instance, a compositional deepfake attack might involve generating a series of fake news articles, videos, and images that together construct a false narrative about an event or individual, leading to widespread panic or significant reputational damage. While no large-scale compositional deepfake attack has been documented yet, the potential for such coordinated disinformation campaigns is a growing concern. In the context of TFGBV, these capabilities pose a particularly dangerous threat to women journalists and public figures, who are especially vulnerable due to the abundance of publicly available images and information about them. The publicity and visibility of such smear campaigns can amplify the damage, making it difficult for these women to refute the fabricated narratives effectively. These sophisticated attacks not only harm their reputations but can also threaten their safety, relationships, and professional opportunities.


Conclusion


The rapid advancement and accessibility of GenAI have expanded the potential for TFGBV to manifest in increasingly harmful and pervasive ways. The sophistication of GenAI tools, particularly in creating realistic deepfakes and compositional media, poses significant challenges for victims, who often have limited avenues for recourse or support. As the technology continues to evolve, the risk of widespread misuse and the ease with which harmful content can be produced and disseminated demand urgent attention. The next part of this series will explore the legislative landscape and the ongoing efforts to combat these emerging threats, identifying both the progress made and the gaps that still need to be addressed to effectively mitigate the risks posed by GenAI.


 

Glossary


  • Cyber Harassment: The use of digital platforms and tools to repeatedly send harmful, threatening, or abusive messages aimed at intimidating or harming individuals, often targeting specific characteristics such as gender or race.

  • Defamation: The act of damaging someone's reputation by making false statements, which can take place online through social media, blogs, or other digital communication platforms.

  • Deepfakes: A form of synthetic media in which the faces or voices of individuals in images, videos, or audio are swapped or altered to create realistic but fabricated content. This technology is often used for malicious purposes, such as spreading false information or creating non-consensual pornography.

  • Deep-learning Technologies: A subset of machine learning that involves neural networks with many layers (hence "deep"). These technologies are used to model complex patterns in large amounts of data and are key to advancements in areas like speech recognition, image processing, and AI content generation.

  • Democratization: The process by which technology becomes widely available and accessible to the general public, lowering the barriers for entry. In the context of AI, democratization refers to how tools like generative AI and deepfake platforms are now accessible to individuals without specialized technical expertise.

  • Disinformation: The deliberate spread of false or misleading information intended to deceive, often used as a tool to manipulate public opinion or discredit individuals.

  • Generative Adversarial Networks (GANs): A type of machine learning framework used in generative AI and deepfake creation. GANs consist of two neural networks—a generator that creates new data and a discriminator that evaluates its authenticity. This adversarial process enables GANs to generate increasingly realistic synthetic content.

  • Malicious Content: Digital material created with the intent to harm, deceive, or manipulate individuals or groups. This includes deepfakes, misinformation, defamation, and cyber harassment.

  • Misinformation: False or inaccurate information that is spread without the intent to deceive. It can still cause significant harm, particularly when it spreads widely on digital platforms. 

  • Multimodal: Refers to AI models that can process and generate multiple types of data, such as text, images, audio, and video. These models enable more comprehensive and realistic content creation.

  • Neural Networks: The foundational architecture of artificial intelligence models. Neural networks mimic the way the human brain processes information, enabling AI models to learn from and make decisions based on data. This structure allows AI systems to recognize patterns, make predictions, and improve over time through training. 

  • Open-source: Refers to software or technologies that are freely available for anyone to use, modify, and distribute. In the context of AI, open-source tools like Stable Diffusion enable wide access to powerful models, contributing to both innovation and potential misuse.

  • Smear Campaigns: Coordinated efforts to damage an individual’s reputation by spreading false or misleading information. In the digital age, smear campaigns can be amplified using social media, misinformation, and deepfakes.

  • Synthetic History: The creation of a false narrative or historical record by combining multiple pieces of fabricated media (such as video, audio, and text) in a way that makes the story appear legitimate and cohesive. This tactic can be used in disinformation campaigns to manipulate public perception.

  • Transformers: A type of neural network architecture that has transformed Natural Language Processing (NLP). Transformers enable AI models to process long sequences of text and generate coherent, contextually relevant responses, underpinning the conversational capabilities of large language models like GPT-3 and GPT-4.
