Author: Cherry Wu, MS
November 5, 2024
Introduction
The rise of generative AI (GenAI) tools presents new challenges in combating technology-facilitated gender-based violence (TFGBV). As explored in Part I of this series, GenAI has made creating and disseminating harmful content easier, drastically lowering the barriers to its production. This surge in AI-generated abuse heightens the specific harms experienced by gender minorities, including women, non-binary individuals, and LGBTQI+ communities, deepening concerns around TFGBV. Such violence exerts a chilling effect, silencing these groups, stifling their participation in online spaces, and suppressing influential voices. The unchecked spread of TFGBV undermines gender equality efforts, emphasizing the need to address these issues at the intersection of AI and online abuse. This evolving landscape requires a targeted, intersectional regulatory approach that addresses the vulnerabilities of gender minorities. Without incorporating gendered perspectives, regulations risk creating blind spots, allowing harmful actors to exploit GenAI tools.
In Part II of this series, we will explore current governance approaches at both international and national levels aimed at addressing the intersection of GenAI and TFGBV. We will also highlight existing gaps and emerging strategies to combat these growing challenges.
Multilateral Efforts to Address TFGBV
International organizations play a crucial role in addressing TFGBV through collaborative efforts and global partnerships. One of the leading agencies in this space is the United Nations Population Fund (UNFPA), which has integrated the prevention of TFGBV into its broader agenda of promoting gender equality in line with the 2030 Sustainable Development Goals (SDGs). The UNFPA emphasizes a “Safety by Design” approach, advocating for the integration of safety features into digital tools from the outset to create a survivor-centered model. This model ensures that those most vulnerable to TFGBV are not only protected but also have their perspectives actively shaping the design, testing, and deployment of technology, making these tools more responsive to their needs. Additionally, the UNFPA promotes “Privacy by Default and Design,” calling for robust privacy standards to safeguard user data, particularly against cyberstalking and doxxing, where personal information can be weaponized. Together, these frameworks foster a comprehensive approach to safety and privacy in digital environments.
The UNFPA also supports legislative reforms that are trauma-informed and intersectional, ensuring that perpetrators of TFGBV are held accountable. A key example is the 2024 collaborative initiative between UNFPA and Canada, the “Making All Spaces Safe” program. With an investment of CAD $5 million (USD $3.6 million), this program focuses on providing legal and policy support, engaging communities to challenge harmful norms, and creating global hubs for survivor support. The initiative also emphasizes data collection to inform and scale future interventions, reinforcing the importance of evidence-based responses to combat TFGBV.
Beyond the efforts of UN agencies, the Global Partnership for Action on Gender-Based Online Harassment and Abuse was launched by the United States alongside Australia, Denmark, South Korea, Sweden, and the United Kingdom and has since expanded to include additional member countries. The partnership focuses on addressing the critical issue of TFGBV by developing shared principles that integrate gender-sensitive policies into national cybersecurity frameworks. It also emphasizes expanding targeted programs and resources to combat TFGBV and improving data collection in collaboration with organizations such as UN Women and the World Health Organization. Furthermore, the partnership has issued joint statements advocating for the rights of women journalists and activists in Iran and has actively contributed to global policy discussions in forums such as the G7.
Multilateral efforts, such as those led by UN agencies and the Global Partnership for Action on Gender-Based Online Harassment and Abuse, provide a strong foundation for addressing TFGBV at a global scale. However, national legal frameworks remain crucial in establishing clear legal responsibilities and ensuring accountability. The challenge becomes more complex with the rise of GenAI tools, which often make it difficult to trace harmful content back to its source, complicating enforcement and protection measures. The following sections will examine the governance approaches employed by the European Union (EU), the United States, the United Kingdom, and Australia, exploring both their strategies and the challenges they may face.
EU-Level Efforts to Address TFGBV
The EU has made significant strides in addressing TFGBV through a range of legislative efforts, with a particular focus on combating violence against women and online abuse. One of the key initiatives is the EU Directive on Combating Violence Against Women and Domestic Violence (“Directive”), which reached a political agreement between the European Parliament and the Council of the European Union in early 2024. This Directive is the first comprehensive legal framework at the EU level to combat forms of violence against women, including cyber-violence, such as non-consensual sharing of intimate images, cyberstalking, and cyberflashing. While the Directive prioritizes protections for women and girls, it does not include specific provisions for LGBTQI+ individuals, leaving room for potential gaps in addressing the full scope of gender-based violence.
An EU Directive is a legislative act that establishes a goal that all EU member states are required to achieve. However, it gives each country the freedom to determine how to implement the necessary measures through their own national laws, allowing individual governments to tailor their approaches while ensuring that the overall objectives set by the EU are met. In the case of the Directive on Combating Violence Against Women and Domestic Violence, member states must adopt these provisions into their legal systems to ensure harmonized protections across the EU, particularly when addressing online gender-based violence.
The Directive harmonizes the criminalization of GBV across EU Member States, holding perpetrators accountable for both offline and online violence. By addressing the ways in which gendered violence manifests in the digital age, the EU aims to provide uniform protection. Beyond criminalization, the Directive prioritizes prevention by promoting digital literacy to help users identify and respond to online violence. It also ensures that victims have access to support systems, including safe reporting mechanisms and privacy protections during judicial processes.
Despite these efforts, concerns have emerged about the implementation and potential consequences of certain provisions. As pointed out by the Center for Democracy and Technology, some of the vague definitions in the Directive, such as “insulting” or “threatening” content, may lead to the unintentional criminalization of legitimate speech, including activist campaigns or protests. Moreover, while the Directive targets severe forms of cyber-violence, it may not fully address harmful behaviors that fall below criminal thresholds but still undermine women’s participation in online spaces.
In addition to the Directive, the EU has introduced other regulatory frameworks that address aspects of TFGBV, including the Digital Services Act (DSA) and the AI Act. These frameworks aim to enhance transparency, accountability, and safety across digital platforms and AI systems.
The DSA focuses on regulating large online platforms, particularly those designated as “very large online platforms” (VLOPs), such as major social media and e-commerce services. Its primary objective is to ensure these platforms mitigate risks related to illegal content and harmful practices, including TFGBV. The DSA requires platforms to conduct risk assessments, comply with due diligence obligations, and ensure transparency in content moderation.
On the other hand, the AI Act targets AI developers and providers, especially those involved in creating high-risk AI systems. Its main goal is to ensure that AI systems are developed and used in ways that protect fundamental rights and prevent discriminatory outcomes, including those based on gender. Developers are required to conduct bias audits, risk management assessments, and impact evaluations to minimize risks, particularly for vulnerable groups. By regulating AI systems in critical sectors like employment and healthcare, the AI Act aims to prevent algorithmic discrimination that could exacerbate gender inequality.
Together, the EU Directive on Combating Violence Against Women and Domestic Violence, the DSA, and the AI Act underscore the EU’s commitment to regulatory innovation in addressing the evolving risks of TFGBV. The Directive lays a crucial foundation for protecting women and girls from both offline and online violence, while the DSA and AI Act focus on creating safer online environments and preventing AI-driven harm. However, the success of these regulations depends heavily on their consistent implementation and enforcement by individual member states. Without proper execution at the national level, these key measures may fall short of fully safeguarding vulnerable communities from TFGBV.
National Efforts to Address TFGBV: United States, United Kingdom, Australia, South Korea
United States
In the United States, efforts to combat TFGBV center on boosting funding, enhancing law enforcement capabilities, and advancing legislative reforms. For FY 2024, the Department of Justice allocated over $40 million for new grant programs aimed at supporting trauma-informed law enforcement training and developing targeted strategies to address TFGBV. The allocation includes $2.8 million to establish the National Resource Center on Cybercrimes Against Individuals, which will provide crucial training, technical assistance, and resources to help organizations and communities tackle crimes such as cyberstalking and the non-consensual sharing of intimate images. In addition, $5.5 million has been allocated to support local law enforcement through the Cybercrimes Enforcement Program, enhancing their ability to prevent, prosecute, and train personnel on handling these crimes more effectively.
Further reinforcing these efforts, the U.S. National Plan to End Gender-Based Violence, launched in 2023, highlights online safety as one of its key strategic pillars. The National Plan builds upon the initiatives established by the White House Task Force to Address Online Harassment and Abuse. Initially focused on prevention, survivor support, and accountability, the Task Force’s efforts have now been fully integrated into the National Plan to create a coordinated government response to online harms. Together, these efforts promote collaboration between government agencies, tech companies, and civil society to ensure that digital platforms adopt trauma-informed and intersectional safety-by-design frameworks to shield users from online abuse.
On the global stage, the Department of State has issued a call for proposals to fund survivor-centered programs that combat TFGBV worldwide, recognizing the importance of protecting vulnerable groups online. The call for proposals aligns with broader U.S. policy goals of promoting gender equality and democracy through strategic global efforts.
United Kingdom
In the UK, the Online Safety Act, overseen by the Office of Communications (Ofcom), represents a step towards addressing TFGBV, with measures aimed at “understanding and addressing the experiences of women and girls online.” The Act requires social media platforms and search services to conduct risk assessments related to forms of abuse that disproportionately affect women, including intimate image abuse, cyberflashing, and coercive behaviors. Additionally, the Act promotes media literacy initiatives that seek to tackle online misogyny by “working with boys and men to support positive gender relations.” By addressing the underlying attitudes that contribute to online misogyny, these efforts aim to target the root causes of harmful online behavior. In 2025, Ofcom will release further guidance on managing online content and behaviors that affect women and girls. This guidance will draw on insights from survivors, civil society, and academic experts, ensuring a comprehensive approach to tackling harmful online practices.
Australia
Similarly, Australia’s Online Safety Act introduces several key measures to address TFGBV. One of the main features is the Adult Cyber Abuse Scheme, which allows individuals to file complaints about severe online abuse. This provision is crucial in providing a pathway for the removal of content that causes serious harm, a type of abuse that disproportionately affects women and girls. The updated Image-Based Abuse Scheme strengthens protections for individuals whose explicit or intimate images have been shared without consent, offering victims direct access to support from eSafety, the organization responsible for online safety in Australia. Additionally, the Online Content Scheme regulates illegal and offensive content, particularly materials promoting violence or sexual violence that are often used as a tool to control or intimidate women and girls online.
South Korea
South Korea’s current laws focus primarily on traditional forms of sexual violence, and the country has been slower to address technology-facilitated crimes. This gap is striking given South Korea’s widespread technology use: the country has seen a surge in TFGBV, with 43% of women in Seoul reporting incidents in 2019. The rise of AI tools has worsened these crimes, notably in recent deepfake incidents where teenage students created explicit images targeting classmates and teachers, leading activists and politicians to label the crisis a “national emergency.” The rapid spread of deepfake pornography highlights the need for stronger regulations and enforcement.
South Korea has developed a range of legal and institutional frameworks to address TFGBV. For example, Sunflower Centers, established after a high-profile sexual assault case in 2004, offer victims counseling, medical care, and law enforcement support in one location, reducing trauma by streamlining services. The network of centers has expanded nationwide, becoming central to South Korea’s response to gender-based violence. In addition, specialized institutions like the Advocacy Center for Online Sexual Abuse Victims (ACOSAV) focus on digital abuse, providing content removal services and support for victims of cybercrimes like deepfakes and cyberstalking. ACOSAV works closely with law enforcement and legal bodies to ensure prompt responses to these crimes.
Moreover, the Korean National Police Agency (KNPA) plays a key role in offering specialized cybercrime training, particularly for cases involving women and minors. The KNPA also collaborates with international bodies, such as the UNDP, to enhance global efforts against TFGBV, sharing expertise through initiatives like the Police Capacity Building Support Programme.
Despite these initiatives, South Korea faces challenges in adapting its legal framework to the evolving nature of AI-driven abuses like deepfakes. Although stricter penalties and enhanced victim support have been introduced, more comprehensive solutions are needed to address GenAI-related abuses fully. Strengthening legal measures and enforcement will be essential for protecting vulnerable groups from TFGBV.
Lessons Learned and Steps Forward
The governance efforts in the United States, UK, Australia, and South Korea demonstrate a diverse and evolving approach to tackling TFGBV. Each country has developed strategies that reflect its unique legal and cultural contexts—whether through enhancing law enforcement, reforming legislation, or promoting education and prevention. South Korea, in particular, stands out for its institutional response to TFGBV with initiatives like Sunflower Centers and specialized digital abuse institutions such as ACOSAV. Despite these proactive measures, all of these frameworks still struggle to fully address the heightened risks posed by generative AI, which continues to complicate enforcement and protection efforts. The following section highlights four key considerations for enhancing governance in this evolving landscape.
1. Intersectionality is key
Intersectionality is key to understanding how different groups experience TFGBV. Gender minorities—including women, non-binary individuals, and LGBTQI+ communities—face compounded vulnerabilities from the overlapping effects of discrimination based on gender identity, race, sexuality, and class. Research consistently demonstrates that women with intersectional identities face higher rates of harassment and violence compared to those who identify as White, cisgender, and heterosexual. Amnesty International’s 2018 study revealed that Black women are 84% more likely than White women to be mentioned in abusive tweets. Additionally, 42% of girls and young women who identify as LGBTQI+, 14% of those with disabilities, and 37% of those from ethnic minorities report facing online harassment tied to these characteristics. However, given the reliance on self-reporting, these numbers likely underestimate the true extent of abuse, as many victims underreport due to stigma or unawareness of available reporting channels.
To tackle these complexities, international organizations and governments must adopt intersectional strategies: recognizing how various identity factors magnify the impacts of TFGBV and tailoring responses to meet the specific needs of diverse groups. For instance, the aforementioned U.S. National Plan to End Gender-Based Violence emphasizes trauma-informed, survivor-centered strategies that include LGBTQI+ perspectives. However, gaps remain: non-binary and other gender minorities are frequently overlooked and understudied.
Addressing these gaps requires integrating more diverse perspectives into policy and enforcement. Intersectional approaches must inform both prevention efforts and support systems, ensuring policies consider how different identities shape individual experiences of TFGBV. Without these actions, responses to TFGBV risk being incomplete, failing to address the most vulnerable and marginalized groups effectively.
2. More recognition of GenAI’s challenges is needed
As explored in Part I of this series, GenAI has dramatically lowered the barriers to creating and disseminating harmful content, including deepfakes, with unprecedented ease. These tools reduce the cost of content production while anonymizing perpetrators, making it more difficult to detect, regulate, and prosecute abuses. The recent incidents in South Korea involving the use of deepfakes to target women highlight the urgent need for stronger, more integrated approaches to addressing the dangers posed by GenAI.
While legislative efforts like the EU AI Act aim to mitigate algorithmic discrimination and improve accountability in AI systems, they often treat technology and gender-based violence as separate issues. This fragmented approach overlooks how GenAI intensifies the risks of TFGBV by enabling new forms of abuse and complicating enforcement. Governance frameworks must recognize GenAI’s role in facilitating non-consensual image manipulation and creating harmful content at an unprecedented scale and speed. Without a holistic approach that views GenAI as an evolving vector of attack, regulatory efforts will fall short. Addressing the intersection of GenAI and TFGBV is crucial for developing comprehensive solutions that consider both the technical and social dimensions of this issue.
3. Data collection and transparency are critical for evidence-based interventions
Currently, global data collection efforts on TFGBV are fragmented, with no unified framework for measuring the prevalence, forms, and impacts of technology-facilitated violence. As a result, it is challenging to compare data across regions or assess the true scope of the issue. While civil society organizations, international bodies, and research institutions have generated valuable insights, these studies often leave significant gaps in understanding the experiences of women and marginalized groups in the Global South. Furthermore, tech companies provide little transparency on the data they collect regarding gender-based violence on their platforms, which is critical for assessing the effectiveness of reporting mechanisms and responses to abuse. Without coordinated data that also accounts for intersectional vulnerabilities, such as race, sexual orientation, and disability, evidence-based policies and interventions will struggle to address the evolving nature of TFGBV fully.
4. Industry efforts must be integrated
There is increasing recognition of the need for stronger industry cooperation to tackle TFGBV in the age of GenAI. As governments work to establish regulatory frameworks, digital platforms and AI developers are critical in mitigating the spread of harmful, AI-generated content. While some platforms have made strides in content moderation, the rise of GenAI has dramatically lowered the barriers to creating deepfakes, non-consensual explicit content, and other forms of TFGBV, further complicating content governance. Platforms and AI developers must build more robust governance frameworks that are transparent, incorporate expert input, and are specifically designed to address the unique challenges of AI-generated abuse. Until the tech industry fully acknowledges and takes responsibility for the harms enabled by GenAI on their platforms, regulatory efforts will be insufficient to curb the growing threat of TFGBV.
As these gaps illustrate, considerable work remains to address TFGBV, especially given the rapid advancements in AI technology. While governments and international bodies have made progress in establishing legal frameworks, the private sector—particularly tech companies and AI developers—must take a proactive role in addressing how GenAI worsens TFGBV. These companies shape the digital landscape, and their efforts to mitigate the harm caused by GenAI are crucial. In the third part of this series, we will examine how the industry is tackling this evolving challenge and explore the initiatives that tech companies, platforms, and AI developers are implementing to address the growing risks at the intersection of GenAI and TFGBV.
Glossary
Cisgender: An individual whose gender identity aligns with the sex they were assigned at birth.
Coercive: Using force or threats to compel someone to act against their will.
Cyberflashing: The non-consensual sending of explicit images to another person, often via digital means, intended to harass or intimidate.
Cyber Harassment: The use of digital platforms and tools to repeatedly send harmful, threatening, or abusive messages aimed at intimidating or harming individuals, often targeting specific characteristics such as gender or race.
Cyberstalking: The use of technology, such as the internet or social media, to repeatedly harass, threaten, or monitor someone, typically with malicious intent.
Deepfakes: A form of synthetic media in which the faces or voices of individuals in images, videos, or audio are swapped or altered to create realistic but fabricated content. This technology is often used for malicious purposes, such as spreading false information or creating non-consensual pornography.
Deep-learning Technologies: A subset of machine learning that involves neural networks with many layers (hence "deep"). These technologies are used to model complex patterns in large amounts of data and are key to advancements in areas like speech recognition, image processing, and AI content generation.
Digital Literacy: The ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies.
Disseminate: To spread or circulate information or content widely, particularly through digital means.
Disproportionate: Unequal or out of balance, often used to describe how certain groups may be more affected by certain harms or policies than others.
Due Diligence: The effort made by an organization or individual to verify information, assess risks, and fulfill legal and ethical obligations.
Fragmented: Describing something that is broken or separated into parts, often lacking cohesion or consistency.
G7: The Group of Seven, an international economic and political organization consisting of seven major developed nations: Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States.
Generative AI (GenAI): GenAI represents a family of deep-learning technologies capable of creating multimodal media content.
Holistic: Considering the whole of something rather than just its parts, often used in approaches that address multiple interconnected aspects.
Intersectional: An approach that considers overlapping identities and the ways in which they may compound experiences of discrimination or privilege.
Judicial: Relating to the courts or the administration of justice.
Legislative: Pertaining to laws and the process of making or enacting them.
Marginalized: Referring to individuals or groups pushed to the edges of society, often denied access to resources, rights, and opportunities.
Misogyny: Hatred, dislike, or prejudice against women.
Mitigate: To reduce the severity, seriousness, or harmful impact of something.
Multilateral: Involving multiple countries or parties, typically in the context of agreements, cooperation, or governance.
Non-Binary Individuals: People who do not exclusively identify as male or female; their gender identity may fall outside the traditional categories of "man" and "woman."
Perpetrators: Individuals or groups who commit harmful, illegal, or violent acts.
Sunflower Centers: A network of centers in South Korea established to provide comprehensive support, including counseling, medical care, and legal assistance, to victims of gender-based violence.
Technology-Facilitated Gender-Based Violence (TFGBV): Acts of gender-based violence that are enabled or amplified by technology, including harassment, stalking, and exploitation online.
Vector: A path or means through which harm or influence can be delivered or spread, often used in reference to digital or technological platforms.