The new frontline for women and truth in the age of AI
From deepfakes to digital abuse, technology is transforming how women are targeted—and how truth is challenged

By Arbana Xharra, Fatou Baldeh, Kat Fotovat and Varinder Kaur Gambhir
Women who enter public life, including journalists, activists and those in politics, have always understood that visibility comes at a cost.
But in the digital age, first shaped by social media and now increasingly driven by artificial intelligence, that cost has deepened, changed form, and become systemic and often relentless.
Harassment now travels in coordinated waves, often beyond control, moving faster than truth and leaving behind consequences that do not easily fade. And the danger is not limited to these attacks – it is also found in the silence they are designed to produce from their targets. Artificial intelligence is now making it easier to manufacture, scale, and sustain that silence.
Examining the impact of emerging technologies
The urgency of these shifts was at the centre of discussions during a side event at the UN’s Commission on the Status of Women (CSW70), Women Holding the Line: Storytelling & Safety in an Age of AI, convened in partnership with BBC Media Action and Peace Pays. Around 60 leaders from across the globe – journalists, activists, technologists, policymakers, and funders – gathered to examine how emerging technologies are reshaping both risk and resilience for women in public life.
In a panel discussion led by BBC reporter Samira Hussain, the panellists – Fatou Baldeh, an FGM activist from The Gambia; Kat Fotovat, a former ambassador and Co-Founder at Peace Pays.ai; Arbana Xharra, a Kosovar Albanian investigative journalist; and Varinder Kaur Gambhir, Country Director at BBC Media Action, India – emphasised that the threats facing women in public life are no longer isolated or temporary. They are embedded within rapidly evolving technological systems that enable surveillance, track and invade privacy, and may also put family members and sources at risk.
AI offers powerful benefits, including the ability to analyse vast amounts of data, detect patterns, and support investigative reporting. But it is also being used to generate manipulated images, fabricated audio, and highly convincing deepfake videos – all of which are lowering the cost of disinformation. The World Economic Forum has identified this AI-driven disinformation as a growing global risk, with serious implications for trust and the integrity of information ecosystems.
Efforts to discredit, intimidate, and silence women journalists, human rights activists, and political opponents now operate at speed, at scale, and often beyond the capacity of existing systems to respond.
Artificial intelligence is not only creating new harms – it is amplifying existing ones. Digital tools are reinforcing patterns of control, while deepfakes, disinformation, and algorithmic bias are making it increasingly difficult to trust, and to verify, what we see and hear.
Escalating risks in a digital age
Panellists described their personal experiences of how online abuse targeting women has evolved into more complex and coordinated forms.
“Not so long ago, you couldn’t believe what you read. Now you can’t believe what you see,” one speaker said, reflecting a growing concern about the erosion of trust in digital content.
Journalist Xharra described a fabricated video that falsely linked her family to political narratives, reaching hundreds of thousands of viewers before she was able to respond. Kat Fotovat discussed being doxxed during her previous role as Acting Ambassador-at-Large for the US Office of Global Women’s Issues, and an onslaught of thousands of attacks that targeted deeply personal information, her physical location, and her family members.
She said: “I was someone who had taught women all over the world how to protect themselves. However, the scale and speed of AI was something I had no idea about until I experienced it. The lessons learned from that were the inspiration for much of my work today: to teach and train women around the world to use AI for good.”
Old inequalities, new impact
These harms do not emerge in a vacuum. The misogyny women face online is often rooted in the same regressive gender norms they confront offline, but digital platforms mirror, magnify, and monetise these biases at scale. Technology is therefore not just enabling new forms of abuse. It is accelerating the spread and impact of existing inequalities in new ways.
Research supports these experiences. According to UNESCO, nearly three-quarters of women journalists worldwide have experienced online violence. A global study also found that one in five have faced offline attacks linked to digital abuse, highlighting the growing connection between online harassment and physical risk.
At the same time, advances in artificial intelligence are accelerating the speed and scale at which disinformation spreads. Manipulated content can now be produced and distributed widely before verification mechanisms can respond.
This contributes to a broader challenge. Surveys suggest that a significant proportion of people globally struggle to distinguish between real and false information online, raising concerns about declining trust in media and public discourse.
Several speakers noted that this environment can benefit authoritarian actors. As uncertainty increases, it becomes easier to discredit journalists, dismiss evidence, and weaken accountability.
Strong networks – bringing together journalists, activists, technologists, funders, and policymakers – are among the most effective ways to respond to coordinated, AI-driven online harm.
Opportunities - and what comes next
But there is a clear gap. Many promising solutions, from AI-powered fact-checking tools to technologies designed to protect survivors, remain fragmented and under-resourced. The issue is not a lack of ideas, but a lack of coordination, support, and funding to scale them.
Several priorities are clear. Sustained investment is needed to move promising solutions beyond pilot phases. Flexible funding, especially at critical transition stages, is essential to help these efforts grow and have real impact.
Stronger infrastructure, both technical and institutional, will enable faster responses to disinformation and better protection for journalists facing digital threats. Training in AI safety, protection, and promotion could help create spaces and opportunities to use AI as a powerful tool rather than a weapon against women online.
Those who work to support trustworthy, effective media and storytelling today must now also address digital safety, AI literacy, and platform accountability. Strengthening women’s ability to navigate and respond to AI-driven harms, while holding platforms accountable for the amplification of such harms, is no longer optional – it is central to ensuring that women can participate safely and meaningfully in the digital public sphere.
There is also a clear need for women in leadership, particularly in the design and governance of AI and technological systems. Without this, existing inequalities and biases will be built into the next generation of technologies without challenge or correction. If women are pushed out of the digital public sphere, they are pushed out of the future.
Not a single-sector challenge
No single sector can address these challenges alone. The scale and speed of technological change, combined with the growing sophistication of digital threats, require a coordinated response.
Journalists, civil society organisations, governments, technology companies, policymakers, and funders each hold part of the solution, but only through collaboration can meaningful progress be achieved. Fragmented efforts are no longer enough in an environment where harm spreads rapidly and across borders.
A more unified approach is essential, one that brings together expertise, resources, and accountability across sectors. Technology must be shaped by those who understand its risks, policy must respond to its consequences, and civil society must remain central in defending those most affected.
Protecting truth, and those who speak it, is not the responsibility of one group alone; it is a shared obligation that demands collective action.
Arbana Xharra is a Kosovar Albanian investigative journalist. Fatou Baldeh is an FGM activist from The Gambia. Kat Fotovat is a former ambassador and Co-Founder at Peace Pays.ai. Varinder Kaur Gambhir is Country Director at BBC Media Action, India. All spoke as part of a BBC Media Action special event, hosted by BBC Studios, at the CSW annual meetings in New York in March 2026.
BBC Media Action will explore these themes in depth at a side event at the UK Foreign, Commonwealth and Development Office's Global Partnerships Conference in London on Tuesday 19 May: Imagining new futures: harnessing the power of technology to prevent violence against women and girls / gender-based violence (GBV).
