
The Algorithm Was Never Neutral: AI Violence against Women

Updated: May 3



Consider what it means to wake up and find your face on a body you never inhabited, in images you never made, circulating among people you will never be able to reach. Consider that you have done nothing wrong, that the only material anyone needed was a photograph you posted publicly, and that by the time you discover what has happened, the damage is already compounding. This is not a thought experiment. For tens of thousands of women every year, it is Tuesday morning.



The Four Faces of AI Violence Against Women


AI does not malfunction when it harms women. It performs exactly as built, on data that was never neutral to begin with.


We live in a moment of breathless optimism about artificial intelligence: its potential to accelerate medicine, close educational gaps, transform how we work. That conversation is worth having. Alongside it, and with equal urgency, we need to have a different one: about how AI is amplifying violence against women at a scale and speed that existing legal, institutional, and technological frameworks are nowhere near equipped to handle.


The harms arrive in four distinct but deeply connected forms.



Deepfake Pornography


Deepfake pornography is the most visible and among the most devastating. According to a 2023 cybersecurity report by Home Security Heroes, deepfake pornography makes up 98% of all deepfake videos circulating online, and 99% of victims are women. The volume of this content is exploding: the number of deepfake pornographic videos produced in 2023 was 464% higher than in 2022. These videos are generated using publicly available tools, many of them free, many of them marketed openly on mainstream platforms. A photograph from a LinkedIn profile, a family photo on Instagram, a headshot from a company website: any image of a woman’s face is now raw material. No intimate image needs to have been shared. Consent has been rendered irrelevant by design.



Stalkerware


Stalkerware is the quieter weapon. Disguised as parental control apps or anti-theft software, these applications are installed covertly on a partner’s device, granting abusers access to text messages, emails, location data, browsing history, and real-time movements. Kaspersky’s 2023 State of Stalkerware report identified over 31,000 individuals globally affected by stalkerware, a figure that represents only detected cases and marks a 6% increase from the prior year. Research conducted in France by the Centre Hubertine Auclert found that 21% of intimate partner violence victims had experienced stalkerware at the hands of their abuser, and 69% suspected their smartphone had been secretly accessed. The technology does not create controlling partners. It equips them with surveillance infrastructure that was previously available only to intelligence services.




Algorithmic Harassment Amplification


Algorithmic harassment amplification is the mechanism by which individual attacks become coordinated pile-ons. When a woman is targeted online, platform recommendation systems designed to maximize engagement often push that content further, faster. Abuse goes viral while the victim watches helplessly. Platforms respond days later, if at all, long after the content has been saved to private devices and reshared across channels the victim will never reach.



Discriminatory AI Systems


Discriminatory AI systems close the loop between online violence and structural exclusion. A University of Washington study examining AI resume-screening tools found that they favored female-associated names only 11% of the time across hundreds of real-world resumes. Stanford researchers documented widespread bias against older women across large language models, including in how those systems generate professional profiles and resumes. These systems do not make individual decisions. They shape labor market outcomes at scale, compounding exclusion that already exists and building it into the infrastructure of hiring.


Together, these four forms of harm create an ecosystem where a woman can be sexualized without her consent, surveilled by a partner, harassed into silence online, and then filtered out of opportunities by the very tools meant to make hiring fairer. Each mechanism feeds the others, and each one operates largely without consequence.



Numbers Do Not Bleed. People Do.


Statistics tell us the shape of the problem. They do not tell us what it costs.


More than half of deepfake abuse victims in the United States have contemplated suicide. A UN Women survey found that 41% of women in public life who experienced digital violence also reported facing offline attacks linked to it: threats, physical confrontations, harm extending into spaces they once considered safe. In certain cultural contexts, fabricated sexual images have been used to trigger so-called honour-based violence, where content shared online has resulted in extreme physical harm or death.


What the numbers cannot capture is the texture of withdrawal. A woman reduces her online presence. She stops posting. She changes jobs or leaves them. She edits herself out of public life because the cost of remaining visible has become one she did not agree to pay. The communities she served, the perspectives she brought, the work she was doing: all of it recedes. The harm lands on her, and then radiates outward to every space that loses her voice.


Choosing to pursue accountability carries its own weight. Reporting requires a woman to sit across from strangers and describe, in clinical detail, what was done to her image without her consent. It requires her to prove the images are not real, to justify her own existence on the internet, to navigate a legal and institutional system that was built long before this form of abuse existed. Many women make the entirely rational decision that the process will cost more than it recovers. That decision is not silence. It is a verdict on the system.




The Accountability Gap


Three parties share responsibility for the scale of this crisis: the platforms that host abusive content, the developers who build tools without adequate safeguards, and the legislators who have allowed the regulatory gap to widen for years. None of them have moved at the speed the harm demands.


Platforms have consistently prioritized scale over safety. Reporting mechanisms are difficult to find, inconsistently applied, and slow to act. Content that has been taken down on one platform reappears on another within hours, because the infrastructure of removal was never designed to match the infrastructure of distribution. In August 2024, the San Francisco City Attorney sued 16 websites that actively facilitated the creation of nonconsensual deepfake nudes. At the time of the lawsuit, at least 90 similar sites were already known to exist. The lawsuit did not slow the market. It documented it.


The Grok scandal of late 2025 made the developers' failure impossible to ignore. Researchers calculated that the tool integrated into X was generating sexualized or nudified images, including images of minors, at 84 times the rate of the top five deepfake websites combined before any action was taken. The tool existed. The safeguards did not.


AI systems trained on historically biased data carry that bias forward at institutional scale. When hiring algorithms, credit scoring tools, and content moderation systems learn from data shaped by decades of structural inequality, they do not neutralize that inequality. They automate it.


Legislation is beginning to respond. The EU AI Act, the UK Online Safety Act, Brazil's 2025 criminal code amendment, and the US Take It Down Act represent genuine movement. They also represent how far behind the law has fallen. Consent-based frameworks, cross-border enforcement, mandatory removal timelines, and real financial consequences for non-compliance: these are not ambitious demands. They are the minimum a functioning system of accountability requires.



What Must Actually Change


We need to name three levels of responsibility clearly, because diffusing accountability across all of them equally is how nothing gets done.




Governments


Governments must pass consent-centered legislation with clear definitions of AI-generated abuse, fast-track removal obligations for platforms, and cross-border enforcement protocols that reflect the global nature of the internet. Fragmented national laws do not stop a global network. Voluntary cooperation frameworks between tech companies and law enforcement need to become mandatory, functional, and fast. Survivors need legal support, not just resources pages.



Technology Companies


Technology companies must face legally binding requirements to proactively monitor for and remove abusive content within enforceable timelines. They must cooperate with law enforcement and face meaningful financial consequences when they fail to act. AI development teams must be diverse and inclusive at every stage: not because diversity is a branding exercise, but because homogeneous teams produce systems with homogeneous blind spots, and those blind spots have a documented pattern of landing hardest on women, on people of color, on anyone underrepresented in the data.



All of Us


All of us have a role that goes beyond passive concern. Documenting and reporting abuse matters. Supporting organizations working in this space, such as the Coalition Against Stalkerware and UN Women, matters. Demanding AI literacy in schools, so that young people understand what these tools are and how they are used, matters. Refusing to normalize surveillance in relationships, whether framed as care, protection, or love, matters. Pushing back on the idea that the erosion of women’s safety online is an unfortunate but inevitable side effect of progress matters enormously, because that framing is how accountability gets deferred indefinitely.



The Roots, the Responsibility


AI inherited misogyny from the data it was trained on, then handed it a megaphone. The reach is technological; the roots and the responsibility are political, and they demand a political response.

We are at a moment when the architecture of digital life is being built, revised, and embedded into institutions. The decisions being made now about how AI systems are designed, deployed, regulated, and held accountable will shape the conditions women live in for decades. The window for getting this right is not indefinitely open.


Women’s safety, women’s voices, and women’s full participation in public life are not secondary concerns to be addressed after the technology matures. They are the conditions on which any claim to progress depends.


The abuse is human-made, generated by AI tools at the request of humans, for the purpose of controlling, humiliating, and silencing women. It is neither inevitable nor unstoppable. Stopping it requires that we stop treating it as a technical problem awaiting a technical solution, and start treating it as exactly what it is: a rights issue, a justice issue, and a choice about what kind of world we are willing to build.




Frequently Asked Questions


What exactly is AI-generated deepfake pornography and why is it considered abuse?

Deepfake pornography is sexually explicit content in which AI places a real person's likeness into fabricated imagery without that person's knowledge or consent. The source material is often an ordinary photo taken from social media, a professional profile, or a personal account. Victims experience psychological trauma, reputational harm, and professional consequences that can be severe and lasting.

How does stalkerware work, and how would someone know if it is on their device?

Stalkerware is software installed covertly on a phone or computer, usually by someone with brief physical access to the device. It runs invisibly in the background, sending the abuser real-time access to messages, location, emails, browsing history, and sometimes audio or camera feeds. It is typically disguised as parental control or anti-theft software. Warning signs include faster battery drain than usual, unfamiliar apps in device settings, a partner who seems to know the contents of private conversations, or unexplained data usage. Anyone who suspects stalkerware on their device is strongly advised to contact a domestic violence organization before attempting to remove it, as removal can alert the abuser and escalate danger.


If someone becomes a victim of deepfake abuse or online harassment, what should they do first?

The first step is documentation: screenshot the content, save the URLs, and record dates and times before anything else is done. This evidence is essential for platform reports and, where relevant, for law enforcement. The next step is reporting to the platform hosting the content and requesting removal under nonconsensual intimate imagery policies, though response times vary widely. Organizations such as UN Women, the Coalition Against Stalkerware, and the Cyber Civil Rights Initiative offer practical support and guidance through the process.

How does bias in AI hiring tools affect women in practice?

AI hiring tools are trained on historical data, and when that data reflects decades of gender-biased decisions, the AI replicates those patterns at scale. Amazon's now-discontinued AI recruiting tool was found to downgrade graduates of women's colleges. The result is that women's applications are filtered out before a human ever sees them, not on merit, but because the algorithm was built on a biased past.

Are there laws protecting women from AI-facilitated violence, and do they work?

Legal frameworks are beginning to emerge. However, enforcement remains inconsistent: perpetrators operate across borders that national laws cannot easily reach, platforms have historically resisted cooperation, and survivors who report must relive their abuse through a system that was not designed with them in mind.

Why does this issue receive less mainstream attention than its scale demands?

Survivors frequently choose not to come forward because the reporting process is retraumatizing, the likelihood of justice is low, and the risk of further exposure is high. This means official figures represent only a fraction of actual harm, making the scale easy to underestimate. Technology companies have financial incentives to avoid scrutiny of how their platforms are used, and legislators often lack the technical literacy to regulate effectively.

What does women's empowerment coaching have to do with AI-facilitated violence?

At Expert on Your Life, the work is grounded in the belief that every woman is already the expert on her own life, and coaching creates the conditions for clarity and self-defined direction (precisely what technology-facilitated abuse is designed to destroy). Recovery and resilience work runs alongside advocacy: understanding the systemic forces at play, naming them clearly, and refusing to internalize the shame that perpetrators so often redirect onto victims are themselves acts of empowerment.



Written by Betty Chatzipli

Betty is an experienced mentor and Women’s Empowerment Coach with a multifaceted background in Art History, Business Development, and PR. She is the Founder & CEO of Expert on Your Life, LLC, where she offers one-on-one coaching and designs transformative programs that help women build essential skills. She also runs The Rise of She, where she writes extensively on women’s empowerment, focusing on personal growth and resilience. Contact: lifecoach@expertonyourlife.com



Disclaimer

The content of this webpage is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Expert on Your Life, LLC is not affiliated, associated, endorsed by, or in any way officially connected with the references and information cited on this webpage. Read our full Disclaimer here.




Sign up at expertonyourlife.com to receive our newsletter and to have access to our PowerKit for more actionable ideas and resources. Find professional support and guidance on your journey to building psychological strength by booking one of the coaching sessions below.





SUPERSTRONG Coaching Space for Women (€100.00, 1h 30min)

AUTHENTIC VOICE Coaching Space for Women (€100.00, 1h 30min)

BURNOUT FIX Coaching Space for Women (€100.00, 1h 30min)

FEAR NOT Coaching Space for Women (€100.00, 1h 30min)

Empowered Me: a journal to unleash your inner power (€15.00)
