AI running rogue: Weaponised algorithms amplify harm against women and girls
Health & Science
By Maryann Muganda | Jan 25, 2026
She is soft-spoken, graceful and strikingly young, yet her voice carries both poise and pain.
Passionate about women’s rights, 19-year-old Whitney Sally Akinyi, a student at the University of Nairobi’s Faculty of Business and Management Science, speaks with confidence in person.
Online, she is far more cautious, choosing her words carefully and often saying little. That caution was learned the hard way.
When Whitney joined a WhatsApp discussion on women’s rights, she did not expect artificial intelligence (AI) to be used to twist her words, revealing just how exposed young women are online.
As a student leader, Whitney was contributing to a sensitive discussion on safe abortion. She chose her words deliberately, focusing on women’s health and survival in contexts where unsafe abortions remain common.
She spoke about stories she had encountered of girls in high school resorting to dangerous methods such as sharp objects and toxic substances, sometimes with fatal consequences.
“I wrote a long paragraph explaining that instead of losing lives, we should focus on safe procedures,” she recalls.
What later circulated was not what she wrote.
Someone ran her message through ChatGPT and reposted it. The meaning shifted. Her argument was reframed to sound reckless and insensitive.
“It made me very uncomfortable,” she says. “My image and my voice were manipulated without my consent.”
When she confronted the person who shared the altered message, the response was dismissive. The paragraph had been too long, she was told, and AI had been used to paraphrase it quickly to keep up with the discussion.
“She even said it was the AI, not her,” Whitney says. “But the harm had already been done.”
Although the incident did not escalate into sustained online harassment, it marked a turning point. Whitney withdrew from the group and became far more guarded in digital spaces.
“I am still a fan of AI,” she says. “But people misuse it.”
The phrase “weaponised algorithms” refers not only to the actions of a few bad actors, but to systems that are designed, deployed or governed in ways that make abuse easier and accountability harder.
Weak safeguards, slow responses and engagement or profit-driven models allow harm to spread rapidly.
In such cases, AI does not merely mirror existing violence against women and girls; it actively amplifies it.
Whitney recalls another incident, during the Africa Cup of Nations season, when she was in a WhatsApp group whose members were experimenting with the Tecno Ella AI tool, which merged photos of users with images of football players.
“What shocked me,” she says, “was that someone tried to prompt Ella to undress both people in the image.”
The tool refused to comply. But what disturbed her more was the reaction.
“People were just laughing. Even the admin laughed. It was treated like a joke.”
She contrasts this with reports from other platforms, particularly X, where AI tools such as Grok have been accused of generating non-consensual sexualised images of women and children when prompted.
“That casual laughter is dangerous,” Whitney says. “It normalises violence.”
Romantic interest
Cindie*, a 25-year-old English teacher who requested anonymity, told The Sunday Standard that she was targeted in 2025 by a man who had expressed romantic interest in her, an interest she did not reciprocate.
“He asked me for explicit pictures, and I refused,” she says. “The next thing I knew, he was sending me AI-generated nude images of me. I was shocked and very angry. I didn’t even know how to process it.”
When she confronted him, he dismissed the incident as a joke.
“He laughed it off, but I felt deeply violated,” Cindie says.
She blocked him immediately and cut off all communication, but the emotional impact lingered long after the interaction ended.
Public figures, too, have not been spared. During Tanzania’s 2025 election period, President Samia Suluhu Hassan was targeted with AI-generated and manipulated content that circulated widely on TikTok and other platforms.
The material, ranging from mocking caricatures to sexualised fabrications, was repeatedly resurfaced, keeping her in the public eye through humiliation rather than leadership.
Grok’s misuse triggered a global backlash. In January 2026, Malaysia and Indonesia temporarily blocked the tool, while Britain’s media regulator opened an investigation into X and French authorities reported the company to prosecutors. Several countries have since moved to criminalise so-called “nudification apps.”
Experts argue these incidents are not accidents, but the result of deliberate design and business choices.
Platforms often frame AI-enabled abuse as the work of a few rogue users, but tools like Grok were built to manipulate images of real people, with meaningful safeguards introduced only after public outrage forced a response.
Some of the most powerful, and easily abused, features were also placed behind paid subscriptions, quietly turning access to harm into a premium product.
In Kenya’s high-engagement digital environment, where online activity spikes around elections, football seasons and viral moments, shocking or sexualised content keeps users scrolling longer, feeding advertising revenue, data extraction and subscription sales.
Once such content begins to trend, algorithms often amplify it further, allowing platforms to profit from virality even when that attention is driven by gendered abuse.
For Brian Omwenga, an AI expert with the Tech Innovators Network (ThiNK), incidents involving tools like Grok raise deeper questions about how artificial intelligence is built, governed and held accountable.
“What we are seeing goes to the heart of what we define as safe and trustworthy AI,” Omwenga explains.
“There is also the issue of algorithmic bias, the bias introduced by the person who ultimately defines the moral framework of the system, whether that is the owner or the developer.”
When an AI system reflects the ethical blind spots or moral flexibility of its creators, Omwenga says, it does more than mirror societal harm; it can legitimise it.
“You end up with a situation where someone decides it is acceptable for AI to undress women, so the system is allowed to do exactly that,” he says.
Through ThiNK, Omwenga has been pushing for responsible AI development anchored in accountability and traceability. One of the network’s core principles is that users should not be able to carry out serious or harmful actions on digital platforms without being identifiable.
“Traceability matters,” he says. “We also advocate for conformity frameworks — checks that assess safety, trustworthiness and accountability before AI tools are deployed, not after they cause harm.”
Kenya is often described as a continental leader in AI policy and adoption, but Omwenga notes that several African countries had already developed AI frameworks before Kenya formalised its own strategy.
In March 2025, the government launched the National AI Strategy 2025–2030, a policy blueprint aimed at positioning Kenya as a regional AI hub while balancing innovation, economic growth and ethical governance.
The strategy explicitly acknowledges emerging risks, including misuse and digital harm.
Still, Omwenga warns that policy alone will not solve the problem.
“Yes, this is progress,” he says. “But regulation on its own is not enough. If we regulate blindly, we risk stifling innovation. At the same time, doing nothing is not an option.”
He argues that developers and platform owners must either commit to meaningful self-regulation or accept oversight when clear breaches occur.
“Through our community of practice, we are trying to define what African AI ethics should look like from a bottom-up, expert-driven perspective,” he says.
“If we fail to create a safe, responsible and trustworthy AI environment, the consequences will affect everyone, including our children.”
Following widespread outrage over Grok being used to generate sexualised and exploitative images, X issued a public safety commitment reaffirming its zero-tolerance policy on child sexual exploitation, non-consensual nudity and unwanted sexual content.
The platform said it removes high-priority violative content, including child sexual abuse material, takes action against offending accounts and reports serious cases to law enforcement where required.
X and its AI developer, xAI, also announced new safeguards, including restrictions preventing Grok from editing images of real people into revealing clothing such as bikinis. Image creation and editing through Grok on X are now limited to paid subscribers globally, a move the company says improves accountability, alongside geoblocking measures in jurisdictions where such content is illegal.
For Maureen Oduor of The African Women’s Development and Communication Network (FEMNET), AI-enabled abuse is part of a broader pattern of misogyny amplified by technology.
Social media platforms and new technologies are increasingly used to target women and girls through coordinated online attacks, sexualised insults, and rape and death threats. These attacks intensify around elections or moments of feminist advocacy, and are intended to push women out of digital civic spaces.
“Technology is not neutral,” Oduor says. “These digital harms reflect and reinforce existing patriarchal power relations.”
A 2025 report by Women Advocates Research & Documentation Centre (WARDC), UN Women and FIDA found that 99.3 per cent of women and girls in Kenya had experienced technology-facilitated violence.
Another UNFPA study focusing on higher learning institutions in Nairobi found that nearly nine in ten students had experienced or witnessed such abuse.
Heightened risk
Young women, students, those without legal or economic power, and women outside Nairobi face heightened risk, especially since abuse in Kiswahili, Sheng or other local languages is often missed by moderation systems.
Kenya’s legal framework, including the Computer Misuse and Cybercrimes Act, the Data Protection Act and the Sexual Offences Act, offers avenues for redress in cases of online abuse.
Under Section 27 of the Computer Misuse and Cybercrimes Act, cyber harassment is a criminal offence punishable by a fine of up to Sh20 million, imprisonment of up to 10 years, or both.
But enforcement is slow and digital evidence disappears quickly. Many survivors withdraw to protect their dignity rather than pursue lengthy legal processes.
While the scale of technology-facilitated gender-based violence (TFGBV) can feel overwhelming, legal practitioners say survivors are not without options.
At FIDA-Kenya, cases involving digital abuse, including AI-manipulated images, online harassment and non-consensual sharing of intimate content, are increasingly finding their way into legal aid clinics.
Dennis Otieno-Obor, Senior Legal Counsel in FIDA-Kenya’s Access to Justice Department, says reporting TFGBV follows the same process as other forms of gender-based violence.
“The first step is to report the matter at the nearest police station and obtain an occurrence number,” he explains. “The police will then ask for evidence showing that the violence occurred through a digital platform.”
Such evidence, he says, is critical. Screenshots of abusive messages, manipulated images, links to websites where the content appears and records showing how the material was shared are often the backbone of an investigation.
Digital evidence
Where content is sent directly to a survivor’s phone, those messages should be preserved immediately.
“One of the biggest challenges is that digital evidence disappears very fast,” Otieno-Obor notes.
“Messages can be deleted within hours. So survivors are advised to take screenshots immediately, ensuring the image shows a clear timestamp, the source and the context of the violation.”
In cases where content is hosted across multiple platforms or linked through different websites, documenting the digital trail becomes even more important.
This, he says, helps investigators understand how the content moved from one platform to another, particularly where phishing or mirror sites are involved.
Beyond reporting, FIDA-Kenya approaches TFGBV cases from two fronts: legal and psychosocial.
“We do not look at these cases as purely legal,” Otieno-Obor says. “There is a strong psychosocial component because many survivors experience deep emotional and mental distress, especially where their dignity has been violated through sexualised or nude images.”
Survivors who seek help through FIDA-Kenya are offered counselling to address trauma, anxiety and fear, alongside legal guidance on what to expect as their case progresses through the justice system.
Where matters proceed to court, FIDA-Kenya can support survivors either by holding a watching brief or by representing them as counsel under the Victim Protection Act.
Although TFGBV cases are slowly entering the legal system, Otieno-Obor admits that very few have gone to full conclusion.
“In many cases involving non-consensual nudity, survivors choose to settle quietly outside the legal process,” he says. “The issue of dignity weighs heavily. Many do not want prolonged public exposure, even if the law is on their side.”
As a result, compensation and precedent-setting judgments remain rare, limiting opportunities to test the strength of existing laws against emerging digital harms.
Where AI-enabled abuse involves companies or platforms based outside Kenya, Otieno-Obor says the High Court offers a viable route.
“The High Court of Kenya has original and unlimited jurisdiction,” he explains. “This allows it to issue orders that can be enforced beyond Kenya’s borders, including declarations of illegality, compensation claims or injunctions against foreign companies.”
Such cases, he adds, are better suited for the High Court than lower courts due to the complexity and cross-border nature of digital platforms.
Despite existing legal frameworks, FIDA-Kenya believes Kenya is still grappling with TFGBV largely at a theoretical level.
“Our laws are only tested when cases move through the system to conclusion,” Otieno-Obor says. “The challenge is that many survivors never reach that point. Content disappears, evidence is lost, or victims withdraw due to fear, stigma or fatigue.”
This silence, he warns, makes it harder to identify legal gaps or push for reforms grounded in lived experience. FIDA-Kenya has yet to file a dedicated petition on TFGBV, but Otieno-Obor says this could change as more survivors come forward.
“We are hopeful that with greater awareness and support, more people will gain the courage to report,” he says.
“It is only through real cases that the judiciary, lawmakers and society can fully understand where the law works and where it fails.”
As AI-driven abuse becomes more sophisticated and harder to trace, Otieno-Obor stresses that early documentation, timely reporting and survivor-centred support remain the strongest tools currently available.
“Technology is moving fast,” he says. “But for now, evidence, courage and support systems are what give survivors a fighting chance.”
Still, women are not simply retreating from digital spaces.
Many are actively pushing back, documenting abuse before it disappears, locking down their online privacy, watermarking personal images and leaning on trusted networks when platforms are slow to respond.
Informal support systems, from campus groups to closed WhatsApp circles, have become critical spaces where survivors share information, warning each other about emerging threats and finding solidarity.
Advocates and researchers argue that responding to harm after it happens is not enough.
An African feminist approach to AI governance calls for systems built around dignity, consent, care and accountability, asking whose values shape these technologies and whose realities are routinely ignored.
Rather than treating women’s safety as an afterthought, this vision insists that harm prevention must be a core design principle from the start.
This thinking is reflected in Towards Afro-feminist AI, a governance handbook developed by African researchers that argues for AI systems grounded in local contexts and lived experiences.
Initiatives such as FemAI Africa are also advocating for gender-responsive AI policies across the continent, while global forums like the Internet Governance Forum have increasingly centred discussions on AI and gender justice.
Rights activists
Kenyan and regional leaders, including technologist Angela Oduor Lungati and digital rights activists such as Sandra Kwikiriza, are pushing for technology ecosystems that prioritise inclusion, accountability and care, before harm occurs, not after.
“We need digital literacy,” says Neema Masitsa, a communications advisor at the Kenya ICT Action Network (KICTANet). “People need to understand what AI is, how it works and how it can be misused.”
She notes that one major challenge is language.
Many AI moderation systems struggle to detect abuse in Kiswahili, Sheng and other local expressions.
“Some of the most harmful insults lose their severity when translated into English,” she says.
KICTANet is working with platforms to train systems to recognise abuse in Kenyan contexts, but Masitsa argues that relying on foreign platforms is not enough.
“We don’t have X Kenya or Facebook Kenya,” she says. “We need our own systems to respond faster.”
For Whitney, the stakes are personal. AI can amplify voices—but without accountability, it can also silence them.
This article was produced as part of the Gender+AI Reporting Fellowship, with support from the Africa Women’s Journalism Project (AWJP) in partnership with DW Akademie.