Big Tech on the spot amid rise in digital violence
Sci & Tech | By Maryann Muganda | Apr 15, 2026
A stark glimpse of online harassment behind the screen. [File Courtesy]
Imagine logging into your social media account, only to find a barrage of threatening messages, hateful comments, and strangers attempting to access your personal information.
Your phone buzzes, unread messages and WhatsApp notifications piling up faster than you can process.
This was the reality for 23-year-old social media manager Flavier Momanyi. Her only “mistake” was posting a TikTok video of herself dancing.
“It was just a simple video of me dancing,” she recalls. “But someone took a screenshot and shared it in school political groups. That is when everything started.”
At the time, Momanyi was a student leader at Maseno University. The online harassment was entangled with campus politics. Her classmates, some of whom were her political opponents, weaponised the video against her.
“They used to troll me even in the class WhatsApp group,” she says. “Some criticised my leadership; others just attacked me for no reason.”
The trolling escalated: memes and stickers made from her images, along with insults in English, Kiswahili, and Sheng, circulated widely.
“It was so bad,” she says. “At some point, I used to ask myself, what did I do to them?”
“It was purely online, but it felt like it was everywhere,” she explains. “Even when I walked around campus, people recognised me. They would look at me differently.”
The psychological toll was immediate. “At first, it makes you question yourself, your sanity,” she says. “You start wondering if you are the problem.”
The abuse was largely driven by men, she notes, though a few women also participated. Some attackers went as far as comparing her to animals, a dehumanising tactic often used in online abuse.
Momanyi’s experience is far from unique. A 2024 United Nations Population Fund (UNFPA) rapid study found that nearly 90 per cent of young adults enrolled in Nairobi’s tertiary institutions have witnessed technology-facilitated gender-based violence, with 39 per cent having experienced it personally.
It also found that female students are disproportionately affected, with 64.4 per cent having experienced at least one type of online violence, compared with 35.5 per cent of male students.
Similar patterns are observed globally, with research by the International Research and Exchanges Board (IREX) showing that more than 85 per cent of women have witnessed online abuse, while 66 per cent have experienced it themselves.
Fridah Nyaga, executive director of the Coalition on Violence Against Women (COVAW), recounts a case of a university student who had shared intimate photos with her boyfriend. When they broke up, the images were used against her.
“He circulated the photos in class WhatsApp groups,” Nyaga says. “By the time she came to us, she was in a very bad mental state.”
The young woman, who had been on a scholarship, dropped out of university due to the trauma and public humiliation. The threats did not stop even after the images were shared.
“These are not isolated cases,” Nyaga explains. “They happen in multiple forms at the same time: cyberbullying, harassment, and image-based abuse.”
She adds that survivors often encounter stigma when they seek justice.
“When they go to report, they are asked questions like, ‘What were you wearing?’ or ‘Why don’t you take down your photo?’”
Such responses reflect the normalisation of online violence even among institutions meant to protect survivors.
“Most survivors who try to report TFGBV (technology-facilitated gender-based violence) face dismissive attitudes at police stations. They are often treated as if their cases are less important than others,” explains Dr Mugambi Laibuta, a data governance professional and advocate of the High Court.
Even when cases are escalated to specialised units like the DCI’s cybercrime division, survivors still encounter systems that lack awareness and capacity to handle digital abuse.
“The evidence is electronic; the platforms are digital,” he says. “Police officers, prosecutors, and even judicial officers need to know how to collect, interpret, and present such evidence. Right now, they often don’t.”
“It’s not just TFGBV,” Dr Laibuta emphasises. “This is a systemic problem affecting how criminal justice is administered across different crimes.”
While Kenya has laws, including the Computer Misuse and Cybercrimes Act, the Data Protection Act, and the Sexual Offences Act, gaps hinder justice.
“Kenya is very good at documenting laws—we have frameworks, policies, and legislation, but implementation is the problem. There is a lack of political will, especially when it comes to gender issues,” says Carol Werunga, senior programme manager at Urgent Action Fund-Africa.
“As a country, we do not even have a legal definition of TFGBV,” Nyaga notes. “Yet the problem keeps evolving.”
AI is creating a new wave of image-based sexual abuse.
“You can post a normal photo online,” Nyaga says, “and someone somewhere uses it to generate explicit content without your knowledge.”
Much of the abuse takes place on platforms owned by global technology companies such as Meta, TikTok, and X. These platforms design the systems, algorithms, and moderation tools that shape what content is amplified, ignored, or removed.
Moderation and reporting tools exist, but they often fall short in contexts like Kenya, where language, culture, and local dynamics are not fully understood. As a result, harmful content can spread rapidly, while survivors struggle to have it taken down or addressed in time.
In response, governments, feminists and digital rights organisations are now shifting focus from reaction to prevention, working to embed safety directly into digital systems.
Groups such as COVAW, IREX, and ARTICLE 19 are championing a “safety by design” approach. This model ensures that protections against abuse are integrated into platforms from the outset, rather than introduced after harm has already occurred.
One emerging solution is the development of a local-language lexicon, a structured dictionary of abusive terms used online.
“Many moderation systems fail because they do not understand local language,” Nyaga explains. “For example, Sheng or coded terms are used to harass women.”
By documenting local expressions, civil society groups are helping train AI systems to detect harmful content in context—moving beyond generic moderation tools that often miss culturally specific abuse.
Despite this, accountability remains limited. Technology companies set their own rules, and enforcement is often inconsistent, leaving gaps in protection.
Meta says its Community Standards prohibit bullying, harassment, and gender-based abuse and that it relies on AI and human reviewers to enforce these rules.
Without stronger regulation and clearer accountability frameworks, responsibility remains diffused, making it difficult to hold any single actor to account.
The risk, experts warn, is that as technology evolves, harm will continue to outpace the systems meant to prevent it, pushing more women out of digital spaces, silencing their voices and limiting their participation in public life.
Tom Osborn is CEO and co-founder of Shamiri Institute, which provides mental health support in schools for adolescents aged 12 to 24, an age group increasingly exposed to online harm. He sees potential in using AI to support survivors.
“There is a huge opportunity to use AI tools to connect young people to support, guide them through what they are experiencing, and help them understand their options,” he says.
Experts emphasise that survivors must remain at the centre of any response. COVAW runs a toll-free helpline offering counselling and guidance, ensuring survivors receive emotional support before navigating legal processes.
“If they want to pursue justice, which we always encourage, then they need the mental capacity to go through that journey,” says Nyaga.
Access to justice, however, remains a significant barrier, with high costs, lengthy court processes, and limited support forcing many survivors to abandon cases.
“Justice in Kenya is expensive,” says Nyaga. “We found that many cases are withdrawn not because survivors have found justice, but because they get tired.”
“To address this, we work with pro bono advocates who provide free legal aid services and walk with survivors until they access justice,” she explains. “Because if we do not do that, they will give up, and that only empowers the perpetrator.”
Addressing TFGBV also requires sustained funding, something many organisations say is increasingly uncertain.
Werunga points to a growing trend where organisations working on gender justice face financial constraints, with some funders scaling back or withdrawing support altogether.
Nevertheless, some donors, like Agence Française de Développement (AFD) in Kenya, which announced an €800,000 (Sh120 million) fund to tackle technology-facilitated gender-based violence in March, continue to support online safety initiatives.
“We cannot allow these alarming statistics to continue,” said Anne-Gaël Chapuis-Mirol, Country Director of Agence Française de Développement (AFD) in Kenya, while announcing the fund.
The funding, to be implemented by Urgent Action Fund-Africa and COVAW, is part of a broader €4 million initiative supporting feminist organisations across seven African countries.
This article was produced as part of the Gender+AI Reporting Fellowship, with support from the Africa Women’s Journalism Project (AWJP) in partnership with DW Akademie. The journalist used AI tools as research aids to review and summarise relevant policy and research documents and extract key statistics. All analysis, editorial decisions and final wording were done by the reporter, in line with The Standard Group’s editorial standards.