The feeling of anxiety engulfs James Oyange every time he attends a football match. The screams and cheers from fans bring back dark memories of the content he moderated for TikTok.
He had been employed as a content moderator by Majorel, a business process outsourcing company in Nairobi. His job entailed weeding out harmful content such as child pornography, bestiality, necrophilia (sexual attraction to corpses) and self-harm, among others. The purpose of the job was to shield TikTok users from harm.
“The work I did as a moderator was distressing. At first, I did not know what I was getting myself into. I had initially applied for the position of a Swahili-speaking call centre agent. It was only at the interview stage that I was introduced to content moderation, not knowing what it was. I was so afraid to go to sleep after moderating content because there is something about the human mind that puts you in the position of a video that you moderated.”
According to a recent report from the Reuters Institute, Kenya is a global leader in TikTok usage. The report indicates that 54 per cent of Kenyans surveyed use TikTok for diverse purposes, including creating and sharing videos.
Oyange says that as a result of the content he moderated for the application, he has developed insomnia and post-traumatic stress disorder (PTSD). He claims that Majorel, the company he worked for, did not provide sufficient mental health support.
In a rebuttal, Majorel Kenya CEO Sven De Cauter told The Standard that his company provided moderators with mental health therapists licensed by the Ministry of Health.
“There is a whole framework that supports the wellbeing of agents and this is not something that starts while they are in production but when they are going through the interview. They do tests, validations and pre-screening. There is a whole support system. There is a mental assessment that is done to make sure the moderators are able to cope,” said Cauter.
He said most content is first screened by an Artificial Intelligence (AI) system before it is passed to human moderators.
“It is details of what AI cannot distinguish that come to a moderator so as to have the human overview and cross-check as well.”
Richard Mathenge, 38, was a content moderator for OpenAI’s ChatGPT. At the height of the artificial intelligence boom, Samasource, a US-based business process outsourcing company, contracted him as a content moderator.
In 2021, Mathenge was tasked with leading and training a team of moderators who would sift through and clean up content for OpenAI’s ChatGPT. For months, Mathenge and his team pored over reams of violent and explicit content.
“The targets were so exaggerated. We were meant to do about 250 to 350 pages of content in one day. It was not brief, it was very long and very disturbing. Most of it was child pornography,” he said.
Having moderated so much explicit content involving children, Mathenge says he found himself projecting paranoid narratives onto the children around him.
“It was very traumatic just coming back home and seeing children running around. You remember the content that you read and try to reconcile and ask yourself, what if this content was about these children,” he narrated.
For Kings Korodi, a former ChatGPT moderator, the content that tormented him most involved children.
“When you see a young one molested, it eats your inner soul. As time went by, I started dreaming about things I was reading. At times you are sleeping and this thing comes to mind so you wake up. You start praying to get over it. That kind of content changes you as a person because this is not what you are used to. You fall into depression without knowing. I bought antidepressants that could fit my pocket,” Korodi told The Standard.
Mathenge and three other content moderators who worked for Samasource have since petitioned the National Assembly to investigate the welfare and conditions of service of young Kenyans working for big tech companies.
In their petition, they want the National Assembly to enact legislation regulating the outsourcing of harmful technology work and protecting workers engaged through such arrangements.
“Bringing investors to the country to deal with the issue of unemployment for the youth is a good thing. But that needs a conducive environment for the people to work in. It is not right. If you do both the good and right thing, then we will have no reason to fight,” said Mathenge.
OpenAI did not respond to our emails for this story.
In a statement, Samasource told The Standard it had exited content moderation in March 2023 and does not currently work with OpenAI.
“The company did not complete its pilot project with OpenAI and it does not currently work on ChatGPT. We would not be able to speak to any measures OpenAI is taking for ChatGPT moderators as Samasource is not involved in that,” a Samasource spokesperson said.
Kenya continues to be a global hotspot for tech jobs. A 2021 survey by the Kenya Private Sector Alliance (KEPSA) showed that at least 1.2 million Kenyans work online.
All that is needed to become a content moderator is good language skills and basic internet proficiency, making the role a common entry point for graduates transitioning into the job market.
Mercy Mutemi, a Nairobi-based digital rights lawyer, has been advocating for better jobs from big tech for Kenyan youth. She is currently representing Oyange and other moderators in court.
“On paper, it doesn’t seem like anything harmful, it looks like something prestigious: I am doing work for Meta, for TikTok. But in the real sense, this work can damage you,” Mutemi told The Standard.
Even as the government banks on the digital revolution to create one million jobs for Kenyans, Mutemi says it is important for Parliament to look into the quality of the jobs being offered to the youth.
More than 200 former content moderators are suing Facebook and Samasource at a Nairobi court, seeking financial compensation over alleged poor working conditions, including insufficient mental health support and low wages.
If the Kenyan moderators win the case, it would not be the first time Facebook has paid out over such claims.
In 2021, Facebook agreed to pay a settlement of Sh6 billion in a court case in California. In the suit, the moderators said the company failed to protect workers tasked with moderating disturbing content from the mental health impacts of the job.
Caroline Gichie, a clinical psychologist at the Mental Hub, said watching violent content can result in trauma, which could cause mental disturbances.
“Seeing distressing content for many days on end can cause Post-Traumatic Stress Disorder (PTSD), which is a direct result of trauma. There are, however, other coping mechanisms that determine whether it will develop into a full-blown disorder or not. When one is at PTSD level, there are symptoms that can present themselves, such as flashbacks, physical symptoms such as heart palpitations, paranoia about other human beings, nightmares, or a heightened fight-or-flight response,” Gichie said.
Musa Nyandusi, the Secretary for Occupational Safety and Health at the Ministry of Labour, said employers have a responsibility to provide a safe working environment for their employees.
Nyandusi said the government had recognised some loopholes in the occupational safety and health policy.
“We have recognised the big challenge we are facing: new challenges are emerging in the workplace,” he said.