Artificial intelligence (AI) is celebrated as a transformative technology with the potential to revolutionise industries, enhance productivity, and address global challenges. However, this celebratory narrative often obscures the labour, exploitation, and harm endured by the people working behind the scenes to make AI functional. The case of Kenyan AI trainers—workers tasked with labelling, categorising, and curating data to train algorithms—exemplifies the dangers of adopting AI without addressing systemic inequities.
AI systems, from chatbots to image recognition software, rely heavily on labelled datasets. These datasets are annotated by an invisible workforce of human trainers, often based in the Global South, where labour is cheap. In Kenya, major tech companies have outsourced this labour to workers employed by intermediary firms. These individuals spend long hours tagging violent, explicit, or deeply disturbing content, including graphic videos, hate speech, and child exploitation imagery, to teach AI systems what to filter out.
Far from being a high-tech dream job, AI training in Kenya has turned into a nightmare for many workers. Reports have surfaced of inadequate mental health support, low wages, and unreasonable performance expectations. Despite the sensitivity of their work, Kenyan AI trainers earn as little as $2 an hour, well below what their counterparts in wealthier countries make. This is a grim reminder of how digital economies perpetuate historical patterns of colonial extraction and exploitation.
The emotional toll of moderating violent and explicit content is immense. Workers tasked with filtering graphic material often report symptoms of post-traumatic stress disorder, including flashbacks, anxiety, and depression. In some cases, they receive little or no counselling or mental health care to address the psychological harm they endure. This mirrors the broader dynamics of the global gig economy, where labour is precarious, undervalued, and outsourced to the most vulnerable.
In truth, then, like many aspects of futuristic technology, what is sold to us as fully automated depends on hidden labour to feel seamless to the end user. Unsurprisingly, the Kenyan government is responding to these allegations of severe trauma and underpayment by tabling a Bill in the Senate that would prevent workers from suing tech companies.
Kenya’s workforce is, once again, both exploited and unsupported. Beyond the mistreatment of workers sent abroad for blue-collar jobs, those who remain at home face comparable dangers in the work that they do.
The exploitation of Kenyan AI trainers reflects broader structural inequities in the global adoption of AI. While AI technologies promise efficiency and growth, they often do so by externalising costs onto marginalised communities. This dynamic is not limited to content moderation; similar patterns emerge in data collection for facial recognition systems, which disproportionately target communities in the Global South for biometric data gathering, often without proper consent. One such example, which caused a major uproar not long ago but was quickly forgotten, was the collection of eye scans by an unknown international company. The project went on for several months before being flagged, once youth started lining up to participate in exchange for Sh7,000.
Governments and organisations must step in to regulate the AI industry, ensuring fair wages, safe working conditions, and robust mental health support for data workers. Transparency is crucial: Tech companies must disclose where their data comes from, how it is processed, and who is involved in the labour-intensive stages of AI development. Ethical AI frameworks should prioritise the well-being of all stakeholders, including the hidden workforce that powers these systems.
Additionally, the global AI industry needs to recognise and address its complicity in perpetuating economic and social inequalities. AI should not only be efficient but also ethical, equitable, and humane. Until then, its promises of progress will remain overshadowed by the exploitation of the most vulnerable.
Ms Gitahi is an international lawyer