Big test for tech giants as court hears Facebook's landmark case

Earlier this year, an investigative report found that social networking giant Facebook, owned by Meta, was allowing hate speech and misinformation to snowball into violence in Ethiopia's ongoing ethnic clashes, where hundreds continue to lose their lives and thousands more are displaced.

Analysis of social media content by various organisations, including the European Institute of Peace (EIP) and the International Committee to Protect Journalists (ICPJ), found cases of fake news weaponised against polarised communities, derailing debate and stoking more violence in the months before last year's regional elections.

"Analysis of each fake news story's sources identified social media as responsible for 73 per cent of the sample examples," said the EIP in a report. "Of these, just under 80 per cent appeared to originate on Facebook."

The vast majority of samples in the survey were in non-English languages and were targeted at rural-dwelling populations that have little access to traditional and alternative sources of information.

"The vast majority of the fake news examples were in Amharic (81 per cent) and English (16 per cent)," explained the research findings in part. "Messaging in these languages is like to capture the largest possible audience as both Amharic and English are widely spoken and likely to be second languages of non-native speakers."

The report echoes criticism levelled at the social media giant for years from other countries, which have condemned Facebook's double standards that allow powerful figures and governments to rally their support bases using disinformation and to advance autocratic agendas against sections of their populations.

Last December, a class action suit was brought against Facebook in the United Kingdom and the US for the company's role in amplifying hate speech and calls for genocide against Rohingya Muslims in Myanmar.

Facebook's parent company Meta faces more than Sh1.7 billion in claims from families that suffered human rights abuses facilitated by the social media platform.

The UN's Independent Investigative Mechanism for Myanmar found that politicians and military leaders in Myanmar posted, sponsored and spread incendiary content on Facebook, inciting hatred and violence against the Rohingya.

Facebook itself acknowledged these failings following an independent human rights impact assessment of the role the platform played in Myanmar.

"The report concludes that, prior to this year, we weren't doing enough to help prevent our platform from being used to foment division and incite offline violence," said Facebook in a post published in November 2018. "We agree that we can and should do more."

Three years later, however, Facebook is yet to address the concerns around its skewed algorithms, which favour extreme and incendiary content to boost engagement and drive profits.

Facebook is not alone. With just seven weeks to Kenya's general elections, concern is mounting that the country's regulators are unable or unwilling to rein in social media giants, including TikTok, Twitter and YouTube, that have been shown to turbo-charge the spread of disinformation and derail healthy political debate.

A report by the Mozilla Foundation released earlier this month found that dozens of videos violating Kenya's National Cohesion and Integration Act and TikTok's own community guidelines were being spread to millions of users unchecked.

"We found content on the platform which, in the context of Kenya’s electoral history, is problematic and could fall into the category of incitement and hate speech along ethnic lines," explains Odanga Madung, a fellow at the Mozilla Foundation and author of the report.

"Many of the videos we reviewed contained explicit threats of ethnic violence especially targeting members of ethnic communities that are based within the Rift Valley region," explains Madung.

The research was compiled after a review of more than 130 videos from 33 accounts that had been viewed more than four million times.

The videos use images and captions that play on the narratives of the 2007/2008 post-election violence that saw hundreds of people lose their lives and many more displaced.

Prior to this latest report, Madung worked on another study that found that Spain-based organisation CitizenGO, which is reported to have links to far-right groups, sponsored and coordinated disinformation campaigns on Twitter.

The campaigns labelled the Reproductive Healthcare Bill, 2019 the "Abortion Bill" and coordinated attacks against members of parliament, activists and journalists, using trolls and memes to spread disinformation.

Despite the growing evidence that the lack of adequate controls is having a damaging effect on the country's public debate and could distort the upcoming general elections, Kenyan regulators appear unable or unwilling to act.

Firstly, tech giants apply different terms of service to users in Africa than to those in Europe and the US. Secondly, despite being used by more than three billion people in over 100 languages, Facebook dedicates 87 per cent of its moderation efforts to posts made in English, and the company has in the past admitted that its capacity to police disinformation and hate speech in other languages is limited.

"We are constantly looking at how we can uphold and balance between allowing for free expression and at the same time safety of other users," explained Mercy Ndegwa, Public Policy Director for East and Horn of Africa at Meta.

"In the case of the Kenyan elections, we have scaled up our capacity in trying to make sure we have local teams that have local understanding," she said. "Swahili and English tend to be prominently used on our platforms and we have teams that look at and escalate content that could be problematic." 

Tomorrow, the High Court in Kenya will hear oral submissions from Meta in a case where Facebook, alongside Samasource Kenya, a third-party contractor hired to provide content moderation services, is being sued over alleged violations of labour rights.

The legal demand by Nzili and Sumbi Advocates is drawn on behalf of Daniel Motaung, a former content moderator allegedly fired by Samasource in violation of his labour rights.

The demand letter, seen by Weekend Business, accuses Facebook and Samasource of luring Mr Motaung and his colleagues into content moderation jobs without informing them of the nature of the posts they would be moderating.

"The advertisement misrepresented the role, and caused our client to understand that it was administrative," explains the demand letter. "No disclosure was given that the content moderators being sought were to work as Facebook content moderators."

Samasource and Meta are also accused of neglecting the mental well-being of the moderators employed to review and remove content that is often graphic and disturbing from the platform.

"Sama and Meta failed to prepare our client for the kind of job he was to do and its effects," states the demand letter. "The first video he remembers moderating was of a beheading. Up to that point, no psychological support had been offered to him in advance."

Meta will be relying on a defence long used by tech giants in developing countries whenever they are confronted with liability for their business practices.

The company has argued that the Kenyan High Court does not have jurisdiction to determine the case since, it claims, it does not trade in Kenya.

This argument has been used before by Google, when it faced a tax challenge from the Kenya Revenue Authority (KRA), and by Uber, when Kenyan drivers sued to be recognised as the company's workers rather than contractors.

However, a precedent-setting case last year could bring an end to this line of defence.

Uber Kenya Ltd has sought to settle a five-year case with its drivers in Kenya out of court after the High Court ruled that the company can be sued by drivers for reducing fares on its ride-hailing platform.

Milimani Commercial Court Judge Francis Tuiyott declined an application by Uber Kenya to have its name struck out of a suit by Uber drivers who accuse the tech giant of breach of contract over the introduction of discounts on the popular app.

Uber Kenya's failed argument was that it was a different entity from Uber B.V., which signed the drivers onto its platform, and as such was not a party to the agreement between the two or liable for any breach of contract.

The taxi drivers and operators, however, argued that Uber Kenya presented itself as the local representative of Uber, and that other details, such as email exchanges between the parties, gave the impression that they were related entities.

“What I hear the plaintiffs to be saying is that the defendant, Uber Kenya, presented itself as the party to whom the plaintiff contracted,” stated Justice Tuiyott in his ruling. “That, in so far as Kenya is concerned, it is Uber Kenya who has been enforcing the terms of the Contract.”