Early this year, the Government unveiled a task force to explore the application of blockchain technology and artificial intelligence and identify regulatory gaps that should be addressed.
The 11-member task force chaired by Bitange Ndemo last month completed a 192-page report that now heads to the Ministry of Information and Communication.
The move by the Government was probably informed by the growing use of artificial intelligence (AI) and machine learning systems in everyday life.
The trend has sparked concerns across the world, especially in developed countries where the technologies are still in their fledgling stages.
Technology leaders such as Bill Gates have in the past warned that humans should be more concerned about the threat posed by AI.
The head of the British Science Association recently warned that artificial intelligence is a more urgent threat to humanity than antibiotic resistance, climate change or terrorism.
While much of the depiction of AI is characterised by flashy technology such as robots and autonomous cars, billions of people around the world already use one or more forms of AI every day, often without realising it.
Online shopping site Amazon, social media feeds on Facebook and video-streaming service Netflix, for example, all rely on complex algorithms that constantly learn from user behaviour to surface content users will find appealing, keeping them engaged with the platform for longer.
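Stripped of scale, such a recommender boils down to a scoring loop: a model trained on past behaviour assigns each candidate item a predicted engagement score, and the feed shows the highest-scoring items first. A minimal Python sketch, with all feature names and weights invented for illustration rather than drawn from any real platform:

# Hypothetical sketch of feed ranking: score each candidate item with a
# simple learned preference model, then show the top items first.
# Feature names and weights are illustrative, not any platform's actual system.

def engagement_score(item, weights):
    # Weighted sum of the item's features under the learned weights.
    return sum(weights[f] * v for f, v in item["features"].items())

weights = {"matches_watch_history": 2.0, "friend_interacted": 1.5, "recency": 0.5}

candidates = [
    {"id": "video_a", "features": {"matches_watch_history": 1.0, "friend_interacted": 0.0, "recency": 0.9}},
    {"id": "video_b", "features": {"matches_watch_history": 0.2, "friend_interacted": 1.0, "recency": 0.4}},
]

feed = sorted(candidates, key=lambda it: engagement_score(it, weights), reverse=True)
print([item["id"] for item in feed])  # highest predicted engagement first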
Ride-hailing apps such as Uber also rely on combining numerous data feeds from users to estimate fares, while Google Maps' traffic monitoring system draws from an even larger pool of geospatial data to predict vehicular density and suggest the fastest routes.
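In essence, such a fare estimate combines a distance-and-time formula with a demand signal computed from live data. The Python sketch below is a hypothetical illustration of the idea; the rates, cap and demand formula are invented, not any operator's actual pricing:

# Hypothetical fare estimate combining live data feeds, in the spirit of
# ride-hailing pricing. All constants and the formula are illustrative only.

def estimate_fare(distance_km, duration_min, active_riders, available_drivers):
    base, per_km, per_min = 100.0, 35.0, 4.0      # illustrative rates (KSh)
    demand_ratio = active_riders / max(available_drivers, 1)
    surge = min(max(demand_ratio, 1.0), 3.0)      # cap the surge multiplier
    return (base + per_km * distance_km + per_min * duration_min) * surge

print(round(estimate_fare(12.5, 38, 420, 300)))   # e.g. a cross-town trip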
It is thus no coincidence that warnings of the dangers of AI have grown in recent years as these applications become more commonplace.
Social media companies including Facebook, Twitter and Google today find themselves in the crosshairs of regulators around the world who question their role in spreading disinformation in major elections.
Google CEO Sundar Pichai is in fact expected to testify before the US House Judiciary Committee today to respond in part to reports of bias in the company's algorithms.
More scrutiny
Facebook is still dealing with the fallout of the Cambridge Analytica scandal, in which data from tens of millions of people was used to develop hyper-targeted ads that exploited users' psychographic traits.
These cases have fuelled calls from policymakers for greater scrutiny of AI systems that are finding ever wider application in everyday life.
Kenya’s rapid adoption of mobile technology and easy access to high-speed broadband has seen the country emerge as one of the leading African economies where the use of artificial intelligence systems is quickly taking root.
Millions of Kenyans use Facebook, Twitter and Google every day while fintechs (financial technology firms) employ data analytics and machine learning to determine the risk of lending to each individual mobile borrower.
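At its simplest, such a credit-scoring system is a classifier that maps behavioural features to a probability of default. The Python sketch below, using the open-source scikit-learn library on invented synthetic data, shows the general shape of the approach; the features and figures are hypothetical, not any lender's actual model:

# Hedged sketch: scoring default risk from mobile-money behaviour with
# logistic regression. Features, data and labels are entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: monthly transaction count, avg balance (KSh '000), past defaults
X = rng.normal(loc=[40, 5, 0.2], scale=[15, 3, 0.4], size=(500, 3))
y = (X[:, 2] + rng.normal(0, 0.3, 500) > 0.5).astype(int)  # synthetic label

model = LogisticRegression().fit(X, y)
applicant = [[25, 2.5, 1.0]]                 # hypothetical new borrower
print(model.predict_proba(applicant)[0][1])  # estimated default probability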
Last week, Safaricom launched Zuri, an AI chatbot that its millions of subscribers can interact with on Facebook Messenger and Telegram.
Mobile subscribers can ask Zuri for help with airtime top-ups, checking M-Pesa and airtime balances, and reversing transactions, almost as seamlessly as with a real customer care agent.
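While production chatbots typically rely on trained language models, the basic mechanics can be illustrated with a simple keyword-based intent router. The Python sketch below is purely hypothetical; the commands and replies are invented and bear no relation to how Zuri actually works:

# Illustrative keyword-based intent router, a far simpler mechanism than a
# production chatbot. Intents and replies are invented placeholders.

INTENTS = {
    "balance": lambda: "Your M-Pesa balance is KSh 1,250.",
    "top up": lambda: "How much airtime would you like to buy?",
    "reverse": lambda: "Please share the transaction code to reverse.",
}

def reply(message):
    text = message.lower()
    for keyword, handler in INTENTS.items():
        if keyword in text:
            return handler()
    return "Sorry, I did not understand. Try 'balance', 'top up' or 'reverse'."

print(reply("Can you check my balance?"))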
Experts now warn that the regulatory approach adopted in policing sub-sectors in the telecommunications industry is not adequate for artificial intelligence.
The AI Now Institute, a research unit at New York University led by Microsoft researcher Kate Crawford and Google researcher Meredith Whittaker, says there is a growing accountability gap in AI.
“The technology scandals of 2018 have shown that the gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” states AI Now in its 2018 report.
The report argues that the lack of governmental regulation, insufficient governance structures in tech firms and power asymmetries between companies and the people they serve are widening the gap.
“These gaps are producing growing concern about bias, discrimination, due process, liability, and overall responsibility for harm,” says the report in part.
Part of the blame for these gaps is laid on the corporate secrecy that is said to dominate the AI industry.
“Many of the fundamental building blocks required to understand AI systems and to ensure certain forms of accountability – from training data, to data models, to the code dictating algorithmic functions, to implementation guidelines and software, to the business decisions that directed design and development – are rarely accessible to review, hidden by corporate secrecy laws,” say the researchers in part.
This is part of the reason regulators are often steps behind technology firms and thus have a limited capacity to prescribe remedies for problems they have little insight into.
Only after Facebook revealed that Russian operatives had bought misleading ads and created false pages to manipulate online debate did the scale of Russian involvement in the 2016 US election become clear.
“Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit and monitor these technologies by domain,” explains the study in part.
The study further says that while sectors such as health and education have developed their own regulatory frameworks over time, a single overarching approach to AI regulation might not work.
“A national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation,” notes the study.
This means that rather than have the Communications Authority regulate AI applications across all sectors in Kenya, policymakers in each sector should establish regulatory codes for AI use in their respective fields.
The report also says the Government and technology companies should ensure users are not denied the right to reject the application of these technologies.