Recently, Google employee Blake Lemoine made headlines when he claimed that the Artificial Intelligence (AI) programme, Language Model for Dialogue Applications (LaMDA), had become sentient, gaining human-like qualities of consciousness such as intuition, self-awareness and emotions. Scientists and philosophers have long speculated about the dangers sentient AI poses, and the latest allegations have only fuelled this anxiety.
Emerging technologies such as AI and machine learning are present in most human interactions and experiences. They have transformed how corporations, governments and societies communicate, access information, and render and receive services. These technologies are in operation in factories, hospitals, research facilities, transport systems, e-commerce and finance, social media platforms, communications infrastructure, weapons, and security systems.
Countries such as the USA, China, France, Israel, Russia and the United Kingdom have access to Autonomous Weapons Systems that, once activated, can select and attack targets, destroying, killing and wounding without human control. This demands a re-evaluation of the international laws and standards regulating warfare and the use of force in law enforcement, and it suggests that future wars may be fundamentally different.
Governments and corporations are increasingly turning to AI to make crucial decisions that affect individuals' lives, well-being and rights in critical areas such as criminal justice, crime prevention, healthcare and social welfare, among others. In Kenya, mobile loan systems use AI to assess a person's creditworthiness, determining how much money one can access. Huduma Namba is poised to transform how our information is processed, which affects access to social services.
Contrary to popular belief, AI and machine learning are not infallible; they are often inaccurate and tend to reproduce human bias.
Because of their complexity, autonomy, opacity and vulnerability, these systems are hard to regulate. First, their complexity makes it difficult to pinpoint where decisions are made, the actors involved and the data relied upon, reducing the predictability and understanding of outcomes. Second, increased autonomy makes it challenging to allocate liability in the traditional sense. For instance, who would be liable in the case of an accident involving a self-driving car that uses components and software from different firms?
Third, complex algorithms lack transparency about the criteria used in decision-making, which makes it difficult to interrogate and understand malfunctions. Last, these systems rely on data at every stage; without it, they either fail to function or malfunction. Moreover, they are highly vulnerable to data breaches and cyber-attacks.
Because AI, machine learning and other emerging technologies depend on massive datasets processed using complex algorithms that automatically learn as they progressively interpret and process more data, humans often do not understand their results, which are also unforeseeable. This creates the 'black box' effect, where no human, including the designers of such systems, understands how the many variables interact to produce a decision. This is dangerous because many falsely assume that the results are accurate, impartial and science-based.
The proliferation of emerging technologies has raised concern over many human rights, including the rights to privacy, freedom of opinion, freedom of expression, non-discrimination and a fair trial. To continue reaping the benefits of AI while safeguarding rights, governments, tech companies and other stakeholders must enact minimum standards that bind all actors to clear ethical and human rights obligations.