The World Health Organization (WHO) is releasing new guidelines on the ethics and governance of large multi-modal models (LMMs), a rapidly developing generative artificial intelligence (AI) technology with applications in the healthcare industry.
To ensure the proper use of LMMs to promote and protect population health, the guidance lays out over 40 recommendations for governments, tech companies and healthcare providers to consider.
LMMs can accept one or more types of data input, such as text, video and images, and generate diverse outputs that are not limited to the type of data fed in. They are distinct from other forms of AI in their ability to mimic human communication and to carry out tasks they were not explicitly programmed to perform.
LMMs have been adopted faster than any consumer application in history, with several platforms - including ChatGPT, Bard and Bert - entering the public's consciousness in 2023.
Dr Jeremy Farrar, WHO Chief Scientist, says, "Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate and use these technologies identify and fully account for the associated risks."
"We need transparent information and policies to manage the design, development and use of LMMs to achieve better health outcomes and overcome persisting health inequities."
The new WHO guidance outlines five broad applications of LMMs for health.
These include diagnosis and clinical care, such as responding to patients' written queries, and patient-guided use, such as investigating symptoms and treatment options.
LMMs can also help with administrative and clerical duties like recording and summarising patient visits in electronic health records.
In addition, they can be used in scientific research and drug development to help identify new compounds, as well as in medical and nursing education to provide trainees with simulated patient encounters.
Though LMMs are beginning to be used for specific health-related purposes, there are documented risks that they will produce statements that are false, inaccurate, biased or incomplete, which could harm people who rely on such information to make health decisions.
The guidelines also outline broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs. They can also encourage "automation bias" among patients and healthcare providers, whereby errors that would otherwise have been detected go unnoticed, or difficult choices are improperly delegated to an LMM.
Like other AI systems, LMMs are susceptible to cybersecurity threats that could jeopardise patient data, the reliability of these algorithms and the delivery of healthcare as a whole.
To create safe and effective LMMs, WHO emphasises the need to involve multiple stakeholders - governments, technology companies, healthcare providers, patients and civil society - in all phases of the development and deployment of these technologies, including their oversight and regulation.
"Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs," says Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.