WHO: AI use in low-income countries could be ‘dangerous’

The World Health Organization (WHO) has warned that large generative AI models, such as the one driving ChatGPT, carry risks that low-income societies and health systems may not yet be prepared to address.

In a new 98-page report on the ethics and governance of large multi-modal models (LMMs), the WHO stressed that it is imperative for the technology not to be shaped solely by high-income countries working with the world’s largest technology companies.

For instance, if an LMM is asked to summarize a treatment paradigm for a disease to guide a ministry of health in a low-income country, it might reproduce an approach that is appropriate only in a high-income context, the organization suggested.

“This would render future AI technologies potentially dangerous or ineffective in the very countries that might ultimately benefit the most,” the WHO stated.

In 2021, the WHO issued its first global report on AI in healthcare and proposed six guiding principles for its design and use, as follows:

  1. Protect autonomy
  2. Promote human well-being, human safety and the public interest
  3. Ensure transparency, “explainability” and intelligibility
  4. Foster responsibility and accountability
  5. Ensure inclusiveness and equity
  6. Promote AI that is responsive and sustainable

Yet since 2021, LMMs such as ChatGPT have been released and adopted faster than any consumer application in history, prompting the organization to issue new guidelines.

“Even though LMMs are relatively new, the speed of their uptake and diffusion led WHO to provide this guidance to ensure that they could potentially be used successfully and sustainably worldwide,” the WHO wrote.

In brief, the report outlines major risks of using LMMs in diagnosis and clinical care, most of which align with risks identified by radiologists, such as biased data, and describes strategies member states can use to develop policies that mitigate those risks.

In addition, the WHO encouraged developers to consider the carbon and water footprint of LMMs. It noted that at one large company, training a new LMM consumed an estimated 3.4 GWh over two months, equivalent to the annual energy consumption of 300 U.S. households.

“Electricity consumption will continue to increase as more companies introduce LMMs, which could eventually significantly affect climate change,” the WHO wrote.

Rohit Malpani, a public health consultant, advocate, and lawyer based in Paris, France, was lead author of the report, with guidance led by Andreas Reis, MD, co-lead of the WHO Global Health Ethics Team, and Sameer Pujari, AI lead of the WHO Digital Health and Innovation Department, both in Geneva, Switzerland.
