January 18, 2024

WHO urges cautious approach to generative AI healthcare applications

The World Health Organization (WHO) has released guidance on the ethics and governance of artificial intelligence (AI)-based large language models (LLMs) in healthcare applications.

In a January 18 announcement, the organization recognized that AI LLMs, with their ability to analyze and interpret data, have a wide range of potential applications in healthcare and scientific research. The technology is particularly likely to play a prominent role in five areas: diagnosing and responding to patients’ questions; scientific research and drug development; medical and nursing education; clerical and administrative tasks; and helping patients investigate their own symptoms.

However, the agency also urges caution to protect the health of populations. LLMs have been known to produce false, inaccurate, or incomplete responses, and are also subject to bias, which could lead to patient harm. Recommendations for mitigating these risks include ensuring medical professionals and patients are involved in developing LLMs; ensuring LLMs prioritize privacy and enable patients to opt out; and appointing a regulator to approve LLM use in healthcare, with regular audits and impact assessments.

“We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities,” said Dr. Jeremy Farrar, WHO Chief Scientist.
