January 21, 2026

ECRI lists its Top 10 Health Technology Hazards for 2026

Patient safety organization ECRI has released its 18th annual Top 10 Health Technology Hazards report. Without further ado, here’s the list, with almost all of the top 10 impacting perioperative care to varying degrees:

  1. Misuse of artificial intelligence (AI) chatbots in healthcare
  2. Unpreparedness for a “digital darkness” event, or a sudden loss of access to electronic systems and patient information
  3. Substandard and falsified medical products
  4. Recall communication failures for home diabetes management technologies
  5. Misconnections of syringes or tubing to patient lines, particularly amid slow ENFit and NRFit adoption
  6. Underutilizing medication safety technologies in perioperative settings
  7. Inadequate device cleaning instructions
  8. Cybersecurity risks from legacy medical devices
  9. Health technology implementations that prompt unsafe clinical workflows
  10. Poor water quality during instrument sterilization

Chatbot perils

ECRI notes that chatbots such as ChatGPT, Claude, Copilot, Gemini and Grok, which rely on large language models (LLMs) and produce “human-like and expert-sounding responses to users’ questions,” are “not regulated as medical devices nor validated for healthcare purposes but are increasingly used by clinicians, patients, and healthcare personnel.” It adds that more than 40 million people use ChatGPT every day for health information, according to a recent analysis from OpenAI.

ECRI says that while chatbots often can provide valuable assistance, they can also “provide false or misleading information that could result in significant patient harm.” It thus advises providers to exercise caution when using a chatbot for information that can impact patient care.

“Rather than truly understanding context or meaning, AI systems generate responses by predicting sequences of words based on patterns learned from their training data,” says ECRI. “They are programmed to sound confident and to always provide an answer to satisfy the user, even when the answer isn’t reliable.”

“Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” says ECRI President and CEO Marcus Schabacker, MD, PhD. “Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”

ECRI warns that chatbots have “suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and even invented body parts in response to medical questions while sounding like a trusted expert. For example, one chatbot gave dangerous advice when ECRI asked whether it would be acceptable to place an electrosurgical return electrode over the patient’s shoulder blade. The chatbot incorrectly stated that placement was appropriate — advice that, if followed, would leave the patient at risk of burns.”

Stereotypes and inequities

ECRI adds that chatbots can also “exacerbate existing health disparities,” as biases embedded in the data used to train chatbots can “distort how the models interpret information, leading to responses that reinforce stereotypes and inequities.”

“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” says Dr. Schabacker. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”

The organization adds that “patients, clinicians, and other chatbot users can reduce risk by educating themselves on the tools’ limitations and always verifying information obtained from a chatbot with a knowledgeable source. For their part, health systems can promote responsible use of AI tools by establishing AI governance committees, providing clinicians with AI training, and regularly auditing AI tools’ performance.”

An executive brief of the Top 10 Health Technology Hazards report can be downloaded here. The full report, accessible to ECRI members only, includes “detailed steps that organizations and industry can take to reduce risk and improve patient safety.”

ECRI patient safety experts will discuss the “hidden dangers of AI chatbots in healthcare” in a live webcast on Jan. 28.
