Editor's Note
While artificial intelligence (AI) has moved rapidly into clinical application, rules and requirements for responsible AI use are evolving more slowly. That gap could pose challenges for hospitals with limited resources, particularly smaller ones. Without a unified system for evaluating AI products, hospitals may have to shoulder fragmented responsibilities for AI oversight, which could create inefficiencies, widen disparities, and increase liability risk, according to a December 3 article in JAMA authored by experts from Harvard Law School and Boston University School of Law.
They noted positive developments toward establishing structure around AI use, such as the recently issued guidance on responsible AI use from the Joint Commission and the Coalition for Health AI. Per the article, the guidance's core recommendations call on healthcare organizations to establish multidisciplinary AI governance committees, validate models on local patient data and workflows before deployment, and institute continuous post-market monitoring.
However, the authors cautioned that for many hospitals such guidance may be aspirational. Large academic centers are more likely to have the capacity to rigorously validate AI tools, while a smaller community hospital without the same resources may instead procure off-the-shelf technology and rely on vendor assurances rather than conducting an independent bias audit and risk analysis, increasing its liability exposure as a result.
The authors also noted that while hospital staff committees can rely on state licensure agencies to evaluate the competence of every physician, no comparable agency currently exists for AI.