February 7, 2024

Session: What are the legal implications of AI in healthcare?

Editor's Note

How will healthcare regulators deal with artificial intelligence? How will malpractice law change, and who will be liable for harm derived from AI diagnosis and treatment recommendations? What can be done about bias in AI?

Even amid a surge in algorithms cleared by the FDA, all of these remain open questions. However, nurse attorney Teressa M. Sanzio, RN, MPA, JD, offered yesterday’s conference attendees a welcome dose of clarity about the legal implications of AI in healthcare, as well as AI in general.

In the surgical suite, AI’s massive potential encompasses radiology (image recognition), robotic surgery, identification of high-risk patients, support for pharmacological decision-making, and help with documentation, among other applications. Whatever the future of the regulatory framework, such powerful tools should be adopted cautiously and carefully, Sanzio said. Risks include misdiagnosis, systemic errors, and false information.

As for how these risks translate into legal considerations, Sanzio outlined four primary areas of concern: transparency, consent/disclosure, privacy, and bias – disparities in patient care that result from discrimination effectively built into a system. In dealings with AI system developers, healthcare leaders should insist on transparency about how an AI system was designed, trained, and evaluated, as well as an opportunity to assess its algorithms for fairness, effectiveness, and safety. Providers also must disclose the use of AI to patients: why it is being used, how it affects care, and, critical to privacy concerns, how patient information will be protected.

Other potential legal considerations for healthcare AI include malpractice concerns, reimbursement issues, and False Claims Act violations. Overall, Sanzio said decision-makers should have a clear use case prior to adoption and take steps to mitigate risks, particularly for systems involved in clinical decision-making.

Although oversight is relatively light now, she pointed out that regulations commonly lag behind the use of new technology. Moreover, she said, integrating AI into healthcare demands a “solid governance framework to protect patients from harm,” adding, “AI should enhance human intelligence, not replace it. Clinician expertise is necessary in the design and deployment of any AI software, and the regulatory framework should ensure only safe, high-quality tools enter the market.”
