Although some legal scholars believe that liability concerns could hinder adoption of artificial intelligence (AI) software in personalized medicine, a new study published in the Journal of Nuclear Medicine found that physicians who follow the technology's advice may actually be shielded from malpractice liability.
Physicians who accept an AI system's recommendation to provide standard care may reduce their medical malpractice liability risk, according to the results of an online survey of 2,000 adults in the U.S. The study was designed to determine, all else being equal, how potential jurors would judge malpractice liability in cases where AI was used, according to researchers led by Kevin Tobia, JD, PhD, of Georgetown University Law Center.
The researchers noted, however, that they did not find a similar liability-shielding effect when a physician rejected an AI recommendation for nonstandard care and instead chose to provide standard care.
"The tort law system is unlikely to undermine the use of AI precision medicine tools and may even encourage the use of these tools," the authors wrote.
In the online study, each participant was asked to read one of four scenarios in which an AI system provided a drug dosage treatment recommendation and in which subsequent physician decisions caused harm to the patient. These scenarios included standard or nonstandard drug usage and physician decisions to accept or reject the AI recommendations.
The study participants then assessed whether the treatment decision could have been made by "most physicians" and a "reasonable physician" in similar circumstances. Higher scores indicated greater agreement with the physician's decision and, therefore, lower liability, according to the researchers.
Two factors were found to reduce laypersons' judgment of liability, according to the researchers: following standard care and following the recommendations of AI tools.
"These results provide guidance to physicians who seek to reduce liability, as well as a response to recent concerns that the risk of liability in tort law may slow the use of AI in precision medicine," the authors wrote. "Contrary to the predictions of those legal theories, the experiments suggest that the view of the jury pool is surprisingly favorable to the use of AI in precision medicine."
In an invited perspective article, W. Nicholson Price, JD, PhD, of the University of Michigan Law School and colleagues said that liability is likely to influence the behavior of physicians, of the hospitals that implement AI tools, and of the developers who create the software.
"Tobia et al.'s study should serve as a useful beachhead for further work to inform the potential for integrating AI into medical practice," the authors wrote.