4 steps for physicians to influence AI legal issues


The adoption of artificial intelligence (AI) in clinical practice can engender complex medicolegal issues, but physicians can take certain actions to shape how liability determinations will play out, according to an article published online October 4 in the Journal of the American Medical Association.

A team of authors led by W. Nicholson Price II, PhD, JD, of the University of Michigan Law School in Ann Arbor, MI, offered four suggestions for how physicians can influence the future of medical AI liability:

  • Learn how to better use and interpret AI algorithms, including in what situations an available medical AI algorithm should be applied and how much confidence should be placed in its recommendation.
  • Encourage professional organizations to take active steps to evaluate practice-specific algorithms.
  • Ensure that administrative efforts to develop and deploy algorithms reflect what is truly needed in clinical care, and advocate for safeguards to ensure that external AI products are rigorously vetted before procurement.
  • Check carefully with malpractice insurers to determine how they cover the use of medical AI in practice.

"Collectively, physicians and their hospital systems may be able to make demands for changes in terms of insurance coverage to better accommodate the needs of a future of AI-enabled medicine," they wrote.

There is essentially no existing case law on liability involving medical AI. However, after analyzing eight possible scenarios that could arise from the use of medical AI technology, the authors concluded that physicians would currently face liability only if they deviate from the standard of care and that deviation results in injury.

This analysis "suggests an important implication for physicians using medical AI to aid their clinical decisions: because current law shields physicians from liability as long as they follow the standard of care, the 'safest' way to use medical AI from a liability perspective is as a confirmatory tool to support existing decision-making processes, rather than as a source of ways to improve care," they wrote.

The challenge, however, is that the current law incentivizes physicians to minimize the potential value of AI, according to the authors.

"If the medical AI performs a task better than the physician, such as recommending a higher dosage of a drug, it will provide some results different than the physician," they wrote. "The difference will increase if, in the future, some medical AIs perform better than even the best physicians, a goal for some algorithms. But because threat of liability encourages physicians to meet and follow the standard of care, they may reject such recommendations and thus fail to realize the full value of AI, in some cases to patients' detriment."

However, the legal standard of care may shift over time, the authors noted.

"Physicians should watch this space because it may change quickly," they wrote.
