Among several strategies tested for deploying the algorithm, the group discovered that the algorithm was most useful when it deferred difficult cases to clinicians, rather than the other way around -- a finding that suggests how best to leverage the model's strengths to improve ARDS diagnosis and treatment at the bedside, wrote first author Negar Farzaneh, PhD, and colleagues.
"If the AI model reviews a chest x-ray first, this allows the AI system to make decisions for up to 79% of cases, letting physicians focus on the most challenging subsets of chest x-rays," the group wrote. The research was published April 8 in npj Digital Medicine.
In a previous study, the researchers found that a deep convolutional neural network (CNN) algorithm they designed to detect ARDS on chest x-rays could achieve expert physician-level performance. However, there is a growing gap between studies describing the capabilities of AI and those that investigate how or when to integrate the systems into real-world clinical practice, the researchers noted.
To address this gap, the team investigated four strategies for clinical deployment of the AI model and physician collaboration to determine their potential impact on diagnostic accuracy. The test set consisted of 414 chest x-rays from 115 hospitalized adult patients who developed acute hypoxic respiratory failure.
The strategies include the following:
- AI-aided physician -- physicians provided all diagnoses, which were replaced with the AI output score only when a physician was uncertain.
- Physician-aided AI -- AI served as the primary diagnostic tool, and its diagnoses were replaced with physician reads only when the AI was uncertain.
- Average of physician and AI ratings -- this approach used the average ratings of both AI and physicians on a scale of 1 to 8.
- Weighted average of physician and AI ratings -- weights were determined by maximizing the average validation accuracy.
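The four strategies above can be sketched as simple decision rules. This is an illustrative reconstruction, not the authors' code: the uncertainty flags, the default weight of 0.5, and the function names are assumptions for demonstration; ratings follow the article's 1-to-8 scale.

```python
# Sketch of the four physician-AI collaboration strategies described above.
# All parameter names and the default weight are illustrative assumptions.

def ai_aided_physician(md_rating, ai_rating, md_uncertain):
    """Physician reads first; the AI score substitutes only when the physician is uncertain."""
    return ai_rating if md_uncertain else md_rating

def physician_aided_ai(md_rating, ai_rating, ai_uncertain):
    """AI reads first; the physician read substitutes only when the AI is uncertain."""
    return md_rating if ai_uncertain else ai_rating

def average_rating(md_rating, ai_rating):
    """Simple average of the physician and AI ratings (1-8 scale)."""
    return (md_rating + ai_rating) / 2

def weighted_average_rating(md_rating, ai_rating, ai_weight=0.5):
    """Weighted average; per the article, the weight would be tuned to
    maximize accuracy on a validation set."""
    return ai_weight * ai_rating + (1 - ai_weight) * md_rating
```

Under the physician-aided AI rule, only the cases flagged as uncertain by the AI ever reach a physician, which is why that strategy reduces manual review volume while the averaging strategies require a physician rating for every x-ray.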
The two best-performing strategies for maximizing diagnostic accuracy were physician-aided AI and the weighted average of physician and AI model ratings, both of which achieved an accuracy of 87%.
Yet although these two approaches were nearly equivalent in accuracy, there are important differences, the researchers explained. The weighted-average approach would require physicians to review every chest x-ray, while the physician-aided AI strategy would require them to review only about a fifth -- "offloading" a significant share of the workload to AI so that physicians could focus on the more challenging exams, the researchers added.
Schematic diagram of the proposed strategies for physician-AI collaboration. (a) shows the proposed AI-aided physician framework, (b) shows the physician-aided AI framework, and (c) shows the average-scoring framework, which can use either a basic or a weighted average. Image and caption courtesy of npj Digital Medicine through CC BY 4.0.
"Having the AI model interpret a chest x-ray first and defer to the physician if it is uncertain has near-equivalent accuracy [and reduces] the number of chest x-rays that require manual physician review to determine if ARDS is present by 79.2%," they wrote.
Although the study suggests that AI and physician expertise can complement each other, further inquiry is warranted.
"More research is still needed to investigate the collaboration of AI and physicians in other diagnosis tasks and environments," the group concluded.
Copyright © 2023 AuntMinnie.com