Autonomous artificial intelligence (AI) algorithms aren't close to being safe enough to replace radiologists, and the U.S. Food and Drug Administration (FDA) should hold off for now on developing regulatory pathways for autonomous AI in radiology, according to the American College of Radiology (ACR) and the RSNA.
Autonomous algorithms would include AI models that identify normal radiological examinations or rule out critical diseases, according to the organizations. To date, there are no research-based criteria for ensuring that these types of algorithms generalize across clinical sites, and a considerable body of published research suggests they often perform poorly across heterogeneous patient populations, according to ACR Chair Dr. Howard Fleishon and RSNA Chair Dr. Bruce Haffty.
Fleishon and Haffty wrote to the FDA in response to the agency's February workshop on the evolving role of AI in radiological imaging.
"In light of the broad heterogeneity in imaging equipment and image acquisition protocols and the lack of a definable mechanism to ensure the longitudinal performance of the algorithm, we are concerned that autonomously functioning algorithms of the varieties discussed at the workshop would pose a significant risk to patient safety," they wrote.
If the goal is to remove physicians from image interpretation, then the public must be assured that AI algorithms will be as safe and effective as the physicians they replace, according to Fleishon and Haffty. That also includes the ability to incorporate available context and identify secondary findings.
"We believe this level of safety is a long way off, and while AI is poised to assist physicians in their care of patients, autonomously functioning AI algorithms should not be implemented at this time," they wrote. "The value that human interpretation with independent medical judgment brings to patient care cannot currently be replaced."
Ensuring the safety and effectiveness of autonomous AI algorithms in radiology would require more rigorous testing, surveillance, and other oversight mechanisms throughout the total product life cycle, according to Fleishon and Haffty. In their letter, they also recommended that the FDA wait to create regulatory pathways for autonomous AI until currently approved AI algorithms have achieved broader market penetration and their safety and efficacy have been documented in a "supervised" manner.
Instead, it would be best if the FDA focused its regulatory resources for now on autonomously functioning AI software that addresses areas of clinical value to radiologists and their patients, according to Fleishon and Haffty.
"Algorithms that assist physicians in population health management by incidentally detecting and quantifying potentially undiagnosed chronic diseases would be an excellent way to begin bringing autonomous AI into radiological care," they wrote in the letter.
In population health management, for example, algorithms could identify and quantify important conditions such as pulmonary emphysema, hepatic steatosis, elevated body mass index, and osteoporosis in patients undergoing imaging for other reasons, such as trauma or inflammatory diseases, they said. Although radiologists frequently find these conditions during interpretation, the results often wind up buried in radiology reports and don't make it into the patient's medical record problem list for future evaluation.
AI algorithms, however, could identify and quantify these disease processes and then transmit that information to the patient's care team via the electronic health record, according to Fleishon and Haffty.
"We believe these types of autonomously functioning AI will enhance patient care and potentially save lives if treatment can be instituted early," they wrote.