Achieving that goal will require diversity in perspectives and training data, awareness of unintended consequences, and transparency, according to Dr. Said Ibrahim of Weill Cornell Medicine in New York City and Dr. Peter Pronovost of Case Western Reserve University in Cleveland.
"The way forward is not to shun innovations such as AI but to embrace them and explicitly use them to improve care and advance equity," the authors wrote in a Viewpoint article for the journal.
Concern has emerged lately that AI algorithms could exacerbate racial disparities in healthcare. For example, a preprint paper published in July found that AI models could recognize a patient's racial identity on medical images, something radiologists are unable to do. The authors of that paper said their results were both surprising and bad news for efforts to ensure the patient safety, equity, and generalizability of radiology AI algorithms.
Ibrahim and Pronovost believe, however, that the proliferation of big data and associated AI techniques offers an opportunity to detect and address diagnostic disparities. As an example of this potential, they pointed to a study published in Nature Medicine in January that detailed how a deep-learning algorithm could measure osteoarthritis severity on radiographs by predicting patient-reported pain. The algorithm's severity predictions significantly reduced unexplained racial disparities in pain.
"Unlike the standard severity measure, which was developed more than 20 years ago with predominantly Northern European patients, the predictive power of the algorithm improved with the greater diversity of the AI training and development sample," the authors wrote.
Although the clinical significance of this capability isn't yet clear, the authors noted that the use of knee replacement surgery for managing knee osteoarthritis is currently one of the largest racial disparities in U.S. healthcare. And pain is a key clinical indication for surgery.
However, evidence is also emerging that AI can cause unintended harm. For example, a race-adjusted estimated glomerular filtration rate (eGFR) equation yields higher estimates for Black patients with otherwise identical lab values, making kidney function appear better than it is and resulting in underdiagnosis and delayed treatment of chronic kidney disease, according to a Viewpoint article in JAMA from 2019.
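The mechanism is easy to see in the 2009 CKD-EPI creatinine equation, a widely used eGFR formula that included a race coefficient (since removed in the 2021 revision). The sketch below is illustrative, not clinical software; the coefficients are taken from the published 2009 equation.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2) per the 2009 CKD-EPI
    creatinine equation, which included a race coefficient.
    Illustrative only -- not for clinical use."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    return (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age
            * (1.018 if female else 1.0)
            * (1.159 if black else 1.0))  # the race coefficient at issue

# Identical labs and age, different race flag: the Black patient's
# estimate is 15.9% higher, so an eGFR-threshold-based CKD diagnosis
# (e.g., eGFR < 60) can be reached later for that patient.
same_labs = dict(scr_mg_dl=1.4, age=60, female=False)
print(egfr_ckd_epi_2009(black=False, **same_labs))
print(egfr_ckd_epi_2009(black=True, **same_labs))
```

Because the coefficient multiplies the final estimate, any diagnostic or treatment cutoff expressed as an eGFR threshold is crossed at a higher creatinine level for Black patients, which is the delay the JAMA authors describe.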
Analytic tools such as AI offer the potential to help tailor interventions to improve the quality and value of healthcare, but they can also be double-edged swords -- possibly even being used to deny care or redline underserved populations, according to Ibrahim and Pronovost.
"Another important factor that might contribute to the risk of harm is missing or inaccurate diagnostic information in administrative datasets and electronic health records because AI and machine learning algorithms are only as good as the data on which they are trained," they wrote. "A lack of data on social determinants of health and less-than-robust descriptions of patient symptoms may also contribute to the potential for unintended harm from AI and related methods."
As a result, diagnostic AI models should evaluate the effect of race and ethnicity as a variable and seek to minimize structural bias in diagnostic data, they said.
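One concrete way to evaluate the effect of race and ethnicity in a diagnostic model is to report its error rates per demographic group rather than only in aggregate, since a model can look accurate overall while missing diagnoses disproportionately in one group. A minimal sketch, using hypothetical labels, predictions, and group categories:

```python
from collections import defaultdict

def per_group_false_negative_rate(y_true, y_pred, groups):
    """False-negative rate (missed diagnoses) for each group.

    y_true / y_pred: 1 = disease present / predicted, 0 = absent.
    groups: a group label for each record (hypothetical categories).
    """
    positives = defaultdict(int)  # actual positives per group
    misses = defaultdict(int)     # false negatives per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            positives[g] += 1
            if p == 0:
                misses[g] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Hypothetical audit data: overall sensitivity looks reasonable,
# but all of the missed diagnoses fall in group B.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
print(per_group_false_negative_rate(y_true, y_pred, groups))
```

A disparity surfaced this way points back at the training data and label quality issues the authors describe, which aggregate metrics hide.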
To reach AI's potential to address crucial issues of diagnostic errors and healthcare disparities, the technology's development needs to be guided by diverse perspectives, according to the authors. Algorithms also need to be trained using data from diverse populations, and the unintended negative consequences of the models must be identified and addressed.
"Most importantly, we must be transparent about who benefits and who may be harmed by including race and ethnicity in AI models and ensure that AI is used to decrease rather than increase inequities," the authors wrote.
Copyright © 2021 AuntMinnie.com