Although deep-learning algorithms are increasingly used in radiology to inform critical medical decisions, many of these models were trained on datasets that aren't representative of the entire U.S. population. As a result, they may not perform as expected on underrepresented patient groups, according to the research team led by presenter Dr. Jason Adleberg of the Icahn School of Medicine at Mount Sinai.
The group trained and tested models on data from 55,174 chest x-rays in the MIMIC-CXR database, as well as on data from the CheXpert database and the Mount Sinai Health System in New York City. The algorithms produced areas under the curve (AUC) as high as 0.999 for biological sex, 0.917 for age decile, 0.709 for insurance status, and 0.907 for ethnicity.
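For readers unfamiliar with the metric: an AUC of 0.999 means the model ranks patients almost perfectly by class, while 0.5 would be chance. A minimal pure-Python sketch of how AUC can be computed as the fraction of correctly ranked (positive, negative) pairs; the labels and scores below are synthetic placeholders, not the study's data:

```python
def auc(y_true, y_score):
    """Area under the ROC curve via pairwise comparison:
    the fraction of (positive, negative) pairs that the
    model scores in the correct order, counting ties as half."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: 1 = patient in the demographic group, 0 = not.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
# Hypothetical model probabilities for the positive class.
y_score = [0.95, 0.10, 0.80, 0.70, 0.75, 0.40, 0.90, 0.30]

print(auc(y_true, y_score))  # one positive (0.70) is out-ranked by a negative (0.75)
```

In practice a library routine such as scikit-learn's `roc_auc_score` would be used on real predictions; the hand-rolled version above only illustrates what the reported numbers mean.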
"This technology can be used to ensure the equitable creation of medical imaging datasets, and to illuminate certain anatomical regions that may differ in different groups of patients," the authors wrote.
If you're seeking an inclusive AI educational experience at RSNA 2021, you won't want to miss this Tuesday talk.