Monday, November 27 | 10:20 a.m. - 10:30 a.m. | M3-SSIN02-6 | S404
A deep-learning algorithm may help reduce diagnostic errors in radiology, according to this Monday morning presentation. Two board-certified radiologists designed the error-generation process behind a transformer-based vision-language model for detecting errors in radiology reports, and the results of the project are the subject of the talk.
Presenter Dabin Min and colleagues noted in the study abstract that errors occur in 3% to 4% of radiology cases daily and can lead to inappropriate or delayed treatment. To test a concept intended to reduce such errors, the researchers developed an Error Generator model and applied it to the MIMIC-CXR dataset.
Min's study involved injecting synthetic diagnostic errors into chest x-ray (CXR) reports. The two radiologists collaborated to design the process for generating eight types of errors, including underreading, satisfaction of search, faulty reasoning, and false-positive findings. A vision-language model, called Error Detector, was then trained on the resulting MIMIC-EG dataset, which consisted of 126,439 CXRs and corresponding error-injected reports. The team evaluated the model's binary classification performance on three different datasets and found that, despite being trained only on synthetic error-injected reports, it performed effectively at detecting real-world diagnostic errors.
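To make the error-injection idea concrete, here is a minimal illustrative sketch. Note that the study's actual Error Generator is a model designed with radiologist input, not the simple rule-based substitution shown here; the function name, the example sentences, and the two error categories sampled below are assumptions for illustration only.

```python
# Illustrative sketch only -- NOT the study's Error Generator, which is a
# model, not a rule-based script. Helper names and examples are assumptions.
import random

# Hypothetical false-positive findings to insert into a clean report.
FALSE_POSITIVE_FINDINGS = [
    "A small right pleural effusion is present.",
    "There is a 1 cm nodule in the left upper lobe.",
]

def inject_error(report_sentences, error_type, rng=random):
    """Return (modified_report, label); label=1 marks an injected error."""
    sentences = list(report_sentences)
    if error_type == "false_positive":
        # Insert a finding unsupported by the image (false-positive error).
        sentences.insert(rng.randrange(len(sentences) + 1),
                         rng.choice(FALSE_POSITIVE_FINDINGS))
    elif error_type == "underreading":
        # Drop a true finding, simulating a missed observation.
        if len(sentences) > 1:
            sentences.pop(rng.randrange(len(sentences)))
    return " ".join(sentences), 1

# Example: corrupt a two-sentence normal report.
report = ["Heart size is normal.", "Lungs are clear."]
modified, label = inject_error(report, "false_positive", random.Random(0))
```

Pairing each original CXR with such an error-injected report yields labeled positive/negative examples, which is what lets the detector be framed as a binary classification task.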
According to Min, the model has the potential to serve as a support tool that reduces diagnostic errors and enhances patient safety. Join this session for more details.