Explainable artificial intelligence elevates prostate MRI reads

Wednesday, December 1 | 8:00 a.m.-8:30 a.m. | SSGU05-4 | Room E353B
An explainable deep-learning software application can be highly accurate for characterizing prostate lesions on biparametric MRI exams, according to this scientific presentation.

Presenter Dr. Charlie Alexander Hamm of Charité University Medicine Berlin in Germany and colleagues trained a deep-learning model to detect target lesions within the segmented prostate and then classify them as prostate cancer or benign. The model also justifies its predictions using PI-RADS imaging features.

After detecting the target lesion and producing a voxel-wise prediction score, the algorithm then generates a color map overlay on the T2-weighted axial sequence for the radiologist.
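For readers curious what such an overlay step might look like in practice, below is a minimal Python sketch of rendering a voxel-wise prediction score map as a semi-transparent color map on a T2-weighted axial slice. The array names, the 0.5 score threshold, and the colormap choice are assumptions for illustration only, not details of the presenters' software.

```python
# Minimal sketch (not the presenters' implementation) of overlaying
# voxel-wise prediction scores on a T2-weighted axial slice.
import numpy as np
import matplotlib.pyplot as plt

def overlay_prediction(t2_slice: np.ndarray, score_map: np.ndarray,
                       threshold: float = 0.5) -> None:
    """Display a T2-weighted slice with lesion prediction scores overlaid."""
    # Show the anatomical image in grayscale.
    plt.imshow(t2_slice, cmap="gray")

    # Hide low-score voxels so only the suspected lesion region is colored.
    masked_scores = np.ma.masked_where(score_map < threshold, score_map)

    # Render the remaining scores as a semi-transparent heat map.
    plt.imshow(masked_scores, cmap="hot", alpha=0.5, vmin=0.0, vmax=1.0)
    plt.colorbar(label="Voxel-wise prediction score")
    plt.axis("off")
    plt.show()

# Example with synthetic data standing in for a real exam.
t2 = np.random.rand(256, 256)       # placeholder T2-weighted slice
scores = np.zeros((256, 256))
scores[100:130, 120:150] = 0.9      # placeholder "lesion" region
overlay_prediction(t2, scores)
```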

"Thereby, it overcomes the traditional black box design of previous deep neural networks by explaining and visualizing its decision-making using PI-RADS imaging features in a custom interface," the authors wrote.

In testing, the software yielded 85% accuracy, 81% sensitivity, 87% specificity, and an area under the curve of 0.90 for distinguishing prostate cancer from benign lesions. What's more, it identified 11 PI-RADS imaging features with an average accuracy of 84%, sensitivity of 81%, and specificity of 76%.
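As a quick refresher on what these figures measure, the sketch below computes accuracy, sensitivity, specificity, and area under the ROC curve from per-lesion labels and model scores using scikit-learn. The data and the 0.5 decision threshold are hypothetical; this is not the study's evaluation code.

```python
# Illustrative computation of standard binary classification metrics.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = cancer, 0 = benign (hypothetical)
y_score = np.array([0.92, 0.20, 0.75, 0.64, 0.41, 0.08, 0.88, 0.55])
y_pred = (y_score >= 0.5).astype(int)         # assumed 0.5 decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                  # true-positive rate
specificity = tn / (tn + fp)                  # true-negative rate
auc = roc_auc_score(y_true, y_score)          # area under the ROC curve

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} AUC={auc:.2f}")
```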

"This explainable [deep-learning system] approach for prostate MRI provides a transparent decision-support tool for radiologists, which can potentially enhance the acceptance and facilitate the translation of artificial intelligence into the clinical routine," the authors wrote.

Want more detail on their results? You'll want to add this Wednesday morning talk to your schedule.
