The dataset comprises 1,083 chest x-ray images from the MIMIC-CXR database and adds transcribed radiology report text, radiologists' dictation audio, and eye-gaze coordinates, according to the researchers.
"All these multimodal data types were collected while a radiologist read the x-ray image, with eye gaze tracked, and the dictation of the report recorded and synced with the eye gaze coordinates," wrote Mehdi Moradi of IBM Research in a blog post on Springer Nature. The researchers shared all the details in an article published online March 25 in Scientific Data.
Of the 1,083 patients, 349 had pneumonia, 330 had congestive heart failure, and 359 were normal. The dataset can help researchers build domain-expert-guided interpretable AI models, according to Moradi.
"As we demonstrated in the article, we were able to produce more accurate activation maps using this data," Moradi wrote. "In addition, eye-track data and the localization of the disease could be used to devise novel loss functions for training neural networks and perhaps, a deeper synergy between radiologists and AI."
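The article does not specify what such a loss function would look like, but one common way to use gaze data as a training signal is to penalize disagreement between the network's attention map and the radiologist's gaze heatmap. The sketch below is a hypothetical illustration of that idea (the function name and KL-divergence formulation are assumptions, not the researchers' method):

```python
import numpy as np

def gaze_consistency_loss(attn_map, gaze_map, eps=1e-8):
    """Hypothetical auxiliary loss: KL divergence between a model's
    attention map and a radiologist's eye-gaze heatmap (both HxW,
    non-negative). Added to the usual classification loss, it would
    nudge the network to attend where the expert looked."""
    # Normalize both maps into probability distributions over pixels
    p = gaze_map / (gaze_map.sum() + eps)  # target: where the expert looked
    q = attn_map / (attn_map.sum() + eps)  # model: where the network looked
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

gaze = np.array([[0.0, 1.0], [1.0, 2.0]])
print(gaze_consistency_loss(gaze, gaze))  # identical maps: loss is ~0
```

A model whose attention matches the gaze map incurs near-zero penalty, while attention on regions the radiologist ignored drives the loss up.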
Copyright © 2021 AuntMinnie.com