A quick, efficient way to spot normal chest x-rays

Sunday, December 1 | 1:00 p.m.-1:30 p.m. | AI261-SD-SUB2 | Lakeside, AI Community, Station 2
Are you looking for an efficient way to identify normal chest x-rays with high sensitivity, save your radiologists time, and expedite the generation of reports? The findings from this study could help.

Dr. Vidur Mahajan, associate director of Mahajan Imaging in New Delhi, will present results from a deep-learning model that uses a single algorithm to automate the reading of normal chest x-rays and virtually eliminate the drudgery of second reads.

The deep-learning model was trained on approximately 250,000 chest x-rays from CheXpert, a large labeled chest x-ray dataset released by the Stanford University Machine Learning Group, along with some 50,000 chest x-rays from a U.S. National Institutes of Health (NIH) dataset.

The algorithm was tested on three datasets totaling approximately 4,000 cases. Two of the datasets came from three outpatient imaging centers and three hospital imaging departments; the third was used to validate the AI model.

At a sensitivity threshold of 97%, specificity of the untuned model ranged from 2% to 41% across the three datasets. After the model was tuned with a single reference image from the NIH chest x-rays, specificity rose to a range of 29% to 63%.
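To illustrate what "a sensitivity threshold of 97%" means in practice, here is a minimal sketch of operating-point selection: pick the score cutoff that still flags at least 97% of abnormal x-rays, then measure the specificity that cutoff yields on normal x-rays. The function names and the synthetic scores are illustrative assumptions, not from the study.

```python
# Hypothetical sketch: choosing a classifier operating point that
# guarantees ~97% sensitivity, then measuring resulting specificity.
# All names and numbers are illustrative, not from the presented study.

def choose_threshold(scores_abnormal, target_sensitivity=0.97):
    """Return the highest score cutoff that still flags the target
    fraction of abnormal cases (score >= cutoff means 'abnormal')."""
    ranked = sorted(scores_abnormal, reverse=True)
    # Index of the last abnormal case that must still be caught.
    k = int(round(target_sensitivity * len(ranked))) - 1
    return ranked[k]

def specificity(scores_normal, threshold):
    """Fraction of normal cases scored below the cutoff
    (i.e., correctly labeled normal)."""
    return sum(s < threshold for s in scores_normal) / len(scores_normal)

if __name__ == "__main__":
    import random
    random.seed(0)
    # Synthetic model scores: abnormal cases tend to score higher.
    abnormal = [random.uniform(0.3, 1.0) for _ in range(100)]
    normal = [random.uniform(0.0, 0.7) for _ in range(100)]

    cutoff = choose_threshold(abnormal)
    sens = sum(s >= cutoff for s in abnormal) / len(abnormal)
    print(f"cutoff={cutoff:.3f} sensitivity={sens:.2f} "
          f"specificity={specificity(normal, cutoff):.2f}")
```

Lowering the cutoff to protect sensitivity is exactly what drives specificity down on out-of-distribution scanners; re-tuning the cutoff (or, as in the study, the model itself) per site is one way to recover it.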

Based on the "drastic improvement in results," the deep-learning model can be generalized across equipment and institutions by using a "single reference image to tune the functioning of the model, hence showing potential to improve the functioning of deep-learning algorithms in general," the researchers concluded in their abstract.
