A team of researchers from South Korea found that a commercial deep-learning model was more accurate than radiologists and also improved their performance and interobserver agreement in spotting reticular opacity.
"The findings indicate a role for [deep-learning algorithms] to facilitate the detection of reticular opacity in the early stages of ILD," wrote corresponding author Dr. Sang Min Lee, first author Dr. Wooil Kim, and colleagues from Asan Medical Center in Seoul.
Although deep-learning algorithms have been heavily utilized for detecting lung nodules on chest x-rays, these types of models may also be beneficial for the challenging task of detecting reticular opacity in ILD, according to the researchers.
They retrospectively evaluated the performance of a commercially available deep-learning algorithm (Med-Chest X-ray, Vuno) for detecting reticular opacity on preoperative chest x-rays in 197 patients with surgically proven ILD. The patients included 130 men and 67 women with a mean age of 62.6 years; all had received a chest CT exam within 30 days of their chest radiograph. The researchers also included 197 matched control cases with normal chest radiographs.
The AI software was applied to all 394 radiographs. Next, three thoracic radiologists and three radiology residents interpreted the exams without the algorithm, and then, one month later, again with help from the algorithm.
[Table: Impact of AI on chest x-ray detection of reticular opacity, comparing readers alone, AI alone, and readers + AI. Differences between readers alone and AI alone or readers + AI were statistically significant (p ≤ 0.002).]
In other findings, interobserver agreement improved from a moderate level (κ = 0.517) to almost perfect (κ = 0.870) after using the AI algorithm.
After delving further into the data, the researchers also found that utilizing AI improved the pooled reader sensitivity across disease types, especially in mild disease. Sensitivity for mild disease rose from 66.7% without AI to 86.8% with the software, and from 84.2% to 98.8% in moderate disease. The readers also achieved 100% sensitivity for severe disease, up from 87.3%.
(A) Coronal chest CT shows bilateral lower lobe pleural-based mild reticular opacities with traction bronchiectasis (arrows). (B) Chest radiograph shows subtle corresponding subpleural opacities in basal lung zone bilaterally (arrows). Only one of six readers detected reticular opacity when interpreting radiograph alone. (C) Abnormalities annotated on deep-learning algorithm in basal lung bilaterally with probability of 0.99. Algorithm deemed true-positive for reticular opacity. Five of six readers detected reticular opacity using the algorithm. Image and caption courtesy of the American Journal of Roentgenology.
In another interesting result, the radiology residents produced higher sensitivity (97.6%) and accuracy (96.7%) than the board-certified radiologists (90% sensitivity and 94.4% accuracy) when all were using the AI software. Both differences were statistically significant (p < 0.001 and p = 0.006, respectively).
"This disparity may relate to the greater frequency with which the radiology residents adopted the [deep-learning algorithm] findings," the group wrote. "The board-certified radiologists were more likely to disregard a [deep-learning algorithm] result that was discrepant from their initial interpretation and ultimately did not show as large of an improvement with [the deep-learning algorithm]."
When applied to a set of 102 exams -- 51 with ILD and 51 matched controls -- from another center, the algorithm produced 100% sensitivity, specificity, and accuracy, according to the researchers.
The team believes their research could pave the way for expanded use of chest radiography in ILD clinics.
"Given the sensitivity of [the deep-learning algorithm] for reticular opacity in mild to moderate disease, the technique could be applied for screening patients with suspected ILD, an area for which chest radiography has historically not performed well," the authors wrote.
Copyright © 2021 AuntMinnie.com