A research team from the University of California, San Diego (UCSD) used chest radiographs annotated by their radiologists to perform additional training of a previously developed convolutional neural network (CNN) for localizing and quantifying severity of COVID-19 pneumonia on chest x-rays. The group found that this radiologist-supervised transfer learning approach significantly boosted the model's detection accuracy.
What's more, the algorithm's quantitative analysis of lung involvement correlated well with a semiquantitative method for scoring the extent and degree of lung opacities on radiographs -- the modified radiographic assessment of lung edema (mRALE) score -- and showed strong prognostic ability.
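For reference, the mRALE score is typically described as grading each lung for the extent of opacities (0-4) and weighting that grade by opacity density (1-3), then summing both lungs for a maximum of 24. A minimal sketch of that arithmetic (the patient values below are invented for illustration, not data from the study):

```python
def mrale(extent_right, density_right, extent_left, density_left):
    """Modified radiographic assessment of lung edema (mRALE) score.

    extent:  0-4 per lung (how much of the lung shows opacities).
    density: 1-3 per lung (hazy, moderate, or dense opacities).
    Maximum total score: 4*3 + 4*3 = 24.
    """
    for extent in (extent_right, extent_left):
        assert 0 <= extent <= 4, "extent grade must be 0-4"
    for density in (density_right, density_left):
        assert 1 <= density <= 3, "density weight must be 1-3"
    return extent_right * density_right + extent_left * density_left

# Hypothetical patient: moderate right-lung and hazy left-lung opacities
print(mrale(2, 2, 3, 1))  # → 7
```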
"We have demonstrated how it is possible to markedly improve performance of a pretrained CNN to perform better in our clinical environment after incorporating training with a modest number of additional radiographs with high-quality annotations," first author Dr. Brian Hurt told AuntMinnie.com.
The UCSD team's project built on prior research by the institution that described a CNN for detecting and localizing pneumonia on chest radiographs. With COVID-19 ramping up in March 2020, their hospital's chief medical officer and chief medical informatics officer asked the team to deploy the model to help physicians diagnose COVID-19 pneumonia at their institution, according to Hurt.
"Immediately, we found it worked with remarkable consistency, but it had certain blind spots and difficulty with similar findings related to other pathologies," Hurt told AuntMinnie.com.
As a result, the researchers sought to improve the existing model by providing it with better internal data: specifically, 1,466 radiographs with pixel-level annotations provided by their cardiothoracic radiologists. All of the corresponding patients had also received a CT scan on the same day as the radiograph, and the radiologists referenced these CT exams when assigning regional likelihoods of pneumonia.
As the model predicts pixel-level findings of pneumonia, the researchers also hypothesized that quantifying lung involvement may be prognostic in predicting clinical outcomes in COVID-19 patients.
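Quantifying lung involvement from a pixel-level output is straightforward in principle: threshold the per-pixel pneumonia probabilities and divide the flagged area by the total lung area. A minimal NumPy sketch (the function name, the 0.5 threshold, and the toy arrays are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def fractional_involvement(prob_map, lung_mask, threshold=0.5):
    """Fraction of lung area with predicted pneumonia.

    prob_map:  2D array of per-pixel pneumonia probabilities (0-1).
    lung_mask: 2D boolean array marking lung pixels.
    """
    lung_pixels = lung_mask.sum()
    if lung_pixels == 0:
        return 0.0
    involved = (prob_map >= threshold) & lung_mask
    return involved.sum() / lung_pixels

# Toy 4x4 "radiograph": 4 of 16 lung pixels exceed the threshold
prob = np.array([[0.9, 0.8, 0.1, 0.0],
                 [0.7, 0.6, 0.2, 0.0],
                 [0.1, 0.2, 0.1, 0.0],
                 [0.0, 0.1, 0.0, 0.0]])
lung = np.ones((4, 4), dtype=bool)
print(fractional_involvement(prob, lung))  # → 0.25
```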
After the radiologist-supervised transfer learning was completed, the researchers assessed the algorithm's ability to detect viral pneumonia in COVID-19 patients on both an internal and an external dataset. In 203 patients, the researchers also performed survival analysis to determine the prognostic value of the CNN's measurements and compare these results with the mRALE score.
[Table: Impact of radiologist-supervised transfer learning on performance of AI algorithm for detecting pneumonia on radiographs. All numbers are area under the curve (AUC). Columns compare the algorithm before and after added training on radiologist-annotated radiographs; rows report internal and external validation data. AUC values not recoverable from this copy.]
Dice overlap between the model's predictions and the radiologist annotations also improved after the added training, particularly in the lung bases, increasing from 0.121 to 0.433 in the right lung and from 0.111 to 0.486 in the left lung. In addition, the researchers found a strong correlation between the CNN-derived fractional area of involvement and the radiologist-generated mRALE score.
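The Dice overlap used here is the standard measure of agreement between a predicted mask and an annotated mask: twice the intersection divided by the sum of the two areas. A minimal NumPy sketch (the masks are toy data, not study images):

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap: 2|A∩B| / (|A| + |B|); 1.0 means perfect agreement."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * (pred & truth).sum() / total

pred  = np.array([[1, 1, 0], [1, 0, 0]])
truth = np.array([[1, 0, 0], [1, 1, 0]])
print(dice(pred, truth))  # intersection 2, sizes 3 + 3 → ≈ 0.667
```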
Finally, survival analysis showed that the CNN and the mRALE score had similar, strong prognostic ability for mortality, likelihood of intubation, and duration of hospitalization, according to the researchers.
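Prognostic ability in a survival analysis is often summarized with Harrell's concordance index: among comparable patient pairs, the fraction in which the higher risk score belongs to the patient with the earlier event. A minimal pure-Python sketch (the risk scores, times, and outcomes are invented for illustration, not study data):

```python
def concordance_index(risk, time, event):
    """Harrell's C-index for right-censored survival data.

    risk:  predicted risk scores (higher = worse prognosis).
    time:  observed follow-up times.
    event: 1 if the event (e.g., death) occurred, 0 if censored.
    """
    concordant = 0.0
    comparable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if i's event occurred before
            # j's follow-up ended.
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable if comparable else float("nan")

# Toy cohort where higher fractional involvement tracks earlier death
risk  = [0.60, 0.40, 0.10, 0.05]
time  = [5, 10, 30, 40]
event = [1, 1, 0, 1]
print(concordance_index(risk, time, event))  # → 1.0
```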
"Radiologists can and will have a larger role in developing and improving these algorithms as the data ecosystem around PACS and AI platforms evolves to integrate with each other," Hurt said. "Removing the technical barriers is an essential requirement to continually improve and scale these algorithms."
The researchers plan to continue improving their model.
"From a performance perspective, the findings of pneumonia on radiographs are not mutually exclusive of other pathologies (e.g., pulmonary edema), so the next steps are modifying this algorithm to simultaneously predict those other pathologies that may coexist," Hurt said.
Copyright © 2021 AuntMinnie.com