A team of authors led by Davide Piccini, PhD, of Siemens Healthcare and the University of Lausanne in Switzerland found that IQ-DCNN, a deep convolutional neural network designed to assess cardiac MR image quality, performed comparably to human readers, with a precision in the range of human intraobserver and interobserver agreement.
"The proposed deep-learning framework can be used to compare image volumes that are reconstructed with different algorithms from the same acquisition and to select the image with the best quality," the researchers wrote.
The researchers trained and tested IQ-DCNN using 3D whole-heart cardiac MRI exams from 424 randomly selected patients referred for general clinical cardiac MRI to the University Hospital of Lausanne between 2013 and 2016. All studies were acquired on a Magnetom Aera 1.5-tesla MRI scanner (Siemens). Of the scans, 324 were used for training and 100 were set aside for testing.
This test set was also assessed by two readers; the first reader had 11 years of experience in coronary MR angiography and the second had six years of experience. The human readers scored the image quality on a scale of 0 (nondiagnostic) to 4 (excellent diagnostic value). The first reader also scored the images a second time more than one month after his original assessment.
The researchers then compared the IQ-DCNN's results with those of the readers. After performing regression analysis, the researchers determined that IQ-DCNN's performance was within the range of human intraobserver and interobserver agreement. It also had very good agreement (R² = 0.78, κ = 0.67) with the human observer.
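The agreement statistics reported above can be illustrated with a small sketch. The helper functions and score arrays below are hypothetical stand-ins, not the study's data or code: R² is computed here as the squared Pearson correlation between model and reader scores, and κ as unweighted Cohen's kappa on the 0-4 quality scale (the study may have used a weighted variant).

```python
# Hypothetical sketch of the agreement metrics; scores are illustrative only.

def pearson_r(x, y):
    # Pearson correlation between two equal-length score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cohen_kappa(a, b, labels=range(5)):
    # Unweighted Cohen's kappa on the 0-4 image quality scale.
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum(                                            # chance agreement
        (sum(x == k for x in a) / n) * (sum(y == k for y in b) / n)
        for k in labels
    )
    return (po - pe) / (1 - pe)

model_scores  = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]   # illustrative only
reader_scores = [3, 2, 3, 3, 1, 2, 3, 4, 2, 4]   # illustrative only

r2 = pearson_r(model_scores, reader_scores) ** 2
kappa = cohen_kappa(model_scores, reader_scores)
```

In practice one would use library implementations (e.g., a statistics package) rather than hand-rolled metrics; the sketch only makes the two quantities concrete.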
To determine if the algorithm could be used to assess quality differences in the same dataset under different conditions, the researchers also explored the use of IQ-DCNN in combination with an iterative compressed sensing reconstruction process on 69 raw datasets. The model's image quality assessment during compressed sensing reconstruction also had very good agreement with the human expert, according to the authors.
[Figure: Two examples of automated image quality assessment by IQ-DCNN during iterative compressed sensing reconstruction. The quality of the whole-heart image volumes for the four reconstructed respiratory phases is assessed at each iteration on the standard image quality scale of 0 (poor quality) to 4 (excellent). Phase 1 represents the most end-expiratory phase, while phase 4 refers to end-inspiration. All images courtesy of Radiology: Artificial Intelligence.]
In 16 datasets in which both the expert reader and IQ-DCNN rated the image quality difference between pairs of reconstructed respiratory phases, the algorithm agreed with the human assessment in every case in identifying the higher-quality phase whenever the quality difference between the pair was significant.
"Applied to an iterative compressed sensing pipeline where different respiratory phases are reconstructed, the algorithm was not only capable of identifying improvements in image quality as a function of the increasing iterations but also as a function of the respiratory level," the authors wrote. "In all, our findings suggest that a variant of this algorithm may be exploited to define termination criteria of an iterative process to improve time efficiency and to help identify data with the least amount of motion blurring."
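The termination idea the authors describe can be sketched as an early-stopping rule: run the iterative reconstruction, score each intermediate volume with the quality network, and stop once the score stops improving. Everything below is a hypothetical illustration, not the study's pipeline; `step_fn` stands in for one compressed sensing iteration, `score_fn` for an IQ-DCNN forward pass, and the thresholds are assumed values.

```python
# Hypothetical IQ-based stopping rule for an iterative reconstruction loop.
# step_fn and score_fn are stand-ins; thresholds are assumptions.

def reconstruct_with_iq_stop(volume, step_fn, score_fn,
                             max_iters=30, min_gain=0.05, patience=3):
    """Iterate until the quality score fails to improve by at least
    min_gain for `patience` consecutive iterations."""
    best_score = score_fn(volume)
    stalled = 0
    for _ in range(max_iters):
        volume = step_fn(volume)
        score = score_fn(volume)
        if score - best_score < min_gain:
            stalled += 1
            if stalled >= patience:
                break           # quality has plateaued; stop early
        else:
            stalled = 0
        if score > best_score:
            best_score = score
    return volume, best_score
```

Stopping on a quality plateau rather than a fixed iteration count is one way such a rule could save reconstruction time, in line with the authors' suggestion; the same per-phase scores could also be compared to flag the phase with the least motion blurring.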
Copyright © 2020 AuntMinnie.com