
Wednesday, December 1 | 8:00 a.m.- 9:00 a.m. | SSNR11-2 | Room E351
Adding deep learning to spine MRI improves image quality, which may translate to better patient experiences and imaging efficiency, according to study results to be shared at the RSNA meeting.

Presenter Dr. Sanders Chang of the Icahn School of Medicine at Mount Sinai Hospital in New York City and colleagues applied a deep-learning algorithm called SubtleMR to spine MRI images from 43 patients, comparing them with conventional spine MRI exams for a total of 98 image sets. Two radiologist readers rated the images' diagnostic quality (usable or unusable for diagnosis), the clarity of osseous and extraosseous structures, and any artifacts. The researchers used the kappa statistic to assess interobserver agreement.
The study found that interobserver agreement was 0.85 and that there was no significant difference in diagnostic quality between the conventional images and those processed with deep learning. More than 98% of the exams showed clarity of osseous and extraosseous structures, with no significant difference between conventional and deep learning-processed images. The readers found artifacts in 30.1% of the conventional spine MRI images and in 36.1% of the images processed with the deep-learning algorithm.
Because images enhanced with deep learning proved diagnostically comparable to conventional ones, deep learning shows promise for improving workflow and, in turn, the patient experience, the team concluded in its abstract.
"Spine MRI ... enhanced by deep learning methods can produce quality similar to conventionally acquired images," it wrote. "This may help improve acquisition times and in turn allow for improved patient experience and better imaging efficiency."
This paper received a Roadie 2021 award for the most popular abstract by page views in this Road to RSNA section.

![Overview of the study design. (A) The fully automated deep learning framework was developed to estimate body composition (BC) (defined as subcutaneous adipose tissue [SAT] in liters; visceral adipose tissue [VAT] in liters; skeletal muscle [SM] in liters; SM fat fraction [SMFF] as a percentage; and intramuscular adipose tissue [IMAT] in deciliters) from MRI. The fully automated framework comprised one model (model 1) to quantify different BC measures (SAT, VAT, SM, SMFF, and IMAT) as three-dimensional (3D) measures from whole-body MRI scans. The second model (model 2) was trained to identify standardized anatomic landmarks along the craniocaudal body axis (z coordinate field), which allowed for subdividing the whole-body measures into different subregions typically examined on clinical routine MRI scans (chest, abdomen, and pelvis). (B) BC was quantified from whole-body MRI in over 66,000 individuals from two large population-based cohort studies, the UK Biobank (UKB) (36,317 individuals) and the German National Cohort (NAKO) (30,291 individuals). Bar graphs show age distribution by sex and cohort. BMI = body mass index. (C) After the performance assessment of the fully automated framework, the change in BC measures, distributions, and profiles across age decades were investigated. Age-, sex-, and height-adjusted body composition reference curves were calculated and made publicly available in a web-based z-score calculator (https://circ-ml.github.io).](https://img.auntminnie.com/mindful/smg/workspaces/default/uploads/2026/05/body-comp.XgAjTfPj1W.jpg?auto=format%2Ccompress&fit=crop&h=100&q=70&w=100)
