
A hybrid artificial intelligence (AI) model that incorporates dynamic contrast-enhanced MRI (DCE-MRI) images acquired prior to chemotherapy accurately predicts complete treatment response in breast cancer patients, researchers have found.
A team led by Priyanka Khanna from the National Institute of Technology Raipur in India found near-perfect accuracy in both hold-out and 10-fold cross-validations with their machine-learning model, which they suggest could help guide treatment for patients unlikely to show a complete response to conventional therapy. The findings were published December 11 in Measurement.
"Our early response model has the potential to identify the non-responder patients undergoing therapy, thus minimizing toxicity and facilitating alternate treatment plans," Khanna and co-authors wrote.
AI continues to show potential in breast imaging, whether in detecting suspicious findings on initial imaging or in forecasting treatment response. Early detection of breast cancer improves prognosis by enabling better treatment strategies and can save time and costs.
Neoadjuvant chemotherapy is typically used for stage II or III breast cancer to reduce tumors and avoid mastectomy, but some patients do not respond well to this method. DCE-MRI is typically preferred to predict therapy response, with proponents highlighting its ability to accurately measure tumor dimensions.
Khanna and colleagues wanted to explore the efficacy of an automated system to assist radiologists in predicting chemotherapy response by using baseline breast MRI tumor datasets in their model. The group integrated DCE-MRI images acquired before the start of chemotherapy into a pre-trained convolutional neural network (CNN) with machine learning, using ResNet-50 and ResNet-18 for feature extraction.
The researchers tested the model on data from 64 women who received chemotherapy for breast cancer treatment.
They found that the hybrid model had an accuracy of 99.8% and yielded an area under the receiver operating characteristic curve (AUROC) of 1 for the hold-out validation. For 10-fold cross-validation, accuracy was 99.3% while the AUROC was 0.99.
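The 10-fold cross-validation protocol behind these numbers can be sketched with scikit-learn. The synthetic features, the SVM classifier, and all settings below are assumptions for illustration; only the cohort size of 64 comes from the article.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for 64 patients' deep-image feature vectors.
X, y = make_classification(n_samples=64, n_features=50, random_state=0)

clf = SVC(probability=True, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Mean accuracy and AUROC across the 10 folds.
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
print(f"accuracy: {acc:.3f}, AUROC: {auc:.3f}")
```

Stratified folds keep the responder/non-responder ratio roughly constant across splits, which matters with a cohort this small.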
The study authors wrote that they intend to extend their work to larger and additional datasets, as well as to evaluate deep learning-based segmentation methods for detecting regions of interest, among other optimizations.
"For easy access by radiologists and oncologists, the deep-learning model can be integrated into an interactive web application," they noted.


















