AI shows promise for accurately segmenting oropharyngeal squamous cell carcinoma (OPSCC) tumors on MRI without the use of contrast agents, according to a study published February 25 in Diagnostics.
The finding could shift imaging workflows for patients who cannot safely receive gadolinium-based contrast, co-first author and data scientist Ramkumar Rajabathar Babu Jai Shanker of the University of Chicago told AuntMinnie.
"For patients, [the study results matter] because gadolinium-based contrast agents aren't risk-free," he said. "Patients with impaired kidney function are at risk for nephrogenic systemic fibrosis, and allergic reactions, while uncommon, do occur. There's also growing attention to gadolinium deposition in the brain with repeated exposures, which is particularly relevant for cancer patients who undergo serial imaging. If AI can extract clinically useful tumor segmentations from non-contrast scans, that could open a path toward contrast-sparing workflows in radiotherapy planning."
OPSCC is among the fastest-growing head and neck malignancies by incidence, and accurate tumor delineation is critical for radiation treatment planning, wrote a team led by co-first author Viktoriia Zarovniaeva, MD, also of the University of Chicago. Yet manual segmentation is time-consuming and inconsistent between clinicians. Deep-learning tools have shown promise in automating the task, but until now researchers had not compared the performance of contrast-enhanced versus noncontrast MRI sequences for automated segmentation, the group noted.
Zarovniaeva and colleagues analyzed pretreatment MRI exams from 39 HPV-positive OPSCC patients using the nnU-Net deep learning framework. They trained and evaluated three separate model configurations: contrast-enhanced T1-weighted fat-suppressed (CE-only), T2-weighted fat-suppressed (T2-only), and a combined CE + T2 approach. They then measured each model's performance using the Dice score as the primary metric, along with Surface Dice at 2 mm, volumetric agreement, and blinded qualitative reader evaluations.
The group reported that median Dice scores were consistent across all three configurations: 0.63 for the combined approach, 0.60 for T2-only, and 0.55 for CE-only. Neither paired comparison (combined vs. T2-only, p = 0.11; CE-only vs. T2-only, p = 0.98) reached statistical significance.
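For readers unfamiliar with the metric, the Dice score measures overlap between a model's segmentation mask and a reference mask, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch of the computation on binary masks (illustrative only, not the study's code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: a 4 x 4 slice with a 4-voxel "tumor"
truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1          # reference mask, 4 voxels
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:2] = 1           # model captures half of it, 2 voxels
print(round(dice_score(pred, truth), 2))  # 2*2 / (2+4) = 0.67
```

Scores in the 0.55 to 0.63 range, as reported here, indicate substantial but imperfect overlap, which is why the team paired the metric with surface-distance and qualitative evaluations.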
The qualitative reader evaluation showed acceptability rates that did not differ significantly across configurations. The researchers did note that all three models tended toward undersegmentation, with median gross tumor volume differences of -1.31 mL (combined), -1.29 mL (T2-only), and -1.49 mL (CE-only). They also reported that tumor size was a key predictor of segmentation success: small tumors under 3 mL showed markedly lower Dice scores and rejection rates as high as 50% in the qualitative review.
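The volume differences above come from comparing the gross tumor volume (GTV) implied by each mask, which is just the voxel count scaled by the voxel size. A hedged sketch of that comparison, using a hypothetical voxel spacing (real values come from the image header):

```python
import numpy as np

def gtv_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 3.0)) -> float:
    """Gross tumor volume in mL from a binary mask and voxel spacing in mm.

    The spacing here is a made-up example; in practice it is read from
    the DICOM or NIfTI header of the MRI series.
    """
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # mm^3 -> mL
    return float(mask.astype(bool).sum()) * voxel_ml

# Toy case: the model misses the last tumor-bearing slice
truth = np.zeros((64, 64, 10), dtype=np.uint8)
truth[20:40, 20:40, 2:8] = 1            # 2,400 voxels = 7.2 mL
pred = truth.copy()
pred[20:40, 20:40, 7] = 0               # 400 voxels dropped

# Predicted minus reference: negative means undersegmentation,
# matching the sign convention of the study's reported differences
print(gtv_ml(pred) - gtv_ml(truth))
```

A negative difference, as in all three of the study's configurations, means the model's contour is systematically smaller than the reference, a clinically relevant failure mode in radiotherapy planning where missed tumor tissue goes untreated.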
The study findings underscore AI's potential to reshape the imaging process, according to Babu Jai Shanker.
"Most of the conversation around AI in radiology focuses on detection and diagnosis -- that is, finding things on scans," he told AuntMinnie. "But this study points to a different and potentially underappreciated role: AI could help simplify the imaging process itself. If models can perform reliably on less complex acquisitions, that has downstream implications for scan time, patient safety, cost, and accessibility, while still keeping physicians firmly in the loop."
The group has made its code publicly available on GitHub.
Access the full study here.