ISMRM: AI can generate synthetic contrast-enhanced MRI

By Erik L. Ridley, AuntMinnie staff writer
May 17, 2019

MONTREAL - An artificial intelligence (AI) algorithm can produce high-quality synthetic contrast-enhanced brain MR images, potentially helping clinicians avoid the use of gadolinium-based contrast agents, according to a presentation Thursday at the International Society for Magnetic Resonance in Medicine (ISMRM) annual meeting.

After analyzing a series of noncontrast-enhanced images, an attention generative adversarial network (GAN) can synthesize postcontrast T1-weighted images that greatly resemble the patients' actual contrast-enhanced images, according to co-author and presenter Kevin Chen, PhD, of Stanford University in Palo Alto, CA.

"In addition, the model can successfully predict some enhancing lesions that are not shown in the precontrast images," he said. "With further validation, this approach could have great clinical value, especially for patients who cannot receive contrast injection."

Chen presented the research on behalf of first author Jiang Liu of Tsinghua University in Beijing. The team also included researchers from Grenoble Institute of Neurosciences in France and AI software developer Subtle Medical.

Although gadolinium-based contrast agents (GBCAs) have long been used in clinical practice to improve the visibility of pathology and the delineation of lesions on MRI, these agents have several drawbacks, Chen said. For example, some patients, particularly those with severe kidney disease, can't receive GBCAs. In addition, the medical community has become increasingly concerned about the safety of these agents after they were linked to the development of nephrogenic systemic fibrosis and to the deposition of gadolinium within the brain and body. These agents also increase the cost of MRI.

In hopes of acquiring the same enhancing information without actually injecting a contrast agent, the researchers sought to apply deep learning to synthesize gadolinium-enhanced T1-weighted images from precontrast images.

"If so, we can potentially avoid using the contrast agents to prevent the side effects [of GBCAs] and reduce cost," Chen explained.

The researchers retrospectively gathered a dataset of 404 adult clinical patients being evaluated for suspected or known enhancing brain abnormalities. This cohort encompassed a wide range of indications, pathologies, ages, and genders, according to Chen. Of these 404 patients, 323 (80%) were randomly chosen for training the model, with the remainder set aside for testing.
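The 80/20 random split described above can be sketched as follows (the patient identifiers and the fixed seed are illustrative; the study does not describe its splitting code):

```python
import random

def split_patients(patient_ids, train_fraction=0.8, seed=42):
    """Randomly split patient IDs into training and testing sets."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = patient_ids[:]
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_fraction)
    return shuffled[:n_train], shuffled[n_train:]

# 404 patients, as in the study cohort
train_ids, test_ids = split_patients(list(range(404)))
print(len(train_ids), len(test_ids))  # 323 training, 81 testing
```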

All patients were scanned on a 3-tesla MR scanner with an eight-channel head coil (GE Healthcare). The precontrast imaging protocol included acquisition of 3D inversion recovery (IR)-prepped fast spoiled GRASS (FSPGR) T1-weighted images, T2-weighted images, T2*-weighted images, diffusion-weighted images (DWI) with two b-values (0 and 1,000 s/mm²), and 3D arterial spin labeling images. Next, 3D IR-prepped FSPGR T1-weighted images were acquired after the patients received 0.1 mmol/kg of gadobenate dimeglumine contrast.

After data preprocessing, the images were used to train the attention GAN, which is built from convolutional neural networks (CNNs). The network received the acquired precontrast images as 2.5D inputs of five adjacent slices each. Two "blocks" of the U-Net CNN then worked together as the GAN's "generator," learning to synthesize postcontrast images from the input precontrast images. A six-layer CNN acted as the GAN's "discriminator," trained to distinguish the synthetic images from the acquired ground-truth postcontrast T1-weighted images, according to Chen.
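A 2.5D input of this kind stacks each target slice with its neighbors as channels. A minimal sketch, assuming one single-contrast volume and illustrative array shapes (the actual preprocessing and channel layout are not described in the presentation):

```python
import numpy as np

def make_25d_inputs(volume, n_slices=5):
    """Stack n_slices adjacent slices as channels for each target slice.

    volume: 3D array of shape (depth, height, width) for one contrast.
    Returns an array of shape (depth, n_slices, height, width); edge
    slices are handled by clamping indices to the volume bounds.
    """
    half = n_slices // 2
    depth = volume.shape[0]
    stacks = []
    for z in range(depth):
        idx = [min(max(z + off, 0), depth - 1) for off in range(-half, half + 1)]
        stacks.append(volume[idx])
    return np.stack(stacks)

vol = np.random.rand(32, 64, 64)  # toy single-contrast volume
inputs = make_25d_inputs(vol)
print(inputs.shape)               # (32, 5, 64, 64)
```

In the study, each of the precontrast series (T1-weighted, T2-weighted, T2*-weighted, DWI, arterial spin labeling) would presumably contribute its own stack of channels.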

"This forces the generator to produce high-quality images," Chen said.
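The adversarial setup described above can be sketched roughly as follows, with a stand-in convolutional generator and discriminator (the actual attention GAN and U-Net configuration are not specified in the presentation, so all layer sizes and losses here are illustrative):

```python
import torch
import torch.nn as nn

# Stand-in generator: maps a 5-channel 2.5D precontrast stack to one
# synthetic postcontrast slice (a placeholder for the U-Net in the talk).
generator = nn.Sequential(
    nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

# Stand-in discriminator: classifies a slice as real or synthetic.
discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.LazyLinear(1),  # real-vs-fake logit
)

precontrast = torch.randn(4, 5, 64, 64)  # batch of 2.5D inputs
real_post = torch.randn(4, 1, 64, 64)    # acquired postcontrast slices

fake_post = generator(precontrast)
adv = nn.BCEWithLogitsLoss()
# The discriminator learns to separate real from synthetic slices...
d_loss = adv(discriminator(real_post), torch.ones(4, 1)) + \
         adv(discriminator(fake_post.detach()), torch.zeros(4, 1))
# ...which forces the generator to produce realistic postcontrast images.
g_loss = adv(discriminator(fake_post), torch.ones(4, 1)) + \
         nn.functional.l1_loss(fake_post, real_post)
```

In training, the two losses would be minimized in alternation, updating the discriminator and generator in turn.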

Using two similarity metrics, the researchers then evaluated the GAN's performance by comparing the synthetic contrast-enhanced T1-weighted images produced on the testing dataset with the ground truth gadolinium-enhanced T1-weighted images. They found that the synthetic images were highly similar to the actual contrast-enhanced images.

Similarity to actual contrast-enhanced T1-weighted images
                                  Precontrast T1-weighted   Synthetic contrast-enhanced T1-weighted
Structural similarity index       0.836 ± 0.055             0.939 ± 0.031
Peak signal-to-noise ratio (dB)   19.6 ± 6.5                27.5 ± 5.6
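The two similarity metrics in the table can be computed as in the sketch below, using a simplified global SSIM rather than the usual windowed variant (the presentation does not specify the exact implementation or intensity range):

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - image) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Structural similarity computed over whole images (no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

truth = np.random.rand(64, 64)
close = np.clip(truth + np.random.normal(0, 0.01, truth.shape), 0, 1)
print(psnr(truth, close))         # high PSNR: images nearly identical
print(global_ssim(truth, close))  # close to 1.0
```

Higher values on both metrics indicate closer agreement with the acquired contrast-enhanced images, which is what the synthetic images achieved over the plain precontrast images.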

Chen noted, however, that the model still fails to predict some areas of contrast uptake, a shortcoming that may stem from a lack of similar cases in the training set.

"In the future, we will need to include more cases with enhancing lesions in the training set to improve the performance," he said. "We are also thinking about using multitasking techniques, for example, combining lesion detection with contrast synthesis."


Copyright © 2019

Last Updated 5/21/2019 12:49:00 PM

5 comments so far ...
5/17/2019 5:03:23 AM
Nice Work.

5/17/2019 7:06:13 AM
So this means there were clues to whether something would enhance even on the noncontrast examinations all along. However, our inferior eyes and brains weren't able to pick that up.

Umm. To our future skynet machine overlord, let it be known that I yielded to your supremacy on this day. Remember that when you conclude we're nothing more than wasted bags of protoplasm.

5/17/2019 6:30:38 PM
Great work but the problem is in statements like this (emphasis mine):
"...synthesize postcontrast T1-weighted images that greatly resemble the patients' actual contrast-enhanced images..."

GANs essentially generate images based on what the network believes a real image should look like, even when guided by real inputs. This means the AI will often generate a few pixels of enhancement that aren't real, or miss a few regions that are, when compared with actual postcontrast studies. Being even a few pixels off matters a lot for some pathologies, and by sheer numbers a few large regions will eventually be completely off.
A lot more research is needed before this is even close to being useful clinically.

5/18/2019 4:20:56 PM
I have a kindergartner who can draw bright colors on worksheets too.  

5/18/2019 7:22:31 PM
That's pretty funny right there...