AI tops recent imaging informatics highlights


Artificial intelligence (AI) has been the driving force behind the most important recent advances achieved in imaging informatics, according to a presentation last week at the Society for Imaging Informatics in Medicine (SIIM) annual meeting in Denver.

For the second year in a row at the SIIM annual meeting, William Hsu, PhD, of the University of California, Los Angeles, and Dr. Po-Hao "Howard" Chen of the Cleveland Clinic shared their review of the most important imaging informatics developments of the past year or so.

This year's list of notable advances ranged from the rise of AI-powered image enhancement to a proposed U.S. Food and Drug Administration (FDA) framework for regulating continuously learning AI-based medical software devices.

GANs

In addition to supervised learning techniques that rely on human annotation of images, significant growth has occurred recently in unsupervised deep-learning algorithms, which find hidden patterns in data that haven't been labeled by humans, according to Hsu. Reinforcement learning methods -- in which algorithms automatically correct their own errors as they learn -- have also expanded. Among unsupervised techniques, generative adversarial networks (GANs) have recently come to the forefront.

An article published online March 3, 2018, at arXiv.org reported that GAN-based synthetic medical image augmentation could improve the performance of convolutional neural networks (CNNs) for classifying liver lesions, Hsu said. He also highlighted a paper published online April 17, 2018, in IEEE Transactions on Medical Imaging that reported the benefits of using a GAN for low-dose CT image denoising.

In that study, the researchers found a GAN could significantly reduce image noise while retaining critical information on CT images.

"Adversarial networks really have gained popularity as a powerful tool for generating synthetic data for denoising, image-to-image translation [such as going from MR to CT or PET/CT], [and] even anomaly detection," he noted.

Data augmentation

William Hsu, PhD, of the University of California, Los Angeles.

Hsu said that researchers are also increasingly performing data augmentation to significantly increase the size of their datasets for training algorithms. Transfer learning -- in which an algorithm is pretrained on public image repositories and then retrained for a specific clinical task, such as differentiating benign and malignant breast masses -- has also become more popular, according to Hsu.
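The two techniques are often combined in practice. The sketch below, a minimal torchvision illustration, augments a hypothetical folder of breast-mass images with random transforms and swaps the final layer of an ImageNet-pretrained ResNet for a two-class benign/malignant head; the dataset path and class setup are assumptions, not details from Hsu's talk.

```python
# Transfer learning plus data augmentation (illustrative sketch).
import torch.nn as nn
from torchvision import datasets, models, transforms

# Data augmentation: random flips and rotations multiply the effective
# size of a small training set without new annotation effort.
train_tfms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of breast-mass crops: train/benign, train/malignant.
train_ds = datasets.ImageFolder("breast_masses/train", transform=train_tfms)

# Transfer learning: start from ImageNet weights, replace the final
# layer with a two-class head, then fine-tune on the clinical task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
```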

In other network architecture trends, improvement has also been seen in activation functions -- the deep-learning components that convert a neuron's weighted sum of inputs into the output value passed to the next layer of the network, Hsu explained. Newer activation functions avoid the vanishing gradients that can keep deep networks from converging, he said.

"This type of work has enabled deeper networks that really allow us to go forward with the work," Hsu added.

Explainable AI

The field has also seen a flurry of activity related to "explainable" AI, in which algorithms show users how they arrived at their conclusion. A growing number of visualization techniques -- such as gradient-weighted class activation mapping (Grad-CAM), activation maximization, and t-distributed stochastic neighbor embedding (t-SNE) -- are being explored, according to Hsu.

"This really helps us understand what the model is focusing on," he said.

In a representative study published December 17, 2018, in Nature Biomedical Engineering, researchers from Massachusetts General Hospital in Boston found that an explainable AI algorithm yielded high sensitivity for detecting intracranial hemorrhage on unenhanced CT exams.

Importantly, the number of comparative AI studies evaluating how models can impact clinical outcomes has increased, he said.

"Some of the work that's out there evaluates machine learning in terms of when radiologists use it versus [when they do] not," he said. "These types of studies, as we go forward, will be critical to really see what the impact of machine learning is."

Hsu also noted the need for better quantitative metrics beyond pixel-level comparisons for assessing image quality.

Nonpixel applications

Dr. Howard Chen of the Cleveland Clinic.

Of course, advances in AI haven't occurred only in image-based applications. In nonpixel-based applications, machine-learning algorithms have made particular progress in areas such as natural language processing (NLP) of unstructured data; text mining of reports; and extraction of information such as time stamps, anatomic sites, and disease entities, according to Chen.

NLP can be used to process radiology reports for ground truth to support the training of AI algorithms and for quality improvement or business analytics initiatives, Chen said.
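In its simplest form, this kind of report mining can be pattern-based. The toy example below pulls a date, an anatomic site, and a finding out of a made-up report sentence; the report text and hand-built vocabularies are invented for illustration and are far simpler than the NLP systems Chen described, which typically use trained models rather than keyword lists.

```python
import re

# Invented report text for illustration only.
report = ("CT abdomen 2019-04-12: 1.4 cm hypodense lesion in the liver, "
          "suspicious for hepatocellular carcinoma. Recommend follow-up MRI.")

# Tiny hand-built vocabularies; real systems use trained NER models.
anatomy = ["liver", "lung", "kidney"]
findings = ["lesion", "nodule", "hemorrhage"]

timestamp = re.search(r"\d{4}-\d{2}-\d{2}", report).group()
sites = [a for a in anatomy if a in report.lower()]
found = [f for f in findings if f in report.lower()]
needs_followup = "follow-up" in report.lower()

print(timestamp, sites, found, needs_followup)
# 2019-04-12 ['liver'] ['lesion'] True
```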

As an example of important recent NLP research, Chen cited a study in the Journal of the American College of Radiology that found NLP could be used to examine hepatocellular carcinoma surveillance CT reports and help predict whether a patient will require further imaging down the road.

Also, researchers from NYU Langone Health reported in an article published online May 24, 2019, in the Journal of the American College of Radiology that NLP was highly sensitive and specific for identifying incidental pulmonary nodules mentioned in radiology reports.

The recent research shows that NLP can perform at acceptable levels and is fully explainable, Chen said. He also cautioned that in a dataset with few positive findings, even a model with over 90% sensitivity can yield a low positive predictive value, because at low prevalence false positives can outnumber true positives. Furthermore, NLP can be complementary to structured reporting in a clinical setting, Chen said.
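The arithmetic behind that caveat is easy to verify. Using assumed round numbers (1% prevalence, 95% sensitivity and specificity; not figures from the cited studies):

```python
# Why low prevalence depresses PPV even for a sensitive, specific model.
prevalence = 0.01      # 1% of reports contain the finding (assumed)
sensitivity = 0.95     # assumed
specificity = 0.95     # assumed

tp = prevalence * sensitivity                  # true-positive fraction
fp = (1 - prevalence) * (1 - specificity)      # false-positive fraction
ppv = tp / (tp + fp)
print(f"PPV = {ppv:.1%}")                      # ~16.1% despite 95%/95%
```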

Proposed FDA framework

Scientific advances weren't the only big news in imaging informatics over the last year and a half, according to Chen. In April, the FDA proposed a new regulatory framework for handling modifications to AI-based software.

This is important because traditional software that's considered to be a medical device has typically been locked prior to marketing, he said. The FDA noted, however, that the strength of modern machine learning and AI is that they can continuously learn. Consequently, these algorithms may provide a different output than what they were initially cleared to provide.

These algorithms can be modified in areas including performance (using more training data), input type (being used with a new scanner), and intended use (a new indication), according to Chen.

Under the proposed total product lifecycle approach, the vendor would submit software-as-a-medical-device prespecifications (SPS) and an algorithm change protocol (ACP) during the initial premarket review. The SPS defines the regions of potential future change for the software, while the ACP defines the methods for making those anticipated modifications and appropriately controlling their risks, Chen explained.

"As long as you make the changes that are within the region that is in the SPS in a way that is compliant with the ACP, the vendor or the algorithm developer would no longer need to submit for reapproval," he said.

The premarket evaluation, as well as approval of the prespecifications and algorithm change protocol, would depend on the class of medical device, its intended use, and whether the vendor follows good machine-learning practices, Chen added.
