The adoption of deep learning-based image reconstruction methods, the proposal of an artificial intelligence (AI)-friendly regulatory policy, and new approaches aimed at training more robust AI models rank among the most important advances in 2019 for imaging informatics.
Dr. Eliot Siegel of the University of Maryland believes the implementation of deep learning in image reconstruction algorithms for use on commercial CT, MRI, and PET/CT scanners is the biggest development in imaging informatics in 2019.
"[This] will have a major impact over the next five years in reducing the amount of time required for MRI and PET/CT studies, will reduce the loss of texture which is associated with iterative reconstruction algorithms in CT, and will reduce the amount of radiation for CT and PET/CT substantially," Siegel told AuntMinnie.com. "New scanners are just beginning to incorporate deep-learning reconstruction, and third-party companies are offering this technology for older machines."
The potential of this technology was shown, for example, in a recent paper in Radiology: Artificial Intelligence that found deep learning-based reconstruction improved the quality of abdominal CT images for evaluating hypovascular hepatic metastases.
"Techniques that improve image reconstruction/time/quality will initially have higher clinical value than those that utilize an algorithm to detect subtle disease," Siegel said. "In the case of this [research] article and in my opinion, producing high-quality images that depict liver metastatic disease is more important at this point than algorithms that can detect liver metastatic disease from lower-quality images. Of course, algorithms that help radiologists and clinicians find disease are also very exciting and will be highly impactful in the next few years, and the combination of both will be quite powerful."
Another study found that a deep-learning method could reconstruct accelerated cardiac MR images that look just as good as those created with another deep-learning method, but with only one-tenth of the parameters, said Dr. Po-Hao "Howard" Chen of the Cleveland Clinic.
The U.S. Food and Drug Administration (FDA) proposal in April of a new regulatory framework for AI algorithms that continuously learn also was a very important development, continuing the FDA's recent efforts to streamline and speed up the approval process for AI software. The proposed framework shares the FDA's vision for an abbreviated approval process for algorithms that can continuously learn from local data after deployment. Previously, the agency has generally only cleared "locked" algorithms that don't adapt as they are used.
"No algorithm that allows local 'learning' has yet been FDA-cleared, but the FDA is now apparently open to the concept," Siegel said. "This is in addition to the other efforts at the FDA to accelerate the process of clearing AI algorithms such as the FDA precertification program that was announced in 2018, which streamlines clearance by approving the process used at a company rather than just at the specific algorithm level."
An increasing number of radiology AI algorithms hit the market in 2019, particularly for triage applications that identify the likely presence of critical conditions -- such as acute intracranial hemorrhage, acute stroke, and pneumonia -- and then alert radiologists, according to Chen.
Important progress has also been made in developing interpretable and explainable algorithms. These are able to explain -- in terms that radiologists and clinicians can understand -- how they arrived at their findings, said William Hsu, PhD, of the University of California, Los Angeles (UCLA).
Following up on several similar presentations at recent meetings, Hsu and Chen will team up on a talk at RSNA 2019 in Chicago to review the most important imaging informatics advances. That presentation will be given on Friday, December 6, from 8:30 a.m. to 10:00 a.m. in room E450A.
This year also featured a focus on building more robust models that perform consistently across different types of inputs, including noisy images and acquisitions from different scanner vendors, Hsu said.
As large image datasets are hard to come by, there's also been growing interest and research activity involving federated learning, a method of training algorithms using data from different institutions without sharing the actual image data. In this approach, models trained at each site on their local data are then combined into a "meta-model," Siegel said.
The optimal way to implement federated learning is still an open question, however.
"Although compelling, [federated learning] has proven more elusive than originally thought, but is still a promising and appealing strategy," Siegel said.
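The "meta-model" combination Siegel describes is often implemented as federated averaging: each institution trains on its own data, and only model weights and sample counts leave the site. A minimal sketch of that idea, using simple least-squares models and NumPy purely for illustration (this is not any vendor's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the underlying relationship all sites share

def local_fit(n):
    """Train a model at one site on its private data (least squares here)."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n

# Each institution trains locally; raw images never leave the site.
site_results = [local_fit(n) for n in (50, 80, 120)]

# Combine into a "meta-model" via sample-weighted averaging (FedAvg-style).
total = sum(n for _, n in site_results)
meta_w = sum(w * (n / total) for w, n in site_results)
print(meta_w)  # close to true_w, learned without pooling any data
```

In practice the open questions Siegel alludes to include how often to average, how to weight sites with very different data distributions, and how to handle sites whose scanners or labeling conventions differ.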
Another important development was the publishing in April of a road map for foundational research on AI, produced as a summary of a 2018 workshop on medical imaging in AI held by the U.S. National Institute of Biomedical Imaging and Bioengineering (NIBIB), Hsu said. The workshop was also cosponsored by the RSNA, the American College of Radiology (ACR), and the Academy for Radiology & Biomedical Imaging Research.
Hsu also pointed to efforts to build "end-to-end" models that perform both localization and risk assessment of pathology, such as a May paper in Nature Medicine that showed the potential for a deep-learning algorithm to accurately assess cancer risk on CT lung cancer screening studies.
"I think it just shows the promise of a lot of these techniques coming together to really improve diagnostic accuracy of these exams with a high false-positive rate," Hsu said. "My hope in seeing these types of works is that the technology is certainly maturing in these spaces, and now it's [about] how do we bridge the gap between what [we] see on technical and research validation to something that's more clinically validated."
Research continues to advance on exploring the benefits of radiomics and AI for predicting outcomes or selecting optimal treatment approaches, Hsu said. For example, a study published in July in Lancet Digital Health showed that a deep learning-based model and radiomic features could be used to individualize radiotherapy dose.
Organ segmentation also has substantially improved in 2019, Siegel said.
"This will be an enabling technology for many other algorithms yet to be developed and will have implications in visualization, quantification, and analysis," Siegel said. "A good deal of very interesting work has been performed and published at Mayo Clinic by [Dr. Bradley Erickson, PhD,] and colleagues."
Siegel also noted the potential of ensembles, a technique that combines multiple machine-learning algorithms that attempt to answer the same question. For example, a paper published in Radiology: Artificial Intelligence on November 20 described the use of ensembles in an RSNA Challenge for assessing bone age in children from hand radiographs, he said.
"As is the case with human second and additional opinions, the best combinations came from approaches that used different rather than similar strategies to create the machine-learning model," Siegel said. "A variant of ensembles is the use of multiple collaborative algorithms such as one that performs organ segmentation and finds the adrenal gland, working in conjunction with another algorithm that predicts the probability of malignancy using CT or MRI images once the organ has been segmented."
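The ensemble principle Siegel describes -- pooling the answers of several models, much like collecting second opinions -- can be sketched in a few lines. The winning RSNA bone-age entries used far more sophisticated deep-learning members, so the deliberately simple polynomial "models" below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 0.5 * x**2 + rng.normal(scale=2.0, size=200)  # stand-in continuous target

# Three members built with different strategies, echoing the finding that
# diverse models combine better than near-identical ones.
models = [
    np.poly1d(np.polyfit(x, y, 1)),   # linear fit
    np.poly1d(np.polyfit(x, y, 2)),   # quadratic fit
    np.poly1d(np.polyfit(x, y, 3)),   # cubic fit
]

def ensemble_predict(x_new):
    """Average the member predictions, like pooling independent opinions."""
    preds = np.array([m(x_new) for m in models])
    return preds.mean(axis=0)

x_test = np.array([2.0, 5.0, 8.0])
print(ensemble_predict(x_test))
```

Averaging tends to cancel the members' uncorrelated errors, which is why mixing genuinely different modeling strategies, rather than retraining the same architecture, gave the best challenge results.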
In other developments, 2019 has also been characterized by continuing and evolving efforts to create AI platforms to make it easier for users to buy and implement algorithms.
"More companies [are] getting into platform delivery, with the concept of a platform varying from merely a central index with access to multiple AI algorithms, up to a platform with a single viewer that allows interaction with the AI algorithms themselves to provide real-time collaboration/fine tuning of results along with a human observer," Siegel said.
Due to the scarcity of high-quality datasets available for training algorithms, many AI projects wind up using very similar image data. As a result, there's been growing awareness in 2019 of the need to track data lineage -- the "family tree" of a dataset -- to avoid ending up with multiple copies of effectively similar projects built on the same data, Chen said. Lineage tracking also matters when an image used in an earlier project later receives an updated annotation, such as a new cancer diagnosis.
"Where the data is being modified in the tree of lineages is now beginning to be paid attention to," Chen said.
Extracting textual information
Another important area of research and potential clinical implementation in 2019 is the use of deep learning to extract information from radiology reports or electronic medical records (EMRs), Siegel said. For example, a recent paper in Radiology: Artificial Intelligence described the benefits of neural networks for organ-level classification of pathology reports, suggesting that these networks could be feasibly used to incorporate surgical pathology monitoring into an automated imaging recommendation follow-up engine.
Also, a paper published November 16 in the Journal of the American College of Radiology found that natural language processing could reliably automate the identification of incidental pulmonary nodules in radiology reports, Chen said.
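Report-mining systems like those described above typically combine concept detection with negation handling, since "no pulmonary nodules" must not trigger a follow-up. A toy rule-based pass conveys the idea; the published systems are far more sophisticated, and the regex patterns and helper name here are purely hypothetical:

```python
import re

# Illustrative patterns (assumptions, not the actual published NLP system):
# flag reports that mention a nodule outside an obvious negation context.
NODULE = re.compile(r"\b(pulmonary |lung )?nodules?\b", re.IGNORECASE)
NEGATED = re.compile(r"\b(no|without|negative for)\b[^.]*\bnodules?\b",
                     re.IGNORECASE)

def flags_incidental_nodule(report: str) -> bool:
    """Return True if any sentence mentions a nodule and is not negated."""
    for sentence in report.split("."):
        if NODULE.search(sentence) and not NEGATED.search(sentence):
            return True
    return False

print(flags_incidental_nodule(
    "Incidental 6 mm pulmonary nodule in the right upper lobe."))  # True
print(flags_incidental_nodule("No suspicious pulmonary nodules."))  # False
```

Even this crude sketch shows why automation is attractive: incidental findings buried in free text are easy for downstream clinicians to miss, and a reliable extractor can feed them into a follow-up engine.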
"I think a theme that we're learning throughout 2019 is that the most interesting work that's being done is not the science that's being done or how the algorithms have been built -- it's how you use it," Chen said. "And they are being used in very interesting ways and very practical ways."