'Mundane' AI poised to transform radiology


Deep learning may be the most publicized form of artificial intelligence (AI) in radiology, but a mundane form of AI may soon be more prominent, according to a presentation on May 5 at AuntMinnie.com's Spring 2021 Virtual Conference: Advances in AI.

In his talk on trust in deep learning in radiology, Dr. Bradley Erickson, PhD, from the Mayo Clinic, predicted that robotic process automation -- a business automation technique now used broadly in nonhealthcare industries -- will have a bigger effect on the practice of radiology and medicine over the next five years than deep learning.

"That's because the payback and the value to the healthcare system is so much greater and so much more apparent," he said.

For example, robotic process automation will enable AI algorithms to be automatically applied to specific imaging studies based on predefined criteria, as well as facilitate training of algorithms, according to Erickson.

Current AI applications

  • Regression: predicting a continuous variable from inputs, such as predicting age from a hand radiograph
  • Segmentation (measurement): identifying pixels that are part of a structure of interest, identifying which pixels are abnormal within an identified structure, and labeling each image pixel with its "type"
  • Classification (diagnosis): predicting the nature of a group of pixels, such as tumor vs. normal or malignant vs. benign. Can also incorporate nonpixel data
  • Workflow and efficiency: reducing dose and acquisition time
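
To make the regression item in the list above concrete, here is a minimal, hypothetical sketch in PyTorch (a framework choice not specified in the talk) of a network with a single continuous output, standing in for predicting age from a hand radiograph. The architecture, input size, and target values are illustrative assumptions only.

```python
# A minimal sketch (assumptions: PyTorch installed; grayscale hand radiographs
# resized to 256x256; age in years as the regression target). Illustrative only,
# not the presenter's model.
import torch
import torch.nn as nn

class AgeRegressor(nn.Module):
    """Tiny CNN mapping a 1-channel radiograph to a single continuous value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one output: predicted age in years

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

model = AgeRegressor()
dummy_batch = torch.randn(4, 1, 256, 256)   # four synthetic "radiographs"
predicted_ages = model(dummy_batch)          # shape: (4, 1)
loss = nn.MSELoss()(predicted_ages, torch.tensor([[8.0], [12.5], [15.0], [10.2]]))
```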

There's too much hype about radiologists being replaced by computers, Erickson noted. But images also contain much more information that computers can extract, such as quantitative values that are laborious and tedious to acquire routinely today, as well as unseen information such as molecular markers.

"I think it's key that we engage AI scientists to ensure that AI implementation is optimal for the care of patients, and they're not just IT projects," he said.

Although the focus of AI in radiology has so far been largely on classification tasks, predictive modeling will likely be an increasing area of emphasis. These algorithms could predict, for example, a patient's risk of a certain disease based on their current images, according to Erickson.

"I think that's really where a lot of AI's value is likely to be unlocked, not so much in doing diagnosis like we do now, but more into the predictive modeling based on not just images, but all of the clinical information put together -- something that's relatively hard for humans to integrate," Erickson said.

Trusting AI

Can radiologists trust AI algorithms? Saliency maps are extremely valuable, as they give an idea of which pixels the algorithm used to make a prediction. But they can also create more questions and are far from sufficient for completely understanding what a convolutional neural network is using to make its prediction, according to Erickson.
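
As an illustration of how such a map can be produced, the sketch below computes a simple gradient-based saliency map with PyTorch. The trained classifier and preprocessed image tensor are hypothetical placeholders, and this is only one of several saliency techniques, not a specific method described in the talk.

```python
# A minimal sketch of a gradient-based saliency map (assumes a trained PyTorch
# classifier `model` and a preprocessed image tensor `image` of shape
# (1, C, H, W); both are hypothetical placeholders).
import torch

def saliency_map(model, image):
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. pixels
    scores = model(image)                        # class scores, shape (1, num_classes)
    top_score = scores.max()                     # score of the predicted class
    top_score.backward()                         # d(score)/d(pixel) for every pixel
    # Absolute gradient magnitude per pixel, collapsed over channels.
    return image.grad.abs().max(dim=1)[0].squeeze(0)

# saliency = saliency_map(model, image)   # higher values = pixels that most
#                                         # influenced the prediction
```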

Dr. Bradley Erickson, PhD, from the Mayo Clinic.

It's usually pretty easy to trust AI when the focus of the task is improving efficiency, such as with image segmentation, Erickson said.

"However, for tasks where AI finds something not visible or predictive, we now have to move to something like 'trust, but verify,'" Erickson said. "So you can run it on a number of cases, but then you need to go back and take a look at, were those predictions or classifications correct over a larger population of people."

Training data can be another source of uncertainty. Data needs to be curated and annotated, and the quality of curation and annotation will have a significant effect on results, he said. There also needs to be enough data to ensure that the deep-learning algorithm won't zero in on some spurious association to make predictions.

Data privacy and bias resistance are also important issues.

"We have to be very attentive to the data that's used and how robust the algorithm is to some of the variations that may be seen in clinical practice," Erickson said.

When considering whether to trust the data sources used in AI, it's important to weigh whether the training data was sufficiently diverse and whether it matches the local patient population, Erickson said.
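
One way to operationalize that check is to compare basic covariates of the training cohort against the local population. The sketch below assumes pandas and SciPy and hypothetical DataFrames with "age" and "sex" columns; it is an illustration of the idea, not a validated site-qualification procedure.

```python
# A minimal sketch comparing a model's training population with the local
# patient population (the DataFrames and column names are hypothetical).
import pandas as pd
from scipy import stats

def compare_populations(train_df: pd.DataFrame, local_df: pd.DataFrame) -> dict:
    # Continuous covariate: two-sample KS test on age distributions.
    ks_stat, ks_p = stats.ks_2samp(train_df["age"], local_df["age"])
    # Categorical covariate: chi-square test on sex proportions.
    table = pd.DataFrame({
        "train": train_df["sex"].value_counts(),
        "local": local_df["sex"].value_counts(),
    }).fillna(0)
    chi2, chi_p, _, _ = stats.chi2_contingency(table)
    return {"age_ks_p": ks_p, "sex_chi2_p": chi_p}

# Low p-values suggest the training cohort differs from the local population
# on that covariate, prompting a closer look before deployment.
```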

Monitoring performance

Another concern is "drift" in imaging devices over time, due, for example, to new image reconstruction kernels or implementation of a low-dose imaging program.

"All of those almost certainly will produce some change in the performance on your AI tools that were trained on a different type of image," Erickson said. "So you have to have some method of monitoring that 'drift' and assessing performance changes when software upgrades occur."

As a result, it's critical for doctors to learn about AI. The technology has incredible potential to improve patient care, but it also has great potential to harm patients, he said.

"AI is incredibly powerful, but with power comes great responsibility," he said. "In some cases we may not understand the power. This may be subtle textures or complex interplay of features that we can't fully comprehend, and the features and predictions may lead to unintended outcomes. And that's where we need to be very careful."

Integrating algorithms

Currently, AI algorithms are trained to solve specific problems, leading to a proliferation of products that each address a very narrow task. An intracranial hemorrhage detector will not, for example, also tell you that there's a nasal bone fracture, Erickson noted.

"What we really need is a platform that can integrate them all or have some way so that we don't have to look at 100 different outputs when we read a head CT," he said. "And that's going to be the challenge, how do we take all of these outputs and put them nicely into the PACS?"

The Integrating the Healthcare Enterprise (IHE) initiative has begun to tackle this problem with work on an AI Results (AIR) profile, Erickson said.

With an ecosystem of AI tools expected in the clinical environment, it's undesirable to have each radiologist select and launch every tool they want for a given study, Erickson said.

"Furthermore, the information required by the AI tools will become more and more complex," he said. "Just feeding it pixels is not adequate; having nonpixel data, [such as] EMR data like age or gender, perhaps lab results like the white blood count, will all improve the AI's ability to make a diagnosis."

Robotic process automation

Enter robotic process automation, which has seen significant adoption in nonhealthcare industries such as manufacturing. With this approach, predefined criteria would determine which AI tools are applied to each imaging study. The outputs would then be integrated using existing informatics standards such as AIR, Erickson said.
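
A minimal sketch of what such predefined routing criteria might look like is shown below. The study attributes, rule format, and tool names are hypothetical; a production system would operate on DICOM metadata and hand results back through standards such as AIR rather than plain Python dictionaries.

```python
# A minimal rule-based routing sketch (all study attributes and tool names
# are hypothetical placeholders).
RULES = [
    # (predicate on study metadata, AI tools to invoke)
    (lambda s: s["modality"] == "CT" and s["body_part"] == "HEAD",
     ["hemorrhage_detector", "stroke_triage"]),
    (lambda s: s["modality"] == "CR" and s["body_part"] == "HAND",
     ["bone_age_estimator"]),
]

def route_study(study: dict) -> list[str]:
    """Return the AI tools to apply to a study, based on predefined criteria."""
    tools = []
    for predicate, tool_names in RULES:
        if predicate(study):
            tools.extend(tool_names)
    return tools

print(route_study({"modality": "CT", "body_part": "HEAD"}))
# -> ['hemorrhage_detector', 'stroke_triage']
```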

"I think over the long term that's going to be the most cost-effective [approach]," he said.

Robotic process automation will handle the aspects that don't need to be done by humans, letting humans focus on what they need to focus on, he said. It can also reliably create data for training AI algorithms.

"I think this is also going to accelerate our ability to develop AI tools, because the data collection is going to be consistent, it's going to be robust," he said. "And the same way it collects data for training purposes will also then be used in the clinical implementation, so the challenge of 'drift' is likely to be lower."
