5 reasons why imaging AI is different from CAD

By Elad Walach, AuntMinnie.com contributing writer

October 29, 2018 -- Artificial intelligence (AI) in medical imaging is on the rise and already becoming part of the standard of care. But with all the hype, one must ask: Is next-generation imaging AI simply a new take on computer-aided detection/diagnosis (CAD)? Are we witnessing an evolution, or perhaps a revolution?

This article gives five reasons why next-generation imaging AI is fundamentally different from traditional CAD, and it will show how AI is an entirely new paradigm in computer-aided imaging. There are two components to this new paradigm: deep learning as the core technology, and an evolving workflow integration.

These two components are far from becoming a commodity, but AI solutions based on them will revolutionize the types of products made available to radiologists.

1. Next-gen AI solutions can be like a personal assistant.

Let's begin with a basic truism: The new generation of AI solutions is proving to be more accurate than traditional CAD (in some cases, by a huge margin). Several breakthroughs in AI technology, mainly the deep-learning revolution, the increase in medical AI research, and the opening of large annotated datasets, have led to significant improvements in the levels of accuracy in AI algorithms.

Elad Walach, CEO of Aidoc.

Moreover, deep-learning technologies are designed to interpret large amounts of data, learn from mistakes, and improve over time. This makes them not only more accurate but also actually more robust to changing circumstances, including different types of scanners and diverse patient populations (provided that the training data are available).

There are even systems that have reached a performance level that almost matches that of a radiologist (albeit on very narrow use cases). We are reaching the point where new-generation AI systems are sensitive enough to detect minute image changes that are challenging to most human eyes.

In fact, the biggest leap of AI is actually in the improved specificity (reduced false-positive rate) of these solutions. This, in turn, allows a much smoother user experience. The result is that analysis speed is increased, time to treatment is accelerated, and diagnostic quality is improved.

Great strides in AI accuracy have been demonstrated recently by an ophthalmologist at the University of Iowa, who developed software that uses AI to detect diabetic retinopathy without a person interpreting the results. Now cleared by the U.S. Food and Drug Administration, IDx-DR exceeded all prespecified superiority end points in sensitivity, specificity, and imageability -- proving that, in theory, the accuracy of such systems could be on par with that of a trained radiologist.
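For readers less familiar with these endpoints, sensitivity and specificity come straight from a confusion matrix. A minimal sketch, using purely illustrative counts rather than figures from the IDx-DR trial:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of diseased cases correctly flagged
    specificity = tn / (tn + fp)  # fraction of healthy cases correctly cleared
    return sensitivity, specificity

# Illustrative counts only -- not data from any cited study.
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=92, fp=8)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.87, specificity=0.92
```

Improving specificity, as described above, means shrinking `fp` without sacrificing `tp` -- fewer false alarms for the radiologist to dismiss.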

2. Next-gen AI solutions have comprehensive capabilities.

There is a second dramatic benefit to companies that adopt deep learning as their core technology. Deep-learning systems are not only more accurate but also much more scalable. If a company builds the right infrastructure, it can utilize the same engine that detects intracranial hemorrhages to detect cervical spine fractures or pulmonary embolisms, resulting in broad coverage of radiology workflow.
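One common way to achieve that kind of scalability is a shared feature-extraction backbone with a small, cheap "head" per finding type. The toy sketch below illustrates the idea only; it is not Aidoc's actual architecture, and all weights are random stand-ins:

```python
import math
import random

random.seed(0)
FEATURE_DIM = 8

def rand_vec(n):
    return [random.gauss(0, 1) for _ in range(n)]

# One shared "backbone": a single weight matrix reused by every task
# (16 input pixels -> 8 shared features).
W_shared = [rand_vec(FEATURE_DIM) for _ in range(16)]

# Lightweight per-task heads: supporting a new finding type means
# adding one small weight vector, not building a whole new system.
task_heads = {
    "intracranial_hemorrhage": rand_vec(FEATURE_DIM),
    "c_spine_fracture": rand_vec(FEATURE_DIM),
    "pulmonary_embolism": rand_vec(FEATURE_DIM),
}

def detect(pixels):
    """Run the shared backbone once, then score every task head."""
    features = [math.tanh(sum(p * w[j] for p, w in zip(pixels, W_shared)))
                for j in range(FEATURE_DIM)]
    return {task: 1 / (1 + math.exp(-sum(f * h for f, h in zip(features, head))))
            for task, head in task_heads.items()}

scan = rand_vec(16)      # stand-in for a real image
scores = detect(scan)    # one probability-like score per finding type
```

The design point is that the expensive part (the backbone) is computed once and amortized across all tasks, which is why one engine can cover hemorrhages, fractures, and embolisms alike.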

In contrast, typical CAD systems tend to be limited to detecting or diagnosing a specific disease (e.g., lung or breast cancer). With such a narrow scope, designed to deliver just a single answer, and due to the underlying technology limitations, it is difficult for CAD systems to expand the way in which deep-learning solutions have.

Current AI developers are building complex deep-learning pipelines around scalable diagnostic engines that can detect a broad range of pathologies. By feeding AI algorithms large volumes of data and teaching them to interpret and learn new concepts and rules, developers enable these systems to evolve over time and improve medical diagnosis.

3. Next-gen AI solutions offer seamless workflow integration and "always-on" AI.

Traditional, on-demand AI systems require physicians to actively request the intervention of the AI algorithm. Thanks to increased standardization, next-gen AI systems can achieve much tighter workflow integration.

For instance, hospitals can now enjoy "always-on" AI, running continuously in the background and assisting in interpreting scans. These AI systems eliminate the need to click on anything to request a task. Instead, they sift quickly and automatically through each patient scan, pinpointing urgent findings and sparing radiologists from drowning in vast amounts of irrelevant data.
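What such an "always-on" triage loop might look like, in a minimal sketch (the accession numbers, scores, and threshold here are all hypothetical, not any vendor's real interface):

```python
import heapq

def triage(worklist, ai_flags, urgent_threshold=0.9):
    """Re-order a first-come-first-served worklist so that studies the
    AI flags as likely urgent jump to the front of the queue."""
    queue = []
    for idx, accession in enumerate(worklist):
        score, _finding = ai_flags.get(accession, (0.0, "none"))
        # Urgent studies get a negative priority so they pop first;
        # idx breaks ties, preserving arrival order for everything else.
        priority = -score if score >= urgent_threshold else 1.0
        heapq.heappush(queue, (priority, idx, accession))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

# Hypothetical AI output: accession number -> (urgency score, suspected finding)
flags = {
    "A1": (0.35, "none"),
    "A2": (0.97, "intracranial hemorrhage"),
    "A3": (0.10, "none"),
    "A4": (0.93, "pulmonary embolism"),
}
print(triage(["A1", "A2", "A3", "A4"], flags))  # ['A2', 'A4', 'A1', 'A3']
```

Because the loop runs continuously in the background, the radiologist never requests anything: the worklist simply arrives pre-sorted, with time-sensitive studies on top.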

Next-gen AI imaging not only integrates easily into the radiologist's work environment -- including the PACS software radiologists use every day -- but also improves efficiency without altering the existing workflow.

Historically, some CAD systems were criticized as inefficient because they were designed as standalone products. Each CAD application bundled its own complex functionality and had to be installed on a dedicated workstation, and, therefore, could not be integrated into the radiologist's workflow.

Even when they did not require a separate workstation, most CAD systems demanded a dedicated workflow, encroaching on the already limited screen real estate of a radiologist's workstation. One hospital actually showed me its "CAD basement," where many old computers intended for different CAD applications had been collecting dust for years.

The American College of Radiology Data Science Institute (ACR DSI) is pushing to set new industry standards for the practice of radiology. Due to the opening up of proprietary software and vendor communication (supported by the ACR DSI), it is now possible -- for the first time -- to deploy AI within the existing workflow environment.

4. Next-gen AI solutions offer sophisticated output.

In the past, the traditional CAD approach treated medical images as pictures intended solely for visual interpretation, using arrows and other markings applied to the image.

Advanced systems have since begun using imaging-based AI to deliver complex output with new capabilities. For example, after reviewing a scanned image, IBM Watson's electronic health record-driven patient synopsis prioritization solution can pull the medical history and present a comprehensive view of the patient's condition.

In the foreseeable future, machine-learning software could provide a preliminary diagnosis of diseases and determine the optimal next step for narrowing the differential diagnosis. Further out, these systems could potentially combine imaging with clinical and genomic data. The increased capabilities of such radiomics-based systems may enable drug efficacy prediction, allowing treatment optimization for each individual patient.

In parallel, AI systems may analyze medical literature to identify the most relevant references for the given case. The way is open to the truly intelligent clinical assistant.

5. Next-gen AI solutions provide value across the entire clinical workflow.

New integration standards are appearing and, with them, new ways AI can improve the radiologist's day-to-day workflow. In the past, traditional CAD systems provided only limited diagnostic support. For next-generation AI, this is no longer true.

For example, in prioritization, some next-gen AI solutions (CADt) have shown the ability to assist in prioritizing time-sensitive cases. Other systems can provide quantification of white-matter lesions.

AI can now help with scheduling, patient management, report generation, and workflow optimization. It can also help choose the right imaging approach for the specific clinical context and even reduce operator dependency.

The broader the workflow integration, the greater the potential. With new standards emerging, we're only now beginning to scratch the surface.

The new paradigm

As the field of radiology continues to evolve from traditional CAD systems to next-gen AI, it will soon be hard to imagine practicing radiology without the assistance of these new AI partners, which will take on a significant portion of the workload. They will, most likely, become a standard of care for diagnostic examinations in daily clinical work, which will help radiologists focus on diagnosis and communication with patients and other physicians.

What started as an incremental technological improvement morphed into a qualitative breakthrough. Now we have at our fingertips the ability to combine a wide variety of data sources, including diagnostic imaging, clinical information, and genetic data. Based on AI analysis of this diverse information, we can attain new services such as quantitative follow-up, risk assessment, and decision support in diagnostic and patient management processes.

AI systems can even help evaluate the efficacy of various drugs and other treatments. Imagine all these services seamlessly integrated into the radiologist's workflow. True, many of these advanced features are still on the drawing board rather than in common clinical practice, but with recent technological breakthroughs, the door has been opened and the ball is rolling.

Elad Walach is the co-founder and CEO of Aidoc, a healthcare AI start-up focused on using deep learning to relieve the bottleneck in medical image diagnosis. Walach began his career in the elite Israeli Defense Forces' Talpiot technology program. He served as a researcher in the Israeli Air Force's algorithmic division, where he rose through the ranks, reaching the position of algorithmic research leader. He led several teams focused on machine-learning and computer-vision projects from inception to execution. Walach holds a Bachelor of Science in mathematics and physics from the Hebrew University of Jerusalem and a Master of Science in computer science with a focus on deep learning from Tel Aviv University.

The comments and observations expressed are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.


Copyright © 2018 AuntMinnie.com

Last Updated 10/29/2018 6:14:21 AM

19 comments so far ...
11/3/2018 6:02:37 AM
Andrew J. Worth
"deep-learning technologies are designed to ... improve over time."
Deep learning is adaptive, but that adaptation is not always for the good.  Beware the stability plasticity dilemma!
 
 

11/3/2018 7:42:06 AM
Dr.Sardonicus
This is typical of the proponents of AI.  
First the article is written by the CEO of an AI company. He will, definitely, inflate the capabilities.
(Aunt Minnie - was this paid??? It's almost an ad)
 
Quote from

There are even systems that have reached a performance level that almost matches that of a radiologist (albeit on very narrow use cases). We are reaching the point where new-generation AI systems are sensitive enough to detect minute image changes that are challenging to most human eyes.

Proof???
Things I have seen - in radiology - are badly done and unconvincing.
 
Quote from

In fact, the biggest leap of AI is actually in the improved specificity (reduced false-positive rate) of these solutions

proof, proof, proof?
FURTHER - yeah, I can EASILY reduce my false positive rate - simply by absorbing a greater malpractice liability - which you, Mr. CEO, do not have. 
 
Quote from

Great strides in AI accuracy have been demonstrated recently by an ophthalmologist at the University of Iowa, who developed software that uses AI to detect diabetic retinopathy without a person interpreting the results.

I actually agree with this. I have read the paper, and I don't know if I am qualified to criticize it closely, but, if they didn't lie, it seems that it is indeed a useful application of AI.
 
That said - this is a VERY focused application. Looking for signs of one disease in one closely defined type of image. FAR FAR from looking for any possible pathology in any organ. 
 
 
 
Quote from

In contrast, typical CAD systems tend to be limited to detecting or diagnosing a specific disease (e.g., lung or breast cancer). With such a narrow scope, designed to deliver just a single answer, and due to the underlying technology limitations, it is difficult for CAD systems to expand the way in which deep-learning solutions have.

so what? Yeah - CAD is basically useless. So if you couldn't improve on it (or think you could) you wouldn't be writing. Also, what you are writing today is precisely the same sort of thing - inflated promises and all - that CAD companies produced years ago. Why should we believe you now? I am far more skeptical than I was years ago, because I sort of believed what they were saying at the time, and it was all bogus.
 
Again - quit projecting the future and show some proven results. 
 
 
4) sophisticated output. 
Sure - that is a real potential application. And I have heard this from MANY software vendors in the past. Get back to me when you can do it.
 
Reminds me of what is said about fusion energy "FREE energy from just water. Unlimited" WOW. That is very very true. Just as soon as we develop a reactor that can contain the plasma. Which may or may not occur. 
 
 
 
To his credit, he isn't going completely for the sexy "AI will read your scans better than humans". He does include some achievable projects, which probably would be much easier to complete than the pie-in-the-sky reading of scans. 
 
Now if someone could just get me some software to show me the pertinent data in a patient record, rather than paging blindly through all 700 pages to find something useful.
 
 

11/3/2018 8:12:13 AM
vonbraun
I read it and did not learn a thing. Show us the data. Maybe the company's cash burn rate is too high and it needs more capital.

11/3/2018 8:53:06 AM
dergon
Total hype piece.  As worthless as the internet bytes it is printed on.

11/14/2018 11:02:46 AM
Elad Walach
Dear Dr. Sardonicus,

Thank you for those thoughtful comments. 

Indeed, the piece was meant to focus on structuring the different aspects of next-generation AI. It was not paid for, and it is not meant to discuss or promote the results of our specific solution. 

 

On that note, I'm actually more than happy to share some data.

I agree that providing evidence is critical - both to validate the system accuracy, as well as to validate the outcomes. 

 

I can propose the following ways to help alleviate your concerns: 

1. First and foremost, I'm more than happy to test out the accuracy of our solution in your institution. 

We can coordinate a demo where we can show the results of our analysis on your cases and you can judge for yourself. 

If you're willing, we are more than willing to put ourselves out there. 

 

2. It is true that we are just now starting to publish results in peer-reviewed literature, so in the meantime, I'm more than happy to share a few published abstracts with you. 

 

Those results show, for example, 96.2% sensitivity and 93.3% specificity for detection of intracranial hemorrhage and 93% sensitivity and 97% specificity for the detection of a variety of acute abdomen findings. 

The last abstract, for example, will be presented at RSNA, and you are more than welcome to stop by our booth (#6561) to check it out. 

 

3. Even more exciting: while I can't share this information in a public forum, I can share some results from prospective studies about potential outcomes - like the reduction of outlier patients in ED and outpatient scenarios.  

 

We are aware this is still the beginning of the industry. However, we are firm believers that with diligence and good scientific work you'll see stronger and stronger results. I'm on LinkedIn and would be happy to connect to discuss any of the three points above.