Can deep-learning CAD outperform mammographers?

By Erik L. Ridley, AuntMinnie.com staff writer

March 23, 2017 -- Can computer-aided detection (CAD) software developed with deep-learning technology perform better than mammographers? Yes it can, at least for the task of differentiating between benign and malignant calcifications, according to a team of researchers from California.

In a series of studies presented at ECR 2017 earlier this month and at RSNA 2016, researchers led by Dr. Alyssa Watanabe of the University of Southern California (USC) Keck School of Medicine shared how deep learning can yield higher sensitivity and specificity for both breast masses and calcifications, compared with traditional mammography CAD techniques, even identifying cancer in some cases years before radiologists did.

They also reported that the deep learning-trained CAD software could differentiate benign and malignant breast calcifications better than radiologists could on their own, offering the promise of sharply reducing the number of negative biopsies.

One thing's for sure: Artificial intelligence (AI) is going to change the future of mammography, Watanabe said in her talk at ECR 2017.

"This deep-learning CAD [software] is already superior to the radiologist in classification of malignant and benign calcifications," she said.

Deep learning

A type of machine learning, deep learning is the most popular form of AI and is used today in speech recognition technology, self-driving cars, and even by Netflix to recommend your next movie, Watanabe said. In radiology applications, the machine learns to recognize different features on images after biopsy-proven cases are entered into the algorithm; it can then predict the probability of malignancy, she said.
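
As a rough sketch of that training-and-prediction loop (not the CureMetrix implementation; the "image features," labels, and model here are synthetic stand-ins for real mammogram data), a small neural network classifier can be fit on labeled, biopsy-proven cases and then queried for a probability of malignancy on a new case:

```python
# Minimal sketch of the supervised-learning loop described above, using
# synthetic "image feature" vectors in place of real mammogram data.
# This is an illustration only, not the CureMetrix qCAD algorithm.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each biopsy-proven case has been reduced to 64 image features,
# with label 1 = malignant, 0 = benign (hypothetical data).
X_train = rng.normal(size=(187, 64))
y_train = rng.integers(0, 2, size=187)

# A small neural network stands in for the deep-learning model.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# For a new, unlabeled case, the model returns a probability of malignancy.
new_case = rng.normal(size=(1, 64))
prob_malignant = model.predict_proba(new_case)[0, 1]
print(f"Predicted probability of malignancy: {prob_malignant:.2f}")
```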

The research group at USC has been assessing the performance of qCAD, a quantitative CAD software application developed by CAD software firm CureMetrix using deep learning. The group also includes Dr. William Bradley, PhD, from the University of California, San Diego School of Medicine, as well as employees from CureMetrix.

The CAD software makes use of a "random forest" technique, in which multiple deep-learning classifiers and code based on astrophysics equations are assembled as an ensemble of decision trees. The mammogram data are cycled through the "forest" multiple times so that all of the decision trees are encountered, Watanabe said.

After processing the mammograms, the CAD software's mathematical classifiers predict the probability of malignancy for the calcifications. The software summarizes these mathematical predictions as a Q score; higher Q scores indicate a greater likelihood of malignancy.
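
The article does not publish the qCAD code, but the general pattern it describes, an ensemble of decision trees that each vote on a case and whose averaged vote is rescaled into a score, can be sketched as follows. The feature vectors, labels, and the 0-100 scaling of the "Q score" are all assumptions made here for illustration:

```python
# Rough sketch of a random-forest ensemble producing a probability of
# malignancy that is then summarized as a score. The 0-100 "Q score"
# scaling is an assumption; the article does not specify how CureMetrix
# computes its Q score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical per-calcification feature vectors (e.g., shape, density,
# distribution descriptors) with biopsy-proven labels.
X = rng.normal(size=(391, 20))
y = rng.integers(0, 2, size=391)

forest = RandomForestClassifier(n_estimators=200, random_state=1)
forest.fit(X, y)

# Each tree in the "forest" votes; predict_proba averages those votes
# into a probability of malignancy for each finding.
prob_malignant = forest.predict_proba(X[:5])[:, 1]

# Map the probability onto an illustrative 0-100 score:
# higher = more likely malignant.
q_score = np.round(prob_malignant * 100).astype(int)
print(list(zip(prob_malignant.round(2), q_score)))
```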

The researchers tested the CAD algorithm on 7,232 digital 2D screening mammograms both before and after training the software with deep learning on 187 biopsy-proven cases, generating receiver operating characteristic (ROC) curves to assess its performance.

Prior to being trained with deep learning, the CAD algorithm had an area under the curve (AUC) of 0.884 for differentiating benign from malignant microcalcifications -- an AUC of 1 indicates perfect accuracy. Adding deep learning increased the AUC to 0.94, and subsequent refinements to the code that incorporated physics-based methods have since raised the AUC to 0.968, according to the group.
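
For readers who want to run the same style of evaluation on their own data, an ROC curve and its AUC can be computed directly from predicted scores and biopsy-proven labels. The labels and scores below are synthetic placeholders, not the study's data:

```python
# Computing an ROC curve and its AUC from predicted scores and
# biopsy-proven labels, as in the evaluation described above.
# Labels and scores here are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=1000)            # 1 = malignant, 0 = benign
scores = y_true * 0.6 + rng.normal(0, 0.4, 1000)  # imperfect classifier output

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
print(f"AUC = {auc:.3f}")   # an AUC of 1.0 would indicate perfect accuracy
```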

That's "quite outstanding and outperforms the CAD that's currently commercially available [based on published data submitted to the U.S. Food and Drug Administration]," Watanabe said.

By enabling the objective assessment of any changes seen on sequential mammograms, the software's quantitative scoring may also enhance radiologists' accuracy, she said.

Early breast cancer was flagged by the CAD software three years prior to biopsy and showed a progressive increase in quantitative score over the three years. All images courtesy of Dr. Alyssa Watanabe.
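
One way a quantitative score could support that kind of objective assessment of change across sequential mammograms (a hypothetical workflow sketch, not a documented qCAD feature) is simply to track the score assigned to the same finding across exams and flag a sustained rise:

```python
# Hypothetical illustration of tracking a finding's quantitative score
# across sequential screening exams and flagging a progressive increase.
# The scores and the flagging rule are assumptions for illustration.
yearly_q_scores = {"2013": 22, "2014": 41, "2015": 63}  # same lesion, three exams

values = list(yearly_q_scores.values())
progressive_rise = all(b > a for a, b in zip(values, values[1:]))

if progressive_rise:
    print("Score rising year over year -- consider closer review of this finding.")
```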

In other results, the researchers found that progressive deep-learning training continued to improve the performance of the CAD software over time for breast masses.

Progressive machine learning also improved the software's accuracy for breast masses, as demonstrated by increasingly higher areas under the ROC curve after training.

"As more and more cases of biopsy-proven breast cancers and benign lesions are entered into the data bank, the CAD 'learned' to distinguish between these lesions with greater and greater accuracy," the authors wrote in their poster at ECR 2017. "The potential to enter massive amounts of ground truth cases could lead to greater and greater accuracy in flagging cancers and not flagging benign lesions such as fat necrosis."

Avoiding benign biopsies

Although only about 2% of screening mammograms result in biopsy, up to 80% of those biopsies turn out to be benign. Decreasing the number of negative breast biopsies would benefit patients and reduce costs; $4 billion is spent each year in the U.S. on mammography false positives, Watanabe said.

To see if the deep learning-trained CAD software could help avoid some of these unnecessary biopsies, the researchers used an enriched dataset of 391 mammograms with biopsy-proven calcifications (302 benign and 89 malignant) provided by two institutions: a community-based imaging practice and an academic radiology department. They then compared the CAD software's performance with the biopsy recommendations previously provided by the fellowship-trained breast imaging radiologists at the two sites.

At the first institution, the positive predictive value of breast biopsy could be increased from 32% to 56% with the use of CAD, she said. Furthermore, up to 63% of benign breast biopsies could potentially have been avoided if the CAD software's predictions were used.
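
The positive predictive value (PPV) figures follow directly from counts of true positives (cancers that were biopsied) and false positives (benign biopsies). The counts below are hypothetical, chosen only to roughly match the reported percentages rather than taken from the study, and show how the PPV gain and the potential reduction in benign biopsies would be computed:

```python
# How PPV and the potential reduction in benign biopsies are computed.
# All counts below are hypothetical, not the study's actual data.
biopsied_malignant = 32      # true positives among biopsied cases
biopsied_benign = 68         # false positives (benign biopsies)

ppv_radiologists = biopsied_malignant / (biopsied_malignant + biopsied_benign)

# Suppose CAD, at a threshold that keeps 100% sensitivity for cancer,
# would have flagged all 32 cancers but only 25 of the 68 benign cases.
cad_flagged_benign = 25
ppv_cad = biopsied_malignant / (biopsied_malignant + cad_flagged_benign)

benign_biopsies_avoided = 1 - cad_flagged_benign / biopsied_benign

print(f"PPV radiologists: {ppv_radiologists:.0%}")   # 32%
print(f"PPV with CAD:     {ppv_cad:.0%}")            # 56%
print(f"Benign biopsies potentially avoided: {benign_biopsies_avoided:.0%}")  # 63%
```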

Performance of CAD software vs. radiologists by false-positive rate (academic radiology department)
False-positive rate, radiologists: 80%
False-positive rate, CAD software (at a 100% sensitivity threshold for cancer detection): 35%
Potential reduction in breast biopsies from use of CAD software: 57%

"Over half of the breast biopsies that were recommended and performed by the academic radiologists could have been eliminated if the CAD had been utilized for the predictions," Watanabe said. "So, in other words, this CAD [software] is outperforming academic radiologists in recommendations for biopsying calcifications."

False positive for the radiologist and true negative for qCAD: fine linear branching and rim calcifications measuring 7 cm with regional distribution in the middle lateral region, path-proven fat necrosis.

With deep-learning training, the CAD software can recognize and eliminate benign lesions, she said.

"This dramatically reduces most false markings, which are a nuisance and distraction for mammographers," she said.

Copyright © 2017 AuntMinnie.com

Last updated 5/1/2017 7:46:54 AM

8 comments so far ...
3/23/2017 6:17:33 PM
TradRads123
THIS IS NOT A THREAD ABOUT AI TAKING THE JOBS OF RADIOLOGISTS (these technologies don't do that anyway).
However, it seems like every day there is some new study out there showing that AI is better than radiologists at some narrowly defined binary task (i.e., this article today about breast calcs; yesterday it was vertebral fractures: http://pubs.rsna.org/doi/...1148/radiol.2017162100).
 
However, for those of you in management positions in groups/admins in hospitals, could you see yourself implementing these technologies? If so, how? What would make you feel forced/interested in purchasing/implementing them?
 
For example, the vertebral compression fracture scheme just published in Radiology-- I can't see anyone actually forking over money to use that thing (everyone thinks they're probably better than the computer, and it's not clear how it really helps patients or the bottom line of hospitals, insurers, or radiologists).
 
However, I could see an insurer trying to encourage people to implement the Breast calcification AI scheme as it supposedly reduces the number of negative biopsies.
 
Thoughts? Basically what I'm trying to say is that I do feel that AI products that complement what we do will be/already are available, but will anyone use them?

3/23/2017 6:45:16 PM
hey
As long as the Radiologist is liable for the missed breast cancer, I fail to see how implementing such a product will save us any time or allow the Dept to hire fewer Rads.  Its accuracy would have to be greater than 99% (highly dubious in the year 2017).

3/24/2017 2:55:08 AM
The Wolverine
Is there money to be made for anyone involved other than the vendor selling the product? The answer to that question will determine the rise or fall of any such technology.

3/24/2017 2:56:32 AM
Jimboboy
This argument that a radiologist's job is secure just because one can be sued -- don't you find that farcical? In essence, you're saying that we can't be replaced because only we can be blamed for our errors.

But why couldn't the AI vendor be sued? Or the health system that uses AI?

Also, the threshold for accuracy is not 99%. The accuracy AI needs to beat is OUR accuracy, whatever that number is. Once AI is as accurate as we are, the rationale to replace us is hard to argue against.

(not a perfect analogy but thematically similar: When you're getting chased by a tiger, you don't have to be the fastest. You just have to be faster than the slowest guy there.)

Another thing to note: AI is not traditional CAD. Its capabilities are generalizable beyond breast calcs. Keep your ears to the ground.

3/24/2017 3:09:49 AM
Jimboboy
Yes Wolverine,

These are the people who benefit other than vendors:

Hospital administrations (save cost, increase throughput)
Radiology facility owners (same)
Govt health administrators (SAVE COST through economies of scale)

That is, any decision-makers will benefit -- the ones who can effect the transition to AI.

One can also argue that the general public will benefit through improved access to care and cost savings. Perhaps also through consistency (rad reads are horribly inconsistent depending on who reads them) and, yes, accuracy.

The only people who do not benefit are radiologists in the trenches.