Machine learning comes to breast cancer risk assessment

Monday, November 28 | 9:00 a.m.-9:10 a.m. | RC215-03 | Arie Crown Theater
Can machine learning help clinicians better assess parenchymal patterns for breast cancer risk? It appears so, according to a study being presented on Monday morning.

A team led by Hui Li, PhD, of the University of Chicago evaluated whether convolutional neural networks (CNNs) that analyze mammography images directly can help assess breast cancer risk, rather than relying on measurements of breast density and parenchymal texture features. The research included 456 digital mammography cases from two datasets: 53 BRCA1- and BRCA2-positive patients and 75 unilateral cancer patients in the high-risk groups, plus 328 low-risk patients.

The researchers compared direct image features extracted with pretrained CNNs against features extracted with radiographic texture analysis. Li and colleagues used area under the curve (AUC) values to compare the two techniques.
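For readers curious how this kind of comparison is typically set up, below is a minimal sketch of the general approach: using a pretrained CNN as a fixed feature extractor and scoring a simple classifier with AUC. It is not the authors' code; the choice of VGG16 with ImageNet weights, a linear SVM, and five-fold cross-validation are illustrative assumptions, not details from the study.

```python
# Sketch only: pretrained-CNN feature extraction + AUC evaluation.
# Assumes mammogram ROIs are available as PIL images with binary labels
# (1 = high risk, 0 = low risk). Model and classifier choices are assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Pretrained CNN with its classification head removed, used only as a feature extractor.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.classifier = torch.nn.Identity()  # keep the pooled convolutional features
cnn.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.Grayscale(num_output_channels=3),  # mammograms are single-channel
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Run each image through the frozen CNN and stack the resulting feature vectors."""
    feats = []
    with torch.no_grad():
        for img in pil_images:
            x = preprocess(img).unsqueeze(0)          # shape: (1, 3, 224, 224)
            feats.append(cnn(x).squeeze(0).numpy())   # flattened CNN features
    return np.vstack(feats)

def risk_auc(pil_images, labels):
    """Cross-validated AUC for separating high-risk from low-risk cases."""
    X = extract_features(pil_images)
    scores = cross_val_predict(
        SVC(kernel="linear", probability=True), X, labels,
        cv=5, method="predict_proba",
    )[:, 1]
    return roc_auc_score(labels, scores)
```

The same AUC metric could then be computed on features from a texture-analysis pipeline to make the head-to-head comparison the study describes.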

They found that features extracted by the CNNs and those extracted from radiographic texture analysis were comparable in performance when it came to distinguishing between BRCA carriers and low-risk women, with AUC measures of 0.83 for CNNs and 0.82 for texture analysis. But CNNs performed better than texture analysis when it came to distinguishing between women with unilateral cancer and those at low risk, with AUC values of 0.82 and 0.73, respectively.

What does this mean for breast cancer risk assessment?

"Deep learning with CNNs appears to be able to extract textural characteristics directly from FFDMs [full-field digital mammograms] as well as, or better than, conventional texture analysis in the task of distinguishing between cancer risk populations," Li and colleagues concluded.
