A team of researchers led by Heqing Zhang of Sichuan University in Chengdu, China, trained a variety of convolutional neural networks (CNNs) using 5,000 breast ultrasound exams, including 2,500 malignant and 2,500 benign cases. They then assessed the performance of the models on a separate test set of 1,007 images, which included 788 benign and 219 malignant cases.
The InceptionV3 model yielded the best results, with an area under the curve (AUC) of 0.905. The difference in performance between InceptionV3 and the other models, which were based on VGG16, ResNet50, and VGG19, was statistically significant (p < 0.05).
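The article does not include the researchers' code, but the AUC figures quoted throughout can be understood as the probability that a randomly chosen malignant case receives a higher model score than a randomly chosen benign case. The following is a minimal sketch of that pairwise definition, using made-up labels and scores purely for illustration:

```python
import numpy as np

def pairwise_auc(labels, scores):
    """AUC = probability that a random malignant (1) case scores
    higher than a random benign (0) case; ties count as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical example: 5 benign and 5 malignant cases
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.35, 0.6, 0.7, 0.8, 0.9])
print(pairwise_auc(labels, scores))  # → 0.92
```

An AUC of 0.905 on the 1,007-image test set therefore means that, in roughly 90% of malignant-benign pairs, the model ranked the malignant case higher.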
The researchers then compared the InceptionV3 model with sonographers who had performed and interpreted 683 breast ultrasound exams, including 493 benign and 190 malignant cases. In this analysis, the deep-learning model achieved an AUC of 0.913, outperforming the sonographers' AUC of 0.846. This difference was also statistically significant (p < 0.05).
Copyright © 2020 AuntMinnie.com