More may not always be better in deep learning

Tuesday, November 28 | 11:00 a.m.-11:10 a.m. | SSG13-04 | Room S404AB
Having more layers in a convolutional neural network doesn't necessarily lead to better performance for medical imaging tasks, reports a group from Chicago.

Researchers in computer vision try to develop deep-learning models with very deep architecture, such as convolutional neural networks (CNNs) with 20, 40, or even 200 layers, according to Kenji Suzuki, PhD, of the Illinois Institute of Technology. But is deeper always better?

"Such a large model requires a large number of images to train the model," Suzuki said. "In our medical imaging field, we do not have such a luxury that multimillion images can be collected easily like the computer-vision community does."

As a result, the group investigated how the depth of CNNs affects performance in detecting and classifying focal lesions in medical images. They evaluated three medical imaging tasks: detecting lung nodules on CT, classifying lung nodules as benign or malignant on CT, and detecting polyps on CT colonography. Across extensive experiments, the researchers found that CNNs with three or four convolutional layers outperformed shallower architectures. However, deeper architectures did not yield further performance gains, Suzuki said.

"CNNs with three or four convolutional layers might be adequate for focal lesion detection and classification in CT," he told AuntMinnie.com. "Thus, the use of an unnecessarily deeper deep-learning model should require an unnecessarily larger number of training cases and would result in inferior performance."
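One way to see why deeper models demand more training data is to count their trainable parameters, which grow quickly with depth. The sketch below is purely illustrative: the layer widths and kernel size are assumptions for demonstration, not the architecture used in the study.

```python
def conv_params(channels, kernel_size=3):
    """Count trainable parameters in a stack of 2D conv layers.

    channels: list of channel counts, e.g. [1, 32, 64] means two conv
    layers (1->32, then 32->64). Each layer has
    (k*k*c_in + 1) * c_out parameters (weights plus biases).
    """
    k = kernel_size
    return sum(
        (k * k * c_in + 1) * c_out
        for c_in, c_out in zip(channels, channels[1:])
    )

# Hypothetical shallow network: 3 conv layers, as discussed in the talk.
shallow = conv_params([1, 32, 64, 64])   # 55,744 parameters

# Hypothetical very deep network: 20 conv layers of width 64.
deep = conv_params([1] + [64] * 20)      # 702,272 parameters

print(shallow, deep)
```

With these illustrative widths, the 20-layer stack has more than 12 times the parameters of the 3-layer one, and each extra parameter must be constrained by training data; this is the intuition behind Suzuki's point that very deep models are hard to train well on the small datasets typical of medical imaging.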

Get all the details by attending this Tuesday morning presentation.
