3D volumetric imaging methods are now a common component of medical imaging across many modalities. Yet relatively little is known about how human observers localize targets masked by noise and clutter as they scroll through a 3D image, or how that performance compares with the same task confined to a single 2D slice.
The current study aimed to compare components of reader performance in 3D images with performance in 2D images, where scrolling is not possible, to determine whether subjects can integrate information across multiple slices when localizing a target of interest.
The researchers created simulated images intended to approximate high-resolution CT. The images were generated in 3D and viewed as 2D slices, and subjects were able to inspect them freely, including scrolling through the 3D volumes.
The researchers found no evidence that readers combined information across multiple sections to localize a target spanning several sections. Instead, readers appeared to treat the 3D volumetric image as a stack of independent 2D images.
Although the findings warrant further investigation, they support and help explain the need for multiple views in 3D image reading, and they provide useful information for modeling observer performance in volumetric images, the authors concluded.
Copyright © 2021 AuntMinnie.com