Radiologists may be better off taking hands-on approach to 3D workflow

The growing size of datasets produced by multidetector-row CT (MDCT) and MRI scanners has led to a crisis in radiology workflow that is being addressed to some extent by 3D image processing, according to Dr. Eliot Siegel, chief of imaging for the VA Maryland Healthcare System in Baltimore. While there are several possible models for handling 3D data, the most effective may be one that has radiologists taking more control of image processing tasks.

Siegel addressed 3D data overload, and models for managing it with advanced visualization, at the PACS 2007 conference in San Antonio, sponsored by the University of Rochester School of Medicine & Dentistry in Rochester, NY.

Among other effects, the current surge in images requires additional reconstruction time at the scanner. Today's systems may take five to 10 minutes to perform simple axial image generation from the raw dataset, and even longer for multiplanar reconstruction, according to Siegel.

"It may take 10 minutes or more to send images from the CT scanner to the archive," he said. "At our facility, this has resulted in major delays since the reconstruction and transfer are beginning, in some cases, to exceed patient examination times."

Processing models

Institutions can choose among three different workflow models for their 3D processing needs.

In a traditional model that employs image reconstruction at the CT scanner itself, the patient is scanned and technologists create multiplanar and 3D reconstructions according to preset protocols or at their own discretion, Siegel said. Technologists can bill for studies they've performed, though they need a radiologist or physician to determine whether multiplanar or 3D reconstructions are warranted.

This model has the least complex workflow and has the advantage of allowing the technologist to generate the bill immediately. However, it has a number of significant disadvantages, Siegel said.

"This is a model that's really probably going to go away," he said.

For example, the traditional approach wastes technologists' time by pulling them away from the patient and the image acquisition process, he said. In addition, it requires too much storage space, and the technologist might not know which planes the radiologist or the clinician actually wants.

"This model, of reconstruction at the CT scanner and sending the images to the PACS ... from the scanner, is sort of a vestige of a film-like model, where the PACS ... just becomes a static window to display images generated by the CT scanner," he said. "I think we're going to be changing to an idea where the technologist just sends the images, (and) creates the volumetric model that can be interacted with later, either by a 3D laboratory or by the radiologist or clinicians themselves."

In a 3D lab approach, technologists scan, then send the images to a special multiplanar/3D/advanced processing workstation or lab for processing and analysis. In this case, a subspecialist technologist, or in some cases a physician, takes the datasets and performs the requisite processing, Siegel said.

In some cases, institutions might combine the first two models, with advanced processing done at a 3D lab while more basic processing, such as coronal or sagittal reformats, is done by the technologist, he said.
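In code terms, the basic coronal and sagittal reformats Siegel mentions amount to re-slicing the acquired axial stack along different axes. The sketch below is a minimal illustration using NumPy with a synthetic volume; the array ordering and function name are assumptions for this example, not any scanner vendor's toolkit.

```python
import numpy as np

def multiplanar_slices(volume, i, j, k):
    """Extract axial, coronal, and sagittal slices from a CT volume.

    The volume is assumed to be a 3D array ordered (slice, row, column),
    i.e. (axial z, anterior-posterior y, left-right x).
    """
    axial = volume[k, :, :]      # the plane the scanner acquired
    coronal = volume[:, j, :]    # re-slice front to back
    sagittal = volume[:, :, i]   # re-slice left to right
    return axial, coronal, sagittal

# Tiny synthetic "volume" standing in for an MDCT dataset
vol = np.arange(4 * 5 * 6).reshape(4, 5, 6)
ax, cor, sag = multiplanar_slices(vol, i=2, j=1, k=0)
print(ax.shape, cor.shape, sag.shape)  # (5, 6) (4, 6) (4, 5)
```

The same re-slicing can be done later by a 3D lab or by the radiologist, which is the point of sending the full volumetric dataset rather than fixed reformats.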

The 3D lab model has the advantage of requiring less CT technologist time and increasing throughput, Siegel said. It can also utilize a subspecialist who has the most expertise in reconstruction and does nothing but 3D or multiplanar processing.

Disadvantages include the cost of the specialist to perform the reconstruction and the need for additional space and workstations. In addition, the 3D lab technologist might still not be able to tailor the image precisely to what the radiologist and/or clinicians desire, Siegel said. It's also a more complex workflow.

This workflow can be smoothed out, however, by the Integrating the Healthcare Enterprise (IHE) initiative's postprocessing workflow integration profile. This profile offers a standardized means of scheduling, performing, and notifying of imaging processing and computer-aided detection (CAD), Siegel said.
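The scheduling-and-notification pattern behind such a profile can be pictured as a simple task lifecycle: a processing task is scheduled, claimed by a workstation, performed, and marked complete. The state names and fields below are illustrative only, not the actual IHE transaction set.

```python
class PostProcTask:
    """Toy lifecycle for a scheduled post-processing task, loosely in
    the spirit of a postprocessing worklist (fields are hypothetical)."""

    def __init__(self, study_uid, task_type):
        self.study_uid = study_uid
        self.task_type = task_type   # e.g. "3D", "MPR", "CAD"
        self.state = "SCHEDULED"

    def claim(self, workstation):
        # A 3D lab workstation pulls the task off the worklist
        self.workstation = workstation
        self.state = "IN_PROGRESS"

    def complete(self):
        # A real system would notify the PACS/RIS at this point
        self.state = "COMPLETED"

task = PostProcTask("1.2.840.99999.2", "3D")
task.claim("3dlab-ws1")
task.complete()
print(task.state)  # COMPLETED
```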

In the third model, in which volumetric review of images is performed at the radiologist or clinician workstation, images are sent from the CT scanner to the server for thin-client applications or a workstation for thick-client applications.

"In this particular system, the technologist does no processing, there's no 3D lab, and the images are communicated directly to the radiologist or right to the clinician," Siegel said.

This approach offers the advantage of improved technologist efficiency, and eliminates the need in most cases for a 3D lab technologist, Siegel said. It also allows the radiologist or clinician to interact with the dataset, and to determine exactly how the reconstructions are performed. This information can then be incorporated into hanging protocols for the radiologists and clinicians, he said.
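One way to picture the thin-client variant is that the workstation sends only rendering parameters to a server and receives a single rendered frame back, instead of pulling the whole dataset locally. The request fields, sizes, and function names below are hypothetical stand-ins, not any vendor's actual API.

```python
def build_render_request(study_uid, plane, position_mm, window, level):
    """Assemble the parameters a thin client would send to a render
    server. All field names here are illustrative assumptions."""
    return {
        "study": study_uid,
        "plane": plane,            # "axial" | "coronal" | "sagittal"
        "position_mm": position_mm,
        "window": window,          # display window width (HU)
        "level": level,            # display window center (HU)
    }

def render_on_server(request, volume_size_mb=2000, frame_size_kb=300):
    """Stand-in for the server side: renders one frame and reports how
    little data crosses the network versus a thick-client transfer."""
    saved_kb = volume_size_mb * 1024 - frame_size_kb
    return {"frame_kb": frame_size_kb, "kb_saved_vs_full_transfer": saved_kb}

req = build_render_request("1.2.840.99999.1", "coronal", 120.0, 400, 40)
print(render_on_server(req)["frame_kb"])  # 300
```

The bandwidth asymmetry, a few hundred kilobytes per frame versus gigabytes per volume, is what makes server-side rendering attractive for interactive review.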

"I think that most people are going to be moving to a model where the majority of basic processing is done on the fly by the radiologist and the clinician," he said. "And certain types of advanced processing, (such as) surgical planning and other planning, is done by the CT technologist for a 3D lab."

As for disadvantages, the model may require additional work from the radiologist, and the resulting images may be suboptimal if the radiologist lacks competence or confidence with the processing tools, Siegel said. The radiologist also takes on the technologist's functions with regard to sending reconstructed images to the PACS network and billing responsibility.

"These processes should be automated," he said.

The future

Siegel believes there will be a convergence of function between the regular PACS workstation and what is now considered an advanced 3D workstation.

"There's going to be more and more realization that the way to do things is going to be with server-side rendering, rather than having to transfer the entire dataset to a workstation before you can begin interacting with it," he said.

In addition, Siegel envisions that more systems will create central processing unit (CPU) and graphics processing unit (GPU) farms, allowing for sharing of processors on demand.

"So when you sign in on your workstation, there will be processors that will automatically be used to process the images," he said. "Different users will be able to take advantage of multiple GPUs that are located centrally."

Cluster and grid computing technologies could also confer speed advantages, Siegel said.

The next major phases will focus on decision-support tools such as CAD, cuing, and more sophisticated integration with the electronic medical record, he said.

"CAD is going to be used more and more to cue and show the radiologist some of the expected findings, rather than make the diagnosis," Siegel said. "I think what you're going to see in the future is more and more programs in which the microcalcifications or suspicious lesions end up being colorized or maybe enhanced. So what you get is an enhancement of the image as the image interpretation is done, rather than a model where the CAD program becomes a second reader."

Siegel also predicted that image navigation will improve substantially over the next few years, benefiting from alternative input devices and better navigation software. Clinical information will also be extracted and summarized using a digital dashboard equivalent, while tools such as image subtraction and image warping will be useful for comparing current and prior studies, he said.
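Image subtraction for current-versus-prior comparison reduces, in the simplest case, to a pixel-wise difference of two aligned images. The sketch below assumes the studies have already been registered (the warping step Siegel mentions) and uses synthetic data; a signed dtype is needed so negative change is not clipped.

```python
import numpy as np

def change_map(current, prior):
    """Subtract a registered prior study from the current one to
    highlight interval change. Assumes the images are already aligned;
    casting to a signed type preserves decreases as well as increases."""
    return current.astype(np.int16) - prior.astype(np.int16)

prior = np.zeros((3, 3), dtype=np.uint8)
current = prior.copy()
current[1, 1] = 90               # a new "finding" in the follow-up study
diff = change_map(current, prior)
print(int(diff[1, 1]), int(diff.max()))  # 90 90
```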

"These future developments are going to create lots of challenges that are going to require the creativity and expertise of the medical imaging community," Siegel said.

By Erik L. Ridley
AuntMinnie.com staff writer
April 30, 2007

Related Reading

PACS, IT help meet changing ultrasound workflow requirements, April 23, 2007

Cardiac CT drives adoption of 3D visualization, March 26, 2007

Survey finds routine use of 3D visualization, December 18, 2006

3D reveals what axial images can't in small-bowel CT, November 15, 2006

Requirements for continued advancement of 3D applications, November 8, 2006

Copyright © 2007 AuntMinnie.com
