QA application can bolster DR image quality, consistency

An automated quality analysis program offers potential for improving image quality, consistency, and technologist workflow in digital radiography (DR) of the chest, according to researchers from the University of Maryland Medical System in Baltimore.

"We think this [algorithm] can be applied in the future to help determine when a repeat study should be performed, to help to give technologists immediate feedback on their performance, and then track that performance over time and help them improve their image quality," said Dr. Tara Morgan. She presented the research at the 2008 RSNA meeting in Chicago.

While radiologists agree that image quality is critical, quality analysis by technologists for general radiography can be subjective, time-consuming, and quite variable, with a wide range of retake rates, she said.

In a previous study examining more than 300,000 DR examinations, the most common reasons for retakes were positioning error (31.5%) and clipped anatomy (18.5%), Morgan said. That research also found a 15-fold variation in retake rates among the 14 technologists included in the study. Another study has also found wide variability among radiologists in their ratings of image quality.

Quality analysis variability can be addressed by methods such as utilizing a dedicated QA technologist or by performing retrospective peer review of technologists' studies, Morgan said. In an attempt to automate this QA process, the researchers sought to create software to perform automated and objective evaluation of image quality for digital chest radiography.

In the first phase of the study, the researchers selected three image quality parameters: patient positioning (clipped anatomy), exposure (image noise), and patient motion. The algorithm was then evaluated against the assessments of an image science expert from Carestream Health of Rochester, NY, the company that developed the software. After conferring with multiple technologists, the expert created a scale of quality problems, with a rating of 1 to 4 for each parameter. Each image used in the study was graded according to this scale, Morgan said.

The underexposure classifications consisted of 1 (minimal noise appearance), 2 (slight noise appearance), 3 (moderate noise appearance), and 4 (significant noise appearance). Motion, which was evaluated only on the lateral view, had classification ratings of 1 (minimal blur), 2 (slight blur), 3 (moderate blur), and 4 (significant blur). Classifications for clipped anatomy included 1 (entire thorax included), 2 (lungs included but not the entire thorax), 3 (part of the lung field clipped), and 4 (significant portion of the lung field clipped).
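For illustration only, the 1-to-4 grading scheme described above could be captured as a simple lookup table; the structure and names below are this article's own sketch, not Carestream's actual schema.

# Illustrative sketch of the three 1-4 grading scales described above;
# names are invented for this example and are not Carestream's schema.
QUALITY_SCALES = {
    "underexposure": {
        1: "minimal noise appearance",
        2: "slight noise appearance",
        3: "moderate noise appearance",
        4: "significant noise appearance",
    },
    "motion": {  # graded on the lateral view only
        1: "minimal blur",
        2: "slight blur",
        3: "moderate blur",
        4: "significant blur",
    },
    "clipped_anatomy": {
        1: "entire thorax included",
        2: "lungs included but not the entire thorax",
        3: "part of the lung field clipped",
        4: "significant portion of the lung field clipped",
    },
}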

The software algorithm first determines if the study was a posteroanterior or a lateral radiograph, and then orients the image. In the second step, the software draws long axes and divides the chest into regions. It then analyzes variation in pixel intensities to determine overall image noise, Morgan said.
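As a rough sketch of that last step (and not the presented software), a regional noise estimate based on pixel-intensity variation might look like the following Python; the grid size and the use of a median are assumptions made for this example.

import numpy as np

def estimate_regional_noise(image: np.ndarray, grid: int = 4) -> float:
    """Crude noise proxy: split the chest image into a grid of regions,
    take the standard deviation of pixel intensities in each region, and
    return the median across regions to damp the effect of anatomy edges.
    Illustrative only; not Carestream's algorithm."""
    h, w = image.shape
    region_stds = []
    for i in range(grid):
        for j in range(grid):
            region = image[i * h // grid:(i + 1) * h // grid,
                           j * w // grid:(j + 1) * w // grid]
            region_stds.append(float(np.std(region)))
    return float(np.median(region_stds))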

Edge detection is then performed to determine patient motion on the lateral view. Next, the software identifies features to allow for recognition of image cutoff, Morgan said. The algorithm then outputs the probability of a "defect" being present for each of the three parameters. It takes less than two seconds for the algorithm to process each image, she said.
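A hedged sketch of how edge information could be turned into a motion score and a per-parameter defect probability appears below; the gradient-magnitude metric and the logistic mapping are stand-ins chosen for this example, not the method Morgan described.

import numpy as np

def blur_score(lateral_image: np.ndarray) -> float:
    """Edge-sharpness proxy for patient motion on the lateral view:
    motion blur smears edges, so a lower mean gradient magnitude
    suggests more blur. Illustrative stand-in only."""
    gy, gx = np.gradient(lateral_image.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def defect_probability(score: float, threshold: float, scale: float) -> float:
    """Map a raw sharpness score to a pseudo-probability of a motion
    'defect' with a logistic squash: low scores (blurry edges) map to
    probabilities near 1. Threshold and scale would need tuning on
    expert-graded studies."""
    return 1.0 / (1.0 + np.exp((score - threshold) / scale))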

The researchers evaluated the algorithm by determining whether it could successfully predict which radiographs the expert had judged to be high-quality studies (classification of 1 or 2) or low-quality studies (classification of 3 or 4) for each of the three parameters. Sensitivity and specificity were then calculated based on the program's ability to match the expert reader's analysis.
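A minimal sketch of that scoring, assuming parallel sequences of the algorithm's binary defect calls and the expert's, could read as follows; the helper name and inputs are hypothetical.

def sensitivity_specificity(predicted_defect, expert_defect):
    """Sensitivity and specificity of the algorithm's defect-present call
    against the expert reader's grading. Inputs are parallel sequences of
    booleans; hypothetical helper for illustration only."""
    pairs = list(zip(predicted_defect, expert_defect))
    tp = sum(p and e for p, e in pairs)
    fn = sum((not p) and e for p, e in pairs)
    tn = sum((not p) and (not e) for p, e in pairs)
    fp = sum(p and (not e) for p, e in pairs)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity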

To evaluate anatomy cutoff, 777 radiographs were used, 37% of which were determined by the expert to have an abnormality. Motion was tested using 177 images, 33% of which had an abnormality, and underexposure was evaluated using 266 images, 7% of which had an abnormality. Receiver operating characteristic (ROC) curves were also created.
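As a small, self-contained example of how such ROC curves are typically generated (with made-up labels and scores, not the study's data), scikit-learn's roc_curve could be used like this:

import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical data: 1 where the expert graded a defect, 0 otherwise,
# alongside the algorithm's defect probabilities for the same images.
expert_defect = np.array([0, 0, 1, 1, 0, 1, 0, 1])
defect_prob = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7])

fpr, tpr, _ = roc_curve(expert_defect, defect_prob)
print(f"Area under the ROC curve: {auc(fpr, tpr):.2f}")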

"The program performed the best for underexposure," she said.

QA algorithm results versus expert reader
Parameter          Sensitivity   Specificity
Underexposure      95%           82%
Clipped anatomy    85%           82%
Motion             84%           80%

Although the results are promising, the algorithm's performance is difficult to judge because the study did not assess interobserver and intraobserver variability among technologists or radiologists, she said.

"We think the software may actually be able to outperform a group of radiologists since it can objectively and consistently evaluate these parameters," she said.

Computer QA software provides a number of advantages, including improved QA consistency and reproducibility, improved technologist productivity and workflow, enhanced education and training for technologists, and improved overall image quality, she said.

By Erik L. Ridley
AuntMinnie.com staff writer
December 22, 2008

Related Reading

Options abound for radiation therapy QC, August 19, 2008

PACS data-mining tool analyzes CR retake rates, August 5, 2008

Digital x-ray lowers retake rates in study from Pakistan, June 30, 2008

Prevention key for reducing CR, DR image artifacts, December 25, 2007

Developing a cost-effective image QA workflow with PACS, November 21, 2007

Copyright © 2008 AuntMinnie.com
