Study examines reasons for CR image rejections

A new research study is shedding light on why computed radiography (CR) images are rejected for image-quality reasons. A U.S. group found that while new software tools make it easier to track CR rejects, additional quality-control training is recommended before imaging facilities implement programs for reject analysis.

Digital x-ray technologies such as CR and digital radiography are becoming the standard of care at many hospitals and imaging centers, but few published studies have examined the performance of the systems from a quality perspective, according to a study published online before print on April 30 in the Journal of Digital Imaging. The lead author on the study was David Foos from the Clinical Applications Research Laboratory at Carestream Health of Rochester, NY; among the co-authors were Dr. Bruce Reiner and Dr. Eliot Siegel from the Maryland VA Healthcare System in Baltimore.

The researchers started with a database of 288,000 images collected with 20 CR readers (Kodak DirectView CR, Carestream). Of the readers, 16 were installed at a university hospital and collected 152,297 images over 435 consecutive days, while four CR systems installed at a community hospital collected 122,543 images over 275 consecutive days.

The number of rejected images was tracked using software that was activated on all 20 readers. With the software, radiologic technologists were required to enter the reason the image was rejected before another imaging study could be performed, with the results compiled in a database.
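The article describes the reject-capture requirement but not the software's interface. As a rough illustration only, the sketch below (Python, with hypothetical reason codes and function names, not Carestream's actual software) shows the basic idea of forcing a reason to be logged before the next exposure can proceed.

```python
# Illustrative sketch only -- the study used Carestream's built-in reject-tracking
# software; the reason codes, function, and log structure here are hypothetical.
REJECT_REASONS = {
    "positioning_error_anatomy_cutoff",
    "improper_exposure",
    "patient_motion",
    "artifact",
    "other",
}

reject_log = []  # in the study, entries were compiled into a database

def record_reject(image_id: str, technologist_id: str, reason: str) -> None:
    """Store the reject reason; the next study cannot start until this succeeds."""
    if reason not in REJECT_REASONS:
        raise ValueError(f"Unknown reject reason: {reason!r}")
    reject_log.append({
        "image_id": image_id,
        "technologist_id": technologist_id,
        "reason": reason,
    })
```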

The researchers also developed customized workstations at each facility to download image records from the CR systems. The image records included information identifying the patient, CR device, body part, patient position, technologist ID, and reason the image was rejected, if applicable.
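The article lists the information captured in each image record but not the storage format. A minimal sketch of such a record, assuming a simple Python dataclass with field names chosen purely for illustration, might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageRecord:
    # Field names are illustrative; the article lists the information captured,
    # not the actual schema used on the customized workstations.
    patient_id: str
    cr_device_id: str
    body_part: str            # e.g., "chest", "hip", "shoulder"
    patient_position: str     # e.g., "AP", "PA", "lateral"
    technologist_id: str
    exposure_index: Optional[float] = None
    reject_reason: Optional[str] = None   # None if the image was accepted
```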

The researchers found that the rejection rate for CR studies at the university hospital was 4.4%, compared with 4.9% at the community hospital. Both facilities had rejection rates of 8% or higher for certain types of studies, primarily skull/facial bone, shoulder, hip, spine, pelvis, and chest studies conducted within the radiology department.

The most common reason for rejected images was the combination of patient-positioning error and anatomy cutoff, which accounted for 45% of rejected images at the community hospital and 56% at the university hospital. Other reasons and their shares of rejected images are shown in the table below:

Reason for rejection | Share of rejects, university hospital | Share of rejects, community hospital
Positioning error/anatomy cutoff | 56% | 45%
Improper exposure | 13% | 14%
Patient motion | 7% | 11%
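Given a collection of image records, the two headline numbers in the study (the per-facility rejection rate and the breakdown of rejects by reason) reduce to simple counting. The sketch below, reusing the hypothetical ImageRecord defined above, shows one way such a tally could be computed.

```python
from collections import Counter
from typing import Iterable

def reject_statistics(records: Iterable[ImageRecord]):
    """Return (overall rejection rate, share of rejects by reason)."""
    records = list(records)
    rejects = [r for r in records if r.reject_reason is not None]
    rate = len(rejects) / len(records) if records else 0.0
    by_reason = Counter(r.reject_reason for r in rejects)
    shares = {reason: n / len(rejects) for reason, n in by_reason.items()}
    return rate, shares

# In the study, this kind of tally yielded rejection rates of 4.4% (university
# hospital) and 4.9% (community hospital), with positioning error/anatomy cutoff
# accounting for 56% and 45% of rejects, respectively.
```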

In analyzing the results, the group noticed several phenomena that underscore the importance of educating and training technologists on the differences between digital x-ray techniques such as CR and analog screen-film radiography. For example, a number of images were rejected for what technologists called "underexposure," but further analysis indicated that some 30% of these images had exposure index (EI) levels within the range for normal exposure. The misclassification was caused by poor positioning technique that put too much of the patient's abdomen in the field of view, the authors wrote.

"Upon visual inspection of these cases, it was discovered that the images were rendered by the image-processing software to be too bright, which is a classic characteristic of an underexposed image for a screen-film system," the authors wrote. "The poor positioning caused the image processing algorithms to render the images with lower than desired average density, which, in turn, resulted in the technologist interpreting the image as being underexposed."

In addition, the group found that chest studies conducted on portable equipment had a lower rejection rate (1%) than those performed on x-ray systems within the department (9%), despite the well-known difficulties of portable imaging that might be expected to increase the number of suboptimal studies.

According to the researchers, the discrepancy could be explained by the centralized nature of some CR installations, with readers located in central areas rather than at the point of care. Technologists may not learn until hours after capturing an image that it is of suboptimal quality, at which point repeating the study would be too disruptive to workflow. A downside of this arrangement is that suboptimal chest images may sometimes be released to the PACS network for interpretation, the authors noted.

The researchers concluded that CR, PACS, and new digital dashboard technologies for monitoring departmental operations make it easier than ever to track rejection rates and their effect on the overall quality of digital x-ray studies. However, such tools are most useful when technologists are trained in proper quality-control techniques and understand the differences between analog and digital x-ray.

"There is an opportunity to improve the completeness and accuracy of reject analysis for digital radiography systems through the standardization of data entry protocols and improved reporting and analysis methods," they wrote. "Accurate reject analysis provides the basis from which to develop targeted training programs and helps to mitigate the largest source of patient repeat exposures."

By Brian Casey
AuntMinnie.com staff writer
May 12, 2008

Related Reading

Studies examine digital methods for reducing pediatric x-ray dose, October 9, 2007

PACS data-mining technique tackles CR dose creep, July 30, 2007

DICOM-compliant displays aid CR/DR exposure control, July 17, 2007

CR/DR image quality: Issues and concerns, April 12, 2007

Strategies for reducing 'dose creep' in digital x-ray, April 11, 2007

Copyright © 2008 AuntMinnie.com
