"Despite numerous studies examining the patient, radiologist, and facility characteristics, variation in performance measures still exists, and perhaps it is due to the radiologic technologist obtaining the image," wrote lead author Louise Henderson, PhD, from the University of North Carolina at Chapel Hill, and colleagues.
Henderson's team found significant fluctuations in recall rate, cancer detection rate, sensitivity, and specificity for screening mammography, depending on the radiologic technologist who performed the imaging. For all of these measures, the variability existed regardless of whether film-screen or full-field digital mammography (FFDM) was used.
Influences on performance
A variety of factors can influence the performance of radiologists, including years of experience, interpretation mix (screening or diagnostic exams), postresidency training, and whether they pursued a breast imaging fellowship, Henderson told AuntMinnie.com. And, of course, patient age, breast tissue density, screening history, and family history affect mammography interpretation.
"Patient, radiologist, and facility factors have been found to influence mammography interpretation performance, but there's been one piece of the puzzle missing -- and that's technologists," she said.
Henderson's group used Carolina Mammography Registry data from 1,003,276 screening mammograms acquired between 1994 and 2009 at 59 facilities. The mammograms were performed by 356 mammography technologists and interpreted by 372 radiologists; for each mammogram, a unique technologist code was collected (Acad Radiol, March 2015, Vol. 22:3, pp. 278-289).
The researchers evaluated interpretation performance measures -- recall rate, sensitivity, specificity, cancer detection rate, and positive predictive value -- and compared them with cancer outcomes collected from the state cancer registry and pathology reports.
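These interpretation performance measures have standard definitions in screening audits, all derived from a confusion matrix of recall decisions against confirmed cancer outcomes. A minimal sketch (the function name and all counts are illustrative, not from the study):

```python
# Standard screening-audit definitions, computed from screening outcomes
# cross-tabulated against registry-confirmed cancers.
# TP: recalled exam, cancer found; FP: recalled, no cancer;
# FN: not recalled, cancer found; TN: not recalled, no cancer.

def performance_measures(tp, fp, fn, tn):
    exams = tp + fp + fn + tn
    return {
        "recall_rate_per_100": 100 * (tp + fp) / exams,
        "cancer_detection_per_1000": 1000 * tp / exams,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # cancers per recalled exam
    }

# Illustrative counts only (not study data):
print(performance_measures(tp=40, fp=960, fn=10, tn=8990))
```

Note that recall rate and cancer detection rate are conventionally scaled per 100 and per 1,000 exams, respectively, which is how the study's tables report them.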
For each performance measure, Henderson and colleagues used a statistical model to assess variability across technologists. The model accounted for technologist-specific and radiologist-specific random effects.
Of the 356 technologists, 343 performed film-screen exams, 51 performed digital exams, and 38 performed both. A total of 4,328 cancers were reported for film-screen mammography and 564 cancers for digital. The mean number of screening mammograms per year was 650.5 for technologists performing film-screen exams, 872 for digital exams, and 658 for both film-screen and digital exams.
The researchers measured variation by focusing on the spread of radiologist interpretation scores between the 25th and 75th percentiles, which limits the influence of outliers. A wider gap between those percentiles indicates greater variation in radiologist performance and, thus, a larger influence from technologists.
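The interquartile-range comparison described above can be sketched as follows; the per-technologist recall rates here are simulated stand-ins, not study data:

```python
import random

random.seed(0)

# Simulated per-technologist recall rates (per 100 exams); in the study these
# would come from model-based estimates for each of the 356 technologists.
recall_rates = [random.gauss(10.0, 2.0) for _ in range(356)]

rates = sorted(recall_rates)
n = len(rates)
q1 = rates[n // 4]          # crude 25th percentile (no interpolation)
q3 = rates[(3 * n) // 4]    # crude 75th percentile
iqr = q3 - q1               # wider IQR -> more technologist-driven variation

print(f"25th: {q1:.1f}, 75th: {q3:.1f}, IQR: {iqr:.1f}")
```

Because the middle 50% of values ignores the extremes, a wide interquartile range cannot be blamed on a handful of atypical technologists.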
Henderson and colleagues found a statistically significant effect from the technologists on recall rate, sensitivity, specificity, and cancer detection rate for both film-screen and digital exams. Regarding positive predictive value (PPV), they found variability by technologist for film-screen but not digital mammography.
[Table: Technologist influence on mammography performance -- recall rate per 100 exams and cancer detection rate per 1,000 exams]
The researchers did not investigate specific technologist characteristics, but they posited that technologists' work experience, levels of training and education, and interactions with radiologists may affect the performance of screening mammography.
The team cited research that assessed mammography technologists' effectiveness in providing nonclinical interpretation of screening exams in a telemammography program. The technologists identified breast features and changes that could prompt a recall with high sensitivity but low specificity (Journal of Digital Imaging, May 26, 2006, pp. 216-225).
In addition, another study evaluated technologists' ability to classify screening mammograms as negative or as requiring additional follow-up; the overall agreement between technologists and radiologists was 82%, Henderson and colleagues wrote (American Journal of Roentgenology, January 2003, Vol. 180:1, pp. 253-256).
Going forward, the researchers hope to identify which particular technologist characteristics affect radiologists' interpretive performance, Henderson said.
"Is it how a particular technologist positions the patient? Do some take inherently better pictures? Is it how they interact with the radiologist? We're not sure," she said. "What we do know is that techs play a critical role in mammography screening."
Copyright © 2015 AuntMinnie.com