Patient features have most impact on mammo reads


Patient features such as breast density, prior mammography, and the presence of symptoms such as a lump have more effect on the interpretive accuracy of diagnostic mammograms than radiologist characteristics, according to a new study published in the American Journal of Roentgenology.

But certain radiologist characteristics do help, including affiliation with an academic institution, years of interpretation experience, and exam reading volume, researchers from the University of Washington in Seattle found.

"The overall accuracy of diagnostic mammography is high," lead author Dr. Sara Jackson and colleagues wrote. "We did not identify any radiologist characteristic associated with false-positive rates, but academically affiliated radiologists had greater sensitivity."

Academic matters

Jackson's team conducted the study because, although patient and radiologist characteristics associated with the accuracy of screening mammography interpretation have been thoroughly studied, those that might influence the accuracy of diagnostic mammography interpretation have not (AJR, August 2015, Vol. 205:2, pp. 456-463).

Dr. Sara Jackson from the University of Washington.

The group expected factors such as radiologists' experience and volume of mammograms interpreted per year to be associated with better interpretive performance. But the fact that many patient characteristics were associated with sensitivity and false-positive rates did surprise the group.

"Our results may indicate that the variability in interpretation performance is primarily due to patient and/or breast tissue characteristics," Jackson told AuntMinnie.com via email.

For the research, Jackson and colleagues included data from 244 radiologists who had performed more than 274,000 diagnostic mammograms between January 1998 and December 2008, either as follow-up for an abnormal mammogram or to evaluate a breast symptom such as a lump or nipple discharge.

Almost 30% of the 244 radiologists were women, and 42% of the diagnostic mammograms included in the study were interpreted by female radiologists. About 20% of the radiologists included in the study were connected to an academic institution, and they interpreted 32% of the additional evaluations of abnormal mammograms and 37% of those exams intended to evaluate a breast problem.

Jackson's team found that diagnostic mammograms performed to evaluate an abnormal screening result had a false-positive rate of 11.9% and a sensitivity of 90.2%. Those performed to evaluate a breast problem had a false-positive rate of 7.6% and a sensitivity of 83.9%.

For those diagnostic mammograms performed after an abnormal screening exam, higher false-positive rates were associated with a radiologist's fellowship training. Higher sensitivity rates were associated with female radiologists, academic affiliation, fellowship training, having 10 to 19 years of mammography interpretation experience, a greater percentage of time spent in breast imaging, and greater screening and diagnostic interpretation volume.

For those diagnostic mammograms performed to evaluate a breast problem, higher false-positive rates were associated with younger radiologist age, female radiologists, academic affiliation, and fewer years interpreting mammograms. Higher sensitivity rates were associated with female radiologists, academic affiliation, fellowship training, having 10 to 19 years of mammography interpretation experience, and having never been named in a malpractice suit.

But once Jackson's team adjusted the data for radiologist characteristics only and for both patient and radiologist characteristics, the effect of radiologist characteristics fell away -- with one exception.

"When we estimated full models with all patient and radiologist characteristics, except for academic affiliation, we did not see a meaningful change in the odds ratios of other radiologist characteristics for any outcome," the team wrote.

Practice makes perfect?

Radiologists affiliated with academic institutions may just have access to more -- and more varied -- diagnostic mammograms, according to Jackson.

"It makes sense that radiologists who go into academic practice, work in teaching environments, who are more likely to be fellowship trained in breast imaging, and are more likely to receive referrals, including complex cases, from outside institutions would have greater cancer detection rates than those in nonacademic practice," she said.

So where does this leave other radiologists? Perhaps the mammography interpretation curriculum that academic radiologists receive could be made available to all radiologists, Jackson's team suggested, by incorporating it into residency and fellowship training and into the 15 hours of continuing medical education in breast imaging per 36 months that the Mammography Quality Standards Act requires of all radiologists who interpret mammograms.

Academic radiologists could help, too.

"Academic radiologists' support in improving systems and training for all radiologists who interpret breast imaging could improve the already high accuracy of diagnostic mammography in the United States," the researchers wrote.

Finally, radiologists can review their own performance statistics, Jackson said.

"Providing radiologists with their own interpretive performance data would allow them to better understand their own accuracy, and they could potentially review specific cases in which the clinical outcome was not in line with their interpretation," she said.
