Are rads overconfident in their mammography skills?

By Kate Madden Yee, AuntMinnie.com staff writer

September 5, 2012 -- Radiologists perceive their mammography interpretation performance to be better than it actually is, according to a new study in the September issue of the American Journal of Roentgenology. That overconfidence keeps them from actually improving, researchers from Washington state said.

Although much attention is now being paid to assessing physicians' clinical performance in order to guide quality improvement, little is known about how well physicians understand their own performance measures, according to lead study author Andrea Cook, PhD, and colleagues (AJR, September 2012, Vol. 199:3, pp. 695-702). Cook is part of the biostatistics unit of the Group Health Research Institute in Seattle.

Radiologists who interpret mammograms have received standard performance information since the 1990s, through a requirement in the Mammography Quality Standards Act (MQSA) that breast imaging facilities collect and review outcomes data for all mammograms they perform, according to Cook's team. But do radiologists take that information in?

"We recently published criteria for minimally acceptable interpretive performance with the idea that radiologists with poorer performance might want to seek additional training," study co-author Diana Miglioretti, PhD, told AuntMinnie.com. "There have also been 'ideal performance goals' published in the past. However, radiologists need to know their current performance in order to apply these criteria and determine if they might benefit from additional training."

The study included data gathered between 2005 and 2006 from 174 radiologists from six registries in the Breast Cancer Surveillance Consortium (BCSC) who had completed a mailed survey. The team compared radiologists' estimated and actual recall, false-positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography, as well as their ratings of their performance (lower than, similar to, or higher than that of their peers).

The six BCSC registries included data from San Francisco, North Carolina, New Mexico, New Hampshire, Vermont, and western Washington.

Cook's team found that although most radiologists accurately estimated their cancer detection and recall rates (74% and 78%, respectively), fewer accurately estimated their false-positive rate (19%) and PPV2 (26%). As compared with their peers:

  • 43% of the radiologists included in the study reported recall rates that were similar, while 31% reported lower rates.
  • 52% reported false-positive rates that were similar, while 33% reported lower rates.
  • 72% reported similar cancer detection rates, while 23% reported higher rates.
  • 72% reported similar PPV2 rates, while 38% reported higher rates.

"We weren't surprised that radiologists felt their performance was better than their peers -- I suspect most people believe they are good at their jobs," Miglioretti said. "However, we were surprised that radiologists were unable to accurately estimate their false-positive rates and PPV2. These are important measures for evaluating the benefits and harms of mammography. If the false-positive rate is too high, a lot of women are likely getting unnecessary additional workup. And if the PPV2 is too low, a lot of women are likely getting unnecessary biopsies that turn out to be benign."

Radiologist characteristics did not affect the accuracy of their performance estimations, except that radiologists who interpreted 1,000 or fewer mammograms annually were less accurate at estimating their recall rates, according to Cook and colleagues.

Although 96% of the radiologists who participated in the survey reported receiving audit reports, the data didn't seem to fully inform their sense of their own performance, they wrote.

"We had hypothesized that receipt of audit reports, clinical experience, and fellowship training all would improve radiologists' accuracy at estimating their own interpretative performance, but we found minimal evidence of this relationship," the team wrote. "Only having read a greater volume of mammograms had a positive effect on accurately estimating recall rate, and radiologists who more frequently used numbers or statistics when discussing mammographic results with patients were more accurate in estimating their cancer detection rate."

It's difficult for radiologists to actually improve their mammography interpretation performance if they don't have an accurate perception of how they are doing, Miglioretti concluded.

"If radiologists believe they are better at their jobs than they actually are, then there will be little motivation to get additional training or support to improve their performance," she told AuntMinnie.com. "If their false-positive rates are higher than they think they are, then I suspect they will be less likely to work toward adjusting their performance so they can reduce unnecessary workups. We need to understand which areas of our practice might benefit from improvement in order to be motivated to change."


Copyright © 2012 AuntMinnie.com
 
