How do radiology residents affect mammography reads?


Learning to interpret screening mammography exams is a key part of any radiology resident's training. But could their learning curve negatively affect patients, due to increased recall rates that result in false positives? The answer is yes, if attending radiologists aren't careful, according to a new study published in the Journal of the American College of Radiology.

Little has been published on whether radiology trainees affect the performance of interpreting physicians in screening mammography -- or any other subspecialty, for that matter, wrote lead author Dr. Jeffrey Hawley and colleagues. So Hawley's team investigated the effect of residents' involvement in mammography interpretation and subsequent diagnostic outcomes (JACR, February 26, 2016).

"Previous data had been published suggesting that working with trainees increases the mammography recall rate, but the question of whether these increased recalls translate into finding more cancers hadn't been addressed," Hawley, an assistant professor at Ohio State University, told AuntMinnie.com. "When we conducted the study, we wondered if an increase in callbacks might result in more cancers detected, but that's not how it worked out."

Reviewing recalls

The study included nearly 48,000 mammograms interpreted between January 2011 and December 2013. Six dedicated breast imaging attending radiologists read the exams, either alone or with a trainee. Readers 1, 2, and 3 had just completed breast imaging fellowships at the beginning of the study; readers 4 and 5 were fellowship-trained in breast imaging and had been reading for four years; and reader 6 had no dedicated fellowship training but had 28 years of experience in breast imaging.

Dr. Jeffrey Hawley from Ohio State University.

The trainees were second- to fourth-year radiology residents, breast imaging fellows, or fellows from other radiology subspecialties on breast imaging rotations. Trainees made an initial draft interpretation using dictation software and marked areas of the images they thought needed further evaluation. Attending radiologists and trainees then reviewed each of the exams in batch interpretation sessions.

A total of 28,283 exams were interpreted by attending radiologists alone, while 19,631 were interpreted by an attending in conjunction with a trainee.

The overall recall rate for attending radiologists reading alone was 14.7%, while the rate for exams read in conjunction with a trainee was 18%; four of the six attending radiologist readers had statistically significant increases in recall rate with trainee participation.
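For a sense of scale, here is a minimal Python sketch that back-calculates the recall counts from the reported rates and exam volumes and compares the two proportions. The article doesn't say which statistical test the study used, so the pooled two-proportion z-test below is an illustrative assumption, not the authors' method.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions,
    using a pooled two-proportion z-test (illustrative choice)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Recall counts back-calculated from the reported rates and exam volumes
recalls_alone = round(0.147 * 28_283)    # ~4,158 recalls, attending alone
recalls_trainee = round(0.180 * 19_631)  # ~3,534 recalls, with a trainee

p = two_proportion_z(recalls_alone, 28_283, recalls_trainee, 19_631)
print(f"p = {p:.2e}")  # far below 0.05 at these exam volumes
```

At volumes this large, even a 3.3-percentage-point gap in recall rate is overwhelmingly unlikely to be chance, which fits the study's finding of significant increases for most readers.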

These recall rates are well above the suggested range of 5% to 12%, which may reflect the fact that three of the attending radiologists were just out of fellowship training at the beginning of the study period, Hawley and colleagues noted.

"I do think that having three of the attendings just out of fellowship at the beginning of the study period contributed to the rate," he told AuntMinnie.com. "Anecdotally, I've heard lectures stating it takes several years, about five, for a dedicated breast imager after fellowship to get down to and maintain a final level of recall, and from personal experience I would say that is fairly accurate."

Patients with dense breast tissue made up the majority of recalls by radiologists who were interpreting with trainees, the researchers found.

Attending radiologists reading alone found 161 cancers, while attending radiologists and trainees reading together found 103 cancers. The overall cancer detection rate for attending radiologists reading alone was 5.7 cancers per 1,000 exams; for attending radiologists reading with trainees, the rate was 5.2 cancers per 1,000 exams. This difference was not statistically significant.
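Plugging these figures into the same hypothetical helper from the sketch above shows why the gap does not reach significance: with only a few hundred cancers among roughly 48,000 exams, a difference of 0.5 cancers per 1,000 is well within chance.

```python
# Cancer detection rates per 1,000 exams, from the counts reported above;
# two_proportion_z is the illustrative helper defined in the earlier sketch.
rate_alone = 161 / 28_283 * 1_000    # ~5.7 cancers per 1,000 exams
rate_trainee = 103 / 19_631 * 1_000  # ~5.2 cancers per 1,000 exams

p = two_proportion_z(161, 28_283, 103, 19_631)
print(f"{rate_alone:.1f} vs {rate_trainee:.1f} per 1,000; p = {p:.2f}")
# Prints roughly "5.7 vs 5.2 per 1,000; p = 0.52" -- consistent with the
# study's finding of no statistically significant difference.
```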

Why the high recall rates? Trainee participation in screening mammography interpretation may introduce several biases, according to the authors:

  • "Alliterative bias," where an attending radiologist is swayed by a trainee's oral or written report before or during interpretation
  • "Framing bias," where the attending radiologist's reading is skewed in a particular direction, such as recalling the patient, by the trainee's marks on the exam
  • "Anchoring bias," where the attending radiologist fails to adjust an initial impression of the exam -- presented by the trainee -- when confronted by the study findings themselves

Taking stock

In any case, it's up to attending radiologists to manage resident input for mammography interpretation, according to Hawley and colleagues.

"When working with trainees, faculty members should not place unwarranted confidence in their interpretations and should apply the same standard of review as they would when reading cases by themselves," the group wrote.

Also, because trainee involvement in screening mammography interpretation may increase false positives, attending radiologists should consider strategies to minimize them. These could include interpreting the exam alone before reading the trainee's report, and turning off any suspicious-area markers the trainee placed on the images during initial review.

"We should be aware of the potential for bias and take stock of how we interact with residents," Hawley told AuntMinnie.com. "We need to do our best to minimize errors when it comes to mammography interpretation."
