October 1, 2013 -- Referring physicians asked to rate the usefulness of a set of sample radiology reports gave them relatively high marks, according to a new study published in the October American Journal of Roentgenology. But the referring physicians added that they would like to hear from radiologists more often about important findings.
Radiology reports are the primary way radiologists communicate findings to both referring physicians and patients, and radiologists have worked hard to improve their reporting practices, particularly through peer review, wrote lead author Dr. Andrew Gunn and colleagues from Massachusetts General Hospital.
The most common type of radiology peer review is to assess how well two radiologists blinded to each other's work agree on a case -- whether an abnormality was perceived, interpreted, and reported correctly.
But there's still room for improvement in radiology reports, especially because radiologists aren't the end users of the finished product: referring physicians are. A formal peer-review process that included structured feedback from these doctors could improve reporting practices even more, Gunn's team hypothesized (AJR, October 2013, Vol. 201:4, pp. 853-857).
"Every radiologist has received a phone call or an email from a referring physician saying there's something wrong with the report, like a typo, or even the clinical question wasn't answered," Gunn told AuntMinnie.com. "We give each other radiologist-to-radiologist feedback, but what's missing is input from referring physicians and patients."
For the study, the researchers invited five referring physicians -- all primary care providers -- to participate as reviewers. Reports from abdominal CT, chest CT, brain MRI, and abdominal ultrasound exams acquired in September 2010 were considered appropriate for review if the indication was "abdominal pain," "shortness of breath," "headache," or "pain," respectively. The researchers excluded reports from normal or follow-up exams.
Gunn and colleagues then randomly selected 12 reports from each of these categories (48 in total) and distributed them to the five reviewing physicians, along with the clinical scenario and an evaluation form for each exam.
On a scale of 1 to 5 (1 being "not useful" and 5 being "very useful"), the reports were found to be clinically useful (average, 3.8), allowing for good confidence in clinical decision-making (average, 3.7). The referring physicians also pointed to several common problems with the reports.
Of the reports, 35.4% contained recommendations for further diagnosis or treatment, and the referring physicians deemed 84.7% of these recommendations clinically appropriate. Among the 64.6% of reports without such recommendations, the physicians felt that 31% should have included one, the authors wrote. In addition, nearly a third (31.2%) of the reports that did not make clear whether the results had been directly communicated to the ordering provider contained results that, in the reviewers' judgment, should have been communicated.
"A key piece of feedback we got from these referring physicians is that they would have liked to be notified about results more often," Gunn told AuntMinnie.com. "The comment was something like, 'You wouldn't believe the nonsense I get paged about, but important findings don't come through.' So that told us that radiologists should be calling and paging more often. It's good for patients, and it shows referring doctors that someone's reading the scan, someone is paying attention."
The detection and interpretation of abnormalities are the responsibility of the radiologist, but good reporting is a multifaceted process that includes not only documenting abnormal findings, but also putting those findings in context so that the referring physician can treat the patient effectively. But what if a report uses terms or practices that referring physicians aren't familiar with?
"Such items are unlikely to be identified during a radiologist-to-radiologist peer-review process even though they may cause significant angst among referring physicians," Gunn and colleagues wrote.
Including feedback from referring physicians in the process could improve reporting practices in radiology and, at the same time, improve communication and camaraderie between radiologists and referring physicians, according to the authors.
How could this be accomplished? One idea is to develop a Web-based evaluation that the referring physician could choose to fill out when reading the report in the electronic medical record, Gunn's group suggested. Of course, it's unclear whether referring physicians would actually take the time to provide structured feedback on a recurring basis if given the opportunity, and the system would need to be designed so that reports were not evaluated only when they contained errors, which would bias the feedback.
But such a system would have the benefits of anonymization, random selection of reports for review, reduced bias in the selection of reviewers, and physician-specific data that could be collected continuously or on a periodic basis, Gunn and colleagues wrote.
"The primary care doctors appreciated that we reached out to them, that we're aware there could be communication issues in radiology reports, and that we want to improve the service we provide," Gunn said.
Quote from bigskyrad
It is sad that a study from a respected academic institution on the clinical utility of radiology reports would ignore a basic fact that communication works best when at least two individuals are actually involved. The proper two-way communication in the situation studied is supposed to start with the x-ray order. Referring physicians are often grotesquely negligent in their responsibility to communicate relevant clinical information on the order. A "history" of "abdominal pain" is what I would expect from my house cleaner. A proper history on an x-ray order of, say, "subacute RUQ pain and tenderness, fever, leukocytosis" would much more likely result in an ultrasound or CT report that emphasized clinically relevant findings. A single word "history" of "dyspnea" on the order for chest radiography may result in a normal report, while a clinical indication for that same study of "slowly progressive dyspnea - hx of scleroderma" would often result in a carefully worded discussion on the presence or absence of (often subtle) interstitial lung disease and perhaps a recommendation for high resolution pulmonary CT.
Medical students should be taught that the x-ray order is a clinical tool that should be used properly. I illustrate the proper use of this clinical tool in my most recent book, "Practical Radiology - A Symptom-Based Approach," published earlier this year by F.A. Davis.
Quote from expertrad
I think there is an idea there for any research-minded person: look at the improvement in report quality after some such intervention.