5 ways to create a culture of safety in radiology


Having evaluated dozens of radiology practices for their departmental, peer review, and quality and safety programs, I have seen recurring patterns of reluctance by radiologists to transparently perform overreads and double reads, and then report discrepant interpretations.

There is a human tendency in a collegial professional setting to minimize, discount, and often frankly ignore the oversights of previous readers. The problem is less apparent when reading outside imaging studies, but it is still present.

Dr. Nicolas Argy, JD.

It's an issue that's much bigger than radiology. A recent Johns Hopkins University study that identified medical errors as the third leading cause of death in the U.S. has made headlines. A 2015 Institute of Medicine report that chronicled the high incidence of diagnostic errors has also been highly publicized.

I have interviewed department chairs and radiologists in charge of quality and safety, and I hear two recurring themes. Often I am told that the practice has superb radiologists who rarely, if ever, miss a finding. Others confess the overwhelming challenge of convincing their colleagues to participate in meaningful peer review. There is often a culture of blame and shame, and a strong sense that reporting errors may lead to punitive outcomes or discipline.

Beyond our professional ethical obligations, radiologists are required by the Joint Commission, American College of Radiology (ACR) accreditation, and hospital credentialing standards to engage in peer review. The ACR modality certifications require a dedicated program of peer review and tracking, and the ACR is quite explicit about what these programs must do (a minimal tracking sketch follows the list below). They must:

  • Include an assessment of double reading (two physicians interpreting the same study)
  • Allow for random selection of studies to be reviewed on a regularly scheduled basis
  • Assess the agreement of the original report with subsequent review (or with surgical or pathological findings)
  • Adopt a classification of peer review findings with regard to level of quality concerns (such as a four-point scoring scale)
  • Establish policies and procedures for action to be taken on significant discrepant peer review findings for the purpose of achieving quality outcomes improvement
  • Summarize statistics and comparisons generated for each physician by imaging modality
  • Summarize data for each facility/practice by modality
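
None of this requires elaborate software. As a minimal sketch only (the record fields, score values, and function names below are my own hypothetical choices, not an ACR or RadPeer specification), such a program could be tracked with a short script that randomly samples studies for double reading, records a four-point score, and summarizes results per physician and per modality:

```python
import random
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class PeerReviewRecord:
    """One double read: a second radiologist re-interprets a finalized study."""
    accession: str
    original_reader: str
    reviewer: str
    modality: str   # e.g., "CT", "MR", "US", "XR", "MG"
    score: int      # hypothetical four-point scale: 1 = concur ... 4 = significant discrepancy


def sample_for_review(finalized_studies, n=5):
    """Randomly select finalized studies for double reading on a regular schedule."""
    return random.sample(finalized_studies, min(n, len(finalized_studies)))


def summarize(records):
    """Per-physician, per-modality counts and significant-discrepancy rates (scores 3-4)."""
    stats = defaultdict(lambda: {"reviews": 0, "significant": 0})
    for r in records:
        key = (r.original_reader, r.modality)
        stats[key]["reviews"] += 1
        if r.score >= 3:
            stats[key]["significant"] += 1
    for (reader, modality), s in sorted(stats.items()):
        rate = s["significant"] / s["reviews"]
        print(f"{reader:<12} {modality:<3} reviews={s['reviews']:>4} significant={s['significant']:>3} rate={rate:.1%}")


# Hypothetical example: two CT studies by the same reader, one scored as a significant discrepancy
records = [
    PeerReviewRecord("A1001", "Dr. Smith", "Dr. Jones", "CT", 1),
    PeerReviewRecord("A1002", "Dr. Smith", "Dr. Jones", "CT", 3),
]
summarize(records)
```

In practice this would feed a reporting database rather than print to the console, but the point stands: the tracking burden itself is small.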

Many practices merely pay lip service to the peer review program, and many software programs such as the ACR's RadPeer or its equivalent are populated with data indicating that fellow radiologists concur with 99% of readings.

But is this realistic?

The data are clear and have been reported in the literature for 60 years. Dr. Leonard Berlin reviewed the topic in 2007, highlighting the work of Dr. L. Henry Garland, which showed a 50% discrepancy rate between radiologists interpreting chest x-rays. More shocking was the inconsistency of a single radiologist interpreting the same exam twice, with intraobserver discrepancy quoted as exceeding 20%.

Plain radiography films are only one example. Up to 60% of mammograms originally read as normal in patients who later develop cancer are deemed abnormal in retrospect. Discrepancy rates reported in the literature for advanced imaging modalities such as CT, MRI, and ultrasound run in the 20% to 30% range. Garland concluded that radiologists miss 30% of positive findings.

ACR data suggest discrepancies should run in the 3% range, but this benchmark suffers from relying on self-reported data, and it likely underreports the real incidence published in scientific studies by an order of magnitude, as the rough arithmetic below illustrates.
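
A back-of-the-envelope comparison makes the gap concrete. The numbers below are purely hypothetical and not drawn from any actual practice: a group logging 99% concurrence is implicitly reporting a significant-discrepancy rate of about 1%, versus the 20% to 30% repeatedly found when studies are formally re-read.

```python
def discrepancy_gap(reviews_logged, significant_discrepancies,
                    literature_low=0.20, literature_high=0.30):
    """Compare a self-reported significant-discrepancy rate with published re-read rates."""
    observed = significant_discrepancies / reviews_logged
    print(f"Self-reported rate: {observed:.1%}")
    print(f"Published re-read range: {literature_low:.0%} to {literature_high:.0%}")
    if observed < literature_low / 10:
        print("Warning: more than an order of magnitude below published figures; "
              "the program may be measuring politeness, not performance.")


# Hypothetical practice: 2,000 peer reviews logged, 20 coded as significant discrepancies (1%)
discrepancy_gap(2000, 20)
```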

Changing the perception that data reporting can result in punitive outcomes requires a concerted effort. Objective criteria with established policies and procedures to address discrepancies must be promulgated. The emphasis must be on education and remediation.

A single discrepancy should never be the basis for punitive action; the response should be to identify why the error occurred, including, when appropriate, a root-cause analysis. Identifying individual practitioners' strengths and weaknesses can lead to additional training or to focusing their practice on modalities in which they excel.

Moving from a culture of silence to a culture of safety (COS) starts with a commitment to transparency and a just culture. COS surveys have been validated as tools that create cultural change and improve care. The U.S. Agency for Healthcare Research and Quality provides free medical office surveys that can be tailored to the radiology practice.

Departments and groups must adopt a philosophy that mistakes, oversights, and errors are opportunities to improve. As a chairman of radiology for almost a decade, I relished identifying cases with findings that were missed by two or more radiologists.

Why? If two radiologists miss the same finding, it is likely subtle and provides an excellent teachable moment. Monthly conferences with peer review of interesting anonymized cases provided an opportunity to learn from each other's errors and hopefully prevent them in the future. All radiologic technologists were invited to participate since they are the first set of eyes to see images and to see patients. Technologists can be vital in reducing errors.

A comprehensive manual of best practices and methods to enhance a culture of safety in the radiology department is beyond the scope of this article, but here are the top five interventions to create an environment where discrepant readings are identified, reported, and used as learning opportunities:

  1. Complete a radiology-specific culture-of-safety survey to gauge the attitudes of the radiologists and staff in identifying and reporting both erroneous readings and problems within the department. Repeat yearly to measure progress.
  2. Create programs that encourage the reporting of discrepant readings, including rewards for those who code the most exams, and set minimum thresholds for peer review participation (five exams per day to be coded) with accountability measures in place.
  3. Schedule monthly interesting case/peer review conferences with invitations to the entire staff to review the many opportunities to learn from each other.
  4. Work closely with your emergency department (ED) and surgical colleagues, using ED bounce-backs and discrepant surgical findings as rich opportunities to enhance diagnostic accuracy.
  5. Employ a radiology trigger tool to highlight cases that warrant additional review (see the sketch after this list).
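
The trigger tool in item 5 need not be sophisticated. Here is a minimal sketch, with the caveat that the trigger criteria and field names are hypothetical illustrations rather than an established instrument: a short rule set run against the day's cases that flags those worth pulling for a second look.

```python
from dataclasses import dataclass


@dataclass
class Case:
    accession: str
    ed_bounce_back: bool = False        # patient returned to the ED within 72 hours
    surgical_discrepancy: bool = False  # operative findings differed from the report
    addendum_issued: bool = False       # report amended after finalization
    critical_finding: bool = False      # critical result was communicated


def trigger_review(case):
    """Return the reasons (if any) that this case should be pulled for peer review."""
    reasons = []
    if case.ed_bounce_back:
        reasons.append("ED bounce-back")
    if case.surgical_discrepancy:
        reasons.append("discrepant surgical findings")
    if case.addendum_issued:
        reasons.append("post-finalization addendum")
    if case.critical_finding:
        reasons.append("critical finding follow-up")
    return reasons


# Hypothetical worklist: flag today's cases that hit at least one trigger
cases = [
    Case("A2001", ed_bounce_back=True),
    Case("A2002"),
    Case("A2003", surgical_discrepancy=True, addendum_issued=True),
]
for c in cases:
    hits = trigger_review(c)
    if hits:
        print(f"{c.accession}: review suggested ({', '.join(hits)})")
```

Cases flagged this way feed naturally into the monthly conference in item 3, which is where the learning actually happens.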

There are those who believe that the culture I espouse is a pipe dream, since no one reports their own errors or those of a friend or colleague. Many practice in this opaque environment.

I view the matter as requiring a change in perspective. The radiology group needs a new philosophical mandate that focuses on personal and departmental excellence, an attitude of practicing better today than yesterday, and a commitment to learning from errors. We must enhance the team milieu of patient-centered service as a mark of our commitment to our friends, neighbors, and the community we serve.

Progress is already being made. For example, the radiology department at Lahey Hospital and Medical Center in Burlington, MA, is proactively educating its residents and staff on the importance of a culture of safety and is conducting simulated root-cause analysis training with role play.

Some 2,500 years ago, Socrates said that the unexamined life is not worth living. We can do better, and we should. We owe it to ourselves and our patients.

Dr. Nicolas Argy, JD, practiced radiology for 30 years. He is currently a healthcare consultant, educator, and lecturer on medicolegal topics. He can be reached by email at [email protected], on LinkedIn, and on Twitter at @NicolasArgy.

The comments and observations expressed herein are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.

Copyright © 2016 Dr. Nicolas Argy
