RAND study questions whether decision support really works

Decision support for physicians ordering imaging exams has been touted as an effective way to reduce unnecessary imaging and ensure the appropriateness of studies. However, a new RAND report evaluating a Medicare demonstration project of decision support suggests the technology may not be as helpful as it's cracked up to be.

The Medicare Imaging Demonstration (MID) project was mandated by the Medicare Improvements for Patients and Providers Act of 2008, which directed the Secretary of the Department of Health and Human Services to conduct a demonstration that would educate physicians about the appropriateness of their advanced imaging orders, according to criteria selected by the secretary and entered into computerized physician order-entry (CPOE) systems.

The secretary was also directed to submit a report on the demonstration's findings to Congress, which led to the RAND report. The MID project's decision support did not significantly improve appropriate image ordering -- though it did frustrate physicians, concluded lead author and RAND policy researcher Justin Timbie, PhD, and colleagues.

Appropriate ... or not?

Healthcare consulting firm the Lewin Group designed and operated the demonstration across five "conveners," or organizations responsible for providing and supporting the use of decision-support systems for a collection of physician practices. On assignment from the U.S. Centers for Medicare and Medicaid Services (CMS), RAND undertook the analysis of the project.

The demonstration took place between October 2011 and October 2013. It assessed appropriate ordering of 12 common advanced diagnostic imaging exams: MRI of the brain, knee, lumbar spine, and shoulder; CT of abdomen, abdomen and pelvis, brain, lumbar spine, pelvis, sinus, and thorax; and SPECT myocardial perfusion imaging.

When a physician intended to order one of these exams, he or she had to consult a decision-support system that had been programmed with appropriateness guidelines. The demonstration rated imaging orders as "appropriate," "uncertain," "inappropriate," and "not covered by guidelines." The category of "uncertain" meant that physicians should use discretion because the guidelines for the particular clinical scenario were not definitive.
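
In code, that rating step amounts to a lookup of the order against the programmed guidelines. The minimal Python sketch below is purely illustrative: the guideline table, exam names, and clinical scenarios are invented for this example and are not the actual MID system or its criteria.

```python
# Hypothetical sketch of a MID-style rating step: an imaging order is
# matched against a programmed guideline table and assigned one of the
# four categories the demonstration used. The table entries, exam names,
# and scenario strings below are invented for illustration only.

# Toy guideline table: (exam, clinical scenario) -> rating.
GUIDELINES = {
    ("MRI lumbar spine", "low back pain >6 weeks, failed conservative care"): "appropriate",
    ("MRI lumbar spine", "acute low back pain, no red flags"): "inappropriate",
    ("CT sinus", "recurrent sinusitis, surgery being considered"): "uncertain",
}

def rate_order(exam: str, scenario: str) -> str:
    """Return the appropriateness rating for an order, falling back to
    'not covered by guidelines' when no programmed guideline matches."""
    return GUIDELINES.get((exam, scenario), "not covered by guidelines")

if __name__ == "__main__":
    print(rate_order("MRI lumbar spine", "acute low back pain, no red flags"))
    # -> inappropriate
    print(rate_order("MRI brain", "chronic headache"))
    # -> not covered by guidelines
```

The key behavior is the fallback: any order whose clinical scenario has no programmed guideline lands in "not covered by guidelines," which, as described below, is where most MID orders ended up.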

Eighteen months of orders (the "intervention" period) were compared with a six-month baseline period at the start of the project, during which orders were entered into the decision-support system and rated, but physicians did not receive immediate feedback on the appropriateness of the exam.

Out of 139,757 orders for advanced diagnostic imaging procedures placed during the study time frame, 48,881 (35%) could be rated; the rest fell into the "not covered by guidelines" category -- which caused problems later on, Timbie and colleagues noted.

Imaging order appropriateness in MID project

Time frame      Appropriate    Uncertain      Inappropriate
Baseline        61.5%-81.8%    10.3%-21.0%    7.8%-18.1%
Intervention    75.1%-83.9%    11.1%-16.1%    5.3%-9.0%

While appropriate ordering seemed to improve overall between the baseline and intervention periods, the percentage of unrated orders also increased, the authors wrote.

"If the orders 'not covered by guidelines' could have been rated, they may have changed the percentage of appropriate, uncertain, and inappropriate orders," they wrote. "For this reason, the changes in rates do not necessarily indicate an improvement in the appropriate ordering rate over the course of the demonstration."

Timbie and colleagues also analyzed how the decision-support framework affected physicians with high and low ordering volume. Improvements in "appropriate" orders between the baseline and intervention periods were no greater for high-volume physicians than for low-volume ones, suggesting that greater exposure to the decision-support system did not noticeably affect the ordering of appropriate exams.

They also found that decision support did not affect utilization rates of advanced imaging.

"Exposing ordering physicians to appropriateness guidelines for advanced diagnostic imaging over the course of two years had no effect on utilization for physicians ... and where a statistically significant effect was found, its magnitude was very small," the researchers wrote.

Frustrated physicians

Entering and changing orders in the decision-support system affected workflows, with physicians spending an additional three minutes on these tasks. Time spent entering an order -- only to discover that it could not be linked to a guideline -- was particularly frustrating for physicians, according to Timbie and colleagues.

"They might have been willing to spend more time ordering advanced diagnostic imaging if they thought the decision-support system used in the demonstration added value to their workflows, yet physicians largely did not view them as such," they wrote. "Physicians said they would have preferred to receive guidance about different imaging procedures as they were considering placing an order, rather than deciding what to order and then consulting [the system]."

The project was designed to give ordering physicians real-time feedback on whether a particular imaging exam was appropriate: It assumed there were guidelines available for the clinical scenarios that prompted the orders, that these guidelines could be programmed into the decision-support system in a user-friendly way, and that all physicians ordering these exams would benefit from appropriateness feedback.

But some of the project participants were skeptical about these assumptions. While professional societies might seem to be the best source of imaging guidelines, some felt that these organizations have a vested interest in advising that imaging be ordered, Timbie's team wrote. Also, because many advanced diagnostic imaging guidelines depend on expert opinion -- rather than randomized controlled trials or clinical outcomes -- they're subject to differences in opinion and may not match local practice; in fact, one participant estimated that 20% to 30% of the guidelines used in the project conflicted with local standards of care, according to the researchers.

Flawed project?

The MID project didn't show that decision support doesn't work -- just that the project was implemented with an incomplete set of criteria for the task, according to Bob Cooke, vice president of marketing and strategy for the National Decision Support Company (NDSC). In 2012, the firm joined with the American College of Radiology (ACR) to establish ACR Select, a program that brings ACR's appropriateness criteria to the digital world through clinical decision-support algorithms that can be used as part of healthcare IT applications.

"We feel the project design was flawed and limited by its sample size," Cooke told AuntMinnie.com. "Because only a small number of imaging exams were included in the demonstration, it was a challenge for physicians to find clinical scenarios that matched their orders. And there was no opportunity to adapt these scenarios during the project time frame."

ACR Select was not part of the MID project, although the initiative did use some of ACR's appropriateness guidelines, Cooke said.

"The ACR has had appropriateness guidelines for 20 years, in narrative form," he said. "But as more and more doctors use computerized physician order entry, it doesn't make sense for them to have to look up a document. That's why ACR Select was developed."

ACR Select has been integrated into all of the major electronic medical record applications, and it includes many more imaging exams than the 12 incorporated in the MID project. The package's servers record 200,000 interactions per month, whereas the MID project logged about 140,000 orders over its entire 24-month time frame. An analysis of pilot efforts with ACR Select has shown that it works at least as well as a radiology benefits management (RBM) program, Cooke added.

"The RAND report validates the method we've used with ACR Select," he said. "Having an integrated approach to clinical decision support inside the CPOE platform is critical to physicians adopting its use, as is having a complete set of criteria covering even commonly encountered clinical scenarios."
