Not all clinical decision support software is created equal


A new study has found fundamental differences in the effectiveness of clinical decision support (CDS) depending on the type of software used. Published in the Journal of the American College of Radiology, the study also found that radiologists can become frustrated when CDS software isn't easy to use.

Researchers from the Cleveland Clinic Foundation assessed the performance of two commercially available electronic CDS systems, and found that there's more work to be done to make decision support effective. In fact, users were annoyed by certain aspects of the decision support systems that were tested (JACR, April 2015, Volume 12:4, pp. 349-357).

"Decision support isn't the 'end-all, be-all,' " lead author Erika Schneider, PhD, told AuntMinnie.com. "It's a stepping stone in a larger process, and the infrastructure behind pathways of care needs to be further developed."

Evaluating appropriateness

Schneider and colleagues evaluated the appropriateness of, and predicted reimbursement for, 2,000 consecutive orders for outpatient MRI and CT scans, using Medicalis' SmartReq and Nuance Communications' RadPort clinical decision support systems during two different three-month periods.

Both systems assessed exams using appropriateness criteria from the American College of Radiology (ACR), graded on the ACR's nine-point scale: inappropriate studies were rated 1 to 3 and displayed in red, moderately appropriate studies were scored 4 to 6 and displayed in yellow, and appropriate studies were rated 7 to 9 and displayed in green.

The Medicalis software also included two additional categories: "indeterminate," in which more information was required from users, and "not validated," in which no appropriateness criteria had been established. For both categories, SmartReq required users to input additional information about the requested study to help the software match a published guideline.
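
To make the scoring scheme concrete, here is a minimal sketch in Python of how an order's ACR appropriateness score might be mapped to the color-coded categories described above, including the two extra Medicalis categories for orders that lack information or a matching guideline. The function and category names are illustrative assumptions, not the actual logic of SmartReq or RadPort.

```python
# Hypothetical sketch of the score-to-category mapping described above.
# Category labels and the handling of missing scores are assumptions for
# illustration, not the vendors' actual implementations.

from typing import Optional

def categorize_order(acr_score: Optional[int], guideline_exists: bool = True) -> str:
    """Map an ACR nine-point appropriateness score to a color-coded category."""
    if not guideline_exists:
        return "not validated (no appropriateness criteria)"  # Medicalis-only category
    if acr_score is None:
        return "indeterminate (more information required)"    # Medicalis-only category
    if 1 <= acr_score <= 3:
        return "red (inappropriate)"
    if 4 <= acr_score <= 6:
        return "yellow (moderately appropriate)"
    if 7 <= acr_score <= 9:
        return "green (appropriate)"
    raise ValueError("ACR appropriateness scores run from 1 to 9")

# Example: an order scored 8 against a published guideline renders as green,
# while an order with no score yet falls into the "indeterminate" bucket.
print(categorize_order(8))      # green (appropriate)
print(categorize_order(None))   # indeterminate (more information required)
```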

SmartReq scored 60% of the exams as "indeterminate" (lacking sufficient information) or "not validated" (having no guidelines to support the order). Excluding these cases, RadPort scored more exams as appropriate than did SmartReq (80% versus 51%) and predicted whether an exam would be covered by insurance significantly more often (76% versus 58%).

For both software applications, 98% of examinations with high- or moderate-utility ratings were reimbursed. Of orders that required preauthorization, 19% were paid despite low-utility ratings, Schneider's group wrote. Only 0.8% of exams with high or moderate appropriateness scores were denied reimbursement.

But why almost 20% of low-utility exams were reimbursed is a mystery, Schneider said.

"Why were these low-utility exams reimbursed at all?" Schneider said. "Either the decision support systems didn't fully capture the patient's situation, or the payor wasn't paying attention."

The main difference between the two clinical decision support systems, and the strongest influence on outcomes, was how exams without relevant guidelines or with insufficient information were handled, Schneider and colleagues wrote. Nuance's RadPort supplemented published guidelines with clinical best practice guidelines, while Medicalis' SmartReq used pop-up windows to solicit additional information from users. This sparked frustration, the team found.

"The Medicalis CDS system required extra user effort to address the indeterminate exams; the additional questions and pop-up windows were not acceptable to the study team," Schneider and colleagues wrote. "User discontent was pronounced after completing the multiscreen Medicalis CDS process and receiving an 'indeterminate' or 'not validated' order. Operationally, we believe CDS will be clinically accepted only if the vast majority of exams provide guidance, rather than the 40% found here."

What's the effect?

What exactly is the effect of clinical decision support systems? Use of CDS doesn't necessarily translate to finding disease more accurately, the team wrote, citing a 2010 study (American Journal of Managed Care, February 2010, Vol. 16:2, pp. 102-106).

"Electronic point-of-care CDS systems [have been] reported to reverse the 8% annual growth rate of CT, MRI, and nuclear medicine exams ... in the outpatient and emergency settings," the team wrote. "In addition, imaging CDS implementation led to a greater percentage of orders meeting evidence-based guidelines. However, no statistical difference was found in the proportion of tests with positive findings or in the proportion of tests likely to affect patient care."

Whether physicians are willing to work with a CDS system depends on how easy it is to use -- and how results are presented, according to Schneider.

"Our most important finding is that completeness of indication content and ease of use in a clinical decision support package are going to drive clinical acceptance," she told AuntMinnie.com. "And the process needs to be a guiding rather than a punitive one. Our users were very frustrated when they had to put in additional effort to get an exam rating, which was then 'low-utility.' It would be better if these packages suggested alternative exams or care paths."
