Clinical decision support can cut inappropriate imaging

A meta-analysis published on Tuesday in the Annals of Internal Medicine found that clinical decision-support (CDS) software can have a positive -- if moderate -- effect on appropriate ordering of imaging studies. The researchers stopped short of recommending the use of CDS, however.

A team led by Dr. Caroline Lubick Goldzweig, of the Veterans Affairs (VA) West Los Angeles Healthcare Center, concluded that CDS can moderately improve the appropriate use of diagnostic radiology and also decrease overall imaging utilization by a small amount.

"The use of a 'hard stop' as part of the intervention and use in an integrated care delivery setting may increase effectiveness," the authors noted.

However, the researchers also expressed concern over the scarcity of scientific data on the potential harms of CDS, such as physician dissatisfaction or a decrease in the number of appropriate imaging tests.

"More data on potential harms are needed before widespread adoption can be recommended," they wrote (Ann Intern Med, April 21, 2015, Vol. 162:8, pp. 557-565).

Yielding improvements?

In work performed for the Veterans Health Administration (VHA) Choosing Wisely Workgroup, the researchers set out to determine if the use of CDS capabilities of electronic health records (EHRs) could improve the appropriate use of radiology studies.

They performed a literature review of PubMed articles published from 1995 through September 2014 and also traced key references in Web of Science and PubMed, turning up 23 relevant studies that assessed the effect of CDS on the ordering of diagnostic radiology tests in adults. Of these studies, three were randomized trials, seven were time-series studies, and 13 had a "pre-post" study design.

Studies were included in the meta-analysis if they involved EHR-based CDS or if the researchers determined that the Web-based or standalone CDS software could be easily integrated into the EHR. Two reviewers independently extracted data related to functionality, study outcomes, and context, and then evaluated the quality of each of the 23 studies.

Two studies were excluded from the quantitative analysis because they did not present sufficient data. The remaining 21 studies included 13 that reported an outcome regarding the appropriateness of image ordering, and 13 that reported an outcome on the use of imaging; five studies reported both types of outcomes.

A moderate benefit

Ten of the 13 studies found a statistically significant benefit for improving appropriate imaging, while one reported a benefit that wasn't statistically significant and two found no effect. A random-effects pooled estimate from all 13 studies found an effect size of -0.48 (95% confidence interval [CI]: -0.71 to -0.26), which is considered a "moderate" effect, according to the researchers.

As for the 13 studies that reported outcomes on imaging use, six found a statistically significant benefit from CDS and seven concluded that it essentially had no effect, according to the group. The random-effects pooled estimate showed an effect size of -0.13 (95% CI: -0.23 to -0.04), which is considered a "small" effect.
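
The article doesn't show the underlying computation, but pooled estimates like these typically come from an inverse-variance random-effects model. Below is a minimal Python sketch of the DerSimonian-Laird method, the most common such approach; the per-study effect sizes and variances are made up for illustration and are not taken from the paper.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    sum_w = sum(w)
    fixed_mean = sum(wi * e for wi, e in zip(w, effects)) / sum_w
    # Cochran's Q measures between-study heterogeneity
    q = sum(wi * (e - fixed_mean) ** 2 for wi, e in zip(w, effects))
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance estimate
    # Random-effects weights add tau^2 to each study's sampling variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Made-up study-level standardized effects and variances (negative values
# favor CDS, matching the article's sign convention)
effects = [-0.70, -0.55, -0.20, -0.60, -0.35]
variances = [0.02, 0.03, 0.01, 0.04, 0.02]
est, (lo, hi) = dersimonian_laird(effects, variances)
print(f"pooled effect: {est:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
```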

In addition, the researchers found "low-quality evidence" to support the conclusion that CDS with a hard-stop feature may be more effective than CDS without it. Hard-stop features prevent an imaging study that the software deems inappropriate from being ordered without approval from another person.
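
As a rough illustration of the workflow the authors describe, here is a minimal Python sketch of a hard-stop order gate; the study names, indications, and appropriateness rule are all hypothetical and are not drawn from any system evaluated in the meta-analysis.

```python
from dataclasses import dataclass

@dataclass
class ImagingOrder:
    study: str
    indication: str

def is_appropriate(order: ImagingOrder) -> bool:
    """Hypothetical appropriateness check; a real system would consult
    guideline-based criteria embedded in the EHR."""
    inappropriate = {("KUB radiograph", "routine follow-up")}
    return (order.study, order.indication) not in inappropriate

def place_order(order: ImagingOrder, override_approved: bool = False) -> str:
    """Hard stop: an order the CDS flags as inappropriate cannot proceed
    unless another person has approved an override."""
    if is_appropriate(order):
        return "ordered"
    if override_approved:
        return "ordered (override on file)"
    return "blocked: approval required to override CDS recommendation"

order = ImagingOrder("KUB radiograph", "routine follow-up")
print(place_order(order))                          # blocked by the hard stop
print(place_order(order, override_approved=True))  # proceeds after approval
```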

There was also low-quality evidence supporting the determination that clinical decision support may be more effective in an integrated care delivery setting than in other types of healthcare delivery environments, according to the researchers.

"Audit and feedback may also increase the effectiveness of interventions, but data are too sparse to draw conclusions," they wrote. "In general, the target of the intervention does not seem to be related to effectiveness of interventions."

The meta-analysis showed heterogeneity in the effectiveness of CDS, the group also noted.

"It has been postulated that most heterogeneity in health IT evaluations is due to details of the context and implementation that go unreported in published studies," they wrote. "We expect the same to be true here."

Limitations and harms

The authors acknowledged that their review was limited by the likely existence of publication bias. Although they found no statistical evidence of publication bias, they pointed out that there are almost certainly more implementations of EHR-based clinical decision support for improving the appropriateness of radiology studies than the 23 papers they reviewed.

"Our expectation is that many such interventions are done and never formally evaluated or published," they wrote. "How the results of these implementations differ from the published studies is unknown, but we expect that both effective and ineffective implementations have likely occurred and not been published. This lack of publication is a major impediment toward more rapid learning of how health IT can best be implemented."

The meta-analysis was also limited by the paucity of reports on the potential harms of clinical decision support; one study reported that 9.3% of patients inappropriately did not receive a chest radiograph after CDS was adopted, compared with 1.9% of patients beforehand. Another study of a decision-support tool designed to reduce abdominal radiographs of the kidneys, ureters, and bladder (KUB) noted that of 255 total imaging tests, 12 had been performed against the advice of the software and had positive findings.

"Of these 12 radiographs, six were believed to have significantly influenced patient outcomes, making it unclear whether following the locally developed guidance could have endangered the patient," they wrote.

Two other studies shared qualitative information from physician surveys, "which primarily identified lack of interest in using the decision-support tools because of time constraints and perceived inefficiencies," according to the authors.

As a result, more assessments of the potential harms are needed, they wrote. In addition, key information about context and implementation is often not reported in clinical studies.

"This lack of reporting of context and implementation, which is common to many studies of health IT, limits readers' ability to draw conclusions about effectiveness and may perpetuate the belief that these kinds of interventions can be developed separate from the workflow of practicing clinicians and then simply 'turned on' with the expectation that clinicians will know how to use the intervention and use it correctly," they wrote.
