ACC-based software boosts appropriateness of heart scans
July 23, 2012 -- American College of Radiology guidelines aren't the only ones being used to develop software to rein in inappropriate utilization. A new decision-support tool based on American College of Cardiology (ACC) appropriateness criteria demonstrated its value in reducing unnecessary cardiac imaging exams in a recently completed pilot project.

The investigational multimodality tool increased physicians' use of the right tests over time, concluded a new study from Weill Cornell Medical College and several other U.S. institutions.

The prospective trial involved 100 doctors and almost 500 patients scanned over eight months. Use of the software tool to gauge appropriateness increased the proportion of appropriate tests from less than half in the first two months of the study to more than 60% in the last two months, while inappropriate studies dropped from 22% to 6%.

Plans for changes in medical therapy also increased from 11% to 32%, probably based on the use of fewer inappropriate tests that required no change in medical therapy, researchers told AuntMinnie.com.

"We were able to use the appropriate use criteria decision-support tool to quickly arrive at appropriateness for a large number of indications across multiple modalities," said Dr. Fay Lin, lead author and associate professor of cardiology and radiology at Weill Cornell. "Over time, with use of the decision-support tool, the number of appropriate tests increased and the number of inappropriate tests decreased."

Lin's co-investigators included Allison Dunning and Dr. Jagat Narula from Weill Cornell, Dr. Leslee Shaw from Emory University School of Medicine, and Dr. James Min and Dr. Daniel Berman from Cedars-Sinai Heart Institute. Data from the study were originally presented at the 2012 American College of Cardiology meeting; the research was recently submitted for peer-review publication.

The trouble with RBMs

In this study, the automated software tool replaced the role of radiology benefits managers (RBMs), which are used by 90% of the largest healthcare firms and serve as the de facto gatekeepers for imaging exams. RBMs approve or reject requests for heart scans based partly on appropriateness guidelines and partly on proprietary, insurer-specific criteria that are not disclosed.

RBM requirements for prior authorization have drawn objections from practitioners and physician specialty societies, who contend that the proprietary approaches are not consistently rooted in scientific evidence, lack transparency, and differ across geographic regions.

"If you go through an RBM, all you really know is whether the test was denied or accepted; you don't find out whether it was denied due to general appropriateness criteria or for other reasons like proprietary radiology benefit manager guidelines," Lin said.

Critics of RBMs also contend that incentives to reduce costs may result in indiscriminate reduction of medically necessary imaging, unnecessary delays in healthcare delivery, and worse quality of care.

Through an agreement with UnitedHealthcare, one of the study's sponsors, the investigators were able to bypass the RBMs and let the physicians and the decision-support tool direct whether a test was performed. Having the cooperation of the insurer allowed the researchers to evaluate the effects of the tool itself without restrictions, as physicians were able to order tests even if they were judged inappropriate by the ACC guidelines and the software. That freedom was necessary for the study's validity, Lin said.

"Otherwise, in terms of study design, you would have a real selection bias toward higher appropriateness," she said.

The study employed an automated, Web-based, point-of-order, multimodality decision-support tool (CycleMD, MDDX) as an alternative to RBMs for determining rates of appropriateness for noninvasive coronary artery disease imaging. The researchers also hoped to assess the impact of the decision-support tool on downstream clinical decision-making and testing utilization.

ACC appropriate use criteria

To address the high costs and potential overuse associated with cardiac imaging, the ACC in 2009 developed appropriate use criteria for noninvasive imaging, including myocardial perfusion scintigraphy (MPS), stress echocardiography, and coronary CT angiography (CTA), the study authors explained.

The application of appropriate use for these imaging technologies has been looked at in several single-modality settings, with substantial amounts of inappropriate testing reported. To date, however, interventions to improve appropriate testing have not shown systematic improvement, according to Lin and colleagues.

The authors hypothesized that a real-time, multimodality decision-support tool would be more effective. To this end, they examined 472 patients with suspected coronary artery disease seen at three St. Louis practices between June 2010 and January 2011.

The physicians prospectively entered a patient's coronary artery disease risk factors and history at the time they used the software tool. They categorized risk factors and history as binary variables and classified symptom presentation as typical, atypical, noncardiac, or asymptomatic.
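The data-entry model described above -- binary risk factors plus a four-way symptom category mapped to an appropriateness rating -- can be sketched in a few lines. This is a hypothetical illustration only; the field names, the toy rating table, and the lookup logic are assumptions for clarity, not the actual CycleMD implementation or the real ACC criteria, which cover far richer clinical scenarios.

```python
# Hypothetical sketch of a point-of-order appropriateness lookup.
# Ratings and fields are illustrative assumptions, not actual ACC criteria.
from dataclasses import dataclass

SYMPTOMS = {"typical", "atypical", "noncardiac", "asymptomatic"}

@dataclass
class PatientEntry:
    diabetes: bool       # risk factors captured as binary variables
    hypertension: bool
    smoker: bool
    symptoms: str        # one of SYMPTOMS

    def __post_init__(self):
        if self.symptoms not in SYMPTOMS:
            raise ValueError(f"unknown symptom category: {self.symptoms}")

# Toy table keyed by (symptom category, any risk factor present).
RATINGS = {
    ("typical", True): "appropriate",
    ("typical", False): "appropriate",
    ("atypical", True): "appropriate",
    ("atypical", False): "uncertain",
    ("noncardiac", True): "uncertain",
    ("noncardiac", False): "inappropriate",
    ("asymptomatic", True): "uncertain",
    ("asymptomatic", False): "inappropriate",
}

def rate_test(entry: PatientEntry) -> str:
    """Map a structured patient entry to an appropriateness rating."""
    has_risk = any([entry.diabetes, entry.hypertension, entry.smoker])
    return RATINGS[(entry.symptoms, has_risk)]

print(rate_test(PatientEntry(False, False, False, "asymptomatic")))  # inappropriate
```

Because every input is either binary or a small enumeration, the entire decision reduces to a table lookup, which is consistent with the roughly one-minute turnaround the researchers describe below.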

The physicians most frequently ordered myocardial perfusion scintigraphy (n = 338, 72%), followed by stress echo (n = 111, 24%) and coronary CTA (n = 23, 5%). Among the 100 participating physicians, 94 ordered one or more MPS exams, 38 ordered one or more stress echo exams, and eight ordered one or more coronary CTA scans.

More appropriate tests

Appropriate tests increased from 49% in the first two months of the study to 61% in the last two months (p = 0.02), the authors reported. Meanwhile, the frequency of inappropriate testing decreased from 22% in the first two-month period to 6% in the last two-month period (p = 0.0001).
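The significance test behind a comparison like the 22%-to-6% drop can be illustrated with a standard two-proportion z-test. The per-period counts below are assumptions (roughly 118 tests per two-month period, i.e., 472 spread over four periods); the authors' exact counts and test statistic were not reported, so this only sketches the method, not their precise calculation.

```python
# Two-sided two-proportion z-test (pooled), stdlib only.
# Per-period counts are assumed for illustration; the study did not report them.
from math import sqrt, erfc

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Assumed: 26/118 inappropriate early (~22%) vs. 7/118 late (~6%).
z, p = two_proportion_z(26, 118, 7, 118)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these assumed counts the difference is highly significant (p on the order of 10^-4), broadly consistent with the reported p = 0.0001.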

More than half of the studies deemed inappropriate (56%) fell into three clinical scenarios involving low-risk patients, the authors stated.

The study also yielded other important findings. First, the software was fast, Lin said.

"Generally it takes people about a minute or a minute and a half to navigate through to the end decision, so even with some data entry it's much faster than going through an RBM," Lin said. "In theory, you could save not only the administrative hassle of your own staff and the physicians having to take the time to go through an RBM, but you could probably save the administrative cost of the RBM altogether."

Second, because the point-of-order model shows physicians the results of their decisions immediately, it provides rapid, continuous learning. As a result, "what we were able to see was a decline over an eight-month period of inappropriate tests and an increase in appropriate tests," she said.

All of this progress occurred without affecting downstream testing rates or other factors. "This shows that physicians can learn and adapt their behavior to common clinical guidelines and standards over time," Lin said.

In a couple of prior studies with single-modality decision models, physicians didn't see the results of their decisions until much later -- too late to have an educational effect, she said. The lack of continuous feedback is probably why the appropriateness rate didn't increase appreciably in those studies.

Another interesting effect was that as the study progressed, more physicians reported that they would intensify medical management, without a change in the number of patients who went to downstream testing or catheterization, Lin said.

"We think this is mostly related to the decline in inappropriate studies, because physicians tended not to intensify medical management after an inappropriate study," she said. "That's why they were inappropriate in the first place -- they weren't going to affect medical management."

Future plans

Going forward, the design also allows easy compilation of data on positive and negative predictive values, and makes it possible to link test appropriateness with clinical outcomes.

The investigative software has been upgraded since the study ended, co-author Dr. James Min told AuntMinnie.com. While the original software could only determine whether a specific proposed test was appropriate or inappropriate, the latest iteration is able to suggest tests that may be more appropriate for a given patient and indication.
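The upgrade Min describes -- moving from a yes/no verdict to suggesting better-rated alternatives -- amounts to ranking the candidate modalities for the same clinical scenario. The sketch below is a hypothetical illustration of that idea; the ranking function and the example ratings are assumptions, not the actual upgraded product's logic.

```python
# Hypothetical sketch: given per-modality ratings for one clinical scenario,
# surface any tests rated strictly better than the one the physician requested.
# Example ratings are illustrative, not actual ACC criteria.
RATING_ORDER = {"appropriate": 2, "uncertain": 1, "inappropriate": 0}

def suggest_alternatives(requested: str, scenario_ratings: dict[str, str]) -> list[str]:
    """Return modalities rated strictly better than the requested test, best first."""
    baseline = RATING_ORDER[scenario_ratings[requested]]
    return sorted(
        (test for test, rating in scenario_ratings.items()
         if RATING_ORDER[rating] > baseline),
        key=lambda t: -RATING_ORDER[scenario_ratings[t]],
    )

ratings = {"MPS": "appropriate", "stress echo": "uncertain", "coronary CTA": "inappropriate"}
print(suggest_alternatives("coronary CTA", ratings))  # ['MPS', 'stress echo']
```

The key enabler, as the next paragraph notes, is that the ACC criteria were standardized so the same clinical scenario yields a comparable rating for every modality -- without that, ratings from different modalities could not be ranked against each other.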

The software upgrade was facilitated by the standardization of ACC appropriateness criteria across modalities. Since 2009, ACC has put much effort into rationalizing the various cardiac imaging criteria so that patient groups and clinical criteria are the same across modalities, Lin said.

The next steps will include studies of larger cohorts in different modalities, especially CT, and comparing exam appropriateness with the software versus RBMs, Min said.

Since the study was completed, more institutions have been recruited globally to participate in future research, with the aim of commercializing the product in the near term.

"If you implement something like this, you can significantly reduce the time needed for getting tests approved," Min said. "The second thing is that the decision-support tool teaches in real-time what things are appropriate and what is egregiously inappropriate and wrong to do. There are a lot of things you can do with this data."

Study disclosures

Co-author Min is a consultant for MDDX with equity interest in the firm. Research support was provided by UnitedHealthcare, MDDX, and the Michael Wolk Heart Foundation.