Study finds docs aren't 'gaming' decision-support software

Referring physicians sometimes don't like it when clinical decision-support software tells them they can't order an imaging exam. But how often do doctors enter inaccurate lab data to try to "game" the system to get the test they want? Not very often, if the experience of Brigham and Women's Hospital is any indication.

In a study published online in the Journal of the American Medical Informatics Association, researchers found that emergency department (ED) clinicians using imaging clinical decision-support software entered laboratory results accurately more than 90% of the time when ordering CT angiography (CTA) exams for suspected pulmonary embolism (PE).

Although they did not assess the motivation behind data-entry errors, the researchers concluded that only about 4.2% of orders contained errors that could reflect doctors trying to "game" the system to obtain certain tests.

"We were surprised that the error rate was so low," lead author Dr. Anurag Gupta, an informatics fellow at Harvard Medical School and Brigham and Women's Hospital, told AuntMinnie.com. "Further, we were surprised that the proportion of errors that could be potentially classified as 'gaming' was so low."

Decision-support backlash?

Previous studies have shown benefits from the use of clinical decision-support software, and the researchers sought to evaluate the integrity of their own application: how often errors occurred in using the tool, whether those errors could be stratified, and whether the tool could be gamed, Gupta told AuntMinnie.com. They also wanted to investigate whether they could devise methods to improve the integrity of the software (J Am Med Inform Assoc, July 25, 2013).

The group examined the specific case of clinical decision support for ordering CTA exams for PE, as those imaging studies require ED physicians to enter D-dimer results at the time the exams are ordered. These values could later be compared with the actual laboratory results to assess the accuracy of data entry.

The institution uses the Percipio (Medicalis) Web-based computerized physician order-entry system for imaging; clinical decision-support functionality launches based on the type of examination ordered and the clinical data entered by ED clinicians, according to the researchers. For CTA orders for suspected PE, clinicians are required to input data used to calculate a Wells criteria score for assessing PE risk, along with the D-dimer value from the laboratory report.

If the calculated Wells score shows a high pretest risk of PE, then the system allows the imaging request. In cases with a low pretest risk of PE based on the Wells score, the system references the clinician-entered D-dimer value; an elevated D-dimer value enables the order to proceed, according to the researchers. If no D-dimer value is available, the system recommends ordering a D-dimer or waiting for the result if it's already been ordered.

"Similarly, if a normal D-dimer value is inputted for a low-risk patient, the [clinical decision support] recommends not obtaining a CTA, as per the [evidence-based] guidelines," the authors wrote.

Few data-entry errors

During 2011, 1,296 patients received orders for CTA studies for suspected PE at the researchers' institution. Of these, 1,175 (90.7%) had accurate D-dimer values input by clinicians during the ordering process on the clinical decision-support system. The remaining 121 cases had data-entry errors; 78 (64.5%) of these errors occurred after D-dimer results were available, while 43 (35.5%) took place before they were available.

In 55 of these cases (45% of data-entry errors and 4.2% of the overall patients), data were entered incorrectly in a way that avoided alerts from the decision-support system and potentially led to overuse of CTA, according to the researchers. The remaining 66 errors did not affect workflow.

Of these 55, 15 errors involved clinicians indicating that the D-dimer result was elevated, despite a laboratory-returned D-dimer that was normal and available before the imaging order was placed. The other 40 involved clinicians stating that a D-dimer result was not ordered, despite a laboratory-returned D-dimer that was normal, according to the authors.

Clinicians eventually canceled 21 (38%) of these 55 imaging requests before CTA was performed, while the other 34 scans were completed but found no PE.

"Although our data confirm the accuracy of clinician-entered data for imaging [clinical decision support] in the majority of cases, performance improvement opportunities remain," the authors wrote. "On the basis of our findings and the small number of studies identified in the literature, there may be an inherent limitation to clinician data-entry accuracy of ~90-95%."

Clinician motivation

The study design did not allow the assessment of clinician motivation during data entry, Gupta said. While the researchers would like to investigate this topic, it would be a difficult study to undertake in terms of resources and timing.

"Since the error rate was so low anyway, it may not be significantly beneficial to investigate clinician motivation," he added. "While the [clinical decision-support] tool we evaluated had very high integrity, we found that adherence to evidence-based guidelines improved to 75% and that we need to develop strategies to further improve that number, which we plan to investigate."

Minimizing redundant data entry with autopopulation of required clinical data from the electronic medical record may improve efficiency, accuracy, and safety, the researchers noted.

"However, quality improvement strategies including retrospective sampling of clinical data entered into [clinical decision support] and academic detailing when appropriate are probably needed to minimize the small portion of erroneous data entries that may be perceived to be motivated by the desire to avoid intrusive computer interactions and alerts," they wrote.
