Manual review required with speech recognition software


With a word error rate that can exceed 7%, clinical documents generated from physician dictations by speech recognition software need to be reviewed and edited by medical transcriptionists and physicians to avoid clinically significant errors, according to research published online July 6 in JAMA Network Open.

Investigators led by Dr. Li Zhou of the Partners HealthCare system in Boston reviewed notes dictated by nearly 150 physicians at two institutions. They found that speech recognition software had an error rate of more than 7 in 100 words. What's more, 1 in 250 errors was clinically significant.

"The comparatively low error rate in signed notes highlights the crucial role of manual editing and review in the [speech recognition]-assisted documentation process," the group wrote.

Dictation services supported by speech recognition and professional medical transcription are widely used by clinicians in the U.S.; however, the quality of speech recognition-assisted clinical documentation hasn't been thoroughly evaluated. As a result, the researchers sought to assess errors made at each stage of this dictation model, which uses speech recognition engines in a "back-end" manner to convert digital physician dictations to text.

These documents are then edited by a medical transcriptionist and reviewed by the dictating physician before they are finalized. Notably, speech recognition is also commonly used by healthcare institutions in a "front-end" approach, in which physicians directly dictate into free-text fields in software and then edit the transcription themselves before signing the report.
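To picture the difference between the two workflows, here is a minimal Python sketch of the note versions produced in the back-end model; the class and field names are hypothetical illustrations, not any vendor's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BackEndNote:
    """A clinical note moving through the back-end dictation pipeline.

    Illustrative only; all names here are hypothetical, not a vendor API.
    """
    audio_id: str                                # original physician dictation
    sr_draft: str                                # raw speech recognition output
    transcriptionist_edit: Optional[str] = None  # after manual editing
    signed_note: Optional[str] = None            # reviewed and signed by physician

    def latest_version(self) -> str:
        """Return the most fully reviewed version available so far."""
        return self.signed_note or self.transcriptionist_edit or self.sr_draft
```

In the front-end model, the transcriptionist stage is simply absent: the physician edits the raw speech recognition draft directly before signing.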

In their cross-sectional study, the researchers assessed the back-end model by gathering a stratified random sample of 217 notes -- 83 office notes, 75 discharge summaries, and 59 operative notes -- produced at Partners HealthCare and the University of Colorado Health system in 2016. The notes were dictated by 144 physicians from 35 specialties with the Dragon Medical 360 eScription speech recognition software (Nuance Communications).

The researchers annotated errors in the initial document generated by the speech recognition software, as well as the document subsequently edited by the medical transcriptionist and the final note signed by the physician. They then compared each document with a criterion standard created from the original audio recordings and a review of the medical record. Manual editing led to significant reductions in both the mean word error rate and the number of reports with clinically significant errors.

Errors at each stage of the dictation process

                                             Speech recognition   After revision by            Final signed note
                                             software             medical transcriptionists    by physician
Mean word error rate                         7.4%                 0.4%                         0.3%
Reports with clinically significant errors   138 (63.6%)          32 (14.7%)                   17 (7.8%)
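Word error rate in studies like this one is conventionally defined as the word-level edit distance -- substitutions, deletions, and insertions -- between a transcript and a reference, divided by the number of words in the reference. The following minimal Python sketch, assuming simple whitespace tokenization, illustrates the calculation; it is not the annotation tooling used in the study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,                 # delete a reference word
                dp[i][j - 1] + 1,                 # insert a hypothesis word
                dp[i - 1][j - 1] + substitution,  # substitute (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word in a 10-word dictation yields a 10% WER, the same scale
# as the 7.4% rate reported for raw speech recognition output.
reference = "the patient denies chest pain shortness of breath or fever"
hypothesis = "the patient denies chest pain shortness of breath or fevers"
print(f"{word_error_rate(reference, hypothesis):.1%}")  # 10.0%
```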

The researchers noted that a recent study found higher error rates in electronic health records than in paper records, a trend that was possibly due to functionality such as templates and the ability to copy and paste text.

"Taken together, these findings demonstrate the necessity of further studies investigating clinicians' use of and satisfaction with [speech recognition] technology, its ability to integrate with clinicians' existing workflows, and its effect on documentation quality and efficiency compared with other documentation methods," the authors wrote. "In addition, these findings indicate a need not only for clinical quality assurance and auditing programs, but also for clinician training and education to raise awareness of these errors and strategies to reduce them."
