Radiologists once relied on transcriptionists to generate and edit reports, but due to cost, transcription services have fallen by the wayside and standalone voice recognition software has become standard. However, voice recognition software can insert errors into the text with potentially dire consequences.
"My reports -- and I try to be careful -- average seven errors per report, which go from punctuation to ludicrous," said Dr. Michael McNamara Jr. from Case Western Reserve University School of Medicine. "[Voice recognition software] inserts a 'no,' it drops a 'no' -- it's a very dangerous weapon and we have to use it very, very carefully," he said.
McNamara raised his concerns during a presentation by Dr. Nima Momenin from Wayne State University on the quality of voice recognition software versus using a transcriptionist for radiology reporting over time.
The study involved 100 chest x-ray reports from five radiologists sampled retrospectively for four categories:
- Traditional dictation and transcription method before voice recognition
- Voice recognition with transcriptionist correction
- Voice recognition with self-correction three months after implementation
- Voice recognition two years after the implementation of the continuous speech recognition system
The error rate for voice recognition with transcriptionist correction was slightly better than the pre-voice-recognition rate, according to the researchers. However, when the transcriptionist was removed, the error rate in reports generated by voice recognition with self-correction rose to 2.5 times higher.
After two years of voice recognition use, the error rate was 1.5 times higher than at three months after implementation. Fortunately, only two reports in the third category and one report in the fourth category contained major errors, Momenin said.
"Using voice recognition technology results in a high frequency of transcription errors compared to [the] traditional transcription method, jeopardizing the quality of radiology reports," he added. "Most of the errors did not affect the core meaning of the report and are unlikely to alter patient management; however, many made strangely worded reports."
Comparing reports made after two years of using voice recognition with those generated three months after its implementation showed a significant increase in error rates, which calls into question the value of the software's "continuous learning" ability, Momenin said.
"Recognition of deterioration in performance suggests that voice recognition must be carefully monitored after installation to maintain report quality, and that prolonged familiarity with the device is not sufficient to sustain report quality," he concluded.