New tool automates unstructured report analysis

Although many practices and institutions have adopted structured radiology reporting, the majority of reports remain unstructured. Even at facilities that have aggressively implemented structured reporting, thousands of unstructured reports still reside in the radiology information system (RIS) database.

Assessing diagnostic yield and recommendations in a practice, particularly for high-cost or high-volume procedures, requires a retrospective analysis of the reports generated by the group. To date, this has meant either assigning the task to someone within the practice or outsourcing it for analysis; either choice can result in a flawed analysis.

"An interesting but subjective and time-consuming solution is the manual analysis of 'representative' radiology reports to determine the appropriateness of all imaging procedures," wrote the authors of a recent study in Radiology (February 2005, Vol. 234:2, pp. 323-329). "This approach, however, is likely to be hampered by extrapolation of entire ordering and recommending practices based on results of assessment of small study cohorts."

A team from the radiology department's division of computing and information services at Massachusetts General Hospital and Harvard Medical School in Boston has developed a new information-theory-based computer algorithm, Lexicon Mediated Entropy Reduction (LEXIMER). The application was recently put to the test of independently analyzing and classifying unstructured radiology reports on the basis of the presence of clinically important findings and recommendations for subsequent action.
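
The article doesn't detail LEXIMER's internals, but the general idea behind lexicon-mediated entropy reduction can be illustrated with a minimal sketch. The snippet below scores candidate lexicon phrases by the Shannon entropy reduction (information gain) they achieve when splitting a labeled set of reports; the phrases, toy reports, and labels are hypothetical illustrations, not the published algorithm.

```python
# Illustrative sketch only: the phrase lexicon, toy reports, and labels are
# hypothetical assumptions; this is not the published LEXIMER algorithm.
import math

def entropy(pos: int, neg: int) -> float:
    """Shannon entropy (in bits) of a binary label distribution."""
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def information_gain(reports: list[str], labels: list[int], phrase: str) -> float:
    """Entropy reduction from splitting reports on presence of a lexicon phrase."""
    def h(subset: list[int]) -> float:
        return entropy(sum(subset), len(subset) - sum(subset))

    base = h(labels)
    hits = [lab for rep, lab in zip(reports, labels) if phrase in rep.lower()]
    misses = [lab for rep, lab in zip(reports, labels) if phrase not in rep.lower()]
    weighted = (len(hits) * h(hits) + len(misses) * h(misses)) / len(labels)
    return base - weighted

# Toy training data: 1 = report contains a clinically important finding.
reports = [
    "No acute cardiopulmonary abnormality.",
    "Spiculated mass in the right upper lobe; biopsy recommended.",
    "Unremarkable study.",
    "New 8 mm pulmonary nodule; follow-up CT in 3 months.",
]
labels = [0, 1, 0, 1]

for phrase in ("mass", "nodule", "unremarkable"):
    print(f"{phrase!r}: information gain = {information_gain(reports, labels, phrase):.2f}")
```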

The developers pitted two radiologists against LEXIMER in categorizing 1,059 consecutive radiology reports from January through March 2000. The reports were classified according to whether they contained clinically important findings and recommendations for subsequent action.

The reports comprised the results of barium studies, as well as CT, mammography, MRI, nuclear medicine, PET, radiography, ultrasound, and vascular procedures. The accuracy, sensitivity, specificity, and positive and negative predictive values of LEXIMER for placing reports into the correct categories were statistically assessed. The consensus opinion of the two radiologists served as the reference standard for categorizing the radiology reports, the authors stated.
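
The study's 2 x 2 tables are not reprinted here, but the metrics it names are standard and straightforward to compute from true/false positive and negative counts measured against the radiologists' consensus standard. A short sketch, with made-up counts for illustration:

```python
# Sketch of the evaluation metrics named in the study. The counts below are
# hypothetical placeholders, not the study's actual tallies.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Standard 2x2 diagnostic test metrics against a reference standard."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }

# Example with made-up counts summing to the study's 1,059 reports:
print(diagnostic_metrics(tp=690, fp=18, fn=17, tn=334))
```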

"The study cohort represented reports from 42 staff radiologists from the full spectrum of the major radiology subspecialties at our institution; this spectrum includes pediatric, abdominal, thoracic, vascular and interventional, and musculoskeletal radiology divisions, as well as neuroradiology and nuclear medicine divisions," the authors wrote.

Neither the radiologists nor the LEXIMER application had access to clinical reports or previous radiology reports for the study group. The researchers observed strong interobserver concordance between the radiologists for report classification, with agreement on 1,045 of 1,059 reports. The remaining 14 reports were classified after a consensus resolution of differences between the radiologists.
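
From those counts, the radiologists' raw interobserver agreement works out to roughly 98.7%, as the short calculation below shows. (A chance-corrected statistic such as Cohen's kappa would require the per-category counts, which the article does not provide.)

```python
# Raw interobserver agreement from the counts reported in the article.
agreements, total = 1_045, 1_059
print(f"Percent agreement: {agreements / total:.1%}")  # -> 98.7%
```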

"Interestingly, contrary to the usual perception, the results of our study showed high diagnostic yield and low rates of recommendations in radiology reports for most imaging modalities except mammography (which is predominantly a screening modality) and CT scanning (for which recommendations are higher than for other imaging modalities)," the authors noted.

The overall frequency of clinically important findings in the reports assessed by the radiologists was 66.8%, while the frequency of recommendations for subsequent action was 16.3%.

For the same reports, LEXIMER placed the overall frequency of clinically important findings at 66% and the frequency of recommendations for subsequent action at 16.1%.

Although the radiologists and the algorithm produced strikingly similar results in the unstructured report analysis, the time required to process the data differed dramatically.

"The total time taken by the two radiologists for the categorization of 1,059 radiology reports was 21 hours two minutes (average, 10.6 hours), whereas the time taken by LEXIMER for classification of these reports was 24 seconds," the authors wrote.

The developers believe that running optimized LEXIMER code on a server will produce a tenfold increase in the algorithm's performance. The team also sees commercial potential for the application and has filed a U.S. patent application, which is currently pending.

"We believe that direct assessment of radiology reports with LEXIMER will provide a better estimate for measuring radiology service utilization compared with metrics such as examination volume and growth rates," the authors wrote. "In addition, utilization of high-cost examinations can be evaluated to stratify individual physicians or groups ordering practices, benchmark individual physician practices against those of physician cohorts or nationally observed standards, monitor practice change within groups, and assess effects on education and retraining processes."

By Jonathan S. Batchelor
AuntMinnie.com staff writer
February 18, 2005

Related Reading

Automated report tracking system serves clinical, educational goals, January 18, 2005

Rads urged to standardize reporting of vertebral fractures, November 23, 2004

Technology, technique help ease radiologist shortage, November 2, 2004

Optimizing alternative reporting strategies, July 16, 2004

Imaging reports also benefit from improved signal-to-noise ratio, May 3, 2004

Copyright © 2005 AuntMinnie.com
