The study is the first to evaluate the literature on AI models developed for managing hip fractures from diagnosis to postoperative outcomes, wrote a team led by Dr. Johnathan Lex of the University of Toronto.
"Applications of AI to aid with diagnosis from hip radiographs are promising. However, current implementations of AI for outcome prediction do not seem to provide substantial benefit," the group wrote.
Suspected hip fractures are missed on up to 10% of initial pelvic x-rays, so developing new techniques to help identify them is imperative, according to the authors. Hip fractures are associated with high rates of mortality, and AI models may help in these cases, but the quality of their performance and their potential impact are currently unknown, they wrote.
To review evidence in the field, the researchers identified 39 studies in the literature -- 18 (46.2%) studies used AI models to diagnose hip fractures on plain x-rays and 21 (53.8%) used AI models to predict patient outcomes following hip fracture surgery.
Eight of the AI studies for diagnosing hip fractures included quantitative comparisons with clinical experts. In a pooled analysis, the odds ratio for diagnostic error of the AI models compared with clinicians was 0.79 (p = 0.36). In addition, 13 of 14 AI diagnostic models reported a sensitivity greater than 80%.
In terms of postoperative predictions, out of 15 studies, the median accuracy of AI models for predicting 30-day mortality was 72.8% and median accuracy for predicting one-year mortality was 85.8%. The mean area under the curve for mortality prediction was 0.84 for AI models compared with 0.79 for alternative controls, according to the findings.
"The potential applications regarding the use of AI to aid with diagnosis from hip and pelvic radiographs are promising," the authors wrote. "However, the use of AI does not seem to provide substantial additional benefit over traditional multivariable predictive statistics."
Ultimately, the barrier AI models may need to overcome relates to how data are reported in studies, the authors wrote. They noted that they were unable to compare predictive strategies and their effect on algorithm performance because of heterogeneity in data and study reporting.
Moreover, the quality of the studies could not be properly evaluated because a reporting guideline called the "Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis for Artificial Intelligence" (TRIPOD-AI) is still under development, they wrote.
Nonetheless, at this stage, even small improvements may make a difference in managing patients with potentially devastating hip fractures, they wrote.
"Considering that radiologists and orthopedic surgeons are facing increasing volumes of patients requiring radiographic interpretation for hip pathologies, addition of [AI] models in aiding expert review can accelerate image interpretation and decrease processing times," the group concluded.
Copyright © 2023 AuntMinnie.com