Researchers from Cedars-Sinai Medical Center in Los Angeles performed a randomized clinical trial comparing sonographers with an internally developed AI algorithm for assessing left ventricular ejection fraction (LVEF). They found that cardiologists had to substantially change the LVEF result more often in exams initially assessed by the sonographer than in those initially assessed by the AI.
"With this new knowledge, we hope to improve the quality and effectiveness of echocardiogram imaging for patients everywhere," said principal investigator and presenter Dr. David Ouyang in a statement from Cedars-Sinai.
Although initial assessment of LVEF is essential for diagnosing cardiovascular disease and making treatment decisions, human assessment of LVEF is often based on a small number of cardiac cycles and can have high interobserver variability, according to the researchers.
To improve on this, the Cedars-Sinai group had previously developed EchoNet-Dynamic, a deep-learning algorithm trained on over 10,000 echocardiogram videos from Stanford University Medical Center to assess LVEF. Previous testing showed that EchoNet-Dynamic produced a mean absolute error of 4.1%-6% for evaluating cardiac function.
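The mean absolute error figure reported for EchoNet-Dynamic is the average absolute difference between the algorithm's LVEF estimate and the reference value. A minimal sketch of that metric, using made-up illustrative values rather than trial data:

```python
# Mean absolute error (MAE) between predicted and reference LVEF values,
# the metric reported for EchoNet-Dynamic. All numbers below are
# illustrative, not data from the study.

def mean_absolute_error(predicted, reference):
    """Average of absolute differences between paired LVEF estimates (%)."""
    assert len(predicted) == len(reference)
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical AI-predicted vs. reference LVEF values (%)
ai_lvef = [55.0, 42.0, 60.0, 35.0]
ref_lvef = [58.0, 40.0, 65.0, 33.0]
print(mean_absolute_error(ai_lvef, ref_lvef))  # → 3.0
```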
In this study, the researchers sought to determine whether either the sonographer's initial assessment of LVEF -- as is typically performed in standard clinical workflow -- or AI-calculated LVEF would be changed more or less often by the cardiologist producing the final report.
They performed a randomized clinical trial involving 3,495 transthoracic echocardiograms, in which the scans were randomly allocated on a 1:1 basis for initial assessment by AI or by the performing sonographer. Blinded cardiologists then reviewed the assessments and provided a final report of LVEF. An LVEF change by the cardiologist of more than 5% was considered to be significant.
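The trial's endpoint described above can be sketched in a few lines: a study counts as substantially changed when the cardiologist's final LVEF differs from the initial assessment by more than 5 percentage points. The values below are illustrative only, not trial data:

```python
# Sketch of the trial's endpoint logic: a study is "substantially
# changed" if the cardiologist's final LVEF differs from the initial
# assessment by more than 5 percentage points. Example values are
# hypothetical, not taken from the trial.

def substantially_changed(initial_lvef, final_lvef, threshold=5.0):
    return abs(final_lvef - initial_lvef) > threshold

def proportion_changed(pairs, threshold=5.0):
    """Fraction of (initial, final) LVEF pairs changed by more than threshold."""
    changed = sum(substantially_changed(i, f, threshold) for i, f in pairs)
    return changed / len(pairs)

# Hypothetical (initial, final) LVEF pairs (%)
example = [(55.0, 54.0), (40.0, 48.0), (60.0, 52.0), (35.0, 36.0)]
print(proportion_changed(example))  # → 0.5
```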
| Performance of EchoNet-Dynamic AI algorithm in assessing LVEF | Initial assessment by sonographers | Initial assessment by EchoNet-Dynamic AI |
| --- | --- | --- |
| Proportion of studies substantially changed (over 5%) by the cardiologist | | |
| Mean absolute difference | | |
The differences were statistically significant (p < 0.001) in demonstrating the superiority of the algorithm, according to the researchers. Furthermore, the cardiologists in the trial could not tell whether a given tracing had been performed by the AI algorithm or by a sonographer.
That speaks to the strong performance of EchoNet-Dynamic, as well as its integration into clinical software, according to Ouyang.
"We believe these are all good signs for future trial research in the field," he said in a statement from the ESC.
Ouyang said the researchers were excited by the implications of the trial.
"What this means for the future is that certain AI algorithms, if developed and integrated in the right way, could be very effective at not only improving the quality of echo reading output but also increasing efficiencies in time and effort spent by sonographers and cardiologists by simplifying otherwise tedious but important tasks," Ouyang said. "Embedding AI into clinical workflows could potentially provide more precise and consistent evaluations, thereby enabling earlier detection of clinical deterioration or response to treatment."
Copyright © 2022 AuntMinnie.com