Road to RSNA 2023: Imaging Informatics Preview

In many respects, AI has come to dominate the field of imaging informatics research, a trend that will be evident at RSNA 2023. 

In this stop on the Road to RSNA, we take a look at AI as it is being tested across imaging modalities for communication, analysis, efficiency, organization, and problem-solving. These include tests of the effectiveness and accuracy of large language models (LLMs) such as ChatGPT and Google Bard. We bring you previews of comparison studies of LLMs in generating radiology reports and consumer-friendly health information to support lung cancer prevention, screening, and education.

With the accuracy and appropriateness of LLM outputs remaining major obstacles to formal adoption of these models in healthcare interactions, researchers are supporting LLM qualification efforts in radiology through prompt engineering and adjudication, as we'll point out in a few of the previews in this sampling.

One session will look at potential issues with integrating AI into the radiology workflow in real-world clinical environments; the presenters say such issues can be solved with the addition of a cloud-based software layer. Next, we'll look at the performance of classification models for correctly prioritizing STAT imaging orders in the emergency room and outpatient services.

Likewise, an imaging informatics track features a study using standardized, consensus-based macros to improve tracking and follow-up of actionable incidental findings. How much did radiologists use the tool and how easy was it to adopt? This session may prompt you to think about how these findings are handled in your own institution.

In a take on natural language processing (NLP) for gathering and reporting real-world evidence, researchers have explored using the technology to help assess, through analysis of radiology reports, the performance of cancer therapeutics after regulatory approval. We also highlight an AI model trained to detect errors in radiology reports.

Other sessions of note examine "white box" and "black box" attacks on AI diagnosis models; optimized settings for patient-specific 3D printing; and so-called "zero-shot" approaches to speed up LLM training for radiology applications.

See below for more information on these presentations, and check out the RSNA 2023 meeting program for more imaging informatics content at the meeting. Be sure to also visit the technical exhibits and get set for hands-on educational AI experiences in RSNA’s AI Deep Learning Lab.


How ChatGPT, Bard ranked in appropriateness of lung cancer answers

Sunday, November 26 | 1:30 p.m. - 1:40 p.m. | S4-SSCH02-4 | Room N228

While ChatGPT and Google Bard were both able to answer nonexpert questions about lung cancer prevention, screening, and terminology commonly used in radiology reports, ChatGPT won this battle of large language models, according to this study.


How GPTs, Bard performed for readability, complexity of lung cancer health information

Sunday, November 26 | 1:50 p.m. - 2:00 p.m. | S4-SSCH02-6 | Room N228

This session will reveal how well large language models summarize and simplify health information about lung cancer prevention and screening.


Team determines optimal parameters for patient-specific 3D printing

Sunday, November 26 | 1:30 p.m. - 1:40 p.m. | S4-SSIN01-4 | Room S401

In this talk, researchers from the University of Pittsburgh will present optimized settings for 3D printing of femurs in patients with cam-type femoroacetabular impingement.


The cloud can enable better radiology AI integration

Monday, November 27 | 9:30 a.m. - 9:40 a.m. | M3-SSIN02-1 | Room S404

Adding a layer of cloud-based software may improve the integration of AI results into the PACS, radiology report, and EHR, according to a proof-of-concept study to be presented Monday.


Machine-learning model prioritizes STAT imaging orders

Monday, November 27 | 9:40 a.m. - 9:50 a.m. | M3-SSIN02-2 | Room S404

If you have considered that a “STAT” designation on imaging orders could be overutilized or misused, or could obscure the urgency of an order, you will want to hear how an automated system has been designed to prioritize some STAT orders to the top of the queue.


Macro tool tracks incidental findings for follow-up

Monday, November 27 | 9:50 a.m. - 10:00 a.m. | M3-SSIN02-3 | Room S404

When a large academic health system used standardized, consensus-based macros to track actionable incidental findings, most radiologists found the macros helpful because they saved time and ensured appropriate follow-up, according to a project to be presented Monday.


NLP technique eases reporting requirements

Monday, November 27 | 10:10 a.m. - 10:20 a.m. | M3-SSIN02-5 | Room S404

Using natural language processing (NLP) for real-world evidence gleaned from radiology reports could relieve burdens on those with reporting requirements related to postmarket safety of cancer drugs, according to a Monday morning presentation.


AI model shows promise for spotting errors in CXR reports

Monday, November 27 | 10:20 a.m. - 10:30 a.m. | M3-SSIN02-6 | Room S404

A deep-learning algorithm may be able to help reduce diagnostic errors in radiology, according to this presentation on Monday morning.


NLP tool helps make radiology reports more readable

Monday, November 27 | 1:50 p.m. - 2:00 p.m. | M6-SSNPM01-3 | Room E351

Researchers from the Yale School of Medicine have developed a patient-centric radiology literacy tool using natural language processing (NLP) technology and prompt engineering to produce radiology reports that are more readable for patients.


ChatGPT-4 delivers differential diagnosis on abdominal imaging

Tuesday, November 28 | 3:00 p.m. - 3:10 p.m. | T7-SSGI11-1 | Room E351

This Tuesday afternoon session will show how ChatGPT-4 performed in producing a differential diagnosis from transcribed radiology reports of abdominal imaging exams.


Are LLMs effective without specific training?

Wednesday, November 29 | 10:00 a.m. - 10:10 a.m. | W3-SSIN05-4 | Room S402

Researchers seeking to speed up the development of large language models (LLMs) for radiology are exploring where and how to use a technique called zero-shot learning, as this Wednesday session will cover.


Novel method aims to stop white-box, black-box attacks on AI diagnosis models

Wednesday, November 29 | 10:20 a.m. - 10:30 a.m. | W3-SSIN06-6 | Room S401

Citing vulnerabilities with deep learning-based mammographic breast cancer diagnosis models, a University of Pittsburgh research team has developed and evaluated a novel technical framework that could defend against adversarial attacks on AI software.


Locally derived calibration sets prove useful for predicting AI performance

Thursday, November 30 | 9:30 a.m. - 9:40 a.m. | R3-SSIN07-1 | Room N227B

In this presentation, researchers will reveal how they were able to improve AI algorithm performance by utilizing local data.


Seeking trustworthy automated detection for hip implants

Thursday, November 30 | 9:50 a.m. - 10:00 a.m. | R3-SSIN07-3 | Room N227B

A deep-learning classifier, designed as part of a “trustworthiness pipeline” for a total hip arthroplasty (THA) aid in clinical settings, recognized 28 implants on plain hip radiographs, according to recent research out of the Mayo Clinic in Rochester, MN.


AI reduced CT brain reporting time in teleradiology

Thursday, November 30 | 10:10 a.m. - 10:20 a.m. | R3-SSNR15-5 | Room S402

An AI algorithm deployed to assist with noncontrast CT brain reports in Australia cut radiologist reporting time by up to 8.5%, according to this scientific session.
