ChatGPT has now been publicly available for a full year. Those passionate about its possibilities and its risks are comparing it to more recent GPT releases and noting improvements.
Throughout 2023, AuntMinnie.com has kept you informed about how GPT has been incorporated into a variety of radiology department research activities. As a new year begins, we asked radiologists to weigh in on what has impressed them most and what might be next for GPT.
Kaustav Bera, MD, is a third-year resident at Case Western Reserve University and University Hospitals Cleveland Medical Center. Bera was the lead author of an October 2023 article published in Current Problems in Diagnostic Radiology that assessed literature on ChatGPT in radiology published between January 26 and August 14, 2023.
Bera, Gregory O'Connor, MD, and other reviewers focused on 51 articles that involved ChatGPT and radiology imaging data.
"As an image rich field, researchers and radiologists had to determine ways in which ChatGPT could assist despite its inability to analyze images and reliance on text-based prompts," the authors wrote. Bera predicted that in 2024 we'll see more multimodal capabilities of GPT.
"I think 2024 will be an intriguing year especially with the newly unveiled multimodal capabilities of ChatGPT," Bera continued. "However, the concern is with its ease of use in the hands of patients. I hope that people can take its interpretation with a grain of salt and talk to a qualified radiologist about their images and questions."
Bera's primary research interests include AI and imaging in medicine, particularly in oncology. As a senior research associate, he worked on both radiology and pathology image analysis toward diagnosis, prognosis, and treatment response in multiple cancer types at Case Western Reserve University.
"There are obviously concerns with privacy. That's a conversation people are having," Bera also said, adding that the most interesting application for radiologists or radiology residents would be using and developing a GPT-based educational resource through a locally developed application programming interface (API).
"Going beyond ChatGPT, this explosive development of several open-source large language models has made it easy for development of small local models for certain trivial tasks that can be almost completely automated," Bera added. "For example, automatic protocoling of studies with only the complicated ones needing to be further checked by the radiologist."
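The triage pattern Bera describes can be sketched as follows. This is a hypothetical illustration, not his group's implementation: a deployed version would replace the keyword heuristic in `propose_protocol` with a call to a locally hosted open-source LLM, and the protocol names here are made-up examples.

```python
# Hypothetical sketch: automatic protocoling of imaging requests, with
# ambiguous cases escalated to a radiologist. A keyword heuristic stands
# in for the local LLM so the flow is runnable end to end.

ROUTINE_PROTOCOLS = {
    "headache": "MRI brain without contrast",
    "stroke": "CT head without contrast",
    "pulmonary embolism": "CTA chest",
}

def propose_protocol(indication: str) -> tuple[str, bool]:
    """Return (protocol, needs_review). Unmatched or ambiguous
    indications are flagged for human review rather than automated."""
    text = indication.lower()
    matches = [p for key, p in ROUTINE_PROTOCOLS.items() if key in text]
    if len(matches) == 1:
        return matches[0], False   # trivial case: automate
    return "UNDETERMINED", True    # complicated case: radiologist checks

print(propose_protocol("45F with chronic headache"))
print(propose_protocol("trauma, unclear history"))
```

The key design point in Bera's comment is the second return value: only unambiguous studies bypass the radiologist, which keeps the automation confined to the "trivial" tail of the worklist.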
Bera anticipates greater use of large language models (LLMs) for creating simplified patient-centered reports that reduce technical language and jargon.
Radiology department chair
Along those lines, NYU Langone Health continued to fine-tune its multimedia radiology reports that incorporate GPT in 2023 and has since rolled out AI-enabled precision education that uses GPT to change the way radiology residents are trained, explained Michael Recht, MD, the Louis Marx Professor and chair of the NYU Langone Department of Radiology.
"What we've devised is a curriculum that is going to be personalized and based on what residents have seen and what they need to see," Recht told AuntMinnie.com. "We asked each specialty to define what pathology each resident should see on each rotation. We then made teaching files of all those pathologies with multiple examples. We used GPT to categorize what pathology each resident had seen by looking at their reports."
However, "we realized that residents weren't seeing a lot of the pathology we wanted them to see," Recht continued. "What we do now is supplement each one of the pathologies with cases from our teaching file. This ensures that every resident sees the entire breadth of pathology they should see in that rotation."
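The gap analysis Recht outlines reduces to a simple set difference once the pathology extraction is done. In the sketch below, the extraction step (which NYU performs with GPT over resident reports) is assumed to have already produced the `seen` set; the pathology names and function are illustrative, not NYU's actual system.

```python
# Illustrative sketch of the curriculum-gap logic: compare the pathologies
# a resident should see on a rotation against those an LLM extracted from
# the resident's reports, and fill the difference from teaching files.

def curriculum_gaps(required: set[str], seen: set[str]) -> set[str]:
    """Pathologies the resident still needs to be shown."""
    return required - seen

# Hypothetical example for one chest rotation.
required_chest = {"pneumothorax", "pulmonary embolism", "lobar pneumonia"}
seen_in_reports = {"lobar pneumonia"}  # assumed LLM extraction result

print(sorted(curriculum_gaps(required_chest, seen_in_reports)))
```

The interesting engineering lives upstream of this step, in getting the LLM to map free-text report language onto the specialty-defined pathology vocabulary consistently.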
Recht said NYU is also working to develop its own large language models that would not necessarily work in tandem with GPT. Additionally, Recht said there is excitement about the ability of GPT to ingest images and make diagnoses based on images.
"We'll see how much GPT invades into the AI pattern recognition market," he concluded.
Like others, Dania Daye, MD, PhD, thinks about how to incorporate machine learning into daily practice. Daye is assistant professor of radiology at Harvard Medical School and practices interventional radiology at Massachusetts General Hospital.
"Many of us have dabbled in ChatGPT and know how powerful it can be, but it has quite a few limitations that we have to deal with," Daye told AuntMinnie.com. "Ultimately, these are patients' lives and there is no room for error. We have to be very cognizant of when this can fail, while appreciating all the opportunities it can bring to simplify our workflow and improve the way we're connected with our patients."
Daye runs a small lab focused on applications of machine learning for precision medicine, specifically focused on interventions.
"I'm very interested in the applications of large language models for patient-centered care," she said. "I recently gave a talk at RSNA about how do we improve shared decision-making with patients, and some of what ChatGPT can do is simplify or translate medical jargon into language accessible for our patients."
In patient- and clinician-graded evaluations of reports that incorporated ChatGPT, the LLM performed well in providing accurate, patient-friendly language and translations, but she cautioned that a learning curve remains.
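The jargon-translation use case Daye describes is typically driven by a prompt template rather than any model change. The sketch below shows one plausible shape for such a prompt; the wording, reading level, and guardrail instructions are assumptions for illustration, not a validated clinical template.

```python
# Hypothetical prompt template for simplifying a radiology report into
# patient-friendly language. The returned string would be sent to an LLM;
# constraints like "do not add new medical claims" address the
# confident-failure mode Daye warns about, but do not eliminate it.

def simplify_report_prompt(report: str, reading_level: str = "8th grade") -> str:
    return (
        "Rewrite the following radiology report for a patient at a "
        f"{reading_level} reading level. Avoid jargon, keep every finding, "
        "and do not add any new medical claims.\n\n"
        f"Report:\n{report}"
    )

prompt = simplify_report_prompt(
    "Impression: No acute cardiopulmonary abnormality."
)
print(prompt)
```

Because, as Daye notes, failures can be delivered confidently, output from a prompt like this would still need clinician review before reaching patients.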
"One of the alarming things we saw is that when ChatGPT fails, it tends to fail while delivering the translation very confidently," Daye added, emphasizing that there are a number of liability issues that the healthcare system will have to learn to deal with.
Daye is confident that ChatGPT will change the way radiologists practice in the next five to 10 years.
"I do want to point out that in general, big picture, is that these models are as good as the data that is used to train them," she said. "As we start training these models with more targeted datasets, I think we're going to see really fascinating things."
Daye said she is watching consortiums of institutions around the world that are transferring medical data between institutions for larger scale validation. Ultimately, this could speed up the development, adaptation, and adoption of large language models into clinical practice.
Entrepreneur, radiology performance measurement and impact
Ron Vianu leads Covera Health, a company that works with payers and imaging groups to drive quality and accuracy in radiology.
"Overall, there’s an incredible amount of interest in understanding all the possible use cases for LLMs, but there remains a healthy dose of skepticism," Vianu told AuntMinnie.com. "There is no shortage of ways folks are trying to leverage LLMs in interesting ways, from generating patient-friendly reports and drafting impressions, to scaling meaningful quality measures or, on the payer side, exploring different ways to eliminate prior authorization."
A key area for large language models to gain traction is in their ability to improve quality performance, according to Vianu.
"AI can and is making it possible to analyze images and reports and unlock new insights into the health of individuals and entire populations," he said. "When we aggregate this information alongside the information contained in EHRs, tests, and other healthcare inputs, LLMs have the potential to help healthcare organizations and their providers better understand and identify gaps and health risks."
However, Vianu said the "buzz" of the day is foundation models.
"There’s a lot of confusion as to how they will evolve, what will the world look like if there are competing models, what these models will be developed to deliver, and how the FDA will approach approving of these models," Vianu explained. "There’s a ton of hype on the transformational nature of these models, but few details on how and when it will impact the field."
Stanford University defines foundation models as models (e.g., OpenAI's GPT-4, Google's PaLM 2, Meta's Llama 2) trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks. Experts at Stanford's Center for Research on Foundation Models have been outspoken on issues including the lack of transparency. Among the points that have been made is that no developer of foundation model ecosystems currently discloses significant information about the downstream impact of its flagship model, such as the number of users, affected market sectors, or how users can seek redress for harm.
In an attempt to increase transparency, a draft of the AI Foundation Model Transparency Act was introduced on December 22, 2023, sponsored by U.S. Reps. Don Beyer (D-VA) and Anna Eshoo (D-CA). The AI Foundation Model Transparency Act would direct the Federal Trade Commission (FTC), in consultation with the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP), to set standards for what information high-impact foundation models must provide to the FTC and what information they must make available to the public, according to a statement from Eshoo's office. Information identified for increased transparency would include the training data used, how the model is trained, and whether user data is collected during inference.