ChatGPT technology shows promise for developing a clinical knowledge system that could provide decision support and thus enable better patient care, according to an editorial published April 13 in the Journal of Nuclear Medicine.
"Whether this technology is a revolution or just an evolution remains to be seen," wrote a team of European and U.S. experts led by Jens Kleesiek, PhD, of University Medicine Essen in Germany. "What is certain in any case is that there will be no more stepping back. As a society, we have the responsibility to shape its future development."
ChatGPT is powered by a generative pretrained transformer (GPT-3.5), which is a large language model (LLM) trained with 175 billion parameters. Launched in November 2022, the chatbot's user base grew even faster than the previous record held by TikTok, reaching 100 million users in just two months, the authors noted.
Since its launch, many publishing companies have banned listing the technology as an author, for reasons ranging from plagiarism to incorrect or inaccurate information, while others have argued for its benefits, such as its potential use in breast cancer prevention and screening.
But the question of what the actual use of ChatGPT is -- despite the hype -- remains, according to the authors.
"We argue that this technology is worth building toward a clinical knowledge system that can provide health and clinical decision support and enable better self-care and patient care in an era of skill shortages," they wrote.
For example, text-to-text generation by LLMs may help autocomplete sentences and paragraphs of a clinical document (for example, a progress report) based on short phrases provided by a human clinician, and thus reduce the documentation burden, the authors suggested. Further, when used to generate clinical documents, LLMs also have the potential to integrate clinicians' observations and their knowledge of clinical guidelines -- thus reflecting real-world diagnosis and treatment patterns.
The use of ChatGPT requires much more research and development, the group noted. To that end, the team developed the first (to their knowledge) clinical LLM, GatorTron. The model consists of 8.9 billion parameters and was trained on more than 90 billion words (including 82 billion words of clinical text); the group cited a recent study that demonstrated GatorTron's power in clinical natural language processing.
The authors have also examined the text generation ability of a model called SynGatorTron, a generative clinical LLM they developed based on the GPT-3 architecture. There is also ongoing research on LLMs addressing fundamental issues such as incorporating chains of reasoning through selection-inference and chain-of-thought prompting, the group wrote.
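To illustrate the chain-of-thought prompting mentioned above: the technique simply asks the model to reason step by step before answering, and the prompt text itself carries the method. The sketch below shows a minimal version; the function name, wording, and example question are illustrative assumptions, not taken from the editorial, and any LLM client could consume the resulting prompt.

```python
# Minimal sketch of chain-of-thought prompting (illustrative only).
# The key idea: instruct the model to lay out intermediate reasoning
# steps before committing to a final answer.

def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction."""
    return (
        "You are assisting with clinical documentation.\n"
        f"Question: {question}\n"
        "Let's think step by step before giving a final answer."
    )

prompt = build_cot_prompt(
    "Given these imaging findings, what follow-up is indicated?"
)
print(prompt)
```

In practice, this string would be sent to an LLM; studies have found that such step-by-step instructions can improve performance on multi-step reasoning tasks compared with asking for the answer directly.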
"Once current limitations have been addressed, many applications are conceivable using ChatGPT and the next generation of LLMs," the team concluded.