Large language models (LLMs) are vulnerable to cybersecurity threats, and security measures are needed to prevent them from being used maliciously in healthcare systems, according to a report published May 14 in Radiology: Artificial Intelligence.
LLMs such as OpenAI’s ChatGPT and Google’s Gemini can be exploited in several ways, including through malicious attacks, privacy breaches, and unauthorized manipulation of patient data, noted a team led by Tugba D’Antonoli, MD, of University Hospital Basel in Switzerland.
“Moreover, malicious actors could use LLMs to infer sensitive patient information from training data. Furthermore, manipulated or poisoned data fed into these models could change their results in a way that is beneficial for the malicious actors,” the group wrote.
LLMs have rapidly emerged as potentially powerful tools in radiology, with models being evaluated for diverse tasks such as clinical decision support, patient data analysis, and enhanced communication between healthcare providers and patients. An increasing number of providers are also exploring ways to integrate LLMs into their daily workflows.
While the integration of LLMs into clinical practice is still in its early stages, their use is expected to expand rapidly, making it crucial to start understanding their potential vulnerabilities now, D’Antonoli and colleagues wrote.
In their report, the researchers explained that LLMs are vulnerable to threats such as inference attacks, instruction-tuning attacks, and denial-of-service attacks, the last of which involve overwhelming a model and its associated network infrastructure with excessive queries in order to limit service availability for legitimate users.
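As a rough illustration of one basic safeguard against this kind of query flooding, the sketch below implements a simple per-user rate limiter in front of an LLM endpoint. It is a minimal sketch under assumed conditions; the function names, window, and limit are hypothetical and are not taken from the report.

```python
import time
from collections import defaultdict, deque

# Hypothetical per-user rate limiter: one simple way to blunt query-flooding
# (denial-of-service) attempts against an LLM endpoint. The window and limit
# below are illustrative values, not recommendations from the report.
MAX_REQUESTS = 30      # queries allowed per user
WINDOW_SECONDS = 60    # within this rolling window

_recent_queries = defaultdict(deque)  # user_id -> timestamps of recent queries

def allow_request(user_id: str) -> bool:
    """Return True if this user's query should be forwarded to the model."""
    now = time.time()
    timestamps = _recent_queries[user_id]
    # Discard timestamps that have fallen outside the rolling window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_REQUESTS:
        return False  # throttle: too many queries in the window
    timestamps.append(now)
    return True

if __name__ == "__main__":
    # A burst of 40 queries from one user: the first 30 pass, the rest are throttled.
    allowed = sum(allow_request("user-1") for _ in range(40))
    print(f"allowed {allowed} of 40 queries")
```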
Membership inference attacks can allow adversaries to determine whether a patient’s data were used in model training, while instruction-tuning attacks, such as jailbreaking, can coax the model into bypassing its safety protocols and generating harmful outputs.
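To make the membership inference idea concrete, the following is a minimal sketch of the classic loss-threshold test: if the model assigns an unusually low loss (high confidence) to a candidate record, an adversary infers that the record was likely part of the training data. The model_loss function, the threshold, and the example records are placeholders for illustration, not material from the published report.

```python
# Minimal sketch of a loss-threshold membership inference test: a record the
# model appears to "remember" (unusually low loss) is flagged as likely having
# been part of the training data. All values below are illustrative placeholders.

def model_loss(record: str) -> float:
    """Placeholder for the target model's loss (e.g., negative log-likelihood)
    on a candidate record. A real attack would query the deployed model."""
    # Pretend the model memorized records containing "MRN 00123".
    return 0.2 if "MRN 00123" in record else 2.5

LOSS_THRESHOLD = 1.0  # calibrated on records known to be outside the training set

def likely_member(record: str) -> bool:
    """Infer training-set membership when the model's loss is suspiciously low."""
    return model_loss(record) < LOSS_THRESHOLD

if __name__ == "__main__":
    print(likely_member("Patient MRN 00123, chest CT, 2023-05-14"))   # True
    print(likely_member("Patient MRN 99999, abdominal MRI, 2024-01-02"))  # False
```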
Moreover, well-intentioned users, such as patients, may upload their nonanonymized medical records to LLM platforms, inadvertently jeopardizing their own data privacy.
“These risks highlight not only the importance of implementing mitigation strategies but also educating all stakeholders involved before the deployment of such systems,” the group wrote.
To securely implement LLMs, healthcare institutions should adopt a multilayered security approach, starting by deploying models in secure, isolated environments such as a “sandbox.” In addition, model interactions should be continuously monitored, encrypted, and controlled through mechanisms such as multifactor authentication and role-based access control, the group wrote.
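As a rough sketch of what the role-based access control layer might look like in code, the snippet below checks a caller’s role and writes an audit entry before any query reaches the model. The roles, permitted actions, and function names are hypothetical examples, not a scheme prescribed by the authors.

```python
# Minimal sketch of role-based access control in front of an LLM endpoint.
# Roles, permitted actions, and the audit log format are hypothetical examples.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm_audit")

ROLE_PERMISSIONS = {
    "radiologist": {"summarize_report", "draft_impression"},
    "technologist": {"summarize_report"},
    "patient": set(),  # patient-facing access would go through a separate, limited portal
}

def handle_query(user_id: str, role: str, action: str, prompt: str) -> str:
    """Check the caller's role, log the interaction, and only then call the model."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        audit_log.warning("DENIED user=%s role=%s action=%s", user_id, role, action)
        raise PermissionError(f"role '{role}' may not perform '{action}'")
    audit_log.info("ALLOWED user=%s role=%s action=%s", user_id, role, action)
    return call_llm(prompt)  # placeholder for the actual (sandboxed) model call

def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

if __name__ == "__main__":
    print(handle_query("u42", "radiologist", "draft_impression",
                       "55-year-old with ground-glass opacities"))
```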
Furthermore, addressing the challenges of bias and hallucinations by LLMs is essential, the group added. Hallucinations are incorrect or fabricated responses from LLMs that can lead to incorrect diagnoses, inappropriate treatment recommendations, or misinformation.
“While techniques such as fine-tuning, human feedback, and continuous monitoring can help reduce the incidence of hallucinations, this remains a critical area for improvement as LLMs continue to evolve,” the researchers wrote.
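As a crude illustration of what continuous monitoring for hallucinations could involve, the sketch below flags model answers that assert findings not present in the source report and routes them for human review. The keyword list and matching rule are simplified placeholders chosen for this example, not a method described in the paper.

```python
# Crude monitoring check: flag answers that mention findings absent from the
# source report, as a coarse proxy for hallucination. The finding list and
# substring matching below are illustrative placeholders only.
FINDINGS = {"nodule", "effusion", "pneumothorax", "consolidation", "fracture"}

def unsupported_findings(source_report: str, model_answer: str) -> set[str]:
    """Return findings mentioned in the model's answer but absent from the report."""
    source = source_report.lower()
    answer = model_answer.lower()
    return {term for term in FINDINGS if term in answer and term not in source}

if __name__ == "__main__":
    report = "Chest CT shows a 6 mm nodule in the right upper lobe. No effusion."
    answer = "The scan demonstrates a nodule and a small pneumothorax."
    flags = unsupported_findings(report, answer)
    if flags:
        print(f"flag for human review: unsupported terms {flags}")  # {'pneumothorax'}
```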
With recent studies highlighting how LLMs can perpetuate inequalities, such as bias against underserved racial groups, ongoing efforts to curate diverse and representative datasets are also essential. These biases are not inherent to the models themselves but rather stem from biases present in the datasets on which they are trained, the study authors explained.
Bias detection and mitigation algorithms can be implemented during both training and inference stages when developing LLMs and can help ensure more equitable healthcare outcomes, they wrote.
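One simple example of the kind of check such algorithms can run at inference time is a demographic parity comparison: measuring whether the model recommends an action at markedly different rates across patient groups. The sketch below computes that gap on made-up data; the groups, numbers, and the choice of metric are illustrative assumptions, not results from the study.

```python
# Minimal sketch of a bias check at inference time: compare the rate of a
# model recommendation (e.g., "follow-up imaging advised") across patient
# groups. The groups and counts are fabricated for illustration only.
from collections import defaultdict

def positive_rate_by_group(predictions: list[tuple[str, int]]) -> dict[str, float]:
    """predictions: (group label, model output 0/1). Returns per-group positive rate."""
    counts, positives = defaultdict(int), defaultdict(int)
    for group, pred in predictions:
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

if __name__ == "__main__":
    preds = [("group_a", 1)] * 60 + [("group_a", 0)] * 40 + \
            [("group_b", 1)] * 35 + [("group_b", 0)] * 65
    rates = positive_rate_by_group(preds)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"demographic parity gap = {gap:.2f}")  # a large gap flags possible bias
```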
Ultimately, ongoing cybersecurity training is key, D’Antonoli and colleagues noted. Just as radiologists undergo regular radiation protection training, hospitals should implement routine cybersecurity training to keep everyone informed and prepared, they wrote.
“By promoting the safe, transparent, and fair use of LLMs, all stakeholders in healthcare can benefit from their transformative potential while minimizing risks to patient safety and data privacy,” the group concluded.