An npj Digital Medicine commentary published on March 15 contends that the privacy risks of foundation models (FMs) used in healthcare can be mitigated by integrating safeguards into FM design and regulating their use.
The commentary, written by a group led by Rui Santos, PhD, of the Spross Research Institute in Zurich, expressed “cautious optimism” about protecting privacy in the use of FMs. While FMs may retain biometric and demographic information, the authors contended, the ability to reidentify patients through FMs is limited by several factors, and building safeguards into both FM architecture and policy would reduce the risk even further.
In a 2025 study, researchers raised concerns about patient reidentification via FMs: models trained on retinal images extracted features that predicted demographic attributes with up to 94% accuracy and, separately, could enable patient reidentification from the images.
Demographic information is not in itself identifying, the commentary authors noted: an FM's ability to predict someone's age, sex, or ethnicity does not mean it can identify individuals, especially in large cohorts.
“In most clinical or research contexts, such linkage is unlikely to occur without deliberate intent and substantial auxiliary data,” they wrote; in other words, such reidentification is unlikely to happen accidentally or incidentally.
Nevertheless, they acknowledged that such risks remain, and current governance may lag behind the development of AI-based technology. The large scale and transferability of FMs could amplify privacy risks, they noted, and widen the impact of any data breach.
To address these concerns, the authors proposed a three-part strategy:
- Incorporate technical safeguards into the design of models, such as FM-tailored differential privacy mechanisms and federated pretraining frameworks adapted for heterogeneous data sources (see the sketch after this list).
- Implement model governance across institutional and national borders, with standardized privacy audits and transparent risk reporting.
- Update regulatory frameworks, such as the General Data Protection Regulation (GDPR) and the EU AI Act, and legislation such as the Health Insurance Portability and Accountability Act (HIPAA), to reflect the risks inherent to FMs.
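
To make the first item concrete, the sketch below shows what a differentially private training step can look like, in the DP-SGD style: each example's gradient is clipped to bound its influence, then Gaussian noise calibrated to that bound is added before the parameter update. This is a minimal illustrative sketch, not the authors' proposal; the function name, clipping norm, and noise multiplier are all assumed for illustration.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.01,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private SGD step (illustrative DP-SGD sketch).

    per_example_grads: array of shape (batch_size, n_params), one
    gradient per training example. Clipping bounds each patient's
    influence; Gaussian noise masks any individual's contribution.
    """
    if rng is None:
        rng = np.random.default_rng()
    batch_size = per_example_grads.shape[0]

    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale

    # Sum the clipped gradients, add noise calibrated to the clipping
    # bound, then average over the batch before the update.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    noisy_mean = (clipped.sum(axis=0) + noise) / batch_size

    return params - lr * noisy_mean
```

Clipping caps how much any single patient's record can move the model, and the added noise masks whatever influence remains; together these are what make the privacy guarantee quantifiable.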
Overemphasizing the risks of FMs without constructing a framework for managing those risks could result in “an atmosphere of reflexive distrust” of technologies with formidable potential for accelerating medical research, improving patient outcomes, and reducing healthcare disparities, the authors concluded.
Read the commentary here.