New framework could support ethical data sharing for AI

Who should profit from a patient's clinical data? No one; the data should be used only for the greater good, according to a proposed framework for the ethical use and sharing of clinical imaging data for artificial intelligence (AI), published online March 24 in Radiology.

In their article, a team of researchers from Stanford University led by Dr. David Larson detailed how the principles of their proposed ethical framework could guide the development and use of AI algorithms that learn from and analyze clinical data acquired in the course of medical care.

Under the proposed framework, neither patients nor providers would have full rights to control and profit from the data. Developers would be allowed to profit only from the knowledge and tools developed from the data -- not from the data themselves, according to the authors.

Sharing clinical data with outside entities to create generalizable AI models would be consistent with the ethical framework under four conditions:

  1. Individual privacy is carefully safeguarded at all times.
  2. The receiving organization willingly accepts the same fiduciary responsibilities of data stewardship as the original provider organization. This includes agreeing that no attempt will be made to reidentify any individual from the data.
  3. The sharing and receiving organizations both strictly adhere to an agreement specifying the purposes for which the data will be used.
  4. The receiving organization agrees not to share the data further without the consent of the original provider organization.

If the clinical data are shared widely, those making use of them would assume the same four obligations. They should also identify themselves, agree to all terms and conditions set forth by the providing entity, and agree to respond to communications from the originating institution for as long as they retain a copy of the data, according to the researchers.

The same ethical principles would also apply to federated learning, in which algorithms are trained locally at each participating institution and only model updates, not the underlying patient data, are exchanged; this includes considerations for exclusivity. A minimal sketch of the technique follows.
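The sketch below is a hypothetical illustration of federated averaging in Python, not code from the Radiology article: each simulated institution fits a simple least-squares model on its own data, and a central coordinator averages the resulting weights. The data, model, and names such as local_update are invented for illustration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution's local training: gradient descent on least squares."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three simulated institutions; each site's data never leave the site.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Each site trains locally and shares only its model weights.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The coordinator averages the weights (federated averaging).
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", global_w)  # approaches [2.0, -1.0]
```

The key property for the ethical framework is visible in the loop: only the weight vectors cross institutional boundaries, while the raw patient data stay where they were acquired.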

The researchers noted that individual privacy protection is of paramount importance for the secondary use of clinical data. Organizations should therefore match the rigor of their deidentification efforts to the trustworthiness and integrity of the entities with whom the data are shared, according to the authors.
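To make the deidentification point concrete, the sketch below blanks a handful of common identifying tags in a DICOM header using the pydicom library. It is a hypothetical baseline only: the tag list is illustrative, and rigorous deidentification (e.g., per the DICOM standard's confidentiality profiles) must also address dates, free-text fields, and identifiers burned into pixel data.

```python
from pydicom.dataset import Dataset

# Illustrative subset of identifying attributes; a real profile covers far more.
IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
]

def deidentify(ds: Dataset) -> Dataset:
    """Blank basic patient identifiers and drop vendor private tags."""
    for keyword in IDENTIFYING_TAGS:
        if keyword in ds:
            setattr(ds, keyword, "")  # keep the element, erase its value
    ds.remove_private_tags()
    return ds

# Demo on an in-memory dataset (a real workflow would use pydicom.dcmread).
ds = Dataset()
ds.PatientName = "Doe^Jane"
ds.PatientID = "12345"
ds.Modality = "CT"
print(deidentify(ds))
```

How thoroughly such a pipeline is configured -- and audited -- is what the authors suggest scaling to the trustworthiness of the receiving entity.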

As for patient consent for secondary use of the clinical data, the researchers believe that additional patient consent is not required for research or algorithm development, under the following conditions:

  1. Individual privacy is carefully safeguarded.
  2. Data are aggregated when used for research and development.
  3. Institutional oversight mechanisms are in place to ensure that clinical data are used appropriately.
  4. Patients are made aware of how their data may be used when they consent for care, through a public website, or by other means convenient to them.

In an accompanying editorial, Elizabeth Krupinski, PhD, of Emory University in Atlanta, said she supports the proposed framework and the idea that everyone should feel an obligation to share their health data to improve future patient care and to benefit society.

"Getting the message out is going to be relatively easy as well, and I would like to think that buy-in will not be that difficult either for the majority of stakeholders," she wrote. "The hard part is going to be figuring out how to deal with the minority of stakeholders who disagree or simply ignore the ethical considerations."
