Health data restrictions plague medtech AI developers

By Liz Carey, Feature Writer

Medtech industry and health leaders from big tech introduced a new AI Policy Roadmap to the U.S. Congress and federal agencies this week, hoping to steer policy toward fewer data restrictions, among other goals.

The new Digital Health Tech division of the Advanced Medical Technology Association (AdvaMed) released the 19-page document April 22. The guide addresses common concerns of companies whose devices and software products sit in the health and medical AI pipeline or are positioned for use along the continuum of care.

On a call, Digital Health Tech division Executive Director Shaye Mandle, Venk Varadan of remote diagnostics developer Nanowear, and Robert Cohen of medtech firm Stryker tied the AI Policy Roadmap to the new division's other policy areas, which include data access, interoperability, and reimbursement for digital health technologies. 

Shaye Mandle, Executive Director of AdvaMed's Digital Health Tech division. Image courtesy of AdvaMed.

"Radiology devices account for a substantial majority of the AI-enabled devices authorized by [the U.S. Food and Drug Administration (FDA)] to date," Mandle told AuntMinnie. "The experience imaging device manufacturers have in AI-enabled medical devices makes their experiences critical to policymaking in this space. All medical device manufacturers have a long history of assessing risk and benefit to the patients -- and that perspective is unique in the AI product space."

The roadmap provides 15 AI policy recommendations addressing the areas of greatest industry concern: what slows products' path to market, what needs clarification, and what introduces appreciable risk going forward. It is meant to be used in partnership with the FDA, legislators, and the U.S. Centers for Medicare and Medicaid Services (CMS), according to the group.

"We want the FDA to be the sole, lead regulator responsible for overseeing the safety and effectiveness of the AI-enabled devices we work on," Cohen said on the call. "We know, in a lot of the areas of medtech, what it is that AI can benefit, and actually go after true clinical issues, or quality of care, or patient safety, or efficiency, or economics."

Cohen and Varadan both acknowledged an apparent slowdown in FDA de novo approvals -- "120 days ago was a lot different," they said -- but there has been no talk among Digital Health Tech board members of notable delays in time to market.

Key to the rest of the discussion were the FDA's AI regulatory framework, a global harmonized approach to regulatory oversight of AI-enabled devices, and data access, use, and privacy. 

Those close to the issues anticipate an update to the Health Insurance Portability and Accountability Act (HIPAA) for the AI era -- possibly to allow data sharing for training, testing, validating, and retraining AI models -- while still addressing data privacy and patient privacy rights.

Robert Cohen, Stryker. Image courtesy of AdvaMed.

"Different hospitals and places that we want to access data -- legislation isn't as clear as to who owns the data and who can provide the data," Cohen said on the call. "And then, essentially, what we can do with the data all the way through validation verification and, if we think of gen-AI, continuous data feeding." 

One of the challenges to moving faster is access to each patient's disparate datasets: a patient's data is dispersed across the physician's office, the imaging center, the hospital, and beyond, rather than residing in a single dataset.

"The data is stratified in very different ways," Cohen explained. "If you think about the framework and regulations on the data -- will I be able, if I did full deidentification, to actually go out and acquire the data on the continuum of care and stitch that whole patient journey together? That's the kind of regulation and framework we're looking to evolve. Then we could say this definitive thing actually improved the outcomes, and we have the proof of improved outcomes."

Reimbursement

Venk Varadan, Nanowear. Image courtesy of AdvaMed.

How the data is used is the common denominator, added Varadan, who argued that reimbursement policy should start with a risk-based approach.

"Workflow optimization is a great place to start," Varadan said. "Then we start talking about diagnosis, and then we can start talking about therapeutics. 

"The level of how we need to assess those are in very different places," Varadan noted, "but we will take learnings from that stepwise approach on how this should all be regulated." 

What is an AI medical device? 

But what is an AI medical device, and what does one look like? There is currently no consensus definition, either within the industry or across the global medtech landscape.

"There is not right now," Cohen confirmed. "The problem is classification systems don't sync. Is it AI by itself? Or is it AI in an application that's doing something -- 'application indication'? Or is it AI that's put on a medical device that exists, that already is software driven? If it's already on existing software, you don't have correlatable classification codes throughout the world. I'm not sure we should expect to see a lot of changes there.

"We should expect, though, consistency of how to validate models," Cohen continued. "We should look at the process of how to bring a model before regulatory, so at least these protocols that we are working through -- through the development cycle -- are consistent, and we don't have to do one validation technique for a notified body and another one for the FDA. 

"But the definitive approval on classifications and risk assessment and whatnot, that harmonization's got to go first and that's going to be some time," he said. 

Toward harmonization 

Authorities outside the U.S. are already working together toward harmonization, Mandle noted. The FDA, Health Canada, and the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) have published joint documents on AI: Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles, and Good Machine Learning Practice for Medical Device Development: Guiding Principles, for example.

Also, the FDA and MHRA co-chair the International Medical Device Regulators Forum (IMDRF) workgroup on AI/ML-Enabled Devices (which includes other regulators as well). 

The goal is for the FDA to lead the way forward, according to Mandle.

"[Imaging data] exemplifies why harmonized requirements -- data requirements, regulatory requirements, legal requirements -- are critical to strong AI models," Mandle told AuntMinnie. "Imaging data benefits from an international standard, DICOM, which allows medical imaging devices and PACS systems from different manufacturers to generate and read images consistently. The standard -- developed back in 1985 -- has really allowed AI-enabled tools to flourish in radiology.

"Since data is the foundation of AI-enabled devices, the standardized data formats allow AI models access to more and more consistent data," Mandle said, adding that differing requirements ultimately lead to different data. "Does the data include patient information? How much? What characteristics? What can it be used for? That makes robust AI models more difficult and more expensive to produce."

Regulatory roundup

AdvaMed's roadmap raises important points about health data regulation, pre- and postmarket regulations and monitoring requirements, quality systems regulations, HIPAA, Software-as-a-Service (SaaS) payment policy regulation, regulation for new digital therapeutics called Digital Mental Health Treatment devices, and algorithm-based healthcare services (ABHS).

The guide also reinforced the trade group's stance against third-party "AI assurance labs." The FDA and other healthcare stakeholders are exploring the creation of these labs to support AI lifecycle management in medical devices, with a focus on ongoing performance monitoring. The scope of these third-party labs, however, is unclear and evolving, according to the authors.

"Mandating manufacturers to share proprietary information about AI-enabled devices with external labs raises confidentiality, intellectual property, and security issues," the authors wrote, also citing redundancy with existing FDA oversight and potential transparency issues with labs' testing metrics and methodologies.

The future of AI applications remains largely to be determined, AdvaMed President and CEO Scott Whitaker said in an announcement.

"We're in an era of discovery," Whitaker added. "While none of us can anticipate all the game-changing applications of AI in medtech to come, we can confidently predict that transformation will continue at a rapid pace -- and the policy environment absolutely must keep up."

Read the full roadmap and all 15 recommendations here.
