2025 forecast: How will AI, regulators and the government intersect?

This year may bring major changes in how artificial intelligence shapes the healthcare industry, and vice versa, and balancing hype for the technology against patient safety will be essential.

While the FDA had, at last count, handed down green lights to more than 1,000 medical devices and pieces of software powered by AI and machine learning, none of them can yet update their own programming to keep pace with new data. Their algorithms are locked once they head to the clinic, designed to offer the same conclusion each time they are faced with the same input.

But, urged along by the rise of generative AI, software that can improve its performance through real-world feedback may finally be on the horizon in 2025—and both the agency and the industry have recently made strides in preparing for future applications.

“We're at this really interesting inflection point when it comes to the AI space,” said Mitesh Rao, M.D., co-founder and CEO of the real-world data company OMNY Health. “We're in this moment where a lot of the critical components for the next phase of sustainable use of AI are just on the horizon of everyone's prioritization.”

“These are the things that we can't ignore—call it the unsexy parts of AI—where we're starting to look at things like security and compliance, and the elements that would help serve as the guardrails,” Rao said in an interview. “That's a really important piece, because that means we're moving past the hype cycle and now starting to think about brass tacks—and about what we actually need to do to make this a sustainable technology going forward.”

Last December, the FDA published its final guidance on predetermined change control plans, or PCCPs, with recommendations to AI developers on how they can outline, up front, planned future updates to software that has already received a regulatory green light, without having to resubmit the product for agency review each time.

PCCPs include protocols for modifying the program and an assessment of the changes’ potential impacts. They should also set limits on how often the software may update itself, along with plans to mitigate bias creeping into its results.

Meanwhile, the reelection of Donald Trump and his picks for the upcoming administration—including tech investor David Sacks as the White House’s first “AI and Crypto czar”—is sending a message to the industry about the government’s potential approach to the sector.

“I think what it's saying is, ‘Let's get some people who understand the issues around AI involved, but who are also outside of the normal regulatory bodies,’” Brigham Hyde, Ph.D., CEO of Atropos Health, said in an interview. “It's a way to educate [the government] and influence on AI without taking steps like starting a new federal department that regulates AI. I think that's a thoughtful approach.”

After helping launch PayPal, Sacks co-founded Craft Ventures, which focuses on funding business-to-business developers and software-as-a-service startups. “And frankly, I think the crypto part is probably going to be more of an early focus than the AI part,” Hyde added.

Still, regulatory changes are expected in the near future, as the law’s definition of a medical device has been slow to keep pace with recent leaps in medical technology. The head of the FDA’s Digital Health Center of Excellence, Troy Tazbaz, said as much last October during a panel discussion on the subject at AdvaMed’s MedTech Conference in Toronto.

“This session is about regulation that evolves, which is kind of an interesting way of putting it, because regulation unfortunately does not evolve,” Tazbaz said.

“The regulation that we are using has been around since 1976—and essentially it was written for a very, very different type of product than what we're trying to apply it to,” he added. “So a lot of our creativity has been asking how, with our current statutory authority, can we push the limits of that?”

According to Hyde, regulatory changes to help unlock some of the promise of AI in healthcare are a tad overdue. “Even before the election, regardless of the outcome, we expected some regulation in this space within the next year or 18 months. And I think that's probably a good thing; my hope is that they're thoughtful about it,” he said.

In Congress, December also saw the House’s bipartisan AI Task Force release its 273-page report (PDF), with recommendations for responsible advancements in the field.

“It is our hope that this report will inform the Congress and the American people on the advantages, complexities, and risks of artificial intelligence,” the task force’s chairman, Rep. Jay Obernolte, R-California, said in a statement. “The report details a roadmap for Congress to follow to both safeguard consumers and foster continued U.S. investment and innovation in AI.”

That includes working to ensure that AI’s use in healthcare is transparent as well as safe and effective. In addition, the report recommends studying where liability lies as AI tools become more commonplace in patient care, and ensuring that tools designed to streamline clinicians’ work are properly reimbursed.

“I think we're at a really exciting phase,” said Rao. “We’re seeing both regulators paying attention, but then also the broader political leadership is saying that, 'Hey, we need dedicated folks who are focused on building out and planning through these next phases.' To me that’s an exciting piece. It’s progress.”

“Focus on data quality, data security, data compliance and patient safety—focus on those four pillars, and we'll be good,” Rao said.