Mayo’s John Halamka on boosting confidence in AI to expand application and speed utilization

As IT continues to make algorithm development easier, health system leaders, clinicians and administrators are starting to understand AI’s larger role in care delivery, cost reduction and equity of care. But these stakeholders need to see AI solutions, not just the technology, and they need assurances that the science can be trusted.


Artificial intelligence – once overhyped, then beleaguered by mistrust – is beginning to deliver on its long-awaited promise, says longtime healthcare IT leader John Halamka, MD.

Mayo Clinic – where Halamka has served as president of the Mayo Clinic Platform for the last two years – is rolling out algorithms that clinicians are finding useful in improving their clinical practice, which is the key to winning support.

“There are a few ways to motivate a clinician, and certainly one is better quality of practice life. And the other would be avoiding quality issues or malpractice assertions,” he says. “So what we’ve seen in the first 60 or so algorithms that we’ve produced is that, if you integrate them into workflow and they’re bringing some real value, adoption is not an issue at all.”

But AI’s growth trajectory – its meanderings along the Gartner “Hype Cycle” – has resulted in distrust. And now, with increased understanding of potential shortcomings in algorithm development, the healthcare industry needs to bolster confidence in AI by ensuring that it meets standards of scientific rigor, Halamka says.

Science, not magic

Several years ago, AI’s potential was overpromised. And while that initial hype eventually subsided, it’s begun to roar back this year, Halamka says. For example, AI’s capabilities were widely touted at this spring’s two large industry events, HIMSS22 and ViVE. Misperceptions about AI persist, and questions are growing about whether solutions are grounded in dependable science.

“AI has a credibility problem right now,” Halamka contends. “It is thought of as magic, when it is in fact math.”

Healthcare has emerged from the COVID-19 pandemic with a demonstrated ability to evolve to meet changing demands. “It is a unique time in history, and Mayo is probably uniquely positioned to take some risks to experiment with new possibilities and educate the world,” he says.

Making the shift to Mayo

Halamka came to Mayo after 25 years’ work in the Boston area, serving as CIO at Beth Israel Deaconess Medical Center and Harvard Medical School. The 60-year-old practicing emergency medicine physician was drawn by Mayo’s initiative to create a platform to investigate ways to transform medicine.

Mayo is collaborating with Google on several initiatives to boost the use of artificial intelligence in healthcare. Most recently, it was reported the two entities are working together to track and analyze language in patients’ EHRs.

“Mayo has this notion that if we’re going to look at the next decade, it will be increasingly virtual and increasingly digital, and we’ll have to democratize access to specialty care and conceive of new care models,” he says. “The technology, the policy and the cultural changes necessary to do this will be at very large scale.”

This also will involve massively scaling AI models and validating them. For example, Mayo conducted a randomized clinical trial involving 60,000 patients to develop an algorithm that enables more precise diagnosis of congestive heart failure.

“We found that clinicians were 30 percent better at diagnosing CHF using the algorithm,” Halamka says. Clinicians are happy to accept that proven gain in effectiveness.

Science implies rigor

AI algorithm development is not a one-and-done effort. That reality has led to some distrust among physicians and has added to the technology’s overall “black box” mystique, leading some clinicians to question the accuracy of AI, especially when algorithms are applied to wide populations.

“The problem is you can’t just develop and deploy (AI),” Halamka contends. “You need to monitor and refine.”

There’s growing awareness of potential challenges for the use of AI in healthcare, especially questions about the representative nature of patient data that AI uses to draw algorithmic conclusions.

Also, machine learning, a cornerstone of AI, has potential to support clinicians in medical imaging. But challenges, such as the representativeness of the data on which findings are based, must be overcome to provide real benefit to clinicians.

Others have claimed that AI failed to live up to its potential in providing support during the pandemic.

Unless developers carefully evaluate the accuracy, safety and bias of AI algorithms, they risk repeating mistakes like those documented in a recent study, in which using health costs as a proxy for health needs during algorithm development led to major racial bias.

AI still ‘immature’

Halamka says the capacity of technology has expanded exponentially since he completed postdoctoral work at MIT in 1997. But the use of AI in healthcare is still “immature,” he says. “And because we’re immature, we haven’t standardized on process and labeling.”

To help deal with that issue, the recently formed Coalition for Health AI is working to harden the rigor behind AI efforts, Halamka notes. He serves on the coalition’s steering committee, along with Suchi Saria at Johns Hopkins Medicine, Nigam Shah at Stanford Health Care and Brian Anderson of MITRE Corp.

Other participants include key leaders from Change Healthcare, Duke University, Google, Microsoft, the University of California, Berkeley, and the University of California, San Francisco. The Department of Health and Human Services and the Food and Drug Administration are serving as observers of the group.

“Our role in 2022 is to try to sift through the work that’s been done in the past and come up with a set of guardrails and guidelines that will be open source and freely available to the entire industry,” Halamka says. The group hopes to help determine, for example, “if you’re going to present an algorithm and workflow, here are the things you should wrap it with so that it’s going to be useful.”

Describing the guardrails

An article co-authored by Halamka, Saria and Shah highlights some guardrails that can help ensure the validity of AI-developed research by maintaining safety and accuracy and eliminating bias. These include:

  • Using algorithm “labeling” that describes the data used for its development, its usefulness and limitations;
  • Conducting ongoing testing and monitoring of algorithm performance;
  • Developing best practices and approaches for appropriate clinical use and understanding clinical contexts and goals to better assess risks and adapt to local variations.
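To make the first two guardrails concrete, here is a minimal Python sketch of what an algorithm “label” paired with an ongoing performance check might look like. All names, fields and thresholds are hypothetical illustrations, not the coalition’s or Mayo’s actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmLabel:
    """Hypothetical 'label' describing how an algorithm was developed."""
    name: str
    training_data: str            # description of the development dataset
    intended_use: str
    known_limitations: list = field(default_factory=list)

def monitor_performance(label: AlgorithmLabel, accuracy_history: list, floor: float = 0.80) -> dict:
    """Flag the algorithm for review if recent accuracy drifts below a floor."""
    recent = accuracy_history[-3:]                 # last three monitoring windows
    drifted = sum(recent) / len(recent) < floor
    return {"algorithm": label.name, "needs_review": drifted}

chf_model = AlgorithmLabel(
    name="chf-risk-v1",
    training_data="de-identified EHR cohort (illustrative)",
    intended_use="decision support for CHF diagnosis",
    known_limitations=["not validated outside the development population"],
)

# Accuracy has slipped across recent monitoring windows, so the check flags it.
print(monitor_performance(chf_model, [0.91, 0.88, 0.84, 0.79, 0.76]))
```

The point of the sketch is the pairing: the label travels with the algorithm to describe its provenance and limits, while the monitoring check embodies the “develop, deploy, then monitor and refine” lifecycle Halamka describes.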

The concerns of providers as well as patients, payers, pharmaceutical companies and policymakers must be addressed before AI can make positive contributions in healthcare, Halamka contends.

Some organizations have been aiming to bring structure to AI development in the sector, but that hasn’t been enough to instill needed confidence, he adds. “We need implementation guides, and that’s what we’re working on next.”

The Coalition for Health AI plans to conduct virtual meetings from May through July in advance of an in-person meeting in August to discuss its efforts, including development of such guides.

Structure can be applied to already developed algorithms, Halamka says. For example, Mayo tests algorithm performance on a de-identified secure dataset of 10 million patients. “It’s that kind of validation laboratory that I can imagine is going to be an essential part of lifecycle management going forward,” he says.

AI has been trending along Gartner’s Hype Cycle for a few years now. Having climbed out of the “trough of disillusionment,” it’s beginning to provide real benefits. But Halamka says AI needs to enter yet another stage before it can truly succeed in healthcare. “They forgot the most important stage – the mire of maintenance,” he says. “The hype cycle eventually ends, but AI products continue on forever. And we must continuously monitor these validated AI algorithms so they can be tuned to data sets, as well as locally tuned to a patient population.”


Watch Fred's full interview with John Halamka
