4 strategies for engaging clinicians in AI

What does it take to ensure models work from the perspective of clinicians? AI experts describe four strategies to help get our heads out of the sand.


John Lee, MD, vice president and CMIO at Allegheny Health Network, says many clinicians have one of three attitudes about artificial intelligence:

  • “It’s magic and you should trust it.”
  • “It’s voodoo and you should distrust it.”
  • “You should stick your head in the sand and not think about it.”

Lee, however, sees machine-learning models as an extension of the risk-scoring tools and mnemonics that physicians have used for decades to help them in clinical decision-making.

One example is the HEART score, which helps emergency department physicians determine if patients presenting with chest pain need aggressive management. The HEART score looks at a small number of criteria, including age and electrocardiogram findings. 

By comparison, machine learning enables many more diagnostic criteria to be considered in a rapid fashion. “We’re able to add hundreds to thousands of additional data features and make those scoring tools more precise without adding to the physician’s cognitive load,” Lee says.
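To make the contrast concrete, the minimal Python sketch below compares a hand-built few-criteria score with a risk model fitted over hundreds of features. The criteria, thresholds, feature counts and synthetic data are simplified assumptions for illustration, not the validated HEART score or any production model.

```python
# Illustrative only: a toy rule-based chest-pain score vs. a many-feature ML risk model.
# The criteria, thresholds, feature counts and synthetic data are simplified assumptions,
# not the validated HEART score or any production algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

def toy_chest_pain_score(age: int, ecg_abnormal: bool, risk_factors: int) -> int:
    """A hand-built score over a handful of criteria, in the spirit of bedside tools."""
    score = 0
    score += 2 if age >= 65 else (1 if age >= 45 else 0)
    score += 2 if ecg_abnormal else 0
    score += min(risk_factors, 2)   # cap the contribution of the risk-factor count
    return score                    # higher score suggests more aggressive workup

# A machine-learning model can weigh hundreds of data features at once.
rng = np.random.default_rng(seed=0)
n_patients, n_features = 5_000, 300                   # e.g., labs, vitals, history flags
X = rng.normal(size=(n_patients, n_features))
true_weights = rng.normal(scale=0.1, size=n_features)
y = (X @ true_weights + rng.normal(scale=0.5, size=n_patients)) > 0.5   # synthetic outcome

model = LogisticRegression(max_iter=1000).fit(X, y)
print("toy bedside score:", toy_chest_pain_score(age=58, ecg_abnormal=True, risk_factors=3))
print("ML-estimated risk for first patient:", round(model.predict_proba(X[:1])[0, 1], 3))
```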


The CMIO believes the best way to get clinicians to trust and use AI is to build and deploy models that work well. “If it works, people will just use it,” he says.

David Vawdrey, chief data and informatics officer at Geisinger Health, offers a similar perspective.

“Doctors, nurses and others working in clinical settings are just like other human beings. If you provide a solution that is superior to the current state, they will flock to it,” he says. “They’re willing to embrace anything that can save them a bit of time and, more importantly, make it easier to deliver the best and safest care to their patients.”

What does it take to ensure models “work” from the perspective of clinicians? AI experts point to four key strategies:

  • Create models that are useful.
  • Make the models explainable.
  • Involve clinicians in development.
  • Use AI to reduce the work burden.

Create models that are useful

Machine learning models need to give clinicians information that helps them take an action or change their behavior.


“If I look at a number and I don’t do anything about it or understand what it means, where it came from or how it should inform change, then we’ve failed our caregivers,” says Greg Nelson, assistant vice president of analytics services at Intermountain Healthcare.

Vawdrey shares an example of a failed AI model that identified patients at high risk for a particular health problem and recommended several interventions for nurse case managers. But the managers found that the recommendations weren’t helpful. As one care manager explained, according to Vawdrey, “We already do all those interventions for every patient on our list.”

An essential step in implementing AI, Vawdrey stresses, is to determine whether the information a model provides is actually valuable, and whether acting on it would change clinicians’ behavior and lead to better outcomes.

Make models explainable

Explaining how models work also can help win buy-in from clinicians.

The Centers for Medicare & Medicaid Services’ Artificial Intelligence Health Outcomes Challenge, which attracted 300 entrants, focused on developing easily explainable machine learning tools that predicted unplanned hospital and skilled nursing facility admissions as well as adverse events.


The winner of the challenge, ClosedLoop, showed clinicians how its machine learning algorithm arrived at predictions, says Carol McCall, the company’s chief health analytics officer. “To be trustworthy, as our math teachers always told us, you have to show your work,” she says.

At Duke Health, data scientists have addressed explainability – or what some call the black box problem – by developing a “model fact sheet” for machine learning algorithms.

“It’s like a nutrition label,” says Suresh Balu, director, Duke Institute for Health Innovation. The label describes what data is used in the model and explains how and when the model should be used. To address any model bias, the fact sheet also describes how the model performs in specific patient cohorts, such as among patients of various races.

All this information is critical for physicians to decide when to use the model’s predictions, Balu explains. “For instance, if we develop a model to be used in the ED, it would not be appropriate to use that model in the ICU,” he says. “Or if the model is developed only for the adult population, you can’t use it in pediatric patients.”
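A rough sketch of what such a “model facts” label could look like as a data structure appears below. The fields, example values and the appropriateness check are hypothetical assumptions; Duke’s actual fact sheet format is not reproduced here.

```python
# A minimal sketch of a "model facts" label as a plain data structure.
# The fields, values and the appropriateness check are hypothetical; this does not
# reproduce Duke's actual fact sheet, it only illustrates the kind of information
# such a label carries.
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    name: str
    intended_setting: str        # e.g., "Emergency department"
    intended_population: str     # e.g., "Adults (18+)"
    outcome_predicted: str
    training_data: str
    cohort_performance: dict = field(default_factory=dict)   # e.g., AUROC by subgroup

    def appropriate_for(self, setting: str, is_adult: bool) -> bool:
        """Crude check mirroring Balu's examples: an ED-trained model shouldn't be
        used in the ICU, and an adult-only model shouldn't score pediatric patients."""
        if setting != self.intended_setting:
            return False
        if self.intended_population.startswith("Adults") and not is_adult:
            return False
        return True

fact_sheet = ModelFactSheet(
    name="ED deterioration risk (illustrative)",
    intended_setting="Emergency department",
    intended_population="Adults (18+)",
    outcome_predicted="Unplanned ICU transfer within 24 hours",
    training_data="Retrospective ED encounters, 2018-2022 (hypothetical)",
    cohort_performance={"overall_auroc": 0.81, "by_reported_race": "see subgroup table"},
)
print(fact_sheet.appropriate_for(setting="ICU", is_adult=True))                    # False: wrong setting
print(fact_sheet.appropriate_for(setting="Emergency department", is_adult=False))  # False: pediatric patient
```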

Explainability is not only about raising clinician trust in AI models, but also about giving them information upon which they can base their actions. “Predictions never saved anybody,” McCall says. “The more actionable I can make the predictions, the more you can do for your patients.”

Geisinger Health, which was the runner-up in the CMS challenge, aims to create AI models that point clinicians to modifiable risk factors. For instance, if a patient has a high risk of being admitted to the hospital, the model would not cite the patient’s advanced age as an actionable risk factor but might cite high blood pressure.

“If we can get a patient’s blood pressure under control or diabetes under control, we may be able to take that patient from a high-risk to a lower-risk category,” Vawdrey says.
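As a hypothetical illustration of that idea, the snippet below filters a patient’s risk-score contributions down to a hand-picked set of modifiable factors before presenting them to the care team. The feature names, contribution values and modifiable set are invented for illustration and are not Geisinger’s implementation.

```python
# Hypothetical sketch: surface only the modifiable drivers of a risk prediction.
# Feature names, contribution values and the "modifiable" set are made up for
# illustration; this is not Geisinger's implementation.

# Per-patient contributions to predicted admission risk (e.g., from feature
# attributions or model coefficients), largest first.
contributions = {
    "age_over_75": 0.21,
    "uncontrolled_hypertension": 0.14,
    "hba1c_above_9_percent": 0.11,
    "lives_alone": 0.04,
}

MODIFIABLE = {"uncontrolled_hypertension", "hba1c_above_9_percent"}

actionable = {factor: weight for factor, weight in contributions.items() if factor in MODIFIABLE}
for factor, weight in sorted(actionable.items(), key=lambda item: -item[1]):
    print(f"Consider intervening on {factor} (risk contribution {weight:+.2f})")

# Advanced age still drives the score, but only blood pressure and diabetes control
# are presented to the care team as levers they can act on.
```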

Involve clinicians in development


Duke Health involves front-line clinicians, as well as unit/department leaders, in AI model development, from refining the problem through devising the solution.

“By the time the model is shared with staff for evaluation purposes, there are no surprises,” Balu says.

Data scientists at Duke also work with clinicians to incorporate the AI model into the clinical workflow, addressing issues such as how the model will be used; who will be notified when a model makes a prediction or recommendation; what the time cost is to the specific user of the model; and how the model’s performance will be evaluated.

Use AI to reduce the work burden

Another way to engage clinicians is to find ways to use AI technologies to eliminate or reduce the time they have to spend on mundane tasks.

For example, speech-recognition technology may reduce the documentation burden, and robotic process automation might take over routine tasks.

“A physician executive told me, ‘We’re OK with AI. But what we want is for AI to replace the dumb stuff we have to do every day,’” Nelson says.



