Self-driving cars and hospital-rounding robot residents
While this is the vision of artificial intelligence among much of the public, real-world applications and near-term benefits are rarely so blatantly ‘The Jetsons.’ Artificial intelligence has great potential to improve patient outcomes and enhance health equity, but a variety of challenges are slowing adoption.
The Promise and Reality of Artificial Intelligence in Healthcare
Geisinger Health’s David Vawdrey, PhD, compares the progress of artificial intelligence in healthcare to that of self-driving cars.
In 2016, former U.S. Secretary of Transportation Anthony Foxx foresaw consumers hailing autonomous vehicles on a large scale by 2021. Also in 2016, deep learning pioneer Geoffrey Hinton questioned whether the ongoing training of radiologists was necessary because “within five years, deep learning is going to do better than radiologists.”
But 2021 came and went with neither prediction coming close to being realized. Yet progress is happening on both fronts.
“We’re a ways off from where I’d be comfortable pushing an autopilot button on a car in a complex driving environment,” says Vawdrey, chief data and informatics officer at Geisinger. “But I love lane-keep assist and adaptive cruise control. I love that the car can detect a possible collision and emergently apply the brakes. All that’s artificial intelligence. It doesn’t have to be all or nothing.”
Both transportation and healthcare are making incremental progress.
According to an August 2021 Optum survey of 500 healthcare executives, 98 percent of organizations have an AI strategy in place; however, only 45 percent have implemented one. Even organizations at the forefront of AI use a relatively limited number of AI tools.
“Google, Amazon and Facebook have all had massive mistakes. But healthcare organizations have a different kind of trust relationship with our patients and our communities.”
Greg Nelson, Intermountain Healthcare
“A lot of these models are very narrow,” says Bradley Erickson, MD, director of the AI laboratory at Mayo Clinic. “For example, an algorithm will find intracranial hemorrhages on CTs but not on MRIs, and it won’t find brain tumors, strokes or infections.”
The fact that lives are on the line in healthcare serves as a natural brake on attempts to rush AI innovation.
“We can’t afford to have even one mistake,” says Greg Nelson, assistant vice president of analytics services at Intermountain Healthcare. “That’s part of why you see a slow and methodical strategy for how we apply some of this technology. Google, Amazon and Facebook have all had massive mistakes. But healthcare organizations have a different kind of trust relationship with our patients and our communities.”
Going beyond complying with the Hippocratic Oath (first, do no harm), healthcare organizations investing millions in AI are betting that the technology can help them attain the Quintuple Aim. The expanded version of the Institute for Healthcare Improvement’s Triple Aim focuses on improving health equity and workforce safety and well-being in addition to reducing costs and enhancing population health and the care experience.
That’s a tall order for a nascent technology that faces complex challenges, ranging from algorithmic bias and data privacy issues to clinician skepticism. Yet industry insiders are optimistic that AI will continue to advance, eventually becoming an integral part of healthcare delivery.
Augmenting human intelligence
Definitions of AI revolve around computers that can assume or assist with cognitive tasks that humans perform. AI encompasses several technologies, including machine learning, robotic process automation and natural language processing.
Intermountain Healthcare has deployed machine learning widely, including models that spot potential pneumonia on X-rays and predict a patient’s risk for Clostridium difficile infection, Nelson notes.
“We’re not looking to replace the caregiver decision-making process, but augmenting that process, helping staff sift through signals and noise in large data sets so they can apply key information to the patient in front of them,” he explains.
With the digitization of healthcare, the amount of data to sort through has grown exponentially.
“People used to use the analogy of finding needles in haystacks,” says Carol McCall, chief health analytics officer at ClosedLoop, a predictive AI company. “Now all of a sudden, there are too many haystacks, and clinicians can’t possibly look through them all. What they need is a magnet that hovers over all the haystacks and brings them high-priority information, giving them visibility to something that may not have been visible before.”
For example, ClosedLoop has developed a drug burden index – a machine learning model that culls through a patient’s medication list, medical history (for example, looking for risk of frailty or dementia), pharmacological research about various drugs and other data and then warns clinicians of potential adverse drug reactions. The model, which is not yet fully deployed, identifies specific medications that may be problematic for a particular patient so physicians can de-prescribe or lower dosages.
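Conceptually, such a screen works like a weighted burden score over the medication list. The sketch below is a toy rule-based illustration, not ClosedLoop's machine-learned model; the drug names, weights, and frailty adjustment are all invented for the example:

```python
# Illustrative sketch of a medication-burden screen. The risk weights
# below are invented for demonstration, not clinical guidance.
RISK_WEIGHTS = {
    "diphenhydramine": 3,  # strongly anticholinergic
    "oxybutynin": 3,
    "lorazepam": 2,        # sedative; fall risk in frail patients
    "lisinopril": 0,
}

def drug_burden(med_list, frailty_risk=False):
    """Return (total_score, flagged_meds) for a patient's medication list."""
    flagged = []
    total = 0
    for med in med_list:
        weight = RISK_WEIGHTS.get(med.lower(), 0)
        # Weight risky drugs more heavily when the history suggests frailty.
        if frailty_risk and weight > 0:
            weight += 1
        total += weight
        if weight >= 2:
            flagged.append(med)
    return total, flagged
```

A real model would learn these weights from medication histories, pharmacological research and outcomes data rather than from a fixed table, but the output is the same kind of artifact: a score plus a list of specific medications a physician might de-prescribe or lower.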
Start with simple tasks
In the short run, Mayo’s Erickson believes robotic process automation, which takes over simple, repetitive tasks, will have a greater impact in healthcare than “sexy AI tools,” such as machine learning predictive algorithms.
In addition to automating back-office tasks, such as insurance eligibility verification and inventory management, process automation could assist the front office, he says.
“There are many handoffs and care guidelines in clinical care,” Erickson says. “Process automation can help ensure that best practices are reliably followed.”
As an example, he points to a common bottleneck in radiology: patients arriving for imaging scans with IV contrast who have not had a required creatinine blood test. An automated tool could spot this problem prior to the appointment and notify the referring physician to order the test.
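A check like the one Erickson describes is simple enough to sketch. The record fields and the 30-day freshness window below are assumptions for illustration, not any vendor's schema:

```python
from datetime import date, timedelta

def needs_creatinine_alert(appointment, lab_results, max_age_days=30):
    """Return True if an IV-contrast scan is scheduled but no sufficiently
    recent creatinine result is on file, so the referring physician can be
    notified to order the test before the appointment."""
    if not appointment["uses_iv_contrast"]:
        return False
    cutoff = appointment["date"] - timedelta(days=max_age_days)
    recent = [r for r in lab_results
              if r["test"] == "creatinine" and r["date"] >= cutoff]
    return len(recent) == 0
```

In a deployed automation, this rule would run against the scheduling system each night and route alerts into the referring physician's worklist.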
Process automation also can be used to collect and retrieve data that humans and machine learning tools need to make clinical decisions, ranging from precise measurements of tumors to various health statistics (such as heart rates) from remote patient monitoring devices.
Ultimately, process automation could be programmed to execute machine learning models at certain points in the care process to make specific diagnostic predictions or treatment recommendations. “As sophisticated AI tools become more trusted, then they will become one more step in a clinical process that process automation calls upon,” Erickson says.
Understanding the challenges
Intermountain’s Nelson foresees AI in healthcare unfolding like a “tapestry of modern, intelligent technologies that we knit together to solve an entire system of problems for humans.”
Achieving this vision on a large scale, however, will require addressing numerous challenges.
“Deploying an AI tool is easy to do on one nursing unit. It’s a little harder to do across all units in a hospital. And it’s really hard to do in 3,000 hospitals across the United States.”
David Vawdrey, Geisinger Health
One fundamental issue is data quality. “A lot of data in healthcare is human driven … which can be very imprecise due to vocabulary gaps, partial information or correlated medical concepts,” says PKS Prakash, a principal at ZS, a global firm that specializes in AI and data analytics. Prakash leads the company’s AI research lab.
Asking clinicians to be more precise while documenting, however, may worsen clinician workload and burnout issues.
“Requiring clinicians to do data curation at the point of care can be very cumbersome and has led to physicians becoming the most expensive clerical staff in healthcare,” says John Lee, MD, senior vice president and CMIO at Allegheny Health Network.
AI technologies such as voice-recognition software, natural language processing and process automation may eventually help with data quality issues. But more data sharing among healthcare entities and other organizations is also needed.
“We need to create data sets that are more semantically rich and standardized,” Lee says. “If we can accomplish that, then sharing will occur almost organically because meaningful data from one source can mingle easily with other sources and do so with meaning.”
Eliminating bias in machine learning models is another significant challenge.
Algorithms are trained based on historical data, which captures human observations and biases about race, gender, age and other demographic factors. As a result, algorithms adopt the biases found in data, especially if the algorithms are not properly designed.
For example, a 2019 paper in the journal Science described an algorithm designed to identify patients who would benefit from intensive care management. Race was intentionally not included as a variable in the model, yet Black patients were less likely to be identified as needing care management services, even when they were sicker than white patients.
A design issue was to blame for the bias. The algorithm predicted patients’ future medical costs based on their historical costs. But historical costs were a poor proxy for medical need, the paper notes: Black patients in this population had accessed fewer health services than white patients. Reformulating the algorithm to predict illness level rather than costs eliminated this bias.
In addition to smart algorithmic design, bias prevention requires close monitoring and testing of every model, says Suresh Balu, director of the Duke Institute for Health Innovation.
“We look at how the model performs for various classes of biases,” he says. “For example, how does it perform with respect to males and females and with respect to race? If a specific data element is causing a bias, then we try to eliminate that element and then redevelop the model.”
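The kind of subgroup testing Balu describes can be sketched as a per-group accuracy comparison. The record fields below are illustrative, not Duke's actual pipeline:

```python
from collections import defaultdict

def accuracy_by_group(records, group_key):
    """Compare a model's accuracy across demographic subgroups.

    Each record is a dict with the grouping field (e.g. "sex" or "race"),
    the true "label", and the model's "prediction". A large gap between
    groups flags the model for redevelopment.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        g = r[group_key]
        total[g] += 1
        if r["prediction"] == r["label"]:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}
```

In practice teams compare richer metrics than raw accuracy (false-negative rates matter most when the prediction gates access to care), but the structure of the check is the same: slice performance by the attribute, then investigate any gap.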
Data privacy issues also can hinder efforts to implement AI technologies.
“The more you lock things down by de-identifying the data, the less useful that data becomes,” says Allegheny’s Lee. Addressing the data privacy dilemma will require striking “the best balance” between functional output and protection of personal health information, he asserts.
Privacy issues also potentially curtail the ability to build smarter, more robust machine learning models.
“In many of these models, you don’t want to train them in a narrow data set,” says Arun Shastri, a principal at ZS who leads the firm’s global AI practice. “You want to train them across lots of different data. That’s how you make these models more sophisticated over time.”
To attain this goal, some organizations are experimenting with federated learning, a collaborative method for training AI models across sites while maintaining patient privacy. “The data doesn’t come to a centralized location,” Prakash of ZS explains. “Instead, we are training models at decentralized devices or locations.”
For example, 20 hospitals recently collaborated to train an AI model to predict supplemental oxygen needs in COVID-19 patients. By using a federated learning approach, the hospitals increased generalizability of the model by 38 percent, according to a study published in Nature Medicine.
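The coordination step at the heart of this approach, federated averaging, is compact enough to sketch. This toy version assumes each site has already trained locally and shares only a weight vector, never patient records:

```python
def federated_average(local_weights, site_sizes):
    """One aggregation round of federated averaging (FedAvg).

    Each participating site trains a model on its own data and sends only
    the resulting parameter vector to a coordinator, which combines the
    vectors weighted by each site's data volume. Patient-level data never
    leaves the site; only model parameters travel.
    """
    total = sum(site_sizes)
    n_params = len(local_weights[0])
    combined = [0.0] * n_params
    for weights, size in zip(local_weights, site_sizes):
        for i, w in enumerate(weights):
            combined[i] += w * (size / total)
    return combined
```

Real deployments repeat this round many times, redistributing the combined model for further local training, and often add secure aggregation so the coordinator cannot inspect any single site's update.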
The need for regular monitoring
Experts stress that AI deployment is not a one-and-done deal. AI models need to be regularly monitored to ensure continued accuracy.
One common problem is model drift, in which a model’s accuracy degrades over time, in part because the data it sees in production keeps changing.
“We have a mechanism in place that allows us to ensure a model’s predictive accuracy is as robust as the day it rolled off the assembly line,” Intermountain’s Nelson says. “If it’s not, we’ll prioritize that model for evaluation. Sometimes that will require retuning the model and sometimes we have to retire the model and develop a new strategy.”
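A monitoring mechanism of the kind Nelson describes can be sketched as a rolling drift check. The five-point tolerance below is an invented threshold, not Intermountain's actual criterion:

```python
def check_for_drift(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """Flag a deployed model for evaluation when its accuracy on recent
    cases falls more than `tolerance` below its validation baseline.

    `recent_outcomes` is a list of (prediction, actual) pairs collected
    as ground truth becomes available after deployment.
    """
    if not recent_outcomes:
        return False
    hits = sum(1 for pred, actual in recent_outcomes if pred == actual)
    recent_accuracy = hits / len(recent_outcomes)
    return recent_accuracy < baseline_accuracy - tolerance
```

A flagged model then goes to human review, where the team decides between retuning and retirement, as Nelson outlines.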
A key part of AI implementation is getting staff to trust and act on insights provided by models. “Most artificial intelligence is built around the premise that the prediction is useful to you in guiding future decisions or actions,” Nelson says. “If we don’t act on that prediction, then we need to learn why so that we can improve the system.” (See related article: 4 Strategies for Engaging Clinicians in AI).
Reimbursement is another hurdle. So far, only two AI radiology models are reimbursed by fee-for-service Medicare: a tool that eye doctors use to diagnose diabetic retinopathy and an algorithm that hospitals use to diagnose and treat large-vessel occlusion stroke.
As a result, healthcare organizations must devise ways to identify a return on investment for costly AI technologies.
For instance, under a value-based payment arrangement, a hospital that adopts a machine learning tool that helps hospitalists identify and treat sepsis in the early stages will likely realize cost savings because of shorter lengths of stay for sepsis patients.
But demonstrating ROI can be more complex in other cases. For instance, a physician practice that invests in an AI tool that predicts a patient’s risk for heart disease may not immediately obtain an ROI by helping at-risk patients lower their disease risk. “The value might be captured by another stakeholder farther down the road,” says Duke’s Balu.
Applying AI on a broader scale will prove particularly challenging.
“Deploying an AI tool is easy to do on one nursing unit,” Geisinger’s Vawdrey says. “It’s a little harder to do across all units in a hospital. And it’s really hard to do in 3,000 hospitals across the United States.”
Vawdrey and other experts believe a collaborative, multidisciplinary approach is needed for AI to reach its full potential.
“We need to bring together clinicians, technologists, business experts and many others,” Vawdrey says. “It’s going to take a lot of different people, hopefully rowing the boat in a similar direction.”