AI is just at the early stages of showing real results

Artificial intelligence—a broad set of technologies that enable machines to mimic the human brain’s ability to process information, learn and adapt—holds potential in healthcare to improve patient outcomes and reduce costs, but it hasn’t yet been widely adopted in daily clinical practice.

However, some leading healthcare organizations, such as the Cleveland Clinic, Intermountain Healthcare and others, are beginning to build the infrastructure and data science capabilities to use AI to deliver clinical and financial benefits.

While some industries are using AI programs designed to recognize speech, written language or visual data, or to solve problems, health systems are gaining experience with machine learning, a subset of AI focused on finding patterns or relationships in data in an iterative, or learning, fashion. Early projects have demonstrated promising results.

In some of these cases, healthcare organizations have purchased a commercial tool to help them reach a specific clinical goal, such as reducing hospital readmission rates or predicting which patients are at highest risk of becoming expensive cases. The work often incorporates machine learning techniques to hone existing models or clinical processes, with the aim of improving accuracy.

Betting on the future

The potential for AI to uncover actionable insights from electronic patient data has convinced venture capitalists and software developers alike to invest in the healthcare arena. In a 2016 report, Frost & Sullivan predicted that revenue in the healthcare AI market would grow from $633.8 million in 2014 to nearly $6.7 billion in 2021.

The Cleveland Clinic is one health system actively working with machine learning.

It spent more than three years building an infrastructure to support advanced analytics. The technology platform includes both a structured database environment using Teradata and a Hadoop database environment using Cloudera. The health system uses analytics tools from SAS and supports open-source programming languages, such as Python and R.

“We also recognize that we are not always going to be starting from scratch,” says Christopher Donovan, executive director of enterprise information management and analytics in the division of finance and information technology at the Cleveland Clinic. “We also think about how we are going to engage with partners in the system.”

For example, the Cleveland Clinic developed a test for IBM’s Watson Health cognitive platform to see if Watson could create a problem list based on the information—both structured and unstructured—in a patient’s electronic health record. Using de-identified data, “they were able to get some good results with it generating a problem list,” Donovan says. The next step is to figure out how to take that work beyond the research phase and apply it to clinical decision support, he adds.

The Cleveland Clinic also has used machine learning to develop applications from scratch, such as a set of tools to identify patients at risk of racking up big medical bills.

As a first step, the team used a variety of mathematical methods—neural networks, decision trees and gradient boosting—to develop algorithms that rank the patients assigned to care coordinators. The scores, which are updated monthly, augment existing registries to help care coordinators decide how to manage their caseloads.
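The ranking approach described above can be sketched in a few lines. This is an illustrative example only, not the Cleveland Clinic's code: the feature matrix, labels and the choice to average the models' scores are all assumptions made for the sketch.

```python
# Sketch: rank patients by predicted cost risk using several model
# families (neural network, decision forest, gradient boosting), then
# average the scores to produce a single ranking. All data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))              # stand-in patient features
y = (X[:, 0] + X[:, 1] > 1).astype(int)    # stand-in "high-cost" label

models = {
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "decision_forest": RandomForestClassifier(random_state=0),
    "neural_network": MLPClassifier(max_iter=500, random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X, y)
    scores[name] = model.predict_proba(X)[:, 1]  # risk score per patient

# Average the per-model scores and rank patients, highest risk first.
avg = np.mean(list(scores.values()), axis=0)
ranked = np.argsort(avg)[::-1]
print(ranked[:10])  # indices of the ten highest-risk patients
```

In practice such a process would be rerun on fresh data each month, as the article describes, so the scores track patients' changing circumstances.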

The team also developed algorithms to identify patients who are not enrolled in the care coordination program but are at risk of becoming high-cost cases in the future. However, that tool has not yet been incorporated into the clinical workflow, an essential step to enable case managers to intervene. “We might be interacting with those patients a little differently,” says Joseph Dorocak, senior financial analyst at the Cleveland Clinic.

At Ohio State University Wexner Medical Center, researchers in the radiology informatics lab also are using machine learning to build tools that help clinicians manage their workloads. For example, they developed an algorithm that prioritizes computed tomography images of the head based on whether there are critical findings.

Radiologists learn of the potential seriousness of a given imaging study when a referring clinician labels it as stat, explains Luciano Prevedello, MD, division chief in medical imaging informatics, adding that this is not an ideal system for prioritizing workflow in radiology. Sometimes images show critical findings the ordering physicians didn’t anticipate, he says. And even studies labeled stat—about 40 percent of all studies—vary in degree of urgency.

To build the tool, researchers trained an algorithm using a data set of 2,583 head images and validated the tool with a second set of 100 head images. The next step is to set up a clinical trial. “This is an important step to see if what we developed in the lab can be expanded to a clinical setting,” Prevedello says.
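The train-then-validate pattern the OSU researchers describe is standard practice: fit the model on one data set, then measure performance only on a second set it has never seen. A minimal sketch, using synthetic stand-in features rather than real imaging data and mirroring the article's proportions (roughly 2,583 training cases, 100 held out):

```python
# Sketch of a train/validation split: the model is fit on the training
# set and its accuracy is measured only on the 100 held-out cases.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2683, 16))            # stand-in image features
y = (X[:, 0] > 0).astype(int)              # stand-in "critical finding" label

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=100, random_state=1)

clf = LogisticRegression().fit(X_train, y_train)
print(f"validation accuracy: {clf.score(X_val, y_val):.2f}")
```

Validating on held-out data guards against a model that merely memorizes its training images, which is why a clinical trial on new patients is the natural next step.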

Commercial solutions

Instead of starting from scratch, Intermountain Healthcare has purchased commercial products to help improve its clinical processes and patient outcomes.

For example, Intermountain—which has 22 hospitals, 1,400 employed physicians and more than 185 clinics—began working with Ayasdi, an AI vendor in Menlo Park, Calif., in 2014.

“The first thing we did was try to validate that (the Ayasdi solution) would work on our data,” says Lonny Northrup, senior medical informaticist at Intermountain. To do this, the provider fed data on colon surgery into the tool. Colon surgery was selected because the health system had an established clinical care pathway for the procedure.

“In a matter of two or three days, it cranked through the data,” Northrup says, adding that the tool replicated “a substantial portion of what we have done over eight years in the insights it was able to derive from the data.”

Since then, Intermountain has used Ayasdi’s tool to refine other care pathways. For example, Intermountain plans to roll out a revised care pathway this year for treating newborns with high fevers. Northrup predicts that the changes, which he declined to discuss in detail, will reduce the average length of stay and impact thousands of babies throughout the health system.

Intermountain also plans to use the tool to track how well physicians are adhering to about 70 care pathways developed by the healthcare organization. “It has the ability to do that with more granularity than we can get with our other solutions,” Northrup says. “If we are not getting the adherence we want, we will have the data to show the underperforming physicians how the better-performing physicians are getting better results by following the care model.”

Intermountain has been working with other machine learning vendors as well. For example, Intermountain in 2016 became a lead investor in Zebra Medical Vision, a machine-learning analytics imaging company. In 2017, Intermountain, which has a library of more than 3 billion medical images, announced plans to deploy Zebra’s technology to help Intermountain’s radiologists diagnose diseases.

Intermountain also is evaluating a tool from Jvion, Johns Creek, Ga., to create personalized health risk profiles for individual patients and recommendations about how to lower their risk for deteriorating health. “Our initial validation of their platform is around avoidable admissions, and the findings we are generating are extremely encouraging,” Northrup says.

Assisting ER cases

Like Intermountain and the Cleveland Clinic, MedStar Health, which operates 10 hospitals in Maryland and the Washington metropolitan area, also is evaluating the applicability of AI to solve clinical problems.

MedStar’s Institute for Innovation worked with Booz Allen Hamilton to develop a tool for emergency department clinicians. The tool, called Dictation Lens, uses natural language processing to sort through unstructured electronic patient data, such as clinicians’ notes, and pull out those that are relevant to a patient’s current medical complaint.

“On average, MedStar patients have 50 to 60 notes in their history,” which is too many for an ED physician to sort through manually, says Ernest Sohn, a chief data scientist at Booz Allen.
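The general idea behind a tool like Dictation Lens can be illustrated with a simple relevance-ranking sketch. This is not the actual implementation—Dictation Lens uses more sophisticated natural language processing—but TF-IDF similarity shows how unstructured notes can be scored against a current complaint; the complaint and notes here are invented.

```python
# Sketch: score historical notes against the patient's current complaint
# and surface the most relevant ones first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

complaint = "chest pain and shortness of breath"
notes = [
    "Follow-up for knee arthroscopy; wound healing well.",
    "Patient reports intermittent chest pain on exertion.",
    "Dermatology consult for chronic eczema.",
    "ED visit for shortness of breath; cardiac workup ordered.",
]

vec = TfidfVectorizer()
matrix = vec.fit_transform([complaint] + notes)
sims = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Present notes in order of relevance to the current complaint.
for idx in sims.argsort()[::-1]:
    print(f"{sims[idx]:.2f}  {notes[idx]}")
```

With 50 to 60 notes per patient, even a rough relevance ordering like this could save an ED physician considerable reading time.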

A handful of ED physicians at MedStar tested the tool last year. Based on feedback from those physicians, the MedStar/Booz Allen team plans to refine the tool this year and then retest it.

The prototype took between 10 and 20 seconds to present pertinent notes to ED clinicians, which is too slow, says Kevin Maloy, MD, an emergency department physician and informaticist with MedStar’s Institute for Innovation. To solve the problem, they plan to change the backend data processing so it begins culling through clinicians’ notes when a patient registers in the ED, ensuring that the information will be available to clinicians when they open a patient’s record, Maloy says.

Citing Dictation Lens as an example, Sohn, Maloy and other authors of a 2017 blog post in Health Affairs wrote about machine learning’s potential to perform mundane and time-intensive tasks for physicians. “By draining time, energy and attention, such tasks can lead to clinician burnout and hinder clinicians’ ability to practice at the top of their expertise when providing care,” they wrote.

Overcoming challenges

However, there are significant barriers to widespread adoption of machine learning and other AI technologies in healthcare to perform mundane tasks, organize workflow, diagnose disease, predict outcomes, or prescribe treatments or behavior changes. This is particularly true for smaller organizations because they have fewer financial, technical and intellectual resources than large health systems or academic medical centers.

When it comes to financial considerations, AI adoption competes with other pressing issues in health information technology, according to a survey of health system executives conducted by the Center for Connected Medicine at the University of Pittsburgh Medical Center and The Health Management Academy, Alexandria, Va.

Of the 20 respondents to the survey, “Top of Mind for Top U.S. Health Systems 2018,” 63 percent said investing in AI solutions would be a low priority in 2018, compared with spending in other areas, such as cybersecurity or virtual care. Those health systems plan to spend an average of 2.6 percent of their IT budget on AI in 2018, and 13 percent plan to spend no money on AI in 2018.

Where health systems had implemented AI solutions in previous years, it was typically in operational areas, such as revenue cycle management, the survey found.

In addition to budgetary constraints, there are technical hurdles to overcome. Chief among these is access to large, vetted data sets, so that machine learning algorithms can be “trained” to recognize the correct answer to a given problem, such as which images show cancerous tumors. Researchers also need access to a second data set to validate an algorithm’s performance, says Paul Chang, MD, professor and vice chairman of radiology informatics at the University of Chicago School of Medicine.

Another issue is the underlying IT infrastructure. “Our IT systems are immature in healthcare,” Chang says. “We can’t get vetted data.”

Pertinent data is stored in disparate systems, such as numerous inpatient and outpatient EHR systems; ancillary systems for radiology, pharmacy or other departments; billing systems; and patient-generated data from social media sites, monitors or wearable devices. Because of variation in databases and data types, it’s difficult to aggregate that data into a form AI solutions can use to draw conclusions.

Even within a single system, such as an EHR, data on clinical outcomes often is difficult to find because it is not captured in a standardized way. In their blog post, Sohn and Maloy wrote that pain scores were captured “incompletely and inconsistently” in MedStar’s EHR, which made it difficult for them to build a model to predict patients’ pain scores.

After an algorithm is built and deployed into workflows, sophisticated data governance also is needed to maintain both data sets and algorithms over time. For example, the Cleveland Clinic’s risk predictor is an automated process that runs data through numerous mathematical models each time the process kicks off, and then automatically generates results from the model that gives the most accurate predictions that day.

IT staff members at the Cleveland Clinic built the automated process to prevent model degradation over time. If one of the mathematical models falls below acceptable levels of performance consistently, “the goal would be to reevaluate that specific model on its own; tweak it; fine tune it as needed; and enter it back into the process,” says Michael Lewis, senior director of healthcare analytics.
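The "run every model, publish the best, flag the underperformers" pattern described above can be sketched simply. The model names, data and accuracy floor here are hypothetical, chosen only to illustrate the mechanism, not the Cleveland Clinic's actual process.

```python
# Sketch: fit several candidate models, publish results from whichever
# scores best on held-out data today, and flag any model whose accuracy
# has fallen below an acceptable floor so it can be re-tuned.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 6))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

models = {
    "gbm": GradientBoostingClassifier(random_state=2),
    "forest": RandomForestClassifier(random_state=2),
    "logreg": LogisticRegression(),
}
FLOOR = 0.70  # hypothetical minimum acceptable accuracy

accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
champion = max(accuracy, key=accuracy.get)
flagged = [name for name, acc in accuracy.items() if acc < FLOOR]

print(f"publishing results from: {champion}")
print(f"models needing re-tuning: {flagged or 'none'}")
```

Automating this comparison on every run is what lets a single weak model be pulled out, fine-tuned and reinserted without interrupting the scoring pipeline.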

Workflow constraints

Even after solving the myriad data extraction, model validation, data governance and other technical issues, healthcare organizations may need to develop new workflows to respond to the knowledge generated by these advanced analytical tools.

That is the case at Memorial Sloan Kettering Cancer Center, where data scientists have developed a model to predict which chemotherapy patients are at risk of showing up at the health system’s urgent care center and possibly being admitted to an inpatient unit.

Now, the healthcare system is mapping out new processes—including the use of telemedicine and ongoing patient engagement—to mitigate patients’ risk of going to the urgent care center. “There is a heavy lift. It is an ambitious use case,” says Stuart Gardos, chief data officer at Memorial Sloan Kettering.

The Cleveland Clinic’s Donovan urges CIOs to help build an organizational culture in which people are willing to incorporate new insights into their daily work and decision-making processes. “AI and machine learning are big buzzwords and people are saying, ‘We really need to use this,’” he says. “We need to not only produce this stuff, but we need to be able to use it—to make decisions with it.”
