Sutter Health, IBM making progress on heart failure prediction model

Predictive model is said to identify patients at risk for heart failure as long as two years in advance, improving the odds of survival.


By the time a heart failure patient arrives at a hospital’s emergency department in distress, it’s too late to reverse progression of the often-fatal condition.

Early detection is essential, but doctors have few existing tools. “It’s a very insidious disease,” said Walter Stewart, vice president and chief research officer for Sutter Health in Sacramento, Calif. “It overlaps with other health problems.” Some 50 percent of heart failure patients die within five years.

Heart failure occurs when the heart cannot pump blood effectively. It can leave patients fatigued, short of breath, dizzy and nauseated, and it often causes fluid retention.

For the past three years, Stewart has been working closely with IBM Research to create a predictive model that could identify patients at risk of heart failure as long as two years in advance. IBM announced May 15 that the efforts have so far proven successful.

The improvement in predictive capability is timely: experts expect the number of heart failure patients to increase by 25 percent over the next 20 years.


Sutter and IBM combined natural language processing, machine learning and big data analytics to comb through 30,000 patient records. The data resided in the electronic medical record system at Geisinger Health System (Stewart’s former hospital) and was queried and analyzed by the IBM team using Linux compute servers.

The data set includes primary care physician notes, diagnosis codes, signs and symptoms, medications prescribed, time stamps of encounters, lab test results and recorded hospitalizations. “We want the model to be practical,” said Jianying Hu, program director of the Center for Computational Health at IBM Research in Yorktown Heights, N.Y.
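For a concrete picture of what that longitudinal record looks like, here is a minimal Python sketch of the kinds of EHR fields described above. The class and field names are hypothetical illustrations, not Sutter's or IBM's actual schema.

```python
# Hypothetical sketch of the longitudinal EHR fields described in the article.
# Names and types are illustrative assumptions, not the actual study schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class Encounter:
    """One time-stamped patient encounter drawn from the EHR."""
    timestamp: datetime
    diagnosis_codes: List[str] = field(default_factory=list)  # coded diagnoses
    medications: List[str] = field(default_factory=list)      # prescriptions written
    lab_results: dict = field(default_factory=dict)           # test name -> value
    physician_note: Optional[str] = None                      # free text for NLP
    hospitalized: bool = False                                 # recorded hospitalization


@dataclass
class PatientRecord:
    """A patient's longitudinal history used to build model features."""
    patient_id: str
    encounters: List[Encounter] = field(default_factory=list)
```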

IBM used natural language processing to parse the physician notes for relevant details. For the rest of the data, the team used a variety of analytic techniques and tools, including regularized logistic regression for prediction, support vector machines, a k-nearest neighbor algorithm for similarity matching and decision-tree algorithms for modeling possible outcomes.
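As an illustration of what those model families look like in practice, the following is a brief Python sketch using scikit-learn. The synthetic data, parameter choices and cross-validated AUC comparison are assumptions made for demonstration, not the IBM team's actual pipeline or results.

```python
# Illustrative comparison of the model families named in the article:
# regularized logistic regression, a support vector machine, k-nearest
# neighbors and a decision tree. Data and settings are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                          # stand-in per-patient feature vectors
y = (X[:, 0] + rng.normal(size=1000) > 1).astype(int)    # stand-in heart-failure labels

candidates = {
    "regularized logistic regression": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
    "support vector machine": SVC(kernel="rbf"),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=15),
    "decision tree": DecisionTreeClassifier(max_depth=5),
}

for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```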

For each patient, the model produces a score that corresponds to how likely he or she is to develop heart failure within a pre-specified prediction horizon. The current research shows it is “feasible to create usable models with this kind of observational data,” said Hu. In the next phase, she wants to validate the work on larger data sets from multiple healthcare systems.
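That scoring setup could be sketched roughly as follows, assuming a two-year horizon and a simple labeling rule; both are illustrative assumptions rather than the study's exact definitions.

```python
# Sketch of horizon-based labeling and per-patient risk scoring.
# The two-year horizon and labeling rule are assumptions for illustration.
from datetime import date, timedelta
from typing import Optional

PREDICTION_HORIZON = timedelta(days=2 * 365)  # assumed two-year horizon


def horizon_label(prediction_date: date, hf_diagnosis_date: Optional[date]) -> int:
    """1 if heart failure is diagnosed within the horizon after prediction_date, else 0."""
    if hf_diagnosis_date is None:
        return 0
    return int(prediction_date < hf_diagnosis_date <= prediction_date + PREDICTION_HORIZON)

# With a fitted classifier (such as one of the models sketched above), the
# per-patient risk score would be the predicted probability of the positive class:
#   risk_scores = model.predict_proba(patient_features)[:, 1]
```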

Stewart predicts that within the next couple of years the model will be readily available for doctors to input patient data and determine who is at risk for heart failure. Hu says the model could also run in batch mode in the back office to identify high-risk patients.
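A back-office batch run of the kind Hu describes might look roughly like the sketch below, where the risk threshold and the fitted model are assumed inputs rather than anything specified in the article.

```python
# Sketch of batch-mode scoring to surface high-risk patients for follow-up.
# The threshold and the fitted `model` / `feature_matrix` are assumed inputs.
RISK_THRESHOLD = 0.30  # assumed cutoff for flagging high-risk patients


def flag_high_risk(model, patient_ids, feature_matrix, threshold=RISK_THRESHOLD):
    """Return (patient_id, score) pairs whose predicted risk exceeds the threshold."""
    scores = model.predict_proba(feature_matrix)[:, 1]
    flagged = [(pid, float(s)) for pid, s in zip(patient_ids, scores) if s >= threshold]
    # Highest-risk patients first, so clinicians can work down the list.
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)
```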

The model’s results also show that the standard doctor’s guidepost, called the Framingham Heart Failure Signs and Symptoms, falls far short of being accurate in predicting heart failure. Only six of the 28 risk factors tracked in the guideline were consistently found to be predictors of a future diagnosis of heart failure.

“Framingham doesn’t do a good job,” said Stewart. The new model shows it’s possible to accurately predict heart failure one to two years early. “Picking these patients up two years early means we can have a different conversation with them.”
