How machine learning can speed quality measure development
Quality measures have become an essential component of the healthcare system. But ensuring that new measures are valid, reliable and evidence-based is a time- and labor-intensive process.
Advanced algorithms using machine learning and natural language processing can help measure developers significantly reduce the research time involved and find patterns in the evidence base that human readers may miss.
Such automation also may eventually help healthcare organizations themselves with self-improvement efforts based on their own research and their data.
Over the past several years, public and private healthcare payers, including the Centers for Medicare and Medicaid Services, have been moving toward a value-based purchasing model in which healthcare providers are evaluated, and ultimately paid, based on the outcomes they achieve rather than on the volume of services delivered. The quality measure program provides an objective instrument to evaluate and track provider performance and drive improvements that lead to better patient outcomes or reduced healthcare costs.
To make this vision a reality, healthcare payers need to ensure that quality measures are valid, reliable and based on robust scientific evidence. To develop each measure, researchers must find and evaluate the relevant research that has been published on each topic, a process known as environmental scanning.
That has meant searching by keyword through the National Library of Medicine as well as policy documents, study results and summary information available through other libraries—a corpus containing millions of existing documents and growing daily. Human searchers then must evaluate all of the articles that are returned to find the ones that are most relevant, reliable and useful for measure development. This process can take hundreds of human hours per measure.
This is where machine learning can help. Advanced algorithms using machine learning and natural language processing can accelerate the review of scientific research and make large knowledge databases useful and usable for both researchers and clinicians.
There are several different types of quality measures. The Agency for Healthcare Research and Quality (AHRQ) defines three types of measures:
- Structural measures look at the organization’s systems, capacity and processes, such as the ratio of providers to patients, physical facilities and equipment, the expertise they have on staff and training programs.
- Process measures focus on the care given, including treatments, preventative services, diagnostics and other actions taken by healthcare providers to maintain or improve health for patients.
- Outcome measures are focused on the impact that a service or treatment has on patient health status, such as morbidity, mortality, complications or improvement/deterioration in the condition being treated.
Within the structure-process-outcome model, the assumption is that effective structures enable good processes, and good processes increase the likelihood of a positive outcome. Measure developers look for these evidentiary links when designing quality measures and determining appropriate targets.
The majority of quality measures—in particular measures related to outcomes and processes—can be broken down into a standardized formula. These measures are expressed as a percentage, where the denominator is the target population and the numerator is the measure focus (the process or outcome we want to measure).
- Measure focus: emergency room visits
- Target population: patients with asthma
In this example, we would divide the number of emergency room visits within a timeframe by the total number of asthma patients under the provider’s care to determine what percentage of asthma patients ended up in the emergency room during that time.
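The numerator/denominator structure above can be sketched in a few lines of code. This is a minimal illustration; the function name and the patient counts are invented for the example.

```python
def measure_rate(numerator: int, denominator: int) -> float:
    """Express a quality measure as a percentage:
    (measure focus / target population) * 100."""
    if denominator == 0:
        raise ValueError("target population must be non-empty")
    return 100.0 * numerator / denominator

# Hypothetical figures: 42 ER visits among 350 asthma patients
# under the provider's care during the reporting period.
rate = measure_rate(numerator=42, denominator=350)
print(f"{rate:.1f}% of asthma patients visited the ER")  # 12.0%
```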
The measure then defines a change concept (or quality construct)—that is, an action taken by the healthcare provider that is intended to produce a change in the numerator of the formula. For asthma patients, this might be prescribing a medication for better long-term maintenance. For an outcome measure, the quality measure provides a prediction that when a certain action is taken (in this case prescribing a particular medication) there will be a positive change in the outcome (i.e., a reduction in emergency room visits).
A process measure has a similar structure, but instead seeks to measure how well the healthcare provider is implementing processes that have been defined as best practices. What percentage of the time are nurses providing appropriate discharge instructions to head trauma patients? What percentage of patients older than 50 are current on their colonoscopies?
Well-designed quality measures give healthcare providers evidence-based targets to benchmark their performance against and help them identify opportunities for improvement. Measure developers use the existing evidence base to set appropriate targets and ensure that measures support best practices grounded in valid research.
The highly structured format of quality measures makes them an ideal target for machine learning. Machine learning is a type of artificial intelligence that uses classification algorithms to sort through large volumes of data and detect patterns. Natural language processing allows the algorithms to “read” and “understand” documents written in natural (human) language. Using both of these methods together, computer programs can be designed to extract knowledge from scientific literature.
This is exactly how machine learning speeds up the process of healthcare quality measure development. Instead of conducting a simple keyword search, the program scans the content of the documents to determine which documents are most relevant to answering the question at hand.
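One way to see the difference from a plain keyword search is a relevance score computed over whole documents. The sketch below uses a simple bag-of-words cosine similarity as a stand-in for the far more sophisticated models a real system would use; the query and document snippets are invented for illustration.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical measure concept and candidate abstracts
query = "asthma patients emergency room visits maintenance medication"
documents = [
    "daily maintenance medication reduced emergency room visits in asthma patients",
    "survey of hospital equipment procurement policies",
    "asthma prevalence among urban school children",
]

# Rank candidate documents by similarity to the measure concept
qv = vectorize(query)
ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
print(ranked[0])
```

A keyword search would surface all three documents that mention "asthma"; scoring the full content puts the one that actually links the change agent to the outcome at the top.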
For measure development, it must search for documents that contain the elements of a healthcare measure: e.g., the process or outcome, the target population and the change agent or opportunity. The most valuable documents will contain all three along with quantitative evidence that enables researchers to draw a valid conclusion: for example, “patients with Type II Diabetes (target population) reduced (change direction) the number of days their blood sugar was elevated (outcome) by 30 percent (quantity) when they added 30 minutes of daily aerobic exercise (change agent).”
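The element extraction described above can be sketched with a toy pattern over the example sentence from the text. A production system would use trained NLP models rather than a single regular expression; the pattern and field names here are illustrative only.

```python
import re

# The example sentence from the text, with its labels removed
sentence = ("patients with Type II Diabetes reduced the number of days "
            "their blood sugar was elevated by 30 percent when they added "
            "30 minutes of daily aerobic exercise")

# Toy pattern: capture population, change direction, outcome,
# quantity and change agent as named groups
pattern = re.compile(
    r"patients with (?P<population>.+?) "
    r"(?P<direction>reduced|increased) "
    r"(?P<outcome>.+?) by (?P<quantity>\d+ percent) "
    r"when they added (?P<change_agent>.+)"
)

elements = pattern.search(sentence).groupdict()
print(elements["population"])    # Type II Diabetes
print(elements["change_agent"])  # 30 minutes of daily aerobic exercise
```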
Keyword searches for these correlations return many articles that are only generally about the target population, which measure developers must then sift through manually before developing guidelines. Artificial intelligence can significantly shorten the environmental scanning phase, cutting research time from months or years to days or hours by identifying a much more targeted set of literature and providing the measure developer with summary information about each article.
To address the problem of measure development, a program using ontology-based categories and axioms can be applied to construct a knowledge base that enables meaning extraction. Axioms known as “triples” can enable a program to find relationships in the data that can be used for measure development. For example, “hand washing by nursing staff (change agent) reduces the spread of staphylococcus (outcome) for hospital-bound patients (target population).”
For measure developers, such programs can quickly sort through bodies of relevant literature and create a database of facts. They can then use an ontology to determine the relationships between these facts and map them onto the basic structure of a quality measure.
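A minimal sketch of that fact database, assuming a simple subject-predicate-object representation of triples: the pairing logic below maps a change-agent fact and a population fact that share an outcome onto the slots of a quality measure. The predicate names and matching rule are invented for the example; a real system would use a formal ontology.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """A subject-predicate-object fact extracted from the literature."""
    subject: str
    predicate: str
    obj: str

# The example triple from the text, split into two related facts
facts = [
    Triple("hand washing by nursing staff", "reduces", "spread of staphylococcus"),
    Triple("spread of staphylococcus", "affects", "hospital-bound patients"),
]

def measure_candidates(triples):
    """Pair change-agent facts with population facts that share an outcome."""
    candidates = []
    for t in triples:
        if t.predicate == "reduces":
            for p in triples:
                if p.predicate == "affects" and p.subject == t.obj:
                    candidates.append({
                        "change_agent": t.subject,
                        "outcome": t.obj,
                        "target_population": p.obj,
                    })
    return candidates

print(measure_candidates(facts))
```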
Human measure developers still need to review the extracted knowledge, evaluate the conclusions and do the work of writing the actual measure. But using AI for environmental scanning reduces the most laborious part of measure development to a small fraction of the time that humans would need to complete the process. By partnering with learning machines, the industry can not only make measure development faster, but also ensure that measures are based on the very best evidence available.