Accurate patient identification is a fundamental prerequisite for quality care, which is why resolving the long-standing patient matching challenge has become a top industry priority. Misidentification is not only expensive; it also exposes healthcare organizations to significant risks from privacy breaches, fines and litigation.

The gravity of the issue was underscored this May, when healthcare’s most powerful provider, payer, IT and interoperability organizations collectively called on the Department of Health and Human Services (HHS) to make patient matching and safety a national priority. In a letter to HHS, the group noted that the “absence of a consistent approach to accurately identifying patients has resulted in significant costs to hospitals, health systems, physician practices and long-term post-acute care facilities, as well as hindered efforts to facilitate health information exchange.”

How costly? An estimated $17.4 million per year in denied claims and potential lost revenue for the average facility, according to the Ponemon Institute’s 2016 National Patient Misidentification Report, a survey of more than 500 top-level U.S. healthcare executives and care providers.

Indeed, the administrative cost to resolve a single duplicate is as high as $100 per record by some estimates, including the time and materials spent researching, identifying and correcting duplicates in real time. Taking it a step further, a recent survey by Black Book Research found that repeated medical care due to duplicate records costs hospitals $1,950 per patient per inpatient stay and more than $800 per emergency department visit. Healthcare organizations also face negative revenue cycle impacts from missed collection opportunities and billing delays, which are associated with 10 percent of all duplicates.

Even more concerning than the associated costs are the clinical consequences of duplicate or overlaid records identified by the Ponemon report: 86 percent of respondents said they had witnessed or known of a medical error that occurred due to patient misidentification. Further, nearly 5 percent of duplicate records create a potential clinical impact, and more than 2 percent result in a duplicated imaging study.

These dire statistics prompted the newly formed industry alliance to warn HHS that “as data exchange increases among providers, patient identification and data matching errors will become exponentially more problematic and dangerous.” Accurate patient matching, they wrote, is essential to care coordination and the healthcare system’s ongoing transformation. Failure to resolve the problem would thwart efforts ranging from interoperability to precision medicine and disease research to solving the nation’s opioid crisis.

When it comes to patient matching, many healthcare organizations are blissfully unaware of their role in the crisis. The anecdotally accepted figure across healthcare is an average duplicate rate of 10 percent. However, ARGO research indicates that barely scratches the surface of the problem.

Reviews of more than 160 million records across 78 engagements yield a truer picture: the average rate of unresolved duplicates for a typical hospital’s enterprise master patient index (EMPI) is 19 percent, while integrated delivery networks come in at 32 percent and health information exchanges (HIEs) at 26 percent. In addition, half of all providers have duplicate rates below 8 percent, and one-third have rates of 8 to 30 percent.

The reality is that every EMPI contains duplicates. Just how many depends largely on the approach each facility employs for patient matching. Research by ARGO reveals that basic matching algorithms in hospital information systems miss over 80 percent of duplicates, while intermediate rules-based solutions miss at least 50 percent. Most probabilistic matching algorithms fare better, with a miss rate of just over 26 percent. Incorporating artificial intelligence and machine learning is the most effective approach, missing a mere 1 percent of duplicate records.
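
To make those distinctions concrete, here is a minimal sketch of the core idea behind probabilistic matching: each demographic field contributes a weighted similarity score, and record pairs scoring above a threshold are flagged as probable duplicates. The fields, weights and threshold are illustrative assumptions, not any vendor’s tuned model.

```python
# A sketch of probabilistic, field-weighted record scoring.
# Fields, weights and threshold are illustrative assumptions.
from difflib import SequenceMatcher

FIELD_WEIGHTS = {"last_name": 4.0, "first_name": 3.0, "dob": 5.0, "ssn_last4": 6.0}
MATCH_THRESHOLD = 12.0  # would be tuned per site in practice

def similarity(a, b):
    """Fuzzy string similarity in [0, 1]; tolerant of typos like Jon/John."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a, rec_b):
    """Sum weighted per-field similarities; missing fields contribute nothing."""
    return sum(w * similarity(rec_a[f], rec_b[f])
               for f, w in FIELD_WEIGHTS.items()
               if rec_a.get(f) and rec_b.get(f))

a = {"last_name": "Smith", "first_name": "Jon", "dob": "1980-02-14", "ssn_last4": "1234"}
b = {"last_name": "Smith", "first_name": "John", "dob": "1980-02-14", "ssn_last4": "1234"}
print(match_score(a, b) >= MATCH_THRESHOLD)  # True: flag as a probable duplicate
```

In production systems the weights are derived statistically rather than hand-set, and in the machine learning variants they are learned from labeled match data.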

Another model, referential matching, is often portrayed as the holy grail of patient identification and matching. Rather than comparing potential duplicates within an EMPI to each other, this model compares them to records in a national reference database that contains data aggregated from multiple, disparate commercial and government sources. It’s a compilation of demographic data that many believe to be superior for solving the mystery of the duplicate record.
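
For illustration, here is a minimal sketch of the referential idea using a toy reference database (all names, IDs and addresses are hypothetical): two EMPI entries that disagree on address still resolve to the same identity, because the reference data has already aggregated both addresses.

```python
# A sketch of referential matching against a toy reference database.
REFERENCE_DB = [
    {"ref_id": "R-001", "name": "john smith", "dob": "1980-02-14",
     "addresses": {"12 oak st", "98 elm ave"}},  # aggregated address history
]

def resolve(record):
    """Map an EMPI record to a reference identity, or None if unmatched."""
    for ref in REFERENCE_DB:
        if (record["name"] == ref["name"] and record["dob"] == ref["dob"]
                and record["address"] in ref["addresses"]):
            return ref["ref_id"]
    return None

# Pairwise comparison could miss these two entries (different addresses),
# but both resolve to the same reference identity.
old = {"name": "john smith", "dob": "1980-02-14", "address": "12 oak st"}
new = {"name": "john smith", "dob": "1980-02-14", "address": "98 elm ave"}
ref_old, ref_new = resolve(old), resolve(new)
print(ref_old is not None and ref_old == ref_new)  # True: linked via reference data
```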

However, referential matching is not without substantial risks for providers and stakeholders. Healthcare organizations would be stripped of their control over data provenance and governance, even as vendors back away from any liability for false positive matches. Additionally, referential matching has a published accuracy rate of only 60 percent. Because the typical embedded EMPI system identifies at best a little over half of all duplicates, the effective resolution rate for referential matching is just 30 percent (60 percent accuracy applied to roughly half of the duplicates), far from “magic button” territory.

With the Office of the National Coordinator for Health Information Technology (ONC) setting duplicate-rate goals of 0.5 percent by 2020 and 0.01 percent by 2024, pressure is mounting to find a viable solution to the nation’s patient identification and matching crisis. Unfortunately, no single solution holds the key.

Rather, the best approach combines what we know to be effective (probabilistic matching using machine learning and natural language processing) to identify existing duplicates with biometric authentication to prevent new ones from entering the EMPI. Actively integrating this two-pronged approach into front-end registration and scheduling workflows lets staff locate existing records more rapidly and accurately.
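
A minimal sketch of that front-end workflow follows, with illustrative stubs standing in for a real probabilistic search engine and biometric matcher (real palm-vein or iris systems compare templates statistically, not by hashing):

```python
# A sketch of biometric-gated registration; the EMPI store, candidate
# search and biometric comparison are illustrative stubs, not a real API.
import hashlib

EMPI = {}  # empi_id -> {"demographics": ..., "biometric": ...}

def biometric_template(sample):
    # Stand-in for a real biometric template; hashing is a simplification.
    return hashlib.sha256(sample).hexdigest()

def candidate_search(demo):
    # Stand-in for probabilistic search; here, exact name + DOB agreement.
    return [eid for eid, rec in EMPI.items()
            if rec["demographics"]["name"] == demo["name"]
            and rec["demographics"]["dob"] == demo["dob"]]

def register(demo, sample):
    template = biometric_template(sample)
    for eid in candidate_search(demo):
        if EMPI[eid]["biometric"] == template:
            return eid  # confirmed existing patient: no duplicate created
    eid = f"E{len(EMPI) + 1}"  # genuinely new patient
    EMPI[eid] = {"demographics": demo, "biometric": template}
    return eid

first = register({"name": "john smith", "dob": "1980-02-14"}, b"palm-scan")
again = register({"name": "john smith", "dob": "1980-02-14"}, b"palm-scan")
print(first == again)  # True: the second visit reuses the existing record
```

The design point is that the biometric check gates record creation: a new entry enters the EMPI only after no existing identity can be confirmed.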

For example, adding driver’s license readers and biometric palm vein scanners or iris recognition cameras into the registration workflow and integrating advanced probabilistic patient search methods behind the scenes has been shown to offer the highest level of patient record accuracy and duplicate record prevention on the market today. When duplicates are eliminated at the front end, they cannot proliferate into multiple downstream systems where their negative impacts will grow exponentially, along with the resources required to eradicate them.

No solution that affects clinical and administrative workflows is complete without established best practices. In the case of patient identity, AHIMA has published robust guidelines for patient matching at the point of registration.

Machine learning and statistical computing advances of the last five years can now be applied effectively to the patient matching problem that has plagued healthcare for decades. Ultimately, an optimal approach pairs the latest technical advances with proven best practices. Mainstream adoption of this model can go a long way toward driving patient record duplicate rates to zero.
