From floods to frameworks: What it takes to trust data in healthcare

Providers don’t suffer from a lack of data; they’re inundated. They need frameworks that prioritize quality rather than just scaling chaos.

We’re awash in healthcare data, but we’re drowning — not swimming — in it. Everyone’s excited about AI, national data-sharing frameworks and the promise of data-driven care, as well they should be. But if the data behind those innovations is unreliable, the entire foundation crumbles. 

I’ve spent over 20 years working in interoperability, analytics, and AI, helping organizations make sense of data at scale. And here’s the hard truth: unless you build a framework that starts with clean, standardized data, you’re not modernizing healthcare — you’re just automating confusion. 

Start with the data, not the hype 

At a recent panel, I sat next to a brilliant speaker calling for AI agents to run wild in healthcare. Her vision was impressive. But I felt like the guy pulling the emergency brake, saying, “Wait, what’s the integrity of the data we’re feeding these agents?” 

It’s not the sexiest topic, but if you skip over data quality, you skip over trust. And in this business, trust is everything. 

We’ve built some incredible technology – FHIR APIs, national networks, real-time exchange, just to name a few. We can move data across the country in seconds. But what if that data is inconsistent, incomplete or just plain wrong? Then we’re not solving problems — we’re accelerating them. 

Blood pressure values that would be fatal, medications listed 26 different ways and other anomalies show up more than we’d like to admit. Without a framework to catch and clean that data, all we’re doing is moving garbage faster. 

The three-part model that works 

To fix this, there's a model that has worked consistently across health systems that take data seriously. It's more than a checklist; it's a layered system that reinforces trust from the ground up. 

Start with a reputable common data model. This is the foundation. Organizations need a way to harmonize raw EMR data so they’re not comparing apples to asphalt. Common data models — whether open-source like OMOP or proprietary — provide the structure to translate scattered inputs into a single, analyzable format. Think of it as the Rosetta Stone for healthcare data. 
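
To make that concrete, here's a minimal sketch of what harmonization can look like, assuming a simple key-value feed from the EMR. The local codes, field names and mapping table are illustrative assumptions, not any particular vendor's implementation; the concept ID shown is the one commonly used in OMOP for systolic blood pressure (LOINC 8480-6).

```python
# Minimal sketch: harmonizing raw EMR vitals into an OMOP-style row.
# Local codes and field names are hypothetical; real mappings are curated.
from dataclasses import dataclass

# Different source systems spell the same measurement differently;
# the common data model maps them all to one standard vocabulary.
LOCAL_CODE_TO_CONCEPT = {
    "BP_SYS": 3004249,    # OMOP standard concept for systolic BP
    "SYSTOLIC": 3004249,  # same measurement, different local spelling
}

@dataclass
class Measurement:  # simplified stand-in for an OMOP MEASUREMENT row
    person_id: int
    measurement_concept_id: int
    value_as_number: float
    unit: str

def harmonize(raw: dict) -> Measurement:
    """Translate one raw EMR record into the common model."""
    concept_id = LOCAL_CODE_TO_CONCEPT[raw["local_code"]]
    return Measurement(
        person_id=raw["mrn"],
        measurement_concept_id=concept_id,
        value_as_number=float(raw["value"]),
        unit=raw.get("unit", "mmHg"),
    )

# Two source systems, two spellings, one analyzable format.
print(harmonize({"mrn": 42, "local_code": "BP_SYS", "value": "128"}))
print(harmonize({"mrn": 42, "local_code": "SYSTOLIC", "value": 131}))
```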

Apply strong data governance. After data is structured, an organization needs to protect its integrity. Governance is what keeps questionable data out of clinical workflows. That includes policies, permissions, validation checkpoints and, most critically, people who understand what “good” looks like and who are empowered to enforce it. It’s about ensuring decisions are made with data that’s vetted and fit for its purpose. 
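
A validation checkpoint can be as simple as a gate that refuses data from unvetted sources before it reaches a clinical workflow. The sketch below is one hypothetical shape for such a gate; the approved-source list, sign-off field and rules are assumptions standing in for a real governance policy.

```python
# Hypothetical sketch: a governance checkpoint in front of clinical workflows.
# The source list, sign-off field and required fields are illustrative.

APPROVED_SOURCES = {"lab_feed_a", "emr_primary"}  # vetted by data stewards

def passes_governance(record: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Only vetted, complete data gets through."""
    if record.get("source") not in APPROVED_SOURCES:
        return False, "source not approved by governance"
    if record.get("validated_by") is None:
        return False, "missing steward validation sign-off"
    required = {"patient_id", "code", "value"}
    missing = required - record.keys()
    if missing:
        return False, f"incomplete record: missing {sorted(missing)}"
    return True, "ok"

record = {"source": "shadow_spreadsheet", "patient_id": 1,
          "code": "BP_SYS", "value": 128}
print(passes_governance(record))  # (False, 'source not approved by governance')
```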

Implement data quality reporting and scoring. This is where the real accountability lives. Systems should continuously score the data they ingest, flagging duplicates, gaps and biologically implausible values (like a blood pressure reading that would only make sense for a ghost). These platforms should trace every questionable element back to its source. From there, the organization can take action: retrain staff, adjust system workflows or, if needed, eliminate the data feed entirely. 
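
Here's a minimal sketch of that kind of scoring pass, assuming the harmonized vitals format from earlier. The plausibility thresholds and feed names are illustrative assumptions; in practice the rules come from clinicians and are far richer.

```python
# Hypothetical sketch: continuous quality scoring over an ingested feed.
# Thresholds and feed names are illustrative, not clinical guidance.

def score_feed(records: list[dict]) -> dict:
    """Flag duplicates, gaps and implausible values; trace each to its feed."""
    flags, seen = [], set()
    for r in records:
        key = (r["patient_id"], r["code"], r["taken_at"])
        if key in seen:
            flags.append({"issue": "duplicate", "feed": r["feed"], "record": r})
        seen.add(key)
        if r.get("value") is None:
            flags.append({"issue": "gap", "feed": r["feed"], "record": r})
        # A systolic BP far outside this range only makes sense for a ghost.
        elif r["code"] == "BP_SYS" and not (50 <= r["value"] <= 260):
            flags.append({"issue": "implausible", "feed": r["feed"], "record": r})
    clean = len(records) - len(flags)
    return {"score": clean / len(records), "flags": flags}

feed = [
    {"patient_id": 1, "code": "BP_SYS", "value": 128, "taken_at": "09:00", "feed": "icu_monitor"},
    {"patient_id": 1, "code": "BP_SYS", "value": 128, "taken_at": "09:00", "feed": "icu_monitor"},
    {"patient_id": 2, "code": "BP_SYS", "value": 400, "taken_at": "09:05", "feed": "clinic_intake"},
]
report = score_feed(feed)
print(f"quality score: {report['score']:.0%}")  # 33%
for f in report["flags"]:
    print(f["issue"], "from", f["feed"])  # duplicate / implausible, traced to source
```

Because every flag carries its source feed, the identify > trace > correct > monitor cycle described below has somewhere concrete to start.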

What makes this model effective is the cycle it creates: identify > trace > correct > monitor. And repeat. Over time, that builds not just better data, but a culture that treats data like the clinical asset it is. 

It’s not glamorous work, but it’s how you avoid bad decisions, bad outcomes and, ultimately, broken trust. 

Don’t treat analytics like a magic trick 

I’ve heard too many stories in which a slick-looking report led to real-world consequences. One anecdote involves a physician who kept getting flagged for something she knew she wasn’t doing. The data said otherwise, but no one could explain where that data came from. 

That’s not just a technical failure; it’s a governance failure. If data is being used to drive behavior, you better know exactly what’s behind the curtain. 

On the flip side, I’ve seen it work. One hospital client was buried in 65-page C-CDA documents, and providers weren’t using them. A simple generative AI search helped providers find just the data points they needed. Utilization jumped from less than 20 percent to more than 70 percent, not because the data had changed but because it had become usable. 

In another health system, two out of three physicians changed care plans based on insights surfaced through clean, governed data. They acted because they trusted what they saw, not because anyone pressured them to. 

The future is relevance, not volume 

This is where AI can shine, but only with discipline. An effective future is one in which the data clinicians see is tailored to their roles, their specialties and their patients. Cardiologists don’t need to be buried in discharge summaries; they need the four things that actually drive their decisions. 
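
One simple way to picture that tailoring: a role-based view that filters a chart down to the data points a specialty actually uses. The specialty-to-data mapping below is a hypothetical stand-in for whatever a real organization would curate with its clinicians.

```python
# Hypothetical sketch: filtering a chart to role-relevant data points.
# The specialty mapping is an illustrative assumption, not clinical guidance.

RELEVANT_BY_ROLE = {
    "cardiology": {"BP_SYS", "BP_DIA", "LDL", "EJECTION_FRACTION"},
}

def role_view(chart: list[dict], role: str) -> list[dict]:
    """Return only the data points a given specialty needs to decide."""
    wanted = RELEVANT_BY_ROLE.get(role, set())
    return [item for item in chart if item["code"] in wanted]

chart = [
    {"code": "BP_SYS", "value": 128},
    {"code": "DISCHARGE_SUMMARY", "value": "65 pages..."},
    {"code": "LDL", "value": 190},
]
print(role_view(chart, "cardiology"))  # BP_SYS and LDL only
```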

We talk a lot about democratizing data. Let’s start by de-cluttering it. 

In healthcare, we don’t get second chances. One bad data point can change a life. So let’s stop chasing scale before we secure integrity. Trust isn’t a byproduct of scale – it’s the prerequisite. 

And if we don’t build frameworks that prioritize quality from the start, we’re just scaling chaos. 

Kevin Ritter is executive vice president for CareInMotion at Altera Digital Health and a veteran digital health leader with more than two decades of experience operationalizing technology and analytics. 
