ACHDM

American College of Health Data Management


What we learned about AI that clinicians will actually trust

Turning to emerging technology just because it's new isn't a valid strategy – enabling people to deliver healthcare is what actually achieves results.



Healthcare does not need another wave of technology that looks impressive in a demo and quietly fades in daily practice. We say that from two different vantage points inside the same reality.

One of the authors of this piece spends her days in UX and usability work – specifically studying how people actually use systems, where friction hides and why good ideas often fail at the point of workflow. The other lives in clinical operations, where time is scarce, cognitive load is real and anything that adds cleanup work quickly becomes a nonstarter.

In discussing emerging technology and AI in healthcare, our conviction is simple – the next leap forward won’t be driven by hype or tool-chasing. It will be driven by disciplined problem-definition, careful workflow observation, intentional task allocation and trust-by-design.

Start by measuring what exists today

Before any organization implements a new innovation or AI capability, there's a baseline question that must be answered. What does the current experience look like – and how are we measuring it?

If you don’t measure the current state first, you can’t prove improvement later. You can tell a story about progress, but you can’t demonstrate it. And in a world where leadership, clinicians, patients and boards are increasingly skeptical of “innovation theater,” measurement matters.

Baseline metrics can be straightforward: time-on-task, error rates, rework, completion rates, satisfaction, training time, and even how often users seek help or escalate issues. But the bigger point is discipline: establish the before picture so you can honestly evaluate the after.

What problem are we solving?

In nearly every AI conversation, the tool arrives first and the problem arrives second. That order should be flipped.

The real starting point is the problem. What hurdle are we trying to overcome? What friction are we trying to remove? What outcome are we trying to change?

And there’s an adoption nuance that healthcare leaders can’t ignore. It’s not enough for leadership to believe a problem exists. Users have to recognize it, too.

If the people doing the work don’t feel the pain, they will experience “improvement” as disruption. But if the pain is real and shared, they become partners in the solution. That changes everything – willingness to test, tolerance for iteration and, ultimately, trust.

Observation reveals work people can’t describe

One of the most persistent mistakes in technology design is assuming users can fully articulate what’s broken. They often can’t, not because they’re withholding feedback, but because they’ve normalized the friction. They’ve adapted; they’ve learned workarounds; they’ve built muscle memory.

When usability is assessed, the people being observed often don't realize at first that there's a problem. But observing real workflows uncovers unmet needs that interviews and surveys never find. You find the extra clicks that "don't count"; the repeated steps that feel inevitable; the shortcuts that create risk; the manual tracking that no one wants to admit still exists.

From the clinical side, muscle memory is powerful, and it’s often the hidden force that determines whether a tool gets used or avoided. In a busy clinic day, people will default to what seems familiar, fast and reliable. If a new tool disrupts that without providing a clear payoff, it won’t survive.

So if the goal is for AI to deliver value, don’t start by asking users what features they want; start by watching what they do.

Task allocation is the real design decision

AI discussions often revolve around capability: Can we automate this? Can we generate that? Can we recommend this?

In our view, the most important design question is different:

Where do users want the work to live?

That requires task analysis that goes deeper than “steps.” It means understanding the nature of the work and deciding intentionally where automation belongs vs. where human judgment must stay in control.

There are areas in which automation can be a gift – repetitive, mundane, administrative burdens that drain time without improving care. But there are also areas in which automation can quietly introduce new risk – ambiguous clinical decisions, nuanced judgment calls and situations where context matters more than pattern recognition.

From the clinical seat, here’s the simplest litmus test. If AI creates more work for the clinician, it’s not helping – it’s shifting burden. If output requires constant correction, verification or hunting for the source, it increases cognitive load. It may look efficient on paper, but it feels expensive in practice.

The goal is not automation for automation’s sake. The goal is to enable the right work in the right place, and always with humans anchored where it matters most.

Clinical impact must be measured

Usability improvements are often the easiest to quantify: reduced time, fewer errors, better satisfaction, fewer clicks and less rework. But healthcare leaders shouldn't stop there.

The harder questions, and the more important ones, concern clinical value. Does this actually improve care? Does it improve outcomes? Does it reduce harm? Does it support better decisions?

Clinical impact often takes longer to show up than workflow efficiency, but it still needs a measurement plan. Otherwise, organizations end up optimizing for speed while missing the point of healthcare entirely.

A tool that makes documentation faster but increases clinician fatigue is not a win. A tool that saves seconds but increases risk is not a win. A tool that produces plausible output without traceability is not a win.

Trust is not a vibe – it’s evidence

In healthcare, trust is never abstract. Trust is operational.

It takes the form of practical questions. Where is the data stored? How long is it stored? Who has access? Who governs it? How do we guarantee those answers?

And for users, trust also includes transparency. Can I understand why the tool is making a recommendation? Can I see where the information came from? Can I verify it without burning time?

If AI output feels like a black box, the user pays the price in cognitive effort, and that erodes adoption. Transparency reduces the mental tax required to believe the system.

Trust also requires testing for unintended consequences – stress-testing, scenario testing and designing for safety under real-world constraints, not idealized workflows.

Innovation can widen gaps

There’s another dimension to human experience that doesn’t get enough attention – access. New technology can create separation if only some organizations can afford it, implement it, support it and optimize it. Over time, the gap grows.

And access isn’t only financial. It’s also digital literacy, language, connectivity, disability access and whether workflows reflect diverse user needs. Telehealth is a helpful reminder here. The real question isn’t “Is telehealth good technology?” The question is whether it increases access, and what happens to patients when it doesn’t.

If we don’t design for access on purpose, we will accidentally design inequity.

A checklist for health data leaders

If you’re a health data leader evaluating AI or emerging technology, here’s the approach that creates durable adoption.

Benchmark first. Measure the current experience before you change it.

Define the problem clearly. Don’t let the tool lead the strategy.

Confirm users feel the pain. Adoption requires shared recognition.

Observe real workflows. Muscle memory hides friction users won’t report.

Allocate tasks intentionally. Automate the repetitive; protect judgment.

Measure clinical value, not just efficiency. Speed isn’t the same as impact.

Design trust with evidence. Transparency, traceability, governance and testing.

Design for access. Diverse users, diverse settings, real constraints.

As AI technology rapidly advances, it is important to prioritize a human-centered approach. We must create digital experiences that leverage innovative technology while fundamentally respecting and understanding people.

Tammy Coutts is lead software designer specializing in user experience and usability for MEDITECH. Randall Brandt, PA-C, has 30 years of experience as a physician assistant at Mile Bluff Medical Center in Mauston, Wis.
