Healthcare AI Is only as trustworthy as the humans behind it
To gain trust, AI developers must be honest about what AI models are, how they’re built and where human judgment belongs in the loop.

Thinking about deploying artificial intelligence in a healthcare setting? Here are some sobering statistics – patients actually trust physicians less when they mention the technology, and a majority of those who use or interact with AI are not confident that health information provided by chatbots is accurate.
When it comes to AI in healthcare, the hype around advanced agents, real-time clinical decision support and end-to-end automation is loud, but it’s largely disconnected from what patients actually think about and experience with AI. The result is a trust gap.
Yet behind every AI model are humans. To gain back trust, the people creating and deploying AI should be honest about what AI models are, how they’re built and where human judgment still belongs in the loop.
The ethical nature of models
One of the most persistent misconceptions is that AI models have some form of independent ethics, deciding on their own what is appropriate.
They don’t. A model is only as ethical as the humans who designed it, trained it and tested it. Accountability still sits with actual people.
Before any clinical AI touches a real patient interaction, it needs to be tested with the most edge-case questions that can be constructed. That adversarial testing is not always comfortable, but it’s the only responsible way to build guardrails that actually hold when a patient says something unexpected. That process is what AI demystification looks like in practice – a methodical, human-led effort to understand what the model does under pressure and to catch it when it goes wrong. This process then needs to be repeated and ongoing.
The organizations that will earn trust are those that can explain, in plain language, what happens when a patient asks a clinical question, how the model decides what to say, and where a human clinician is still the best resource.
Domain expertise cannot be faked
Trust also can quickly erode when AI makes obvious mistakes that anyone with real domain knowledge would have caught.
In pharmacy, those mistakes can happen when the company behind the AI has never actually operated a pharmacy. They may not know why a prior authorization workflow for a specialty drug behaves differently from a standard fill. They may not have dealt with the rarer cases any working pharmacist encounters in the first months on the job.
That’s why those training healthcare AI should have experience handling real patient data and real operational complexity.
Humans in the loop at the right moment
A lot of fear around AI in clinical settings comes from thinking automation means removing humans entirely. But the goal of intelligent automation should be to ensure human judgment is applied where it matters the most.
For instance, a patient who has missed filling a prescription might receive a text message from a traditional pharmacy. If there is no response, nothing further happens. An AI agent built for this use case could reach out by text, follow up with a voice call, then identify from that conversation why the patient has not filled the prescription. Maybe it was the cost, a side effect or they didn’t have transportation to pick it up. With that knowledge, the agent could escalate it to a pharmacist precisely when a human conversation is needed most.
In this way, the AI doesn’t replace the pharmacist. It does the manual, time-consuming work that enables the patient and pharmacist to be ready to have a useful conversation.
A health plan serving a million members wouldn’t be able to hire enough pharmacists to individually manage adherence at scale. But it can deploy agents that triage, engage and escalate, ensuring the patients who could benefit from speaking to a human actually get to talk to one.
What trust really requires
Building trustworthy AI in healthcare really isn’t just a technology problem. The models aren’t perfect, but they are capable enough to be useful.
What’s needed now is to build trust by transparently providing answers to the questions patients, prescribers and payers are rightfully asking. How does a model decide what to recommend? What data is it drawing on? When AI can’t handle a task, what happens? When and where are humans in the loop on decisions?
AI models are built by humans and reflect the knowledge, care and blind spots of the people who build them. If we want to ensure broad AI adoption in healthcare, the industry needs to be clear about exactly how their AI works and why we should trust it, then be able to prove it.
Matthew Hawkins is the chief technology officer of CaryHealth.