While healthcare rushes to implement AI, consumers worry
A recent survey finds that patients overwhelmingly fear how AI is being used – even as they acknowledge they are using it personally.

Artificial intelligence continues to roll into healthcare like an express train. Consumers, however, remain leery of getting run over, fearing the train is a bit out of control.
A consumer survey by the Coalition for Health AI (CHAI) confirmed those concerns, suggesting that the lack of uniformity around policy initiatives is likely to further heighten apprehension about how AI is used in healthcare.
Providers are implementing AI to achieve gains in areas that are not patient-facing and where efficiencies can provide tangible results, such as in the business office, claims processing and administrative areas. Despite this, consumers overwhelmingly worry about how AI is being used – even as they acknowledge they are using it personally, CHAI’s survey data show.
Experts looking at the survey data say that healthcare organizations won’t be able to shorten the route to AI acceptance – an infrastructure to fully vet its use is necessary to build a foundation for trust.
Consumers’ worries about AI
CHAI coordinated the research with the California Health Care Foundation, NORC at the University of Chicago and its AmeriSpeak panel, and CHAI’s policy workgroup. The research also drew on focus groups of patients, clinicians and vendors.
The researchers wanted to study AI through the lens of transparency, which the survey report defined as “the extent to which information about an AI solution (its capabilities, limitations and purpose) and its output is available to all relevant stakeholders.” The goal was to understand “how the public experiences and evaluates the growing use of AI in healthcare.”
The concern is that AI implementations are progressing, but “public trust, consent and accountability mechanisms have not kept pace.” Findings, researchers hope, will inform “policymaking on governance mechanisms that actually improve public confidence and protect patients.”
The survey sought input from 1,456 patients, nearly a third of whom were older than 60 – an age cohort that is more likely to interact with the healthcare system. Nearly two-thirds had at least some college education – a demographic likely to be familiar with technology and the emergence of AI.
About three-quarters of respondents report using AI, but only 13 percent say they feel very comfortable with it, compared with 20 percent reporting they are “not at all” comfortable and 36 percent saying they are only “a little” comfortable with it.
Underlying their concern is data commercialization. “Most respondents know their data is being used to train AI and consistently want to retain primary control over it,” the analysis notes. AI is having an inverse effect on trust, the data show: “More than four times as many respondents say AI use makes them trust healthcare less (51 percent), rather than more (12 percent).”
The need for accountability and transparency
For this to change – especially if AI is to play an increasingly influential role in healthcare – perfunctory disclosures that the technology is being used won’t be enough. Trust can be built only after consumers understand safeguards, accountability “and the role AI plays in clinical decisions.”
That’s where extensive, well-reasoned governance is crucial, researchers contend. “Human-in-the-loop oversight is critical, especially in high-stakes contexts affecting care pathways and reimbursement.”
A joint effort is necessary, because “no single institution is viewed as a trusted overseer of health AI,” the researchers conclude. “Instead, respondents favor multi-layered governance involving independent non-profits, health systems and provider networks, and federal regulators.”
Beyond oversight from those organizations, there’s a need for “independent evaluation, bias testing, performance transparency across populations, and data transparency,” all of which serve as building blocks to increase public trust.
Understanding the need
Most health systems haven’t taken action to shore up this “trust deficit,” researchers contend. That’s because foundational work needs to occur first to achieve the multi-layered accountability required to allay consumers’ fears.
It’s important work that should be done in advance of, or at least concurrently with, AI implementation, contends Peter Eason, chief financial and chief operating officer of Ferrum Health, a healthcare AI platform provider based in Sunnyvale, Calif.
That advance work must be built with internal resources. “Multilayered accountability means clear ownership inside the health system – named clinical and operational leaders responsible for oversight, performance and escalation, supported by tools that provide unbiased validation,” Eason says. “Internal ownership establishes accountability, while tooling ensures stakeholders can act with confidence.”
There’s been a rush to get on board with AI applications, Eason says, but a lack of understanding of the need for transparency and credibility in implementations to build trust at multiple levels. Ferrum is working with health systems such as Sutter Health and Carle Health “so they can use independent observability to catch bias and drift, ensuring humans stay in the loop and patient data never leaves the hospital firewall.”
But overall, organizations are slowly grasping the importance of this process in building trust, Eason contends.
“Accountability forces hard choices by attaching responsibility to decision-making — much like in our personal lives. When establishing accountability, groups want to get it right from the outset, which creates natural hesitation,” he explains. “It’s important to leverage the learnings of others and existing frameworks, and to recognize that not everything must be built from scratch. Many health systems are now standing up AI councils and using frameworks and third-party tools to support effective decision-making and provide visibility into downstream performance and outcomes.”
Fred Bazzoli is the Editor in Chief of Health Data Management.