Your mother loves you … well, check that out, and maybe AI as well
In light of the headlong race to adopt artificial intelligence in healthcare, more experts are warning about the potential for harm.

Way back in my journalism school days at the University of Missouri, we learned early on that the validity and trustworthiness of a story depend on ensuring that facts are supported by multiple sources.
It was conveyed in the extreme this way – “If your mother says she loves you, check it out.” In other words, even the most widely believed information needs to be double-checked, because that’s when assumptions are made and mistakes slip through that torpedo the validity, the trustworthiness … and the credibility of the story, and of the person who writes it.
I’d love to tell you that journalists confirm every fact from multiple sources. We flat out don’t do it for each and every fact in each and every story (and I’m certainly among those who have gotten lax). We understand the risks, yet we’ve fallen victim to them. Internet research has taken on much of the role of being the “secondary confirming source,” and that’s a risk. That’s how mistakes are made and why the credibility of the media has suffered.
Well, the pressures have changed over the years. Staffing is down. The push to produce stories and results is soaring. There’s a multiplicity of channels, once you factor in social media and other outlets. And there’s an inherent trust placed in information accessed via computer.
Similar forces are driving the surge in interest in and adoption of artificial intelligence across many industries. Investments and expectations are stratospheric; adoption is more of a race than a research opportunity. And guardrails to ensure proper use of this rapidly advancing technology are deemed insufficient by the very people tasked with developing it.
The headlong drive to implement AI is raising concern among those who have been developing the technology. Reporting this week noted that leading experts at OpenAI, Anthropic and other companies are sounding the alarm about the rising dangers of AI, and some are quitting “in protest or going public with grave concerns.”
Doubly concerned about healthcare
The issues surrounding unbounded use of AI in healthcare are just as significant, with more researchers and experts describing potential risks.
In Wednesday’s HDM newsletter, we highlighted how training AI models on bad information can produce recommendations that are wrong and potentially downright dangerous. The research, led by the Icahn School of Medicine at Mount Sinai, demonstrates the risk of using AI in clinical decision-making.
And clinicians are ramping up their use of AI in applications that are not always vetted by the healthcare organizations they work for. This use of “shadow AI,” in the form of unsanctioned AI tools, is growing. A survey by the American Medical Association shows that two-thirds of physicians are using AI for documentation and messaging, while a Wolters Kluwer Health survey found that 17 percent of healthcare workers admit to using unapproved AI tools. The turn to shadow AI is driven by the need for faster workflows and better functionality, but it poses security risks and governance challenges, reporting suggests.
Earlier reporting by HDM noted that research organization ECRI highlighted AI chatbots as its top information technology risk, contending that these AI-powered tools “are not regulated as medical devices, nor are they validated for healthcare purposes, even though they continue to be widely used by clinicians, patients and healthcare personnel.”
And consumers have expressed fears that AI is being rushed out too quickly, with too little time set aside for testing the technology, especially in critical patient-facing applications.
Obviously, there’s a lot of promise for AI and the many ways it can deliver efficiencies and spare humans busy work. Just this week, Vrishti Talegaonkar, founder and CEO of CareCatalyst, wrote about how AI can help anticipate claims denials and save everyone time and money by getting ahead of potential issues. And there’s plenty of evidence that clinicians love the work savings and precision that agentic AI can bring to recording key points in patient-physician interactions.
Still, the warning bells being sounded highlight the risks and the need for care in proceeding.
Yes, you may believe your mother loves you. But rushing into blind assumptions and trust regarding AI … well, that bears checking out.
Fred Bazzoli is the Editor in Chief of Health Data Management.