Organizations must address fears of AI and find practical applications

Clinicians worry about how advanced computing will be used in healthcare, how it will co-exist with them, and whether it can solve the right problems.



This article is a preview of AI BEYOND the Hype, our February 2024 COVERstory. If you have thoughts on AI and its impact on the quintuple aim, please submit your articles via our contributor portal.

Artificial intelligence in healthcare needs a makeover, a do-over, a palate cleanse, a refresh. 

At a minimum, those pushing to advance this technology must understand anew how it can panic people. Assurances are needed that the technology will be used with care and caution, and that it can and will be focused on ways to provide benefits that can eventually win people over. 

There’s much ground to be made up with AI and machine learning. Techies get excited because they see the potential and because, frankly, every problematic “nail” in healthcare can be solved with the technology “hammer.” That’s rarely been the case, because successful implementation of technology requires harmonization with the people who use it and the processes that are affected. 

Panic in radiology, and beyond 

AI’s been touted as a solution for a while now, and no one is discounting the good it can do – it’s just not always had a good introduction to clinicians, who can see it as a competitor and a threat. 

That was the case in 2016, when an article in The New England Journal of Medicine predicted that “machine learning will displace much of the work of radiologists and anatomical pathologists.” That set off panic two months later at the Radiological Society of North America conference, creating a negative perception of artificial intelligence at the outset. Years later, much work remains to successfully incorporate AI in radiology, according to presenters at an RSNA session a couple of years ago. 

Currently, there are also concerns about the black box from which AI formulates answers, the quality and inclusiveness of the data that goes into AI/ML determinations, and what role it will play alongside clinicians. It’s got just enough of the trappings of kabuki theater to scare the bejesus out of anyone. 

I can attest to this, after a recent conversation with HDM’s CEO, who has been gaining experience in using ChatGPT and is a proponent of its capabilities. In one meeting a few months ago, he enthused about how he could feed it examples of my previous writing, some past articles on a topic in HDM, some new information on that topic from outside sources and voila! It could churn out new articles in my voice and in my style, with no effort at all! 

My gut reaction was, so who needs your 48 years of journalistic experience? I’ve since climbed down from the ledge a bit, but still, I now have a better sense of how indiscriminate hyping of technology can cause people to feel devalued and disintermediated. And this is exactly NOT what needs to be communicated to clinicians, who have carried heavy burdens over the past few years and are prone to the effects of burnout. 

Does it provide value? 

The refocusing of AI must consider whether it provides tangible benefit, first to clinicians and then to healthcare organizations as a whole, within the course of everyday workflows, eliminating duplicative work and disruption. 

Deliverables can’t be pie in the sky or frivolous. I think of some current GMC commercials that show drivers enabling hands-free driving, releasing their steering wheels and breaking into spontaneous thigh and chest thumping in rhythm with the song, “We Will Rock You.” Neat technology, but really, who’s going to be using this while traversing bridges and passing 18-wheelers? 

Frivolity aside, there are some exciting applications of AI and ML that can drive insight into actual medical care, and these have been proven in isolated studies over the past decade. The challenge will be enabling these capabilities to function within workflows of healthcare organizations and to be part of a holistic approach that provides careful background monitoring and backstopping, rather than one-off solutions that don’t integrate well. 

A realistic growth path 

Fortunately, there’s a lot that AI and ML can do that could provide real value to clinicians and hospital staff, in ways that dovetail with current needs, particularly in reducing clinician burden. 

For example, combining technologies to enable ambient listening can help clinicians finish notes faster, which can alleviate documentation burdens that invade doctors’ personal time. This is already happening – for example, a combination of Nuance DAX with Epic is in use at University of Michigan Health-West, and other technology vendors, such as NextGen, MEDITECH and Oracle/Cerner, are pursuing similar uses of generative AI. 

Additionally, AI can help with patient monitoring, bringing intelligence to the raw information pouring in through remote patient monitoring, enabling clinicians to better anticipate or identify detrimental changes in patients’ conditions. Or it may be able to help sort through volumes of medical research to suggest what studies are the most appropriate for treating patients with esoteric or complex medical issues. 

If AI and ML can be focused primarily on reducing clinician burden, they can free up time for clinicians to do what they entered the medical field for – engaging with patients and building relationships that are meaningful to both and can actually improve care, compliance and results. Showing these kinds of results can win over clinicians who have been panicked by the scary imagery of artificial intelligence. 

It can be hard. I know when HDM associates art with an HDM story on artificial intelligence, it can gravitate toward something like lightning bolts shooting out of a floating human brain. The connotation is with the background technological capability – artificial intelligence – rather than with concrete deliverables (maybe some that seem mundane and trivial) that offload routine burdens and make a real difference in clinicians’ lives. 

That’s the exciting part – and the industry is on the cusp of showing that AI can help clinicians improve what they do for patients. Yes, guardrails and federal guidance can help provide assurances that AI technology will be deployed with care in healthcare. But showing potential benefits and the interplay with people and processes can win back the confidence of clinicians as well. 

