Harnessing AI with care: Healthcare’s governance challenge 

Executives see a need to temper excitement with oversight in the rapid integration of AI into healthcare practices.



This article is part of AI BEYOND the Hype - March/April 2024 COVER story.

There’s soaring interest in implementing artificial intelligence solutions at healthcare organizations, but a dearth of formal policies addressing their use. 

That’s one of the findings of the recently released Top of Mind for Top Health Systems survey, conducted by the Center for Connected Medicine and KLAS Research. 

The survey, which gathered in-depth responses from 80 executives and other leaders at about three dozen U.S. hospitals and health systems, underscores the gap between the hype surrounding AI and organizations’ ability to maintain oversight of its use. 

The report’s introduction notes that AI “is dominating the thoughts of many executives at health systems. While AI has ranked highly in past Top of Mind surveys, it was overwhelmingly cited in 2023 as the most exciting emerging technology.” 

A frenzy for AI 

Artificial intelligence was unabashedly top of mind for respondents to the CCM/KLAS survey: 80 percent cited AI as the most exciting emerging technology for healthcare, and researchers noted that it displaced virtual care as the technology seeing the most progress.

Generative AI is sparking the preponderance of the excitement, the Top of Mind report indicated. “While use cases for data science solutions have mostly focused on clinical problems, health systems increasingly see AI as a tool that can also improve operational, financial and efficiency related challenges,” the report noted. 

But researchers also noted concern about oversight and proper use of the new technology. Researchers found very few health systems have written formal policies addressing the use of AI, and even fewer have policies specific to generative AI. 

Only 16 percent of survey respondents reported that their organizations had a system-wide governance policy in place. However, many respondents said their organizations recognized the need for serious oversight, reporting that their health systems have formed governance committees of senior executives from various departments to oversee AI. 

Potential benefits of AI must be counterbalanced with oversight, contends Robert Bart, MD, chief medical information officer at UPMC. 

“There are many ways healthcare can and will benefit from AI, including freeing up our clinicians to focus more on caring for patients and helping systems more efficiently process a range of tasks,” says Bart of UPMC, a founding partner of the CCM. 

“But it is essential that healthcare executives also take seriously the responsibility to protect our patients’ privacy and health data,” he adds. 

In questions related to the use of generative AI, health system executives identified improving efficiency, bringing more visibility to clinical decisions and automating repetitive tasks as the top three ways they expect generative AI to enhance healthcare.  

The growing integration of generative AI with electronic health records technology offers an easy way to bring the technology to front-end users, with the hope of easing documentation burdens, respondents said. Of the executives surveyed, 70 percent said they have or plan to adopt AI solutions via EHR vendors because of the ease of integration.  

Still, organizations need to ensure that the technologies work and can actually achieve the goals that organizations hope for, says Jeffrey Jones, senior vice president of product development at UPMC Enterprises, the innovation, commercialization and venture capital arm of UPMC. 

“Before adopting generative AI technologies in healthcare, it’s crucial for executives to clearly define their objectives and establish measurable benchmarks,” Jones contends. “Regular evaluations are essential to adjust strategies as necessary. Generative AI is not a one-time fix, but a dynamic tool that requires attention and calibration.” 

AI also was cited as the most improved technology by healthcare executives. Perceived improvement in AI is likely driving greater adoption, a trend reflected in a report published in June 2023, in which most organizations said they had adopted at least some AI technology. 

Guardrails around AI 

Governance, oversight of the data used by AI applications and monitoring of results are cited by those in the field as key practices for ensuring the technology is used ethically and effectively. 

Newly appointed as chief health AI officer at UC San Diego Health, Karandeep Singh, MD, says one of his first priorities is taking a fresh look at the AI governance process. “Technology is rarely the only problem – usually it’s a mix of process, social issues and navigating how things are currently done and trying to change those,” he says. 

As generative AI leans more heavily on large language models to alleviate some of the hidden work that organizations want to offload, governance will be an essential step forward, Singh says. 

The Coalition for Health AI is looking to augment efforts to facilitate governance of the technology. This community of academic health systems, organizations and expert practitioners of AI and data science has a mission of providing “a framework for the landscape of health AI tools to ensure high quality care, increase trust amongst users and meet health care needs,” the organization notes. 

The Mayo Clinic, a leading organization in the Coalition for Health AI, has a strong governance process in place, and tests AI models against its own database of patient information in a validation lab, says John Halamka, MD. “The only way we’re going to reduce bias and harms, and create the safety net you’ve described is to use large, distributed, heterogeneous datasets” from a variety of populations. 

Improving testing and supporting governance issues is also a goal for the National Artificial Intelligence Research Resource (NAIRR) pilot, described as “a first step towards realizing the vision for a shared research infrastructure that will strengthen and democratize access to critical resources necessary to power responsible AI discovery and innovation.” 

Partnering with federal agencies and 25 private-sector, nonprofit and philanthropic organizations, the NAIRR pilot will provide access to advanced computing, datasets, models, software, training and user support to researchers and educators. The effort is expected to “power innovative AI research and, as it continues to grow, inform the design of the full NAIRR ecosystem.” One of its goals related to governance includes advancing trustworthy development and use of AI. 


