NAM: Healthcare must proceed with caution in adopting AI

Artificial intelligence has the potential to transform and disrupt healthcare. However, the industry must beware of unintended consequences and not give in to marketing hype and profit motives.

That’s the contention of a new National Academy of Medicine report, the authors of which describe the document as a “sober and balanced” assessment of the accomplishments, possibilities and pitfalls of AI in healthcare.

“While there have been a number of promising examples of AI applications in healthcare, it is imperative to proceed with caution or risk the potential of user disillusionment, another AI winter, or further exacerbation of existing health- and technology-driven disparities,” warn the study’s authors.

“Though there is much upside in the potential for the use of AI in medicine, like all technologies, implementation does not come without certain risks,” they add.

NAM study co-author Eneida Mendonca, MD, vice president for research development at the Regenstrief Institute, contends that AI has the potential to create unintended consequences and, as a result, must be subject to regulation and be ethically implemented.

“A regulatory framework would be better established proactively, rather than in response to specific issues,” says Mendonca. “Health systems must take steps to ensure the technology is enhancing care for all patients. System leaders must make efforts to avoid introducing social bias into the use of AI applications, which includes demanding transparency in the data collection and algorithm evaluation process.”

She adds that “general IT governance structures must be adapted to manage AI and, if possible, the technology should be used in the context of a learning health system so its impact can be constantly evaluated and adjusted to maximize benefit.”

The authors of the NAM study point out that AI tools are only as good as the data used to develop and maintain them, noting that there are many limitations in current data sources, which are critical both for delivering evidence-based healthcare and for developing AI algorithms.

“The implementation of electronic health records and other health information systems has provided scientists with rich longitudinal, multidimensional and detailed records about an individual’s health data,” the authors contend. “However, these data are noisy and biased because they are produced for different purposes in the process of documenting care.”

As a result, the NAM study warns that bad data will only result in bad models. “There is a tendency to hype AI as something magical that can learn no matter what the inputs are,” observe the authors. “In practice, the choice of data always trumps the choice of the specific mathematical formulation of the model.”

Nonetheless, the expressed hope of the study’s authors is that AI will be the “payback” for the healthcare industry’s—and federal government’s—massive investment in widely adopted EHRs.

Still, Regenstrief’s Mendonca advises that the industry must focus on clinical safety and carefully monitor uses and outcomes as AI is integrated within EHRs.

“As we wrote in the National Academy report, ‘Virtually none of the more than 320,000 health apps currently available, and which have been downloaded nearly 4 billion times, has actually been shown to improve health,’” says Mendonca.

“The wisest guidance for AI is to start with real problems in healthcare, explore the best solutions by engaging relevant stakeholders, frontline users, patients and their families—including AI and non-AI options—and implement and scale the ones that meet our Quintuple Aim: better health, improved care experience, clinician well-being, lower cost and health equity throughout,” conclude the study’s authors.
