Why legal questions surround the use of voice technology in care delivery

Physicians must remember that the emerging capability is a tool to help them do their job, not a standalone method for doing the job itself.


Voice recognition technology is on the rise—just ask someone with a smartphone or smart speaker—and, given the opportunities presented by artificial intelligence, the possible healthcare applications seem endless.

Imagine a traditional primary care physician visit during which the physician and patient engage in a direct and productive conversation with one another while an AI-powered voice technology listens in the background, handling the time-consuming and distracting tasks associated with “charting.”

Given the popularity and usefulness of AI “assistants,” both healthcare providers and software developers are eager to explore potential use cases and to integrate these solutions into healthcare delivery. Despite the exciting potential for AI in healthcare, it is critical to carefully consider limitations created by an outdated legal framework and accompanying ethical pitfalls.

AI solutions are simultaneously outpacing healthcare regulation and reshaping both patient expectations and the “standard of care” healthcare providers owe to their patients. Patients and providers increasingly expect technology to be used to deliver effective care when such technology is safe and available. But existing laws and regulations, accrediting standards and payer requirements, many of which were drafted long before AI interventions were possible in healthcare, do not address these solutions practically, if at all.

In the absence of overarching rules to help navigate murky legalities, organizations such as the American Medical Association (AMA) and the American College of Radiology (ACR) have developed their own best practices to guide healthcare providers in the use of digital health and AI solutions and to limit potential risks of patient harm and liability. But even these guidelines leave practitioners with a fragmented landscape devoid of consistent guidance.


One subset of AI is machine learning, which uses statistical techniques to make inferences and find patterns in vast data sets. Machine learning algorithms can be used to “train” a computer to recognize images and patterns, organize information and make decisions with a relatively high degree of accuracy. When implemented by medical providers, machine learning techniques can help personalize treatment programs, detect and diagnose disease, and build predictive models for epidemics and outbreaks.
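
To make the mechanics concrete, here is a minimal sketch of supervised machine learning using scikit-learn’s bundled breast-cancer dataset; the dataset and model choices are illustrative only, and nothing here approaches a clinically deployable system.

```python
# Minimal illustration of supervised machine learning on a public
# clinical dataset. A real diagnostic model would need far larger
# datasets, clinical validation and regulatory review.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Scale features, then fit a simple linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```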

Machine learning requires a certain volume and variety of feeder data to generate an effective algorithm. The collection of large datasets, particularly those involving health data, raises the issue of data privacy. While healthcare providers need to be mindful of data privacy laws, including the Health Insurance Portability and Accountability Act (HIPAA), pathways exist under HIPAA and other laws to ingest health data in a manner that can result in a robust dataset to feed the algorithms.

Effective implementation of machine learning in AI voice technology can help augment patient care, increasing accuracy and improving the speed with which care is delivered. For example, deploying machine learning in voice technology solutions can help recognize a particular patient’s voice during a visit, or detect variations against that patient’s established voice pattern.
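
As a rough illustration of voice-pattern matching, the sketch below compares a fixed-length “voiceprint” embedding from the current visit against one enrolled earlier. The embedding source and the 0.75 threshold are assumptions; real speaker-verification models and thresholds vary.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voiceprint vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_enrolled_voice(visit_emb: np.ndarray,
                           enrolled_emb: np.ndarray,
                           threshold: float = 0.75) -> bool:
    # The threshold is illustrative. Natural variation (a cold, a noisy
    # room) can push similarity down, so low scores should trigger human
    # review rather than an automatic mismatch.
    return cosine_similarity(visit_emb, enrolled_emb) >= threshold
```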

In other cases, machine learning techniques in voice technology can provide functionality through which a conversation between a patient and his or her doctor is converted into text via speech recognition and natural language processing, and then inserted into the patient’s medical record.
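
A bare-bones sketch of the speech-to-text step is below, using the open-source SpeechRecognition library; the file name is hypothetical, and the free Google web API shown would be inappropriate for real patient audio, which would require a HIPAA-compliant transcription service under a business associate agreement.

```python
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()
with sr.AudioFile("visit_recording.wav") as source:  # hypothetical file
    audio = recognizer.record(source)

try:
    # recognize_google() sends audio to a public web API; real patient
    # audio would need a HIPAA-compliant service under a BAA instead.
    transcript = recognizer.recognize_google(audio)
    print(transcript)  # in practice: structure and route to the EMR
except sr.UnknownValueError:
    print("Audio was unintelligible; flag the note for manual entry.")
```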

One appeal of AI voice technology for physicians is its potential to reduce time spent on charting and other paperwork—these administrative activities can consume more than six hours of their already busy day.

Efficiencies might be gained, but not always without risk. Voice technology systems can take several tries to get an accurate reading of a physician’s or patient’s voice because of natural variation, shifts in tone and other factors; for example, a patient might have a cold at one visit and not the next.

Accurate physician or patient speech capture is also not always guaranteed, as those who have sent voice texts can attest. And while a typo in a text is certainly no big deal, a similar error in a medical record could have significant implications when a physician relies on that information to make a diagnosis or prescribe medication.
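
One common safeguard is to hold low-confidence transcription segments for human review before they ever reach the chart. The sketch below uses invented segments, confidence scores and an illustrative threshold to show the idea.

```python
# Hypothetical transcript segments with per-segment confidence scores,
# as many speech engines report them. All values are invented.
segments = [
    ("Patient reports chest pain for two days", 0.97),
    ("Prescribed 50 mg atenolol daily", 0.58),  # low confidence
]

REVIEW_THRESHOLD = 0.90  # illustrative cutoff, not a validated value

for text, confidence in segments:
    if confidence < REVIEW_THRESHOLD:
        print(f"NEEDS CLINICIAN REVIEW before charting: {text!r}")
    else:
        print(f"Auto-charted: {text!r}")
```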

In addition, to the extent those medical records are used for secondary purposes, such as research and post-market safety surveillance, errors generated by AI voice technology may significantly undermine the integrity of any resulting analysis.

While integrating AI into electronic medical records can be beneficial from a care coordination and administration standpoint, it remains largely a gray area that raises a number of legal and ethical questions, particularly around patient consent:
  • Do patients need to provide consent every time their doctor uses AI voice technology to record their encounter?
  • Even if consent is obtained, what happens if the AI voice technology records information that a patient does not wish to be recorded?
  • How does a particular AI solution impact a patient’s expectation of confidentiality?
  • What are the limits to using patient recordings and data for research or other purposes?
  • How much must be disclosed to patients regarding the use of their data?
  • What options exist to withdraw consent to have one’s data included in an AI dataset?
  • What are the implications under biometric privacy laws, and how might these technologies impact the evolution of such laws?
  • What internal practices are implemented to verify no biases exist in the algorithms?
  • When AI is implemented in patient care, how is responsibility allocated among providers, developers and other third-party actors?

These questions do not yet have defined answers, highlighting the need for physicians to tread carefully when implementing AI technology in their practices.

Another area where the convergence of these innovative technologies and EMR data could have an unwelcome impact is preferential routing. Medical specialties like radiology use AI to filter images, directing physicians to view the most critical ones first. A similar concept has emerged in emergency departments, where preferential routing based on EMR data can prioritize who should be treated first. The risk, however, is that if the order determined by the AI is incorrect, a patient with a non-critical condition could receive treatment before someone with a time-sensitive or life-threatening injury.
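
To see why ordering errors matter, consider this toy sketch of a triage queue ordered by a model-assigned acuity score. All scores are invented; the point is that a single miscalibrated score silently reorders who gets treated first.

```python
import heapq

# Toy triage queue: higher acuity = more urgent. If the model scores
# patient "B" too low, "B" is silently treated later, which is exactly
# the preferential-routing risk described above.
scores = {"A": 0.42, "B": 0.91, "C": 0.87}

# heapq pops the smallest item first, so negate acuity to pop the
# most urgent patient first.
queue = [(-acuity, patient_id) for patient_id, acuity in scores.items()]
heapq.heapify(queue)

while queue:
    neg_acuity, patient_id = heapq.heappop(queue)
    print(f"Treat patient {patient_id} (acuity {-neg_acuity:.2f})")
```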

In response to the increased pace of device development, the Food and Drug Administration is taking new measures to increase the speed with which new AI products are approved for the marketplace. This past January, the FDA announced its new Software Precertification (Pre-Cert) Pilot Program.

The FDA Pre-Cert program aims to take a different approach to software-based medical devices by looking at the company or developer first, instead of the medical device itself. Seeking to expedite the process for companies with demonstrated quality and an established record of success in the software space, the FDA would allow pre-certified companies to market certain products without additional FDA review. This move could have a significant effect in spurring additional innovation and lessening the cost and regulatory burden of developing software-based medical AI.

A significant portion of medical malpractice claims arise from patient injuries allegedly caused by a provider’s failure to deliver care that meets the applicable standard. If a provider has access to innovative healthcare technologies and chooses not to use them, has the provider failed to meet that standard?

If courts consider the AI a “consulting physician” (i.e., the AI does not contact the patient directly but merely assists the treating physician), it is unlikely that the requisite physician-patient relationship could be established for an independent medical malpractice case. See, e.g., Hill v. Kokosky, 463 N.W.2d 265, 267 (Mich. Ct. App. 1990). This limit could be tested as AI advances and potentially achieves direct patient treatment capabilities. See Jessica S. Allain, From Jeopardy! to Jaundice: The Medical Liability Implications of Dr. Watson and Other Artificial Intelligence Systems, 73 La. L. Rev. 1049, 1062 (2013).

Additionally, if AI voice recognition technology fails to accurately process speech or to incorporate it appropriately into a patient’s EMR or other intended provider workflow, leading to patient injury, how might liability be allocated between the healthcare provider and the technology developer?

Based on current language in malpractice insurance policies, most carriers would not cover misdiagnoses caused by an AI system. Similarly, AI developers tend to disclaim liability in their licensing agreements, leaving end users (the providers) fully liable and without a safety net, and raising larger questions about how to safely and effectively integrate AI into a practice.

There is a lot of promise for AI’s application in a clinical setting, though we’re still waiting for legal guideposts to show how to safely incorporate it into medical practice. And though AI is certainly exciting and has the potential to improve care delivery, it remains a “black box” technology: we know what data goes in and what comes out, but we aren’t privy to the processes in between.

That’s why it’s important for physicians to remember that AI is a tool to help them do their job, not a standalone method for doing the job itself. It may seem a natural extension for a physician to bring a personal smart speaker into the office, but how virtual assistants are regulated and how they function in the living room is very different from the exam room.
