I.T. Behaving Badly
When Dean Sittig moved to Houston, he, like most drivers, was very comfortable merging to the left to get on the highway. But Houston's seven-lane freeways often require merging to the right. "I wasn't good at merging to the right, and I knew I had to be careful, especially if I was crossing multiple lanes," he says. Even after several years, he still exercises extra caution.
Sittig, professor of biomedical informatics at the University of Texas Health Science Center and a leading researcher on hazards in clinical information systems, says that with the rapid adoption of electronic health records, clinicians nationwide are having to learn to "merge to the right." They have to record and use information in unfamiliar ways that create new opportunities to make mistakes, and Sittig says extra training will go only so far.
"Most wrecks aren't caused by lack of driving skill," he says. "We don't re-train drivers after they have a wreck; we just tell them to be more careful." Most clinicians have the basic computing skills they need, but the "be more careful" part will take longer to internalize, and will need a team effort by vendors, provider I.T. staff and users.
While experts generally agree that electronic health records are better for patient safety than paper ones, there's growing recognition that they present their own challenges. In its annual round-up of top 10 medical technology hazards, ECRI Institute, Plymouth Meeting, Pa., ranked "data integrity failures in EHRs and other health IT systems" No. 4, with several other IT-related hazards also on the list.
ECRI senior patient safety analyst Erin Sparnon says hazards most often occur because of some misalignment between the system configuration and clinician workflows. Faulty programming or implementation can lead the systems to behave unexpectedly, and inadequate training has clinicians unprepared for how new I.T. will change the way they work.
For example, Sparnon recently published an analysis of 324 EHR errors reported to the Pennsylvania Patient Safety Authority that were connected with incorrect default values for medication order sets.
While none of them harmed patients, they easily could have. In some cases, a default "stop" order canceled an antibiotic a physician wanted to have continued. In others, patients missed a medication because the order entry system assigned the task by default to a staff member whose workflow hadn't been modified to include it. One in five errors occurred because the default dose value didn't match what the clinician had ordered, and the ordering system gave priority to the default dose.
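The default-dose failure mode can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual logic; the drug, dose, and function names are invented for the example:

```python
# Minimal sketch (hypothetical names) of how an order-entry system that
# resolves conflicts in favor of the order set's defaults can silently
# discard the dose the clinician actually entered.

ORDER_SET_DEFAULTS = {"vancomycin": {"dose_mg": 1000}}

def build_order(drug, clinician_entry, prefer_default=True):
    """Merge a clinician's entry with the order set's defaults.

    With prefer_default=True, the default wins on any overlapping field,
    which is the failure mode described above: the entered dose is lost.
    """
    defaults = ORDER_SET_DEFAULTS.get(drug, {})
    if prefer_default:
        return {**clinician_entry, **defaults}   # default overwrites entry
    return {**defaults, **clinician_entry}       # entry overwrites default

# Clinician orders 1500 mg, but the system keeps the 1000 mg default.
order = build_order("vancomycin", {"dose_mg": 1500})
print(order["dose_mg"])  # 1000, not the 1500 the clinician entered
```

The fix is the single-line change to precedence, which is why these errors are so easy to introduce during configuration and so hard to spot afterward.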
Another ECRI study of 511 chemotherapy order sets showed that one in 10 had been recommended for removal or consolidation, and all the others had at least one change based on the most current best-practice information. "Hospitals need to make sure they have a policy of requiring regular review and update," Sparnon says. "Knowledge about medications changes all the time, and new medications come in. Any standard order sets should reflect the most current thinking."
Machines and people
Sittig puts HIT hazards into two broad categories. In the first, the technology itself misbehaves somehow; in the second, the technology is doing what it's supposed to, more or less, but is a bad fit with the user's needs and habits.
Sometimes bugs are obvious. At St. Vincent Hospital, Erie, Pa., an interface between the HIS and the pharmacy dropped a crucial bit of information: which patients were pregnant or lactating. "That information is of supreme importance to make sure there are no adverse events," because medications can affect both mother and child, says Lidia Giles, IT director of clinical applications. Her department quickly instituted a workaround that involved paging the pharmacy whenever a pregnant or lactating patient was admitted, and Giles says the defect has been corrected in the version of the software currently being installed.
Another system upgrade unexpectedly changed the appearance of a report, so that discontinued medications were no longer highlighted with a gray bar. "Users were used to seeing this very nice, clearly evident bar, and we had to educate them to look for the date and time stamp instead," Giles says. "During upgrades, so many pieces are changing that it's easy to take away something good." The vendor eventually showed the staff how to restore the look everyone was used to, but it was a confusing few days.
Susan Boisvert, senior clinical risk management consultant for malpractice insurer Coverys, Boston, recommends that end users be notified about all system upgrades and other software changes, no matter how minor the IT department believes those changes to be. "Everyone should be on the alert," she says. For example, an upgrade can accidentally cancel things like automatic antibiotic stops, leaving patients on a medication for too long.
You May Find This Useful
FDASIA Health IT Report: Proposed strategy and recommendations for a risk-based framework:
This joint report by the FDA, ONC and FCC on how health IT should be regulated proposed, among other measures, the creation of a Health IT Safety Center run by the three agencies and AHRQ.
The final report on the beta test of the Hazard Manager, developed by Geisinger Health System and Abt Associates, funded by AHRQ, and now included in the safety event-reporting platform of the ECRI Institute PSO and a consortium of other patient safety organizations.
Interfaces are another treacherous point. Even when they're working perfectly, Sittig says they often drop key information by design. "There's a lot of pushback on the size of the buffer," he says. "I think I'm sending 100 characters, but you only accept 75, so it gets truncated at the beginning or the end. Until you know others are having the same problem, you think you've made a mistake."
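The buffer mismatch Sittig describes can be sketched concretely. The field sizes below are illustrative only, not taken from any real interface specification:

```python
# Minimal sketch of silent interface truncation: the sending system
# assumes a 100-character field, the receiving system accepts only 75,
# and the overflow is dropped without an error on either side.

SENDER_FIELD_LEN = 100
RECEIVER_FIELD_LEN = 75

def receive(field: str) -> str:
    # The receiver truncates rather than rejecting the message,
    # so neither side's logs show a failure.
    return field[:RECEIVER_FIELD_LEN]

note = "Allergy list: penicillin, sulfa drugs; " + "x" * 61  # 100 chars
stored = receive(note)
print(len(stored))  # 75
print(note[75:])    # the 25 characters that were silently lost
```

Because the truncation succeeds "cleanly," the receiving clinician sees a plausible-looking but incomplete field, which is exactly why users tend to blame themselves until they learn others have hit the same problem.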
What Sittig calls "sociotechnical" problems, those where users and systems don't understand or accommodate each other well enough, are more common. Chuck Christian, CIO of St. Francis Hospital, Columbus, Ga., recounts an instance where a system for organizing nursing care contained conflicting order sets for monitoring insulin. One physician's order set might require a vital sign check every hour and blood glucose levels every two hours, while another's would specify both vitals and glucose checks every four hours. Sometimes the two order sets would be applied to the same patient, causing confusion. "We pulled them out of production and our nursing informaticist cleaned them up," he says. The hospital has now standardized its order sets for insulin monitoring.
Randy Osteen, associate CIO at Christus Health, Irving, Texas, agrees that if a system doesn't fit well with the nurse's workflow, the system will usually lose unless managers step in to mediate. For example, the "right patient" part of medication bar-coding is supposed to be accomplished by scanning a bracelet that's firmly attached to the patient. Sometimes those bracelets get wet and hard to read; sometimes the patient is asleep or uncooperative or in the bathroom. Nurses may find ways to get the patient scanned anyway. "If you see a bunch of patient barcodes taped to a medication cart, or stuck to the head of the bed, you know something's going on," Osteen says.
Hardeep Singh, M.D., researcher at the Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, and Sittig's frequent collaborator, says the sheer volume of information coming at clinicians creates its own hazards. "It's so easy to send an FYI to a clinician, even if it doesn't require action, that we're getting an overwhelmingly huge amount of information that drowns the signal," he says. Alerts and reminders, too many to deal with during a 15-minute office visit, add to the overload.
Talking about trouble
Several expert bodies, including the FDA, have recommended some type of centralized repository of clinical information system hazards, so users have someone to tell when things go wrong. As the repository grows, it could be used to check for possible problems before system selection or during implementation. Users would also have a way to find out whether their bad experience is unique or part of a larger pattern.
ECRI's Patient Safety Organization has recently established such a database, the Hazard Manager, originally developed at Geisinger Health System, Danville, Pa., and later expanded and tested with a grant from the Agency for Healthcare Research and Quality. The Hazard Manager offers a way to report IT-related incidents in a structured way that allows searching and analysis, despite the wide variety of hazards and the complexity of their causes. In March, ECRI's PSO further leveraged its work on the Hazard Manager by launching the Partnership for Promoting Health I.T. Patient Safety, a collaboration among health IT vendors, providers, professional societies, PSOs and policy makers.
The Veterans Administration already maintains an IT hazard database for all its facilities, and Singh, who has worked extensively with the data, says that just having the database isn't enough. "You need a team of people to figure out what really happened," he says. The VA's multidisciplinary team analyzes each incident and figures out how to correct or at least mitigate the problem.
Susan Boisvert of Coverys is enthusiastic about ECRI's efforts and would like to see something similar adopted by malpractice insurers.
"Right now you can't go to a large medical malpractice claims database and ask which [claims] are computer-related, because they're not coded that way," she says. "Big organizations are starting to revise their coding practices so that data is available, but there's a long lead time [on malpractice suits], so we're just starting to see claims with an EHR component."
Ross Koppel, a medical sociologist at the University of Pennsylvania and a leading crusader on health IT hazards, says the Hazard Manager is great as far as it goes, but it will only capture the hazards users are aware of, a number he claims is just the tip of the iceberg.
"Doctors are busy and don't stop to report how lousy or confusing the CPOE [computerized physician order entry] system is," he says. "Most importantly, it's extremely difficult to understand and perceive the difficulties with the HIT as hazards, because the doctors get very little training and just have to use them. It's unclear what's lousy HIT and what's insufficient training. I worry that hazard managers will create a false sense of security."
ONC's SAFER Guides: Starting Point for IT Safety
Once you're sufficiently worried, whether by your own experiences or by those described here, your next stop should be www.healthit.gov/safer/safer-guides.
Developed by Dean Sittig of the University of Texas, the VA's Hardeep Singh and Joan Ash of Oregon Health & Science University, the SAFER guides were released by the ONC in January, as both PDFs and interactive web tools, to help providers diagnose and correct sources of danger in their EHRs. (SAFER stands for "Safety Assurance Factors for EHR Resilience.")
Some of the recommended actions can be executed internally; others involve working with software vendors to make changes and head off problems. "Vendors can't do this alone, and the user community can't do it alone, but we can do it together," Singh says.
The nine guides cover the following topics:
* High-priority practices
* Organizational responsibilities
* Patient identification
* Computerized physician order entry
* Test results review and follow-up
* Clinician communication
* Contingency planning
* System interfaces
* System configuration
Most providers won't have the personnel, time or (perhaps) need to work through all nine guides, but the first two can help establish overall best practices, and the others can be used to address specific problems.
Also of interest: The Electronic Health Record Association's response to the release of the guides. Look at this for clues as to where vendors might push back on some of the guides' recommendations.