Are Contextual Errors Really A Mystery?

Clinicians have to deal with a high volume of patients who are hell-bent, intentionally or not, on flummoxing them every step of the way. So it seems a bit cruel to send in “mystery” patients who are getting paid to trip them up.  

The results of a recent mystery patient study are interesting, if not entirely surprising. But the study unintentionally opens the door to a discussion about the underlying tensions of our brave new digital health world: Who’s in charge, the physician or the data? How far are we willing to push the health care nanny state that leading figures are proposing? And when will we get to the point where we reward physicians for being truly great caregivers, as opposed to rewarding them for being data aggregators?

About the mystery patient study, the results of which were published in the Annals of Internal Medicine: Researchers sent “mystery” patients—actors following one of four patient scripts—into six Chicago-area medical centers to see whether the social and economic “contextual” complications the patients brought up raised any red flags and/or affected treatment decisions.

For example, a 43-year-old man complains about recurrent asthma symptoms while taking low doses of an expensive, brand-name inhaler. During the encounter he mentions that things have been tough since he lost his job. Another example: A woman, overweight and suffering from mild hypertension, is in for a preoperative exam for a hip replacement. She mentions that she’s looking forward to having surgery so she can take better care of her son—an adult child with muscular dystrophy for whom she is the sole caregiver.

In the first example, the researchers deemed it a contextual error when a physician advised the patient to increase the dosage of the current medication, even though his recent job loss might make the drug difficult to afford. In the second example, physicians erred by not raising concerns about how the woman would care for her disabled son during her recovery.

Researchers used audiotapes of the encounters and medical records to calculate how often physicians picked up on the red flags raised by contextual complications and then adjusted their care plans accordingly. They found that 78 percent of the time, treating physicians made “errors” by not picking up on the contextual complications.

This is what we think we know: We’ve clearly established that physicians don’t spend enough time crunching patient data before making their treatment decisions. Now we have research indicating that they also don’t use enough of their physician skills—those healing skills that are the underpinning of their profession and their very souls—to treat patients holistically.

Physicians, in the six to eight minutes they can afford to speak with a patient, truly are damned if they do and damned if they don’t. It’s not enough that they must use every available data resource at their disposal to make a treatment decision based on the numbers. Now they have to be a patient’s confessor, social worker, psychiatrist and big sister (or brother). And it must be done, mind you, in six to eight minutes.

The researchers running the mystery patient study suggested the results reveal a gap in physician training about how to identify and act on contextual issues. To me, it suggests that we as a people need to be trained to act like responsible adults: If I can’t afford a medication, I should have the sense to say “I can’t afford this medication” to the person prescribing it. If I have to care for a disabled adult and am having surgery, the error would be mine for not telling the physician about it, not a shortcoming on the physician’s part for failing to drag that “contextual complication” out of me.

Clinical quality is and seemingly forever will be measured in discrete data: ACE inhibitors prescribed, renal exams, annual mammograms, glucose readings, etc. On a macro level, that probably is the best a society can do: reward caregivers for treating the disease and use clear, concise, coldly analytical data to measure their success.

But how do we reward physicians for treating the patient and not just the disease? Many physicians have warned that applying what boils down to standardized test scores to the medical community will, in many cases, drive caregivers away from their true mission of healing and toward a job akin to medical accounting. How do we measure the “quality” of a physician’s decision to spend extra time with a patient, to listen sympathetically and to help in the best way they can, as opposed to the way that provides the highest reimbursement?

I’m all for health I.T., but I want to make sure I draw a line between advocating technology that assists physicians and technology that owns them. The question I have about research into contextual complications and similar issues is not why physicians didn’t raise red flags, but how we could possibly expect them to. How much can we ask of caregivers in an environment where a heart-to-heart conversation with a patient is nothing but “free text” that can’t be coded, can’t be reported and can’t be measured? Free text doesn’t really have a place in a world of drop-down menus and order sets.
