We clearly can put a man on the moon. We did it six times. We have created vast networks of computers linking and transferring huge amounts of data (such as our enormous, quite complex, and nearly totally reliable financial networks). Wal-Mart, the National Security Agency, and Boeing (with fly-by-wire) have demonstrated the ability to analyze huge amounts of data to solve extremely complex problems in less than the blink of an eye. And most American males can fix almost anything with a roll of duct tape. But when it comes to Accountable Care Organizations (ACOs), are the technological requirements equivalent to a roll of duct tape or NASA 1962 redux? Or somewhere in between?

My colleagues and I recently attended an eye-opening Webinar given by the Advisory Board Company on technology issues with ACOs. We also took a good look at a Computer Sciences Corporation whitepaper on ACOs. We certainly have no argument that a ton of information technology is required to make ACOs scale and deliver on their objective – better care at lower cost.

But the immediate takeaway from the presentation and the whitepaper is that both grossly oversimplify what is necessary to create a large-scale ACO information infrastructure.

It is a little like the Apollo program being broken down into: 1) build rocket; 2) stick three men in the rocket; 3) make sure aircraft carrier is standing by in landing zone; 4) “light the candle.”

In fact, the simplified ACO technical architectures hide several really complex information technology problems that have yet to be solved.

First, there is a notion that in an ACO a patient, not previously registered, can walk in for treatment. The presumption is that the treating physician has the patient’s medical records. The further presumption is that the patient could have a primary doctor, a previous acute-care stay, and maybe even some unconventional care. Embedded is the idea that an almost unlimited number of unforeseen individual patient health information silos (EHRs) can be accessed in real-time. That is quite an assumption.

The financial services industry is built on this concept, which is why I can use an ATM almost anywhere in the world to access my bank account in Punta Gorda, and why I can wire money from that account to a vendor in Mozambique as easily as I pay my mortgage online.

Performing this feat takes a well-defined, formalized information interchange protocol standard, and every information silo has to be engineered to support that protocol. Without a widely adopted interchange standard, and all the health information silos (EHRs) re-engineered to support it, I just don’t see how ACOs can scale or run efficiently.
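To make the point concrete, here is a toy sketch of what an agreed interchange format buys you. The schema, field names, and codes here are all invented for illustration – this is not any real healthcare standard – but the principle holds: any silo that can emit and parse the one agreed format can interoperate, regardless of how it stores records internally.

```python
import json
from dataclasses import dataclass, asdict

# An invented, minimal interchange record -- a stand-in for whatever
# standard the industry would actually have to agree on.
@dataclass
class CareSummary:
    patient_id: str
    source_system: str
    encounters: list  # e.g. [{"date": ..., "diagnosis": ...}, ...]

def to_wire(summary: CareSummary) -> str:
    """Serialize to the agreed wire format (JSON here, for illustration)."""
    return json.dumps(asdict(summary))

def from_wire(payload: str) -> CareSummary:
    """Parse a message from any other silo back into a usable record."""
    return CareSummary(**json.loads(payload))

# One silo emits, another parses -- neither needs to know the
# other's internal database layout.
msg = to_wire(CareSummary("p42", "clinic-ehr",
                          [{"date": "2011-03-01", "diagnosis": "J45"}]))
received = from_wire(msg)
print(received.patient_id, received.encounters[0]["diagnosis"])
```

The hard part, of course, is not the twenty lines of code – it is getting every EHR vendor to agree on, and re-engineer around, the one schema.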

The second big challenge is similar, but slightly different. ACO infrastructure assumes clinical decision support based on a reasonably large “data warehouse” of diagnoses, treatments, outcomes and case studies. Analysis of this data should provide a pretty good idea of which treatments work best in which cases. It should result in fewer tests and better outcomes. But there are several obstacles.

In our first example (information exchange), it was adequate – and preferable – to have a defined method for exchanging bits of information in real time, instead of trying to build one enormous centralized national EHR repository. But in the case of decision support, the data has to be pretty much assembled in one location (a data warehouse). Assembling and consolidating all this data is a very hard problem. The big challenge is that in each EHR the medical information is represented as data, but there are also semantics and context. The EHR that first gathered and stored the data knows the context of the data as stored and the semantic relationships. Combining data from more than one database, each with different contextual meanings and semantic wiring, requires unwinding these structures.
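A deliberately tiny sketch of that unwinding – the record layouts, codes, and crosswalk table are all invented for illustration: two EHR exports record the same fasting glucose test under different local codes and different units, and a human-built mapping is needed before the rows can land in one warehouse.

```python
# Hypothetical exports from two EHRs that encode the same lab test
# differently; every name and code here is invented for illustration.
ehr_a = [{"patient": "p1", "code": "GLU-FAST", "value": 105.0, "unit": "mg/dL"}]
ehr_b = [{"patient": "p2", "test_id": "2345-7", "result": 5.8, "units": "mmol/L"}]

# The crosswalk to a shared vocabulary -- the part someone with
# expertise has to define by hand for every source.
CODE_MAP = {"GLU-FAST": "fasting_glucose", "2345-7": "fasting_glucose"}
MGDL_PER_MMOLL = 18.0  # approximate conversion factor for glucose

def normalize_a(rec):
    return {"patient": rec["patient"], "concept": CODE_MAP[rec["code"]],
            "value_mgdl": float(rec["value"])}

def normalize_b(rec):
    return {"patient": rec["patient"], "concept": CODE_MAP[rec["test_id"]],
            "value_mgdl": rec["result"] * MGDL_PER_MMOLL}

# Only after both sources are resolved to one concept and one unit
# can the rows be loaded and compared in a single warehouse.
warehouse = [normalize_a(r) for r in ehr_a] + [normalize_b(r) for r in ehr_b]
print(warehouse)
```

Multiply this by hundreds of fields, dozens of vendors, and years of locally invented codes, and the scale of the problem starts to show.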

There is plenty of really good software from companies like Informatica, Pentaho, Talend, and IBM that not only defines and manages the contextual and semantic resolution, but can perform any necessary transformations. Such software can even load the data into the analytical data warehouse. However, this is not a mindless, 100 percent automatic operation. It takes time and expertise to define the resolutions and transformations. And the complexity grows geometrically with the number of data sources. The rule of thumb is that it takes twice as much effort to develop these transformations on two sources of data as it does on one. But it is 16 times harder on four.
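One back-of-the-envelope way to see why the effort compounds – this is a rough counting argument, not an exact effort formula: if every pair of data sources can disagree on context and semantics, the number of pairwise reconciliations grows quadratically with the number of sources.

```python
from math import comb

def pairwise_mappings(n_sources: int) -> int:
    # Distinct source pairs whose context and semantics may need
    # to be reconciled against each other: n choose 2.
    return comb(n_sources, 2)

for n in (2, 4, 8, 16):
    print(n, "sources ->", pairwise_mappings(n), "pairwise mappings")
```

Sixteen sources already means 120 potential source-pair disagreements to resolve – before anyone writes a single transformation.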

Finally, in any decision support environment the tightness of the feedback loop is critical to driving accuracy. The two best examples are Wal-Mart’s pricing and inventory management system and the flight controls on a modern airplane. Both are extremely good and give the appearance of predictive capabilities, but the perceived accuracy is almost entirely dependent on instantaneous feedback. How short feedback loops get implemented in a clinical setting is hard to imagine. iPhones and other very inexpensive wireless diagnostic devices will help, but we are a long way from the Star Trek sickbay.
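A toy simulation, with entirely hypothetical numbers, of why latency matters: a tracker that only sees delayed measurements of a drifting quantity lags the truth by roughly drift times delay, while instantaneous feedback keeps the error at zero.

```python
def avg_tracking_error(delay: int, steps: int = 100, drift: float = 1.0) -> float:
    """Track a drifting value using measurements that arrive `delay` steps late."""
    target, estimate = 0.0, 0.0
    measurements = []
    total_error = 0.0
    for _ in range(steps):
        target += drift                          # the real world keeps moving
        measurements.append(target)
        if len(measurements) > delay:
            estimate = measurements[-1 - delay]  # newest measurement visible to us
        total_error += abs(target - estimate)
    return total_error / steps

print(avg_tracking_error(delay=0))   # instantaneous feedback: error is 0.0
print(avg_tracking_error(delay=10))  # stale feedback: error near drift * delay
```

Wal-Mart’s registers and a jet’s sensors report in (near) real time; a clinical outcome may not be observable for months or years, which is the whole problem.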

Other ways of using analytics to predict results are more imprecise. Statistical analysis is just that – based on probability. And this raises our final concern: if a clinical decision support system is 99.9 percent accurate in determining when NOT to do an MRI to detect a brain tumor, and the doc just saw the 1,000th patient, who gets sued when the patient dies of the condition after not receiving the test?
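The arithmetic behind that worry is quick to check (treating each decision as independent, which is a simplifying assumption): even at 99.9 percent accuracy, the chance of at least one wrongly skipped test somewhere in 1,000 patients is about 63 percent.

```python
# Probability of at least one error across n independent decisions,
# given a per-decision accuracy of 99.9 percent.
p_correct = 0.999
n_patients = 1000

p_at_least_one_miss = 1 - p_correct ** n_patients
print(f"{p_at_least_one_miss:.1%}")  # prints 63.2%
```

In other words, at scale the "rare" error is close to a statistical certainty – which is exactly why the liability question matters.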

I can envision clinical decision support eventually functioning as a tool to help educate and discover, but if it is going to be used at the point of care to compensate doctors to NOT do tests and procedures, we need to make sure we know what we are getting into. The first thing is some sort of blanket tort immunity.

Using clinical decision support systems as an intergalactic online Physicians’ Desk Reference, or even a super-duper Harrison’s Principles of Internal Medicine, is fine. But I think we are a long way from computer medicine, or even from expecting information analytics to be accurate enough to use as a tool to influence doctors to reduce tests under some sort of compensation arrangement.

Rob Tholemeier is a research analyst for Crosstree Capital Management in Tampa, Fla., covering the health I.T. industry. He has over 25 years’ experience as an information technology investor, research analyst, investment banker and consultant, after beginning his career as a hardware engineer and designer.

 
