Data integration is an important aspect of health data management. However, it’s often overlooked as a core health care I.T. strategy, largely because we’re used to treating data integration as a series of tactical one-off projects, using whatever mechanisms happen to solve the problem at hand.

Many providers have a bit of a mess on their hands when it comes to extracting and externalizing key information. There is no common integration approach or technology, little or no semantic understanding of the data, and often no clear ownership, security, or governance.

I’ve felt strongly about this for years. Indeed, I’ve written three books on application and data integration, including “Enterprise Application Integration,” back in the 1990s. The idea is to put more architectural thinking into how we manage data movement within and between systems, data stores, and humans.

In the world of health care I.T., there is a focus on portals. These portals are built to meet meaningful use certification criteria 170.304(h) and 170.306(d), which require that patients be given the ability to access clinical summaries, as well as their health information. If the portals are well built, they facilitate communication between patients and doctors using a common set of data, and they put the data into a more understandable context for the doctor.

I touched on the basics of patient and physician portal development a few months ago in the posts “Physician Portal Development, from the Data Up to the Human” and “Leveraging Public Clouds for Patient Portal Development.”

Moreover, well-built portals increase the amount of meaningful data available to patients and physicians. This includes leveraging analytics built around local and remote big data systems, which provide the ability to validate diagnostics and treatments against massive amounts of historical treatment and outcome data.

If done correctly, this all leads to patients and physicians working with near-perfect information. However, it’s a long road from where we are now. Perhaps it’s time we got started.

Step 1:  Understand the Data.

The first step in this process is to gain an understanding of most of the managed data that’s relevant for use within the portals. Typically this is a mix of structured and unstructured data, as well as data that’s locked in proprietary clinical systems or other packaged systems that drive much of your automation. Each source has its own security and governance layers around the information, which must be understood as well.

Items that should be understood include semantics, structure, source, owners, and dependencies. The output from this step can vary, but it typically includes a metadata list; a sketch of one such entry follows.
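To make that concrete, here is a minimal Python sketch of the kind of metadata record this step might produce. The field names and example values (such as the “LabSystemX” source system) are illustrative assumptions, not a standard.

```python
# A sketch of one entry in the Step 1 metadata list. Field names and the
# example values (e.g., "LabSystemX") are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataElement:
    name: str                 # canonical element name
    semantics: str            # what the element means, in plain language
    structure: str            # type/format of the underlying data
    source_system: str        # system of record
    owner: str                # party accountable for the data
    sensitivity: str          # security/governance classification
    dependencies: List[str] = field(default_factory=list)

ldl_result = DataElement(
    name="ldl_cholesterol",
    semantics="Most recent LDL cholesterol lab result, in mg/dL",
    structure="numeric, sourced from an HL7 v2 OBX segment",
    source_system="LabSystemX",
    owner="Laboratory Services",
    sensitivity="PHI - restricted",
    dependencies=["patient_id", "lab_order_id"],
)
```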

Step 2:  Logically Group the Data.

Once the data is understood, it’s good practice to group it logically, typically around entities such as patient, treatment, research, and diagnostics. This is not a physical grouping, so a single group may span many source systems and data stores.

We’re going to use this logical grouping to better define the data around use and purpose, which makes security and governance easier to establish.
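As a rough sketch of what such a grouping might look like, here is a logical map in which a single entity spans several physical systems. All system, store, and policy names are hypothetical.

```python
# Step 2 sketch: each entry maps a logical element to a hypothetical
# (source system, physical store) pair. Note that the "patient" group
# spans three different systems.
logical_groups = {
    "patient": {
        "demographics": ("RegistrationDB", "patients"),
        "allergies":    ("ClinicalSystemA", "allergy_list"),
        "lab_results":  ("LabSystemX", "lab_results"),
    },
    "treatment": {
        "medications":  ("PharmacySystem", "active_meds"),
        "procedures":   ("ClinicalSystemA", "procedure_log"),
    },
}

# Because groups reflect use and purpose, security and governance policy
# can attach per group rather than per physical system.
access_policy = {
    "patient":   {"patient_portal": "read", "physician_portal": "read"},
    "treatment": {"patient_portal": "read", "physician_portal": "read/write"},
}
```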

Step 3: Define Interfaces and Access Approaches.

In this step we define the paths to data access: the interfaces we must leverage to consume the data, or even write back to it. In the world of integration, these interfaces vary from very well-defined APIs or services to very primitive mechanisms.

We also define how we will approach the data interface, or the method of consumption, as well as the interface mechanisms, including the underlying technology we plan to leverage.
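Here is a minimal sketch of what those interface definitions could look like, assuming a common Python access contract placed in front of two very different mechanisms. The class names, the extract format, and the idea of an in-memory sample file are all assumptions for illustration.

```python
# Step 3 sketch: one common interface, two very different mechanisms.
import csv
import io
from abc import ABC, abstractmethod

class DataInterface(ABC):
    """Common contract for consuming data, however it is physically stored."""
    @abstractmethod
    def read(self, element: str, patient_id: str) -> dict: ...

class RestApiInterface(DataInterface):
    """Well-defined mechanism: the source system exposes a REST API."""
    def read(self, element, patient_id):
        # A real implementation would issue an HTTP GET against the source
        # system's API here and map the response to a common format.
        raise NotImplementedError("wire up the source system's API")

class FlatFileInterface(DataInterface):
    """Primitive mechanism: a nightly batch extract, simulated in memory."""
    EXTRACT = "patient_id,ldl_cholesterol\nP001,128\n"
    def read(self, element, patient_id):
        for row in csv.DictReader(io.StringIO(self.EXTRACT)):
            if row["patient_id"] == patient_id:
                return {element: row.get(element)}
        return {}

print(FlatFileInterface().read("ldl_cholesterol", "P001"))
# -> {'ldl_cholesterol': '128'}
```

The payoff of this step is that portal code depends only on the common contract, so a primitive flat-file feed can later be swapped for a proper API without touching the consumers.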

Step 4:  Define the Analytics Services. 

Once we understand what the data is, where it is, and how to access it, and have the logical groupings in place, we can define the analytics, which are typically externalized as analytics services. For the purposes of portals, these are complex abstractions of the data around core health care analytics, including predictive analytics that leverage most, and sometimes all, of the data identified in steps 1, 2, and 3.

These are typically analytics services (usually Web services) that front-end heavy analytical processes: for instance, the ability to determine a patient’s heart attack risk by analyzing his or her own data against historical diagnostics and outcomes data. They might also suggest ways to alter those risks, including the ability for the portal to monitor and manage exercise and nutrition regimens, perhaps even gathering data from telemetry devices such as heart rate and calorie consumption monitors.
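As a sketch only: the small Web service below front-ends a placeholder risk calculation, standing in for the heavy analytical process. The scoring thresholds are illustrative assumptions, not clinical guidance, and a real back end would run the patient’s data against large historical outcome sets.

```python
# Step 4 sketch: a Web service front-ending an analytical process. The
# scoring logic is a naive placeholder for the real analytics back end.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def heart_attack_risk(patient: dict) -> float:
    """Placeholder score in [0, 1]; the thresholds are illustrative only."""
    score = 0.0
    if patient.get("ldl_cholesterol", 0) > 160:
        score += 0.3
    if patient.get("smoker"):
        score += 0.3
    if patient.get("age", 0) > 55:
        score += 0.2
    return min(score, 1.0)

class RiskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        patient = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"heart_attack_risk": heart_attack_risk(patient)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # POST {"ldl_cholesterol": 172, "smoker": true, "age": 61} to
    # http://localhost:8080/ to receive a JSON risk score back.
    HTTPServer(("localhost", 8080), RiskHandler).serve_forever()
```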

As you may have guessed, a great deal of thinking should go into this step. In large part, the analytics placed around the data determine the value of the information externalized to patients and physicians.

Next month we’ll continue on to the remaining steps.

David Linthicum is an SVP with Cloud Technology Partners, a cloud computing consulting and advisory firm. David’s latest book is "Cloud Computing and SOA Convergence in Your Enterprise, a Step-by-Step Approach." His Web site is www.davidlinthicum.com/ 

 
