When it comes to population health management, IT and decision support experts at hospitals and healthcare systems face a common challenge: how quickly they can access and analyze information-rich patient data.

This problem is exacerbated as patient health, financial and other pertinent information is stored on disparate platforms supporting different facilities or care environments ranging from acute care to ambulatory or long-term venues. Add to that the rate at which hospitals and healthcare systems are merging or announcing affiliate relationships, and there is a compute and storage nightmare that’s growing more significant every day.

The industry may be collecting more data on patients than ever before, but it isn’t doing a good job of getting it into the hands of the people who need it most. Pulling all of this information together and presenting it to the user – whether a physician, hospital administrator or even a patient – in a meaningful way is often a cumbersome, costly and time-consuming job. Hospitals that have been able to do this have spent considerable time and money deploying Health Level 7 (HL7) interfaces and complex data warehouses.

Even then, physicians say they typically have to wait months for a consolidated view of the population health data that can help them target at-risk groups and create treatment plans designed to reduce readmissions, increase reimbursements and improve patient outcomes. Just as population health management requires a significant change in our approach to practicing medicine, it similarly requires a significant change in our approach to data management.

Healthcare IT professionals traditionally have believed that data needed to be pulled from disparate data sources into a centralized location or data warehouse. Only then could it be summarized to help inform doctors’ and administrators’ decision making. However, with patient data growing at exponential rates, and with acquisitions and affiliations multiplying, gathering and storing all patient data in a central location before tapping into it is not a prudent strategy.

A Better Way

By using a well-planned virtualization strategy as part of a federated approach to data management, healthcare organizations can leave information in its source systems yet still deliver it to end users in rationalized, usable formats, without first enacting a complex migration plan.
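A minimal sketch of this federated idea, using Python's standard `sqlite3` module: two illustrative "source systems" (an acute-care store and an ambulatory store, with made-up schemas and patient rows) stay where they are, and a virtual layer attaches both and exposes one unified view instead of copying data into a warehouse.

```python
import os
import sqlite3
import tempfile

# Two "source systems" left in place. The names, schema, and rows here are
# illustrative assumptions, not drawn from any real EHR product.
tmp = tempfile.mkdtemp()
acute = os.path.join(tmp, "acute.db")
ambulatory = os.path.join(tmp, "ambulatory.db")

for path, rows in [(acute, [("P001", "J. Doe", "CHF")]),
                   (ambulatory, [("P002", "A. Smith", "Type 2 diabetes")])]:
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE patients (patient_id TEXT, name TEXT, diagnosis TEXT)")
    con.executemany("INSERT INTO patients VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

# The "virtual layer": one connection attaches both sources and presents a
# single unified view, leaving the data in its source systems.
hub = sqlite3.connect(":memory:")
hub.execute(f"ATTACH DATABASE '{acute}' AS acute")
hub.execute(f"ATTACH DATABASE '{ambulatory}' AS ambulatory")
hub.execute("""
    CREATE TEMP VIEW all_patients AS
    SELECT patient_id, name, diagnosis, 'acute' AS source FROM acute.patients
    UNION ALL
    SELECT patient_id, name, diagnosis, 'ambulatory' AS source
    FROM ambulatory.patients
""")

for row in hub.execute("SELECT * FROM all_patients ORDER BY patient_id"):
    print(row)
# ('P001', 'J. Doe', 'CHF', 'acute')
# ('P002', 'A. Smith', 'Type 2 diabetes', 'ambulatory')
```

A production virtualization platform does far more (query pushdown, caching, schema rationalization across vendors), but the core contract is the same: one query surface, many untouched source systems.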

Typically, patient data has been held “close to the vest” by a small number of people in even very large healthcare organizations. This is why decision support analysts have stacks of requests for information that take weeks or months to fulfill.

Healthcare needs to take a page out of the playbooks of other industries that have successfully tackled similar problems. For example, e-commerce organizations need fast access to similar subsets of data so executives can respond quickly and appropriately to market changes and customer preferences. They have accomplished this through virtualization strategies that enable them to query data across disparate systems in much the same way healthcare organizations now are trying to do.

Using similar strategies, healthcare organizations can reduce the overhead, cost and time-to-value associated with more traditional methods of aggregating, analyzing and storing patient data. In fact, by employing three important steps, healthcare organizations can begin to mimic the success of e-commerce and other data-intensive businesses that have successfully tackled similar issues.

Sequential Approach

First, consider the enterprise when architecting the data virtualization solution, and work with end users to understand their requirements and potential usage patterns. Using this information, healthcare organizations can thoughtfully plan the architecture to ensure short development cycles and ongoing agility.

Second, understand the organization’s data quality standards. If data quality is managed outside of the source systems (in downstream systems, for example), data integrity issues that frustrate users may arise.

Third, involve the organization’s data security team. Because data virtualization can provide more users with a wider array of data, it’s critical to apply the appropriate regulatory and organizational constraints.
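One way the security step can play out is role-based filtering applied at the virtualization layer itself: the underlying record is never modified, but each role sees only the fields it is permitted to see. The roles and field lists below are hypothetical examples for illustration, not a mapping of any actual regulatory requirement.

```python
# Hypothetical role-to-field policy enforced in the virtual layer.
# The role names and permitted fields are illustrative assumptions.
ROLE_FIELDS = {
    "physician": {"patient_id", "name", "diagnosis", "last_visit"},
    "billing":   {"patient_id", "last_visit"},
    "analyst":   {"diagnosis", "last_visit"},  # no direct identifiers
}

def apply_role_filter(record: dict, role: str) -> dict:
    """Return only the fields the given role is allowed to see."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"patient_id": "P001", "name": "J. Doe",
          "diagnosis": "CHF", "last_visit": "2016-03-04"}
print(apply_role_filter(record, "analyst"))
# {'diagnosis': 'CHF', 'last_visit': '2016-03-04'}
```

Because virtualization widens the audience for patient data, pushing constraints like these into the single point of access, rather than into each consuming application, keeps the policy in one place.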

Taking these three steps will enable healthcare IT pros to create a virtualized environment that puts the power of patient data in users’ hands far more quickly, efficiently and cost-effectively than has been the case before.

Data virtualization provides a secure, unified view of clinical and business data through a single point of access. Rather than taking months to build a model to test and query data, then spending time and considerable expense to move the data to a centralized location, virtualization enables healthcare organizations to build connections to disparate systems in days, access the data that’s needed, and thus inform clinical and business decisions with a greater level of agility than previously possible.

Through virtualization, healthcare organizations can reduce the time it takes to deploy population health analytics from months to weeks while saving the healthcare organization as much as 50 to 75 percent of the cost of more traditional data replication and consolidation methods.

Kim Garriott is the Principal Consultant, Healthcare Strategies, for Logicalis Healthcare Solutions, the healthcare-focused arm of Logicalis US, an international IT solutions and managed services vendor.
