Study: Hospital quality ratings mislead, misclassify performance
Publicly reported hospital quality rating systems offer conflicting results that may only serve to mislead patients.
That runs counter to their intent of helping patients make informed decisions about medical facilities, according to the findings of an analysis, published this week in NEJM Catalyst, by “physician scientists with methodological expertise in healthcare quality measurement.”
In particular, the authors point out that “hospitals rated highly on one publicly reported hospital quality system are often rated poorly on another,” which “provides conflicting information for patients seeking care and for hospitals attempting to use the data to identify real targets for improvement.”
Researchers assessed the strengths and weaknesses of four major public hospital quality rating systems—the Centers for Medicare and Medicaid Services’ Hospital Compare; Healthgrades Top Hospitals; Leapfrog Hospital Safety Grade and Leapfrog Top Hospitals; and U.S. News & World Report Best Hospitals.
While no rating system received an A or an F, researchers reported that the highest grade was a B, earned by U.S. News & World Report, followed by a C for CMS Star Ratings, a C- for Leapfrog and a D+ for Healthgrades.
“We qualitatively agreed that the U.S. News rating system had the least chance of misclassifying hospital performance,” they added, noting that although the Consumer Reports hospital quality rating system was included in their evaluation, they deleted mention of it in the manuscript because it “is no longer conducting or posting its hospital quality evaluations.”
Each rating system had unique weaknesses that could lead to misclassification of hospital performance, the authors found, including the inclusion of flawed measures, the use of proprietary data that are not validated, and questionable methodological decisions.
In addition, they noted that there were “several issues that limited all rating systems,” including “limited data and measures, lack of robust data audits, composite measure development, measuring diverse hospital types together and lack of formal peer review of their methods.”
To address these shortcomings and advance the field of hospital quality measurement, the authors called for “better data subject to robust audits, more meaningful measures, and development of standards and robust peer review to evaluate rating system methodology.”
Specifically, they found fault with all the hospital quality rating systems for relying on administrative data or self-reported data, which the paper describes as having numerous limitations.
“All rely heavily on Medicare claims data, which represents an important part of the population, but using all-payer data would present a more accurate and complete representation of quality,” the authors contend.
While they commented that the incorporation of registry data would be “an ideal alternative to overcome the limitations of administrative data,” the researchers pointed out that “the abstraction required is laborious and expensive, and this results in only a fraction of hospitals participating in most registries, with only a fraction of a hospital’s relevant cases being captured by the registry.”
A better solution, they said, would be “obtaining data directly from the electronic health record to support valid, meaningful quality metrics.” But the authors added that despite all the talk about healthcare IT interoperability in the industry, “little has been translated to meaningful national quality measures.”