The methodologies of hospital ratings systems need to be open and transparent in order to alleviate consumer confusion, as well as to aid hospitals in their quality improvement efforts.

That’s the conclusion of a multi-institution study published in Health Affairs. Researchers wrote in their report that differences across rating systems make it difficult for patients to understand the true quality of a hospital and provide little guidance to hospitals on where they should focus their resources and improvement efforts.

The study analyzed four national rating systems to determine the overlap between high- and low-performing hospitals across the systems. The researchers found the four rating systems didn’t agree on the best or worst hospitals.

“Our team wanted to better understand how many hospitals were ranked as a high performer by one system but as a low performer by another,” said lead author J. Matthew Austin, an assistant professor at the Johns Hopkins Armstrong Institute for Patient Safety and Quality. “The lack of agreement across the four rating systems is a concern for consumers, mainly because the different ratings could provide a conflicting message on where to seek care.”

Austin and his co-authors compared ratings from U.S. News & World Report’s Best Hospitals, Healthgrades America’s 100 Best Hospitals, Leapfrog’s Hospital Safety Score and Consumer Reports’ Health Safety Score. Using data from July 2012 to July 2013, the authors defined high- and low-performing hospitals on each rating. The authors found no hospital was rated as a high performer on all four rating systems, and only 10 percent of the 844 hospitals rated as a high performer by one rating system were rated as a high performer on another system.

The confusion stems from each rating system using its own methods, having a different focus to its ratings, and stressing different measures of performance. Some rating systems evaluate hospitals based on their adherence to best safety practices as designated by accreditation boards and government entities. Others derive their scores from reputation surveys, patients' reported experience of care in the hospital, or a combination of these measures.

“Each of the four rating systems measured a different construct, used its own rating methodology, and weighted each measure differently,” Austin said. “The differences in the measures used and the weights assigned to each measure are likely causes of the discrepancies among hospital ratings.”

To alleviate potential consumer confusion surrounding the ratings, the authors recommend the rating systems agree to report key features of their ratings and make their methodologies fully transparent to the public and to the hospitals being rated.

“Being able to replicate ratings can help hospitals understand where to focus their improvement efforts, and consumers can better grasp the meaning of ‘best’ or ‘safest’ to make more informed decisions about their medical care,” Austin said.

