The Compass

Misalignment in Healthcare Benchmarking and Performance

Written by Sanjula Jain, Ph.D. | July 28, 2024


Last week, U.S. News & World Report released its 35th annual hospital rankings.1 While these rankings remain the most prominent in the healthcare industry, their methodology has been scrutinized in recent years.2 Certain institutions have opted out of the rankings (e.g., Penn Medicine), and government officials have launched investigations into questionable practices by U.S. News (e.g., receiving revenue from the hospitals it ranks).3,4 In light of this pushback, U.S. News changed its methodology to rely less on subjective data, to incorporate more outpatient and Medicare Advantage data and to name the “best” hospitals in aggregate rather than in an ordinal list, but concerns still persist.

While consumers are the intended audience for the U.S. News & World Report rankings, the reality is that health economy executives benchmark their own performance based on these rankings.

In our conversations with clients, we are routinely asked how their organization compares to top-rated hospitals, as defined by U.S. News & World Report, such as Mayo Clinic or Cedars-Sinai. Typically, the goal is to learn from these high-performing peers and elevate their own performance. The problem is that, regardless of adjustments made to ranking methodologies, hospitals continue to compare themselves against organizations that are fundamentally different with respect to the most important performance variable of all: market characteristics (e.g., competitive dynamics, population demographics, payer networks).

The Current Benchmarking Landscape Lacks a Holistic View

Benchmarking is defined as the process of measuring an organization’s performance against that of comparable organizations, with the goal of identifying internal opportunities for improvement.5 Health economy executives must correctly identify their “true peers,” since benchmarking against an aspirational peer is ineffective for performance improvement.

Yet, benchmarking methods within healthcare have seen little evolution over time. Historically, traditional hospital benchmarking has not equipped health economy stakeholders with the ability to identify relevant hospital peers. The existing benchmarking resources, which rely primarily on quality measures coupled with subjective criteria, have received criticism from both clinicians and academics, with one group of researchers citing prevalent issues across lists, including limited data, a lack of data auditing procedures and varying methods for compiling and weighting measures.6

Existing hospital rankings and ratings provide ordinal scores or ordered lists (i.e., best to worst) based in part on a variety of quality-centric measures, including HCAHPS, 30-day risk-adjusted mortality rate and readmission rates. Over time, the “best” or “top” hospital lists have become an element in strategic planning despite being designed for consumer use. The U.S. News & World Report rankings aim to help consumers understand the “best” place to receive certain types of healthcare services, while Leapfrog Group scores hospitals on patient safety (Figure 1).7 Healthgrades provides a review of clinical outcomes across multiple conditions to identify the hospitals with the “best” outcomes.8 While CMS Care Compare is intended to educate patients and provide consumer-oriented scores, it is also used to incentivize performance, with Federal reimbursement levels (i.e., Medicare, Medicaid) subject to change based on a hospital’s rating score.

The current ratings and rankings lack comparative elements, which leads hospitals, health systems and other health economy stakeholders to make arbitrary and incomplete parallels between a particular hospital and some of the nation’s “top” hospitals. Hospitals do not know how dissimilar they are to some of the nation’s top hospitals on different metrics, nor do they know which hospitals they most resemble.

Moreover, the well-documented unaffordability of U.S. healthcare is a critical part of the longstanding “Triple Aim,” but none of the popularized benchmarking methodologies account for measures related to cost of care in tandem with quality (i.e., value). Do some hospitals have comparable quality but starkly different prices? Absolutely. There is no observed correlation between price and quality in healthcare services at the national level, as exemplified by the comparison of negotiated rates and 30-day mortality rates for COPD at acute care hospitals in Chicago (Figure 2).

Additionally, consider Cleveland Clinic. Within its true peer group of hospitals, as determined mathematically using Trilliant Health’s evidence-based benchmarking tool, the SimilarityIndex™ | Hospitals, Cleveland Clinic’s quality score is 55.29 and its average price for a hip and knee replacement is $56,756. Yet, within the same peer group, Penn Medicine, also known as the Hospital of the University of Pennsylvania, and Vanderbilt University Medical Center have not only higher quality scores, at 62.55 and 71.18, respectively, but also lower average prices for the same procedure (Figure 3).
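As a loose illustration of what mathematically determined peer groups can mean in practice, the sketch below identifies a hospital’s nearest neighbors in standardized feature space. This is a hypothetical toy example, not the actual SimilarityIndex™ methodology; the hospital names, feature choices and values are all invented for illustration.

```python
from math import sqrt

# Hypothetical data: name -> (competitors in market,
#                             median household income $k,
#                             commercial payer mix %)
hospitals = {
    "Hospital A": (3, 55, 40),
    "Hospital B": (4, 58, 42),
    "Hospital C": (12, 95, 65),
    "Hospital D": (11, 90, 60),
    "Hospital E": (5, 60, 45),
}

def zscore_table(data):
    """Standardize each feature column to mean 0, std 1 so no
    single feature dominates the distance calculation."""
    names = list(data)
    cols = list(zip(*data.values()))
    scaled = []
    for col in cols:
        mean = sum(col) / len(col)
        std = sqrt(sum((x - mean) ** 2 for x in col) / len(col)) or 1.0
        scaled.append([(x - mean) / std for x in col])
    return {n: tuple(row) for n, row in zip(names, zip(*scaled))}

def true_peers(target, data, k=2):
    """Return the k hospitals closest to `target` by Euclidean
    distance in standardized feature space."""
    z = zscore_table(data)
    t = z[target]
    dist = lambda v: sqrt(sum((a - b) ** 2 for a, b in zip(t, v)))
    ranked = sorted((dist(v), n) for n, v in z.items() if n != target)
    return [n for _, n in ranked[:k]]

print(true_peers("Hospital C", hospitals))  # → ['Hospital D', 'Hospital E']
```

In this toy setup, a hospital in a dense, high-income, commercially insured market is matched with the market that most resembles it, not with whichever hospital tops a national ordinal list.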

All existing hospital benchmarking sources exclusively compare facilities nationally, rather than identifying peer groups. For hospital executives, understanding how their organization “stacks up” nationally is often interesting, rarely important and never actionable without objective and relevant benchmark hospitals.

Insight into a broader set of measures across comparable organizations is a critical component of improving performance in demonstrable and material ways.


Thanks to Katie Patton for her research support.