
NHS league tables: why thoughtful design is essential

1. Do league tables provide an accurate and objective account of organisational performance?

By design, NHS league tables present highly complex information in a simplified and digestible format. The unenviable challenge lies in ensuring that this simplification - translating the performance of a diverse organisation into a single score and rank - remains both meaningful and accurate.

This year, rankings are based on the NHS Oversight Framework. As a result, their credibility depends heavily on the strength of the framework's methodology. Feedback from trust and integrated care board (ICB) leaders on interim segments - which have been shared with individual organisations privately - indicates that the approach does not yet achieve the right balance. We have heard multiple examples of organisations that are locally regarded as high performing being rated poorly, and of trusts known to be experiencing major challenges receiving strong ratings.

These discrepancies underscore the importance of the government adopting a culture of thoughtful scrutiny when it announces league tables. Politicians, the media and the public should be discouraged from making snap judgments, as these can have serious and lasting consequences for organisations, staff and individual leaders.

To increase the accuracy and objectivity of league tables, DHSC and NHSE must establish a robust and transparent process for gathering feedback. This process should focus on understanding the gap between the experiences of leaders locally and the outputs of the Oversight Framework. Insights from it should inform both immediate adjustments to the metrics and methodology and the development of the 2026/27 framework. While teething issues are worked through, trusts, ICBs and NHSE regional teams should also feel confident that concerns about the factual accuracy of ratings can be escalated in a robust and consistent manner.

Through our engagement with trust leaders, we have identified three issues that risk undermining trust in the government's league tables:

  1. The financial override is too blunt an instrument
  2. The availability of performance data is poor, with variation between and within sectors
  3. Parts of the scoring methodology lack clarity, leading to confusion and a lack of transparency

Financial override

The 2025/26 Oversight Framework includes a financial override that prevents providers in deficit - or receiving deficit support - from being rated above segment 3. While trust and ICB leaders support the short-term focus on financial recovery, many believe that the current binary approach - in deficit or not - is too blunt. It risks affecting a disproportionately large number of organisations, while also failing to account for legitimate, often strategic, reasons that a trust may report a deficit.
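
As we understand it, the override acts as a simple binary cap. The minimal sketch below illustrates that logic; the function and field names are ours, not NHSE's, and it assumes the framework's convention that segment 1 is best and higher numbers are worse:

  # Minimal sketch of the financial override as described above: a binary
  # cap that holds any trust in deficit (or receiving deficit support) at
  # segment 3 or below. Segment 1 is best; names are hypothetical.

  def apply_financial_override(segment: int, in_deficit: bool,
                               receiving_deficit_support: bool) -> int:
      """Return the final segment after applying the deficit cap."""
      if in_deficit or receiving_deficit_support:
          return max(segment, 3)  # cannot be rated better than segment 3
      return segment

  # A previously top-rated trust that reports a deficit drops straight
  # from segment 1 to segment 3, whatever its other scores.
  print(apply_financial_override(1, in_deficit=True,
                                 receiving_deficit_support=False))  # -> 3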

For instance, system-wide deficits are sometimes informally and opaquely redistributed across higher-performing providers. This typically involves agreements between trusts and their ICBs to set more ambitious financial targets, helping to balance the system’s overall financial position. These arrangements place greater financial risk on the trusts involved.

We have heard multiple examples where the financial override has penalised trusts for taking this collaborative approach. Consequently, trusts with previously strong financial performance have been downgraded from top to bottom segments overnight. 

To address this, NHSE should urgently refine the financial override metric. A more nuanced approach is needed - one that can distinguish between organisations genuinely struggling to recover financially and those making strategic, system-focused decisions. The override should reward, not penalise, those displaying the collaborative leadership that the NHS needs more of to successfully recover and reform. 

Data quality

The reliability of any performance rating also depends on the quality of the data behind it. Concerns have already been raised about the accuracy and consistency of the data used to inform league tables and Oversight Framework segmentation, with a risk of variation within and between sectors.

We expect that some of these challenges are short term and can be resolved as trusts adjust to the new performance regime, with support from their regional teams where needed. A degree of short-term discomfort is both expected and healthy if it motivates trusts to strengthen their data collection, analysis and reporting.

However, in some instances, the variation in data quality stems from systemic issues beyond the control of individual providers. Many mental health and community trusts, for instance, lack the infrastructure to produce high-quality performance data. This is a structural issue rooted in long-term underinvestment and the historic use of block contracts rather than payment by results, and it means these trusts often lack data analytics capacity and rely on poorly designed data management systems.

To mitigate these issues, DHSC and NHSE should avoid using metrics where data quality is known to be unreliable due to systemic constraints, while providing trusts with additional support to make the required improvements to their data infrastructure and capabilities. 

Clarity on the scoring methodology

There is confusion among some trust leaders about how metrics and segments have been determined, and this lack of clarity risks unnecessarily undermining trust in the ratings system.

One key issue raised is the use of a sequential scoring methodology for metrics that lack clearly defined standards or benchmarks, i.e. where there is no agreed definition of what good looks like. This approach involves evenly distributing organisations across scores of 1 to 4 based on their position in a ranked list. When performance against a metric is closely clustered across trusts, this approach can cause small variations to result in disproportionately large differences in scores and segment placements. It also means that a provider whose performance remains unchanged may appear to improve if the performance of other providers declines. 
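
To make the mechanics concrete, here is a minimal sketch of that kind of rank-based scoring in Python; the trust names, metric values and even four-way split are hypothetical illustrations of the published description, not NHSE's actual implementation:

  # Hypothetical illustration of sequential (rank-based) scoring:
  # organisations are ranked on a metric and split evenly into four
  # groups, scored 1 (best) to 4 (worst).

  trusts = {
      "Trust A": 92.1,  # performance on some metric, higher is better
      "Trust B": 92.0,
      "Trust C": 91.9,
      "Trust D": 91.7,
  }

  ranked = sorted(trusts, key=trusts.get, reverse=True)
  group_size = len(ranked) / 4
  scores = {t: int(i // group_size) + 1 for i, t in enumerate(ranked)}

  for trust in ranked:
      print(trust, trusts[trust], "-> score", scores[trust])
  # Trust A scores 1 and Trust D scores 4, despite only 0.4 percentage
  # points separating the best and worst performers. A 0.5 point rise
  # by Trust D next year would reorder the list and shift every trust's
  # score, even where the others' performance is unchanged.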

There are also concerns about the application of improvement-based metrics, such as percentage increase measures. These take into account only year-on-year changes, overlooking organisations that have delivered steady, consistent progress over several years while disproportionately rewarding those that have recently improved from a lower baseline.
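
A hypothetical worked example makes the distortion visible; the figures are invented for illustration:

  # Year-on-year percentage increase rewards a sharp rise from a low
  # baseline over years of sustained high performance. Figures invented.

  def pct_increase(previous: float, current: float) -> float:
      """Percentage change against the previous year's value."""
      return (current - previous) / previous * 100

  steady_performer = [84.0, 85.0, 86.0, 86.5]  # consistent gains for years
  recent_improver = [52.0, 51.0, 50.0, 65.0]   # sharp rise from a low base

  print(round(pct_increase(steady_performer[-2], steady_performer[-1]), 1))  # 0.6
  print(round(pct_increase(recent_improver[-2], recent_improver[-1]), 1))    # 30.0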

We welcome the commitment to introduce statistical process control (SPC), which will help demonstrate statistically significant improvements or deteriorations across metrics and offer valuable context. However, we would welcome greater clarity on how individual improvement metrics will account for longer-term trajectories.
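
NHSE has not, to our knowledge, set out which SPC method will be used. Individuals (XmR) charts are a common choice in NHS analytics, and the sketch below assumes that approach purely for illustration, with invented data:

  # Illustrative SPC process limits using an individuals (XmR) chart.
  # Assumption: the framework's actual SPC method is unpublished.

  def xmr_limits(values: list[float]) -> tuple[float, float, float]:
      """Return (mean, lower limit, upper limit) for an XmR chart."""
      mean = sum(values) / len(values)
      moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
      avg_mr = sum(moving_ranges) / len(moving_ranges)
      # 2.66 is the standard XmR constant (3 / 1.128).
      return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

  monthly = [81.2, 82.0, 80.8, 81.5, 82.3, 81.0]  # a trust's recent months
  mean, lower, upper = xmr_limits(monthly)
  print(f"mean {mean:.1f}, limits ({lower:.1f}, {upper:.1f})")  # 81.5, (78.9, 84.0)
  # A new month outside these limits signals a statistically meaningful
  # change rather than routine variation - the context SPC adds.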

To ensure greater transparency and to help trusts understand their ratings and rankings, we urge NHSE to publish the technical guidance that details the specific scoring regime for each metric. This should be accompanied by a mechanism for healthcare leaders to provide feedback as the framework is implemented.

Clearer guidance on how to measure each metric would also help reassure trust leaders who are concerned about inconsistencies in how different organisations interpret and report data. While some variation is inevitable, it is essential that organisations feel they are being assessed consistently if confidence in the process is to be maintained.