There is a lack of data collection, quality measures and performance indicators for community services at a national level. This longstanding problem acts as a barrier to strengthening the position of community service providers within the provider sector – and to improving patient care – because it leaves the community sector less well understood and less of a focus for national attention, and has been an underlying reason for not strengthening its contribution to the health and care system. Community services therefore need national investment in an improvement approach to put them on an equal footing with other parts of the NHS provider sector.

There are currently no national quality measures or performance targets for trusts providing community services to meet, although trusts collect and use performance and quality indicators at a local level. Implementing national performance and quality indicators would be a double-edged sword, but trusts agree that overall they would have a positive impact.

The current lack of national quality indicators makes it harder to see the effects of operational and financial pressures on quality, and risks allowing deterioration in the quality of care to go unnoticed, as highlighted in the Kirkup Review (Kirkup, February 2018). Dr Bill Kirkup raised concerns that community services are often underrated by national leaders and require a more strategic approach (Health Service Journal, March 2018). He also flagged that, despite a perception at national level that community services are inherently lower risk than acute services, this is not true – especially as patients with growing levels of acuity are treated in the community. Our survey raises the same concern: trusts commented that quality is becoming more difficult to deliver because of financial constraints, demand pressures, and the legacy of fragmented services and commissioning.

Trusts would welcome a renewed national approach to define what good looks like in community services. CQC has criteria against which it inspects and rates the quality of care delivered by community services. As of 31 July 2017, the majority of community services were rated as providing good (66%) or outstanding (5%) care. The highest number of 'requires improvement' ratings was under the safe key question, and CQC had some concerns about workforce shortages and variation in caseload size (Care Quality Commission, October 2017).

However, this only paints part of the national picture, because there are no national quality metrics to capture and benchmark meaningful data on the value of care delivered by community services. A small number of national data sources, such as the friends and family test, staff survey and workforce statistics, give a rough idea of some aspects of quality in the community sector, but information on the quality of care in community services is much more limited than for other parts of the health and care sector. A recent QualityWatch report pointed out that this lack of insight into quality is "concerning" when national policy aims to shift more care into the community (QualityWatch, November 2017).

Previous attempts to agree a common definition of good quality care in community services have made some progress, but lack national support and implementation. NHS Benchmarking provides a useful tool for trusts providing community services to benchmark their performance against peers across all aspects of service provision, including activity, access, workforce, finance, quality and outcomes. Similarly, NHS Improvement has developed a set of community indicators (a scorecard) that trusts can use to compare their performance with peers on staff and patient experience, responsiveness and effectiveness.

In the meantime, trusts have developed their own quality metrics, indicators and reports covering themes such as staff engagement, patient experience, safety and clinical effectiveness. However, there is variation across the sector in how trusts define their services, what they measure and how they report it. This means it is challenging to build an accurate picture of quality from the ground up. In our survey we asked all respondents how they would rate the quality of community healthcare currently provided in their local area. 51% of respondents rated the quality of care two years ago as 'high' or 'very high', compared with 60% who rate it that way now and 56% who expect it to be in one year's time (figure 17). It seems that trusts feel there is plenty of high-quality care being delivered, but there is no national attempt to raise its profile.

Figure 17


The underlying challenge in all these attempts to create quality metrics is the difficulty of collecting outcomes data in community-based care: "Much of what community services provide involves long-term care that helps to prevent more serious problems… As such, it does not always lend itself to clear short-term clinical outcomes in the same way a defined episode of treatment for an illness might" (The King's Fund, December 2014). Because much community care is about providing long-term support and enhancing quality of life rather than curing people, patient experience measures are generally seen as particularly important in measuring the quality of community services. Similarly, it is harder to define what a safe caseload looks like than a number of beds on a ward: the National Quality Board pointed out that a safe caseload depends on the size of the geographic area, population spread and population needs (National Quality Board, January 2018).

However, it is important that the national bodies persevere with this work and develop a more accurate picture of what high-quality care looks like in the community sector. Having national-level quality metrics and indicators would raise the profile of community services and provide a sufficient evidence base for providers to improve and standardise services. The trusts we interviewed believed that while national quality indicators would not fully demonstrate the value of community services, they would be a step in the right direction. Trusts could use them as a performance improvement tool and as evidence to commissioners of the quality of services, as well as to show the effect of funding squeezes on patient care. Trusts would also welcome a framework to measure, assess and benchmark the quality of care. However, these measures would be best applied across pathways and systems, rather than specifically for the community sector.

In addition, there are no national performance targets on which to assess and benchmark community service providers. This means that community services are not in the spotlight and it is harder to build an evidence base of operational performance and productivity. Many community service providers want to demonstrate productivity gains and eliminate unwarranted variation at scale through the standardisation of services, but are limited by delayed access to national operational efficiency programmes such as Getting it right first time (GIRFT).

The national focus on delayed transfers of care (DTOCs) last winter highlighted the role of the acute hospital sector, but capacity and patient flow significantly affect all parts of the NHS provider sector, and trusts that provide community services have a key role to play. They have been working hard throughout winter and beyond to tackle DTOCs in their own organisations and across local systems, although these whole-system pressures have, by their nature, proved difficult to overcome. For example, some trusts have worked hard to improve how hospital and community teams communicate and organise discharges. This can include building hospital clinicians' confidence in community services by creating joint pathways and protocols between hospitals and community-based settings.

Up to half the beds in some hospitals are now occupied by older people who are medically fit but face delays in getting into residential care or back home. The main reason for the increase in delays was patients waiting for continued non-acute NHS care, rather than social care. Trusts that we interviewed highlighted the importance of seeing care homes as part of the system's overall capacity, as it is hard for community service providers to reduce DTOCs if there is a dearth of domiciliary or social care providers. Community teams such as rapid response services have also been working hard to manage clinically avoidable admissions, although it has proved difficult for trusts to measure these avoided admissions and their impact.

Once again, the trusts we interviewed agreed that, on balance, national performance targets would be better for the community sector. While the national focus may bring improvements and increased recognition, such targets risk unintended consequences such as distorting clinical priorities. Given the range of services and care provided by trusts delivering community health services, any national metrics will need to be carefully designed to be specific and sensitive enough to be used across the sector, while also having value for local organisations.

One possible approach to creating national metrics would be to stratify the population according to risk, calculate the resource needed, according to best practice, to support groups of the population with similar health and care needs, and then turn this into standard metrics. Trusts that we interviewed said these kinds of conversations had become harder since public health functions moved to local authorities.
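
To illustrate the arithmetic behind this approach, the sketch below works through a hypothetical example. The risk bands, cohort sizes, contact rates and clinician capacity are placeholder assumptions for illustration only, not best-practice figures or national benchmarks.

```python
# Purely illustrative sketch of the stratify-then-resource logic described
# above. The risk bands, cohort sizes, contact rates and clinician capacity
# below are hypothetical placeholders, not best-practice benchmarks.

# Step 1: stratify a local population into risk bands
population_by_risk = {
    "low": 180_000,      # occasional or preventative contact only
    "moderate": 15_000,  # long-term conditions needing regular support
    "high": 4_000,       # complex needs, frequent multidisciplinary input
}

# Step 2: assumed contacts per person per year for each band
contacts_per_person = {"low": 1, "moderate": 12, "high": 52}

# Assumed annual capacity of one whole-time-equivalent community clinician
contacts_per_wte = 1_300

# Step 3: turn the assumptions into standard, comparable metrics
total_contacts = sum(
    population_by_risk[band] * contacts_per_person[band]
    for band in population_by_risk
)
wte_required = total_contacts / contacts_per_wte

print(f"Estimated contacts per year: {total_contacts:,}")
print(f"Indicative WTE clinicians required: {wte_required:.0f}")
```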

In summary, the lack of national performance targets and quality measures acts as a barrier to raising the profile of community services. Underpinning this barrier is the very limited national data on activity, quality and investment. These limitations make it hard to quantify changes in demand, activity, staffing, funding and quality, and difficult to demonstrate the value and outcomes of care provided in the community at scale.

There have been attempts at local and national level to address this longstanding dearth of national data. The community sector has previously worked on developing a common, evidence-based clinical effectiveness framework, bespoke to community services, as the primary means of determining effectiveness and efficiency. This work was supported by the national bodies but funded by individual trusts. While individual trusts are using this framework, it is not being used at a national level, and local datasets remain difficult to compare reliably with one another. NHS Benchmarking has also gone some way to addressing this problem, and local data collections are also used by providers and commissioners.

NHS Digital is currently developing a new national community services data set (CSDS) to address the lack of national datasets and provide a sense of patient outcomes and quality indicators. The CSDS is an activity-based data set that extends the existing children and young people's data set to include adults. It aims to provide national standards for data on patients using community services (demographics, diagnoses, care contacts, activities), help trusts benchmark performance at a national level, help CCGs assess the value and performance of services, and track variation and the flow of resources. The CSDS is still in development and still tackling data quality issues, so it will take time to become useful. It needs to develop at pace and cover a much broader range of services than its initial iteration.
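
As a purely illustrative sketch of the kind of standardised, patient-level activity record such a data set might define (the field names below are hypothetical and are not the actual CSDS specification), a consistent national structure could look something like this:

```python
# Purely illustrative sketch: a simplified, hypothetical patient-level record
# for community services activity. Field names are illustrative only and do
# not reflect the actual CSDS specification.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CommunityCareContact:
    patient_pseudo_id: str            # pseudonymised patient identifier
    year_of_birth: int                # demographic detail
    primary_diagnosis_code: str       # e.g. a SNOMED CT or ICD-10 code
    service_type: str                 # e.g. "district nursing", "rapid response"
    contact_date: date                # when the care contact took place
    contact_setting: str              # e.g. "home", "clinic", "care home"
    activity: Optional[str] = None    # activity delivered during the contact

example = CommunityCareContact(
    patient_pseudo_id="P0001",
    year_of_birth=1940,
    primary_diagnosis_code="I50.0",
    service_type="district nursing",
    contact_date=date(2018, 3, 12),
    contact_setting="home",
    activity="wound care",
)
print(example.service_type, example.contact_date)
```

It is this consistency of structure, rather than any particular field, that would allow trusts to benchmark activity and commissioners to track variation and the flow of resources.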

All of the national bodies need to prioritise a national data collection like the CSDS to support the development of quality metrics and performance indicators. In this data-driven age, trusts are frustrated that issues with data collection and validation have delayed community services' access to operational efficiency programmes such as Carter and GIRFT. There is currently no systematic way for community service providers to evaluate, evidence and benchmark their performance at a national level; they need national support to develop a consistent, meaningful evidence base so that they can demonstrate their value and take their place front and centre of the future NHS.

The lack of standardisation across local performance and quality measures, and the limited national data collection, mean that community services have historically been out of the limelight. To strengthen and expand community services, we need a national data collection that commands the confidence of providers and commissioners, makes inroads into the task of securing effective commissioning, and is backed by a supporting framework and national policy ambition.

This data collection then needs to be used in a sustained effort to develop national quality metrics and performance indicators with the provider sector. Given that expanding community services and prevention are national priorities, there needs to be investment in developing a better understanding of the quality of care provided in the community and recognition of the value – on an individual, societal and financial level – of preventative measures.