The quality and trustworthiness of economic statistics have been in the news on both sides of the Atlantic. Many of the issues have emerged since the pandemic, in particular as a result of low survey response rates. This form of data collection remains essential for understanding the labour market.
No one who pays attention to news about the economy will have missed the recent spell of stories about the quality of economic statistics, not just in the UK but also internationally.
From the results of the main survey of the UK labour market being pulled ahead of their scheduled publication in October 2023 to the sacking of the commissioner of a leading US statistical agency in August 2025, there has been a steady stream of stories about economic data quality across the globe.
What are the big challenges of economic data collection, what has happened in the UK with the Labour Force Survey (LFS) and what should be done? These questions are the focus of this article.
How are economic statistics produced?
Most of our economic statistics come from, and have always come from, surveys of individuals, households and businesses. There are good reasons for this – primarily that the sorts of information needed to measure and understand particular phenomena are rarely collected otherwise.
Take a simple question: how many people in the UK are unemployed?
You might think that the answer is the number of people receiving unemployment benefits. This measure is referred to as the ‘claimant count’, and it can be calculated using data held by the government to administer these payments – what’s known as ‘administrative data’.
But what about people who are unemployed but not claiming unemployment benefit? What about those in receipt of unemployment-related benefits because their pay is low? Do we want to include them in the calculation of the unemployed?
And what if the government redefines the eligibility criteria for unemployment benefit: does this mean that people who are no longer eligible now stop being ‘unemployed’?
In practice, we need a measure that is not sensitive to how governments define benefit eligibility. We also want to capture those who are unemployed but not claiming unemployment-related benefits.
This is why the unemployment rate is based on a definition provided by the International Labour Organization (ILO) where the unemployed are:
- Those who are without a job, who have been actively seeking work in the past four weeks and who are available to start work in the next two weeks.
- Those out of work who have found a job and who are waiting to start it in the next two weeks.
This is a broader measure of unemployment than the claimant count, and one that cannot be ascertained from administrative data. We need a survey to learn about people's availability and job search activity.
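The two ILO criteria amount to a simple classification rule applied to each survey respondent. The sketch below illustrates the logic; the field names are hypothetical and simplified, not the actual LFS questionnaire items:

```python
from dataclasses import dataclass

# Hypothetical respondent record: these fields are illustrative only,
# not the wording or structure of the real LFS questionnaire.
@dataclass
class Respondent:
    has_job: bool
    sought_work_past_4_weeks: bool
    available_within_2_weeks: bool
    waiting_to_start_job_within_2_weeks: bool = False

def is_ilo_unemployed(r: Respondent) -> bool:
    """Classify a respondent as ILO-unemployed using the two criteria above."""
    if r.has_job:
        return False
    # Criterion 1: actively sought work recently and available to start soon.
    actively_seeking = r.sought_work_past_4_weeks and r.available_within_2_weeks
    # Criterion 2: has accepted a job that starts within the next two weeks.
    return actively_seeking or r.waiting_to_start_job_within_2_weeks

# A jobseeker meeting criterion 1 counts as unemployed...
print(is_ilo_unemployed(Respondent(True, False, False)))   # employed: False
print(is_ilo_unemployed(Respondent(False, True, True)))    # jobseeker: True
```

Note that someone without a job who is neither searching nor waiting to start one is classed as economically inactive, not unemployed – which is why the survey must ask about search activity and availability, not just job status.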
Countries typically run a survey to produce a measure using this definition of unemployment, providing a metric that is comparable across countries. In the UK, this is called the Labour Force Survey (LFS). It is this survey that has been the focus of much criticism in recent months.
In constructing estimates of other key labour market metrics, similar issues emerge, including when measuring employment, self-employment and hours worked. While administrative data might provide us with insights that are ‘good enough’ for some purposes, they are not – as things stand – able to provide the information necessary to construct measures consistent with the ILO definitions. We still need surveys.
What has happened with the UK’s Labour Force Survey?
A basic requirement for a functioning survey is that the number of questionnaires issued is large and the proportion of people who then complete them is high (that is, the response rate is high). This is especially important given our reliance on surveys to provide key labour market statistics.
A high response rate provides a large amount of data and hence better represents the population we’re trying to understand. The statistical mechanics of these surveys rest on having a large sample size. Intuitively, you can think of a higher response rate as providing more precision around key metrics.
One of the problems with almost all surveys is that response rates are dropping. This is likely to be the result of some combination of ‘survey fatigue’, people unwilling to respond because they distrust government or are uncertain about how the data will be used, people living busier lives, and people being harder to reach than they used to be. There are things that can be done to try to mitigate these issues, but it remains harder and harder to achieve high response rates.
Figure 1 illustrates the number of interviews the Office for National Statistics (ONS) has achieved for the LFS over time, and the big drop starting towards the end of 2021 and continuing through to the same point in 2023. The effect of this decline in responses is that the reliability of the data that the survey produces has also dropped.
Figure 1: Achieved interviews for the LFS, 2015-25
Source: ONS
The ONS produces the LFS by surveying a random sample of addresses from the Royal Mail postcode address file (with some adjustments). Interviewers then visit these households in person to complete the survey for the first time, except in the area of Scotland north of the Caledonian Canal, where telephone interviews are used. Subsequent interviews take place by telephone: households are interviewed five times over a 12-month period, at three-month intervals.
During the pandemic, in response to public health restrictions, the ONS switched to using telephone interviews only. This presented an immediate challenge for the ONS, which had to obtain a telephone number for a household before it could ask them to participate in the survey for the first time. This resulted in a drop in the achieved number of interviews, as Figure 1 shows.
To address this, the ONS doubled the sample of households used from July 2020, arresting the immediate decline in the achieved interview numbers shown in Figure 1. A subsequent series of decisions taken by the ONS then resulted in a more severe decline in the number of interviews than occurred at the onset of the pandemic, including a reduction in, and then elimination of, the sample boost.
In the face of the declining response rate for the LFS – something that was known but perhaps not properly understood by the ONS leadership at an early enough stage – the ONS then tried a range of different strategies, including:
- reinstating the sample boost;
- increasing the incentives that households received to complete the survey from £10 to £50;
- returning to undertaking face-to-face interviews;
- recruiting additional interviewers;
- changing how it weights the survey (more on this below).
These changes have led to an increase in the number of interviews completed, but figures remain below pre-pandemic levels. This is not only a phenomenon in the UK: the decline in survey participation is a global challenge.
Figure 2 shows the response rate for the main US labour market survey – the Current Population Survey (CPS) – which is down from over 90% in 2010 to below 70% today, alongside the response rate for the UK LFS.
Figure 2: Response rate for the US CPS and the UK LFS
Source: ONS and US Bureau of Labor Statistics
Trust in statistics
While further work could undoubtedly be done to lessen the burden of completing the survey, and new routes could be found to reach additional people, the payment is reasonable given the task involved. One area where more could perhaps be done is in building trust and confidence among the public around contributing to official economic statistics.
In this context, developments in the United States are troubling, particularly the president’s decision to sack Erika McEntarfer, the head of one of the country’s main statistics agencies (the Bureau of Labor Statistics, BLS), accusing her of political bias in the production of official statistics.
Trump’s allegations that the ‘job numbers were rigged’ emerged after routine, if large, revisions were made to payroll employment numbers in the United States. As prominent economist David Autor of the Massachusetts Institute of Technology (MIT) put it:
‘Revisions are routine, and tend to be larger when economic conditions are changing rapidly. McEntarfer is a non-partisan economist and BLS' procedures are consistent, credible, and not subject to manipulation. No evidence supports the slander that BLS estimates were “rigged”.’
Unfortunately, many who do not understand the revision process, and who may not have much trust in government, may well believe claims that the numbers are rigged.
Revisions are often the result of more survey data being available. Initial estimates are based on the available data at that point in time, but these are revised as more and more survey responses come in. This is perfectly routine.
Another reason for revisions – and something that has been more evident in the UK – is due to changes in the survey weights that are being applied.
What are we weighting for?
By their nature, survey data are produced using surveys completed by a sample of the population. In order to understand what the survey responses suggest about the population as a whole, every respondent is given a ‘weight’.
These weights, when aggregated, ensure that the survey data match the population data (that is, the number of people in the country and the breakdown by age, sex, region and other characteristics). These weights need to reflect features of the sampling design and what’s known as ‘non-response bias’. The latter has become more important over time.
Non-response bias in the survey is where those households that do not respond to the survey are systematically different from those that do. For example, respondents to the survey may be more likely to be owner-occupiers than renters.
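In its simplest form, correcting for this works by scaling each group of respondents up to its known population total (post-stratification). The toy sketch below uses made-up numbers and is far simpler than the ONS's actual weighting methodology, but shows the core idea:

```python
# Known population totals by housing tenure (made-up numbers for illustration).
population = {"owner": 18_000_000, "renter": 10_000_000}

# Achieved sample: renters respond at a lower rate, so they are
# under-represented relative to their population share.
sample = {"owner": 700, "renter": 300}

# Post-stratification weight: population count / respondent count per group,
# so each respondent "stands in for" that many people.
weights = {group: population[group] / sample[group] for group in population}

# Each renter respondent gets a larger weight than each owner respondent,
# and the weighted totals recover the population exactly.
weighted_total = sum(weights[g] * sample[g] for g in population)
print(weights)
print(weighted_total)  # sums to the 28 million population total
```

When response rates shift differently across groups – as happened for renters during the pandemic – these weights have to be recalculated, which is exactly what drives the reweighting revisions discussed above.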
This bias is not a new concern – certain groups have always been harder to survey than others – but the move to telephone-only surveying added a new dimension. It presents a further challenge that the ONS and other similar statistical agencies need to deal with.
The ONS carried out several reweighting exercises following the pandemic, beyond what it would typically do to align the survey with updated population estimates.
In October 2020, it reweighted to account for housing tenure, given changes in response rates among renters relative to owner-occupiers. In May 2021, it used HM Revenue and Customs (HMRC) data to address identified differences in response rates between UK-born and non-UK-born individuals.
In June 2022, it then made further adjustments based on the HMRC data, and introduced a non-response bias adjustment for Northern Ireland. It then found an error in the reweighting for Northern Ireland, which it updated in late 2024.
Revising the population weights in survey data in light of population changes is a standard part of statistical production, and it is not something that should raise concern. But the number and size of some of the changes have given rise to increasing scrutiny of this feature of the statistical production process.
How do we fix the surveys?
Admin data to the rescue?
Given the problems with survey data described here, it seems reasonable to ask whether we could make greater use of administrative data. For example, this could be done using data from the tax system on how many people are on company payrolls (‘payroll data’), and what they earn, instead of the data from the LFS.
The main objection to doing this is that these data do not align with the official definitions discussed above, and hence cannot replace the LFS estimates.
In assessing total employment, we need to know about both employees and those who are self-employed – but only the former are captured in the payroll data. There will be administrative data on self-employed people, but these are not reported to HMRC for some time. This makes it difficult to track what is happening in the economy in a timely way.
While the LFS estimates were suspended, the ONS itself used changes in two main administrative data sources to ‘roll forward’ estimates of employment (using payroll employment data from HMRC) and unemployment (using data on unemployment claims from the Department for Work and Pensions, DWP). This was a temporary, and far from perfect, fix that was certainly not a panacea for the wider woes of the LFS.
These administrative data are important indicators in their own right and they enable much more detailed analysis, given the far larger sample that they provide. This is particularly the case for more local-level labour market statistics.
Using data from a national survey like the LFS can mean that results at a local level are based on small samples of data, whereas the administrative data (say on payroll employment) should contain all the people in a given area who are paid via a company payroll.
Administrative data also provide a sense-check on the trends emerging from LFS data (as analysts at the Resolution Foundation recently explored).
But administrative data are not able to substitute for the sorts of information that a functioning LFS can provide, not just about the characteristics of workers (their education level, their desire to work more/less, etc.) but about the nature of work (on-the-job training, managerial responsibilities, etc.).
Working with administrative data for statistics production also presents issues. These are large data files that can be challenging to work with; their use can require different skill sets than working with traditional survey data; and they can contain errors and require cleaning to make them usable. All of this requires time and resources.
Given that these data are not able to substitute for headline labour market measures, resourcing the production of robust LFS data must take precedence in order for us to have a clear understanding of developments in the labour market.
A transformed survey?
In anticipation of some of the challenges with engagement around the LFS – and a much wider user need for information on the labour market – the ONS kicked off a project to develop a ‘transformed’ LFS (TLFS).
The TLFS was designed to be completed online by participants, and in response to user feedback, included a broader range of topics. It was put into the field in March 2020 in response to the pandemic. At the time of writing, it remains operational but only in testing mode, with the latest update suggesting it will be used for headline labour market statistics at the end of 2026 at the earliest. In September 2022, the ONS was expecting this ‘go live’ date to be September 2023.
A number of challenges with the TLFS design explain this significant delay. In its earlier form, the survey was far too long and complex, resulting in participants dropping out before completing many of the questions.
This led to the creation of two versions of the survey, a ‘core’ and a ‘plus’, with the former designed to provide core labour market information in an average of 15 minutes. The ‘lessons learnt’ review published in late 2024 summarises some of the wider organisational and institutional problems with how the ONS approached this project.
A transformed Office for National Statistics?
The story of the TLFS is itself a symptom of a broader range of internal challenges at the ONS, as captured in the June 2025 Devereux Review of the organisation’s performance and culture. It summarised that: ‘… most of the well-publicised problems with core economic statistics are the consequence of ONS’s own performance. And that performance is affected by certain cultural issues.’
The review has brought forward a range of changes within the ONS, including the appointment of a number of new senior leaders, aimed at improving its production of economic statistics and its management of the TLFS project.
It remains early days in this new era, and it would be naïve to think that this will fix wider challenges in getting the public to engage with surveys. But it might ensure that the management of the process of producing economic statistics improves significantly.
A further issue in both the UK and the United States is persistent underfunding of statistical agencies, which undermines the quality of statistical production.
Some may argue that we cannot afford to run relatively expensive data collection exercises like the LFS and even the census. But recent experience with UK labour market statistics has demonstrated the central role that these data play in informing the work of the Bank of England, HM Treasury, the Office for Budget Responsibility (OBR) and many other government bodies.
We must properly resource a survey of the labour force if we want to have a full and deep understanding of developments in the labour market.
Where can I find out more?
- Transformed Labour Force Survey – A lessons learnt review: ONS report, December 2024.
- Independent Review by Sir Robert Devereux KCB: ONS report, June 2025.
- Why official UK data are less reliable and what this means: Oxford Economics research briefing, March 2025.
- Britain’s statistics scandal means it cannot answer its most pressing questions: August 2025 article in the Financial Times.
- Labor statistics: August 2025 survey of top US economists by Chicago Booth on whether BLS employment estimates are biased in favour of any particular political party.
- Royal Economic Society warns UK faces crisis in economic statistics: August 2025 press release from the professional association of UK research economists.
- UK gender pay gap understated for past two decades, report finds: August 2025 article in the Financial Times on long-running flaws in the Annual Survey of Hours and Earnings (ASHE).
Who are experts on this question?
- Stuart McIntyre
- Jonathan Wadsworth
- David Autor
- Alex Bryson
- Lindsey Macmillan
- Jonathan Portes