
#economicsfest: What has coronavirus taught us about radical uncertainty?

Covid-19 has revealed how important access to key information is when facing an economic and public health shock of this magnitude. Understanding the role of uncertainty and the limitations of theory is critical in designing policies to navigate the challenges ahead.

Even before the arrival of Covid-19 in early 2020, the usefulness of models and forecasts for policy-making had been widely debated by economists. The failure of policy-makers, financial experts and economic researchers to foresee the global financial crisis of 2008/09 and the subsequent recession undermined public confidence in economic thinking and raised many questions for the discipline.

The impact of the pandemic has brought this issue into sharper focus by demonstrating that the challenges of uncertainty for forecasting extend beyond the ‘dismal science’ to all scientific research. Policy-making seems to be vulnerable to a wider problem: an over-reliance on quantifiable predictions.

With these issues in mind, I attended this year’s Bristol Festival of Economics online event ‘Radical Uncertainty: Decision-Making for an Unknowable Future’, with John Kay and Mervyn King, hosted by Lizzy Burden.

Event recording available here.

Ultimately, the problem is not simply that economists – and scientists more generally – are using inaccurate models or are somehow failing in their methodologies. The problem also lies in the expectations of influential decision-makers, and of society more broadly, that complex issues should be quantifiably predictable and straightforward to forecast.

Should uncertainty be quantified?

Risk and uncertainty are two very different concepts. Risk is the quantifiable probability of conceivable outcomes, whereas uncertainty describes situations in which the likelihood of outcomes is largely unknown and unquantifiable. The distinction is subtle but important, and it has suffered from a gradual drift towards conflation within economics and other disciplines.

Within economics, there is a tendency to try to assign probabilities to every variable and outcome, and this has created a general propensity to ‘over-quantify’. Indeed, much of modern financial economics and macroeconomics is based on this over-quantification, with parameters that are unknown (and, in some cases, unknowable) being allocated values based on ‘reasonable assumptions’. This has led to overconfidence in economists’ ability to know the values of what were traditionally known as ‘fuzzy’ parameters, as well as an assumption that accurate predictions can be made about anything.
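To see how fragile such ‘reasonable assumptions’ can be, consider a minimal sketch in Python (the scenarios, payoffs and probabilities below are entirely hypothetical) contrasting risk, where the distribution is known, with uncertainty, where any probabilities we assign are guesses:

```python
# Risk: the probability distribution over outcomes is known, so an
# expected value is meaningful. A fair die is the textbook case.
die_outcomes = [1, 2, 3, 4, 5, 6]
expected_payoff = sum(outcome * (1 / 6) for outcome in die_outcomes)
print(f"Risk (known distribution): expected payoff = {expected_payoff:.2f}")

# Uncertainty: the outcomes are conceivable, but their probabilities are
# not known. Assigning weights anyway (the 'reasonable assumptions' the
# article describes) makes the answer depend on numbers nobody knows.
# The scenario names and values here are entirely hypothetical.
scenario_values = {"mild recession": -2.0, "deep recession": -8.0, "recovery": 3.0}
for assumed_probs in [(0.5, 0.1, 0.4), (0.2, 0.5, 0.3)]:
    ev = sum(p * v for p, v in zip(assumed_probs, scenario_values.values()))
    print(f"Assumed probabilities {assumed_probs}: 'expected' GDP change = {ev:+.1f}%")
```

The die’s expected payoff is well defined; the ‘expected’ GDP change is not, since it merely echoes whichever probabilities were assumed.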

Over-reliance on quantification of uncertainties can have serious real-world consequences, as the Covid-19 pandemic has shown. Ever since the virus first emerged and started to spread, governments (and the expert panels they rely on for policy information) have worked to create predictions about the progress of the pandemic based on parameters that were not known from the beginning. In some cases, these metrics remain unknown.

Related question: Risk in the time of Covid-19: what do we know and not know?

In many instances, governments used data to try to plot the future course of the outbreaks in their countries before crucial information about the length of the incubation period, the timeline of infectiousness and the rate of contagion was known. What Covid-19 revealed was just how important access to key information is when facing an economic and public health shock of this magnitude.
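To illustrate why these unknowns matter so much, here is a toy SIR-style epidemic simulation in Python (a deliberately simplified sketch, not any model actually used by governments or their advisers). The runs differ only in the assumed reproduction number, R0, a parameter that was unknown early in the pandemic:

```python
# Toy SIR model: an illustrative sketch only. It shows how sensitive
# projections are to the reproduction number R0 when its true value
# is not yet known.

def peak_infected_share(r0: float, days: int = 365,
                        infectious_days: float = 10.0) -> float:
    """Peak share of the population infected at once, via simple daily steps."""
    gamma = 1.0 / infectious_days      # daily recovery rate
    beta = r0 * gamma                  # daily transmission rate implied by R0
    s, i = 0.999, 0.001                # susceptible and infected shares
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

# Identical model and starting point; only the assumed R0 differs.
for r0 in [1.1, 1.5, 2.0, 3.0]:
    print(f"Assumed R0 = {r0:.1f}: peak infected share = {peak_infected_share(r0):.1%}")
```

The projected peak ranges from well under 1% of the population to roughly 30%, even though the model and the starting conditions never change.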

Can economists model the real world accurately?

Central to this problem of quantifying uncertainty and imposing probabilities is an over-reliance on modelling for policy. The dominance of modelling in itself is not a bad thing for the policy world. Models help economists to understand what the data are telling them, and how different parts of a system interact. Simple models can often help decision-makers to determine what does and does not matter in the context of understanding relationships within data.

For some, problems emerge when the models begin to take over the process. When policy-makers see a conflict between the theoretical model and the empirical data, there is a risk that they will ‘decide it is the world that is wrong’, rather than revisiting the assumptions within the model to discern what may be missing.

This occurred in the wake of the 2008/09 global financial crisis, as a conflict emerged between those who wanted to broaden the economic modelling toolkit and those who resisted. Because of this model-centric view of the real world, there is a tendency for some researchers to disregard the idea that there are other moving pieces – pieces that are frequently abstracted away in their models. Uncertainty is itself a damaging force economically, and models that fail to factor it in when informing decisions do little to address the problem.

Related question: Why is uncertainty so damaging for the economy?

In economics, and policy-making more generally, models are tools within a larger repertoire, each chosen for a specific purpose. As John Kay and Mervyn King highlighted, the model should fit the question. Each model provides different insights into different questions, and economists should make use of a variety of models to generate well-rounded answers. For example, normal measures of inflation may not be applicable in a situation like lockdown, where governments ask citizens to ‘stay at home’ to curb the spread of the virus.

Related question: How can we measure consumer price inflation in a lockdown?
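To make the inflation example concrete, the following sketch uses made-up spending weights and price changes (hypothetical figures, not official statistics) to show how the same price movements imply different inflation rates under pre-lockdown and lockdown spending baskets:

```python
# Illustrative only: the categories, weights and price changes are
# hypothetical, not actual ONS data. A fixed-basket (Laspeyres-style)
# index weights category price changes by a reference spending pattern;
# if lockdown shifts that pattern, measured inflation changes too.

price_changes = {"transport": -0.05, "restaurants": 0.00,
                 "groceries": 0.04, "housing": 0.01}
pre_lockdown_weights = {"transport": 0.20, "restaurants": 0.15,
                        "groceries": 0.25, "housing": 0.40}
lockdown_weights = {"transport": 0.05, "restaurants": 0.02,
                    "groceries": 0.43, "housing": 0.50}

def weighted_inflation(weights: dict, changes: dict) -> float:
    """Weighted average of category price changes (weights sum to one)."""
    return sum(weights[category] * changes[category] for category in weights)

print(f"Inflation, pre-lockdown basket: {weighted_inflation(pre_lockdown_weights, price_changes):+.2%}")
print(f"Inflation, lockdown basket:     {weighted_inflation(lockdown_weights, price_changes):+.2%}")
```

With these made-up numbers, the pre-lockdown basket registers inflation of about +0.4%, while the lockdown basket registers nearly +2%: the price data are identical, but the old weights no longer reflect what people actually buy.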

But this still leaves the question of making policy. Models provide insight into relationships within data, but they are not necessarily good predictors for policy purposes. Frequently, the decisions that confront policy-makers are unique, never previously encountered, and require novel approaches and solutions.

Economists, and the policy-makers they inform, are therefore under pressure to develop flexible approaches and solutions. This is where models can be at their most useful: in discerning strategies from complex data. By revealing the relationships between elements of the data and helping researchers navigate a complex information environment, models allow economists to work through these problems and develop solutions and policy recommendations.

How useful are forecasts?

A key message from the Festival event was that embracing uncertainty does not mean that we should no longer make predictions; nor does it mean that we must accept an infinite number of possible outcomes. Rather, it is about accepting the limits of our ability to predict. Economists can be aware that something is likely to occur without making hard probabilistic predictions about its occurrence in a given year. This knowledge of likelihood is still a useful tool for informing policy.

As John Kay and Mervyn King pointed out, the Covid-19 pandemic provides an example of this. Public health specialists and epidemiologists knew that a global pandemic could develop from a previously unknown virus at some point. This information allowed them to prepare and plan for contingency measures. But knowing the precise year, or even the probability of it happening in a given year, was unnecessary for this preparation.

There remains a concern that the fixation on case numbers, deaths and infection projections has clouded policy-makers’ ability to discuss which parameters matter for those projections. When it comes to controlling the spread of the coronavirus, discerning the key factors that will influence the future trajectory of these outcomes should, according to some commentators, be at the centre of policy discussions.

Whether in the context of projections of future growth, discussions of interest rate forecasts or even the potential effects of climate change, the limitations of forecasts and projections are an important caveat to any conclusion. A crucial feature of economic and other scientific forecasts is not the probability of a given outcome, but the investigation into the roots of the projection and the questions associated with it.

Embracing this range of possibilities and looking deeper into what influences the trajectories within that range could also help economists and policy-makers to move beyond the constraints that they impose on themselves by seeking to quantify the unquantifiable. The fundamental point, as repeated so often by John Kay and Mervyn King during the discussion, is understanding rather than knowing: ‘what’s going on here?’

Where can I find out more?

  • Radical Uncertainty: Decision-making for an unknowable future: This book by John Kay and Mervyn King draws on biography, history, mathematics, economics and philosophy to highlight the most successful – and most short-sighted – methods of dealing with an unknowable future.
  • ‘Economic uncertainty before and during the Covid-19 pandemic’: Bank of England working paper, evaluating several economic uncertainty indicators for the US and UK before and during the Covid-19 pandemic.
  • Economic uncertainty in the wake of the Covid-19 pandemic: VoxEU column examining the effectiveness of measures of economic uncertainty derived from statistical models.
  • Your Covid-19 Risk: A tool, developed by a large team of experts from all over the world, to help estimate an individual’s risk of contracting and spreading Covid-19.

Author: Conrad Copeland, University of Bristol