Questions and answers about coronavirus and the UK economy

Can textual analysis be used to track the economy during the pandemic?

Newspaper articles and social media posts often feature discussion of economic conditions and economic indicators. During the coronavirus crisis, can textual analysis be used to turn these qualitative data into a numerical measure to help us to understand the ‘real-time’ health of the economy?

The unparalleled speed and global nature of the Covid-19 crisis are forcing economic policy-makers – governments and central bankers – to formulate emergency policy responses in a timeframe of weeks and sometimes days. To develop appropriate policies, they require timely estimates of the state of the economy. But conventional economic data in the UK, including the indices that cover services (such as Hotels and Restaurants) or production (such as Manufacturing), are published with a substantial delay. For example, the first post-lockdown estimate of services output for the month of April will not arrive until mid-June – a delay of around six weeks.

To address this gap, policy-makers use estimates that are more readily available than official statistics. A growing area emphasises the use of words – such as newspaper articles or social media posts – to inform these estimates. How successful are such attempts?

What is nowcasting?

Most people have heard of forecasting – in the macroeconomic policy context, trying to predict how the economy will develop. To cope with the lack of current information, economists also estimate the present state of the economy before official data are released – a practice referred to as ‘nowcasting’. To do this, they use indicator variables that are available at a higher frequency, such as surveys of consumer or business attitudes, credit measures and asset prices.

Inserting these additional variables, alongside conventional macroeconomic data, into a suite of statistical models (surveyed in Banbura et al, 2013) yields an early estimate of the current state of the economy. Although performance varies from case to case, such models are hard to beat at tracking the current state of the economy – a fact reflected in their widespread adoption by central banks across the world.
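To illustrate the idea, a nowcast of this kind can be sketched as a simple ‘bridge’ regression: quarterly output growth is regressed on the quarterly average of a timelier monthly survey indicator, and the fitted relationship is applied to the latest indicator reading, available weeks before the official release. This is a minimal sketch with made-up numbers, not any central bank’s actual model:

```python
import numpy as np

# Made-up historical data: quarterly GDP growth (%) and the average
# reading of a monthly business survey over the same quarter.
survey = np.array([50.2, 51.0, 48.5, 47.0, 52.3, 53.1, 49.8, 46.2])
gdp_growth = np.array([0.3, 0.4, 0.1, -0.1, 0.5, 0.6, 0.2, -0.3])

# Fit growth = a + b * survey by ordinary least squares.
X = np.column_stack([np.ones_like(survey), survey])
coef, *_ = np.linalg.lstsq(X, gdp_growth, rcond=None)

# Nowcast the current quarter from the latest survey reading,
# which arrives well before the official GDP figure.
latest_survey = 44.0
nowcast = coef[0] + coef[1] * latest_survey
print(round(nowcast, 2))
```

In practice, nowcasting models combine many indicators, handle mixed frequencies and missing observations, and are updated as each new data release arrives.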

Can text help to improve nowcasting?

Text, in all its forms from print to digital, discusses the same underlying economic data and indicators, often giving important context to any numbers reported. Text can also help to measure important concepts that are not otherwise measured well, or at all. Recently, economists have asked whether textual analysis can turn these qualitative data into a numerical measure to help to understand the health of the economy.

Textual analysis is a huge topic, advancing at a breathtaking rate. Methods range from simple counts of words to sophisticated models trained on petabytes (1m gigabytes) of data. What follows is a selective review of research in which textual analysis has been helpful in the social sciences, before focusing on where it has been used to develop a better understanding of the current or future state of the economy.

Starting from the idea that people allocate more attention to the issues that matter most to them, the proportion of text dedicated to a particular subject should carry useful information about that subject’s importance.

Perhaps the best-known example of such an approach is the construction of economic policy uncertainty indices (Baker et al, 2016). Uncertainty about economic policy is not otherwise measured officially; the index is constructed by counting the proportion of newspaper articles containing words relating to economics, policy and uncertainty. The authors then show that, at both the macro and micro level, an increase in economic policy uncertainty leads to a reduction in investment and employment.
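The counting step can be sketched in a few lines. The articles and term lists below are illustrative toys, not the actual Baker et al (2016) specification:

```python
# EPU-style index sketch: the share of articles that contain at least
# one term from each of three categories (economy, policy, uncertainty).
articles = [
    "the economy faces uncertainty as the government delays its budget",
    "new regulation could raise uncertain costs for the economic outlook",
    "local football team wins the championship",
    "central bank holds rates amid an uncertain economic environment",
]

# Illustrative term lists; the real index uses carefully audited ones.
categories = {
    "economy": {"economy", "economic"},
    "policy": {"government", "regulation", "budget", "central", "bank"},
    "uncertainty": {"uncertainty", "uncertain"},
}

def mentions_all(text, categories):
    """True if the text contains a term from every category."""
    words = set(text.split())
    return all(words & terms for terms in categories.values())

epu_share = sum(mentions_all(a, categories) for a in articles) / len(articles)
print(epu_share)
```

Computed month by month (and scaled), this share is the raw material of the published uncertainty indices.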

Topic classification models perform a similar task but in an unsupervised manner; that is, without prior specification of what the topics are.

  • One study uses transcripts from the Federal Open Market Committee (FOMC), the monetary policy committee of the US central bank, to extract the fraction of time each member talks about different topics (Hansen et al, 2018). The authors find that an increase in transparency leads policy-makers to consider a wider range of topics, with increased emphasis on data, during their discussions about economic conditions. This is interpreted as evidence of increased information gathering.
  • Another study uses similar techniques to show that many news topics are relevant for predicting household inflation expectations (Larsen et al, 2020).
  • And another uses a topic model on leading economics articles to show that economists responded to the 2008/09 global financial crisis by switching their focus to studying the crisis. Preliminary observations of current developments suggest that economists’ response to the Covid-19 crisis is similar (Levy et al, 2020).
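The ‘fraction of text per topic’ quantity used in these studies can be sketched with a standard topic model. This is a toy corpus and an off-the-shelf latent Dirichlet allocation (LDA) implementation, not the models or data of the studies above:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny illustrative corpus; real applications use thousands of
# documents, such as FOMC transcripts or newspaper archives.
docs = [
    "inflation prices wages inflation expectations",
    "inflation wages prices rising inflation",
    "football match goals team season",
    "team season match football players",
]

# Convert text to word counts, then fit a two-topic LDA model
# without specifying in advance what the topics are.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic shares

# Each row gives the share of a document devoted to each topic -
# the 'attention' measure used in the studies above.
print(doc_topics.round(2))
```

Each document’s topic shares sum to one, so tracking a topic’s share over time gives a measure of how much attention it receives.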

Dictionary-based methods – counts of positive and negative words from a predefined dictionary – successfully capture the sentiment of the underlying text.

  • One study shows that sentiment from newspaper articles can predict excess stock returns one day ahead. But the predictive power of sentiment vanishes at a time horizon longer than one week (Tetlock, 2007).
  • There are numerous examples where sentiment follows patterns that you would expect. For example, one study quantifies sentiment in annual reports and shows that chief executives bury bad news in the middle of them, where sentiment follows a U-shaped pattern (Boudt and Thewissen, 2019).
  • Another study applies dictionary-based sentiment methods to assess the state of the Swiss economy during the Covid-19 crisis (Burri and Kaufmann, 2020). Combining news sentiment with financial market data, the authors construct a daily indicator of economic activity that is available with a one-day delay. They show that it is highly correlated with macroeconomic and survey indicators of Swiss economic activity.
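At its simplest, a dictionary-based sentiment score is the balance of positive and negative words, normalised by length. The word lists below are illustrative; applied work uses curated dictionaries such as the finance-specific lists of Loughran and McDonald (2011):

```python
# Illustrative positive and negative word lists (a toy dictionary).
POSITIVE = {"growth", "gain", "strong", "recovery", "improved"}
NEGATIVE = {"loss", "weak", "decline", "crisis", "uncertainty"}

def sentiment(text):
    """Net share of positive minus negative words in the text."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

print(sentiment("strong recovery and improved growth"))           # positive
print(sentiment("crisis deepens as weak demand causes decline"))  # negative
```

Averaging such scores over each day’s articles yields a daily sentiment series of the kind used in the Swiss study above.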

Combining dictionary-based methods to extract sentiment with topic models for subject emphasis makes it possible to develop variables from text that are competitive with the best nowcasting and forecasting models. One study derives topic-specific sentiment series from the text of Norway’s leading business newspaper (Thorsrud, 2018). The author feeds these series into a nowcasting model to produce a daily measure of the state of the Norwegian economy that can compete with the central bank’s leading statistical models.

Different media outlets talk about different things and have different editorial tone – or sentiment – as measured by dictionary-based methods. One study shows that although different newspapers report different topics during normal times, at times of stress there is some homogenisation – they tend to report similar topics (Nimark and Pitschner, 2019). Controlling for the difference in tone across media outlets and article type (whether or not it is editorial comment), another study shows that during the Covid-19 crisis, news sentiment provides early information on the economy compared with the traditional early estimates, such as flash survey data (Buckman et al, 2020).

Recent work shows that supervised machine learning (SML) methods with textual input data can successfully classify sentiment and predict economic variables. SML techniques recover a mapping between linked input and output data – often referred to as labelled data, for example, film reviews and associated ratings – such that subsequent predictions of output variables are possible with only the input data:

  • Rambaccussing and Kwiatkowski (2020) manually label the sentiment of 1,600 articles and use an SML model to classify sentiment in the remaining 393,000 articles in a UK-based news dataset. Using the sentiment classification on the full dataset, they show that it provides additional forecasting information for output and unemployment but not for inflation.
  • Kalamara et al (2020) use a UK-based news dataset to take counts of 8,800 economically relevant words across articles as the input data for their SML method for predictions of several economic variables. They find that their SML method reliably outperforms competing models’ predictive accuracy across a number of economic variables, including during times of stress, notably the 2008/09 global financial crisis.
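The supervised step – learning a mapping from labelled text to sentiment and applying it to unlabelled articles – can be sketched with a standard text classification pipeline. The tiny labelled set below stands in for the manually labelled articles; it is not the data or model of either study:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of labelled examples; real studies label hundreds or
# thousands of articles by hand.
texts = [
    "output fell sharply and unemployment rose",
    "firms cut jobs amid a deep downturn",
    "the economy weakened and sales collapsed",
    "strong growth lifted hiring and investment",
    "exports boomed and confidence improved",
    "retail sales surged as the recovery strengthened",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = negative sentiment, 1 = positive

# Learn the mapping from text features to labels, then apply it
# to an unseen article.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

prediction = model.predict(["growth strengthened and hiring improved"])[0]
print(prediction)
```

Once trained, the classifier can label an entire news archive at negligible cost, which is what makes the labelled-then-scaled-up approach attractive.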

Various other sources of text (for example, search data from Google) provide information that improves forecast performance. Following earlier work on forecasting flu with search data, two studies show that forecasts of retail sales, unemployment and tourism are all improved by incorporating relevant search data (Choi and Varian, 2012; Scott and Varian, 2014). But recent work shows that when only data available at the time of making the forecast are used – referred to as ‘real-time data’ – search data retain their value only for unemployment (Niesert et al, 2019).

Textual data from social media also contain information that improves predictions. One study shows that certain dimensions of mood extracted from Twitter feeds can improve forecasts of daily stock price movements (Bollen et al, 2011). Another uses anonymised Facebook data to show that users whose geographically distant friends experienced larger recent house price gains are more likely to switch to owning from renting and buy larger houses (Bailey et al, 2018).

How reliable is the evidence?

A large body of peer-reviewed evidence shows that textual data contain useful information for prediction tasks across a broad range of fields, from finance, accountancy and political economy to central bank communication. While still nascent, nowcasting and forecasting using textual data can successfully improve predictions. But some challenges remain.

This body of research is still young and there is scope for validating findings across media sources, countries and methods. A detailed survey of the myriad decisions involved in moving from high-dimensional text to lower-dimensional sentiment measures is provided by Algaba et al (2020). Their concluding plea – for more effort on the reproducibility of sentiment quantification – bears repeating.

Recent research that takes some of these decisions seriously shows that including several sentiment measures, time aggregation methods and topics – while allowing the data to select the best combination of these – improves forecasts of US industrial production growth at a horizon of just under a year (Ardia et al, 2019).

Methods for quantifying text are context-specific. Words that carry positive sentiment in one context can carry negative sentiment in another; for example, ‘liability’, ‘tax’ and ‘cost’ need not be negative in a financial context. One study shows that updating Tetlock’s dictionary to reflect the finance-specific nature of the body of text yields better forecasts of stock returns (Loughran and McDonald, 2011). Advances continue to be made in this direction, for example, research showing that by modelling the joint behaviour of text and returns, a sentiment measure can be constructed at the level of the firm (Ke et al, 2019).

Where can I find out more?

News sentiment in the time of covid-19: Shelby Buckman and colleagues examine how their daily news sentiment indicator reacts during the Covid-19 crisis.

Academic scholarship in light of the 2008 financial crisis: textual analysis of NBER working papers: Daniel Levy and colleagues look at how economists’ focus responded to the 2008 financial crisis.

Will big data keep its promise? A 2018 speech by Andrew Haldane of the Bank of England gives a broad overview of how new forms of big data can be of use to economists.

Media reporting is good predictor of household inflation expectations: A high-level summary of a study by Vegard Larsen and colleagues exploring how media reporting influences household inflation expectations.

What’s in the news? Text-based confidence indices and growth forecasts: An overview of some work in progress at the Bank of England exploring textual sentiment indicators for nowcasting.

Who are UK experts on this question?

Author: Craig Thamotheram, NIESR

Update history:

14/7/20: An additional paragraph was added on supervised machine learning.
