
Predicting the effects of Brexit: Forecasting in economics and finance

Policymakers use forecasting to attempt to assess the impact of major events, such as the recent Brexit vote, on the economy. While forecasting methods have improved dramatically in recent years, the models remain far from perfect. This column discusses some of the limitations of forecasting models, and how policymakers can make their predictions more reliable. Key considerations are using more data to generate predictions, and combining multiple models to mitigate their individual mis-specifications.

What will be the likely future effects of the Brexit vote on the UK’s economic growth, current account, and budget balance? Answers to these questions can in part be provided using tools from economic forecasting. In fact, prior to the referendum on the UK’s membership of the EU, institutions such as the UK Treasury and the IMF produced projections showing large, negative consequences for the UK economy of a ‘leave’ vote. Foreign exchange and stock markets clearly concurred with these projections, as both experienced massive falls in the two days following the vote, with the shares of many UK financial institutions hit particularly hard.

While the effect of the Brexit vote on the UK economy will take time to play out and may never be fully known – we will not observe the UK economy’s performance under the alternative ‘remain’ vote scenario – projections of how the UK economy would fare under a ‘leave’ vote were based on economic forecasting models that allowed policymakers to quantify its economic impact. This raises several questions, such as how economic forecasts are generated, what type of information they incorporate, on which assumptions they are based, and, ultimately, how seriously we should take economic forecasts.

In fact, methods for selecting a forecasting model, estimating its parameters, communicating the resulting forecasts, and evaluating their precision have improved in many important ways over the past 20 years. Still, economic forecasting models remain far from perfect, and understanding their limitations is important when interpreting economic forecasts.

Most fundamentally, forecasting models that are simple enough to lend themselves to empirical estimation and use in economic policymaking must be strongly condensed representations of a far more complex – and possibly changing – data generating process (DGP). The most fruitful perspective is therefore to regard all forecasting models as mis-specified, rather than as representations of the true DGP.

This point has important implications. Just as a financial manager must closely monitor portfolio risk, economic forecasters must monitor and be aware of the resulting model ‘risk’, which depends on the degree of model mis-specification – itself unknown, because the true DGP is not known. When forecasting models break down – a recent example being the Global Crisis of 2007-09 – their accuracy can be expected to be much lower than under more normal conditions, and their forecasts are likely to be surrounded by far greater uncertainty. In turn, if discovered in real time (Croushore and Stark 2001), such a ‘model breakdown’ should lead policymakers to act less aggressively on the forecasts, to seek robust methods for predicting future outcomes, or to implement policy decisions that are less dependent on having access to accurate forecasts.
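
To make this kind of monitoring concrete, the following is a minimal sketch – our illustration, not a method proposed in the column – of a real-time breakdown flag: compare the root mean squared error (RMSE) of recent forecast errors against a longer historical baseline, and raise a flag when the ratio exceeds a threshold. The window lengths and the threshold of two are illustrative assumptions.

```python
import numpy as np

def breakdown_flags(errors, recent=8, hist=40, threshold=2.0):
    """Flag a possible model breakdown whenever the RMSE of the most recent
    `recent` forecast errors exceeds `threshold` times the RMSE over the
    preceding `hist` observations. All tuning constants are illustrative."""
    errors = np.asarray(errors, dtype=float)
    flags = np.zeros(len(errors), dtype=bool)
    for t in range(hist + recent, len(errors) + 1):
        hist_rmse = np.sqrt(np.mean(errors[t - recent - hist:t - recent] ** 2))
        recent_rmse = np.sqrt(np.mean(errors[t - recent:t] ** 2))
        flags[t - 1] = recent_rmse > threshold * hist_rmse
    return flags

# Example: well-behaved errors, then a crisis-like period of large errors
rng = np.random.default_rng(0)
e = np.concatenate([rng.normal(0, 1.0, 80), rng.normal(0, 4.0, 12)])
print(np.where(breakdown_flags(e))[0])  # periods where the flag is raised
```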

A key point that we emphasise in our recent paper is that there is almost never a single forecasting approach that uniformly dominates all other available forecasting methods (Elliott and Timmermann 2016). This helps explain why, in practice, some methods seem to work better for certain types of variables (e.g. persistent variables such as price inflation or wages) than for other variables (e.g. real economic growth), which represent very different types of DGPs.

These points lead to another key observation, namely, the importance of considering a suite of alternative forecasting methods at any point in time. In the same way that a portfolio manager holds multiple assets to benefit from diversification across risky assets, forecasters can benefit from combining forecasts from different methods, each of which is mis-specified in its own manner. Diversification effects can be expected to be important whenever the individual methods’ forecast errors are not too strongly correlated. Model or forecast combination has been used to deal with a variety of issues ranging from model instability – some models may be more robust to large macroeconomic shocks than others – to the effect of uncertainty about the best forecasting model, the presence of parameter estimation error, or the pooling of information from different surveys or data sources. Faced with several forecasts with similar forecasting performance, it makes sense to combine forecasts rather than arbitrarily attempt to identify a single best model.
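
As a simple numerical illustration of the diversification argument (with simulated data and equal weights as our own illustrative choices), combining two forecasts with uncorrelated errors of equal variance roughly halves the mean squared error:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
y = rng.normal(size=T)                 # outcomes to be forecast
# Two mis-specified forecasts with uncorrelated errors of equal variance
f1 = y + rng.normal(0, 1.0, size=T)
f2 = y + rng.normal(0, 1.0, size=T)

def mse(f):
    return np.mean((y - f) ** 2)

combo = 0.5 * (f1 + f2)                # equal-weighted combination

print(f"MSE f1: {mse(f1):.2f}, MSE f2: {mse(f2):.2f}, MSE combo: {mse(combo):.2f}")
# The combination's MSE is close to 0.5 -- roughly half that of either forecast
```

Simple equal weights are often surprisingly hard to beat in practice, since estimating ‘optimal’ combination weights introduces additional parameter estimation error – one of the issues listed above.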

A final point we emphasise is that forecast evaluation and model comparison are an important part of the forecasting process, and can nowadays be conducted with a higher level of rigour than in earlier years. Historically, it was common practice to report estimates of different forecasting methods' risk – typically, sample averages of forecast losses – and, possibly, to use an informal ranking to compare the risk of different methods, in all cases without accompanying standard errors.

In the last 20 years, a large literature has developed on methods that allow us to rigorously evaluate and compare different models' forecasting performance. For example, the Federal Reserve or the IMF may be interested in knowing whether their forecasts are as accurate as – or more accurate than – private sector forecasts from industry or the financial sector. Finding that this is not the case would suggest the need to improve the forecasting approach. Additionally, forecast comparisons can be undertaken even on very large sets of models and can be used to control for the effects of data mining arising from the search for a superior prediction model across multiple specifications (Hansen et al. 2011).
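
The column does not spell out specific test statistics, but one widely used tool for such pairwise comparisons is the Diebold-Mariano test of equal predictive accuracy. Below is a minimal sketch for one-step-ahead forecasts under squared-error loss (our implementation of the simplest variant):

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2):
    """Diebold-Mariano test of equal predictive accuracy from two series of
    one-step-ahead forecast errors, using squared-error loss. Returns the DM
    statistic (asymptotically N(0,1) under the null of equal accuracy) and a
    two-sided p-value. Multi-step forecasts would require a HAC variance."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2      # loss differential
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))

# Example with simulated errors: method 2 is genuinely more accurate
rng = np.random.default_rng(2)
stat, pval = diebold_mariano(rng.normal(0, 1.5, 200), rng.normal(0, 1.0, 200))
print(f"DM statistic: {stat:.2f}, p-value: {pval:.3f}")
```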

How should economic forecasts be communicated to the public? Economic forecasters are becoming vastly better at generating timely forecasts. Long gone is the time when a single aggregate forecast of the state of the economy was based on the quarterly release of GDP figures. In its place now are sophisticated methods for aggregating economic time series observed at different frequencies – daily (interest rates and stock prices), weekly (initial jobless claims) and monthly (payroll figures, consumer prices, and industrial production). This type of data can be summarised using dynamic factor models to give a daily update of the current state of the economy – a so-called ‘nowcast’ (see Banbura et al. 2011) – as is done already by the Federal Reserve Bank of Philadelphia, using the methodology of Aruoba et al. (2009).
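
The dynamic factor machinery behind such nowcasts is beyond the scope of this column, but its core idea – compressing a panel of indicators into a single activity factor – can be sketched with a static principal components calculation. This is a deliberately simplified stand-in for the Aruoba et al. (2009) methodology, which additionally handles mixed frequencies, missing observations, and ragged edges:

```python
import numpy as np

def activity_factor(panel):
    """First principal component of a balanced panel of indicators
    (rows = time periods, columns = series), normalised to unit variance.
    A static, simplified stand-in for a dynamic factor model nowcast."""
    z = (panel - panel.mean(axis=0)) / panel.std(axis=0)  # standardise series
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    factor = z @ vt[0]                                    # first PC scores
    return factor / factor.std()

# Example: 120 months of 10 noisy indicators driven by one common factor
rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=120))
panel = np.outer(common, rng.uniform(0.5, 1.5, 10)) + rng.normal(size=(120, 10))
f = activity_factor(panel)
# Sign of a principal component is arbitrary, so report absolute correlation
print(f"Correlation with true factor: {abs(np.corrcoef(f, common)[0, 1]):.2f}")
```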

Access to large datasets is also having a major impact on other areas of economic forecasting. Most notably, economic forecasters nowadays have access to thousands of potential predictor variables, and the search for an appropriate model specification in this high-dimensional space has suddenly become a lot more challenging. Using methods from statistical learning, forecasters are just beginning to scratch the surface in this area. Concerns about ‘data mining’ and ‘over-fitting’ become far more important in a setting with more flexible classes of estimators (forecasting methods) and more predictor variables to search over.
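
As an example of what such statistical learning methods look like in practice, the sketch below applies the Lasso – which shrinks most coefficients exactly to zero and thereby selects variables – to a simulated dataset with many more candidate predictors than observations. The Lasso is our choice of illustration; the column does not single out a particular method.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
T, K = 200, 500                   # far more candidate predictors than observations
X = rng.normal(size=(T, K))
beta = np.zeros(K)
beta[:5] = 1.0                    # only 5 predictors genuinely matter
y = X @ beta + rng.normal(size=T)

# Penalty chosen by cross-validation to guard against over-fitting
model = LassoCV(cv=5, random_state=0).fit(X, y)
print(f"Predictors retained: {(model.coef_ != 0).sum()} out of {K}")
```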

While the cross-section of possible predictors has become much greater over time, in other regards we are stuck with only a limited set of observations. There are only so many post-war recessions we can use to construct a business cycle forecasting model. With time-series data, a larger sample often means sampling the data at a higher frequency, and a longer dataset often means that estimates are constructed from data far in the past that is less relevant for forecasting today.

The longer the window of historical data used to estimate a model, the more model instability is likely to matter. Stock and Watson (1996) found that instability was present for a majority of time-series forecasting models fitted to a range of macroeconomic variables. Clements and Hendry (1998) also rate model instability as a key influence on the performance of macroeconomic forecasting models.

In the presence of such model instability, a number of practical issues arise. First, can we detect the presence – and, perhaps, timing and nature – of such model instability? Second, how should forecasts be constructed when there is time variation in the DGP? Unfortunately, tests for model instability need not reveal the exact nature of that instability – e.g. whether it takes the form of many small breaks or a few large ones.

On a more positive note, many break tests are useful in detecting whether or not there is model instability in the first place – they just cannot identify the exact form of this instability. At a minimum, evidence of model instability should reduce the forecaster’s faith in her forecasting approach and, perhaps, trigger an open-minded search for alternative forecasting approaches.
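
To illustrate what such a test involves, the sketch below implements a simple Quandt-style sup-F search for a single break in the mean of a series – one of the simplest members of this family of tests, and our own illustrative choice. Note that the sup-F statistic has a non-standard distribution, so conventional F-table critical values do not apply.

```python
import numpy as np

def sup_f_break(y, trim=0.15):
    """Quandt-style sup-F search for a single break in the mean of a series:
    compute an F statistic at every admissible break date (trimming the ends
    of the sample) and return the largest statistic and its date."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ssr0 = np.sum((y - y.mean()) ** 2)            # residual SS with no break
    best_f, best_date = -np.inf, None
    for t in range(int(trim * T), int((1 - trim) * T)):
        ssr1 = np.sum((y[:t] - y[:t].mean()) ** 2) \
             + np.sum((y[t:] - y[t:].mean()) ** 2)
        f = (ssr0 - ssr1) / (ssr1 / (T - 2))      # one restriction, two means
        if f > best_f:
            best_f, best_date = f, t
    return best_f, best_date

# Example: mean shifts from 0 to 1 at observation 60 of 100
rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1, 1, 40)])
print(sup_f_break(y))   # the estimated break date should be near 60
```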

Point forecasts – forecasts of the single outcome that is ‘most likely’ – are fast being supplemented with other, richer ways of communicating the degree of faith in a forecast. Density forecasts provide a full summary of forecast uncertainty, which is invaluable in many situations.

In practice, therefore, public agencies have moved towards providing density forecasts. For example, the Bank of England reports a ‘fan chart’ forecast for inflation, as does the IMF in its World Economic Outlook. Fan charts use different shades of colour to illustrate bands of quantiles, starting from the median forecast and fanning out to cover an increasingly wide range of probable outcomes for variables such as the inflation rate, measured over increasing forecast horizons. Such fan charts tend to be narrower during periods of stable economic growth and wider – signalling greater uncertainty – during times of financial and economic turmoil. The close link between underlying economic uncertainty and survey forecasts (Rossi et al. 2016) is a fascinating area for future research.
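
Once draws from the forecast density are available, the quantile bands behind a fan chart are simple to compute. Below is a minimal sketch with simulated forecast paths; the shock process and the choice of bands are illustrative assumptions, not taken from any agency's methodology.

```python
import numpy as np

rng = np.random.default_rng(6)
draws, horizons = 5000, 8
# Simulated forecast paths: shocks accumulate, so uncertainty -- and hence
# the width of the fan -- grows with the forecast horizon
paths = 2.0 + np.cumsum(rng.normal(0, 0.5, size=(draws, horizons)), axis=1)

quantiles = [5, 20, 35, 50, 65, 80, 95]          # bands of the fan
fan = np.percentile(paths, quantiles, axis=0)    # one row per quantile band
for q, row in zip(quantiles, fan):
    print(f"{q:>2}th percentile:", np.round(row, 2))
```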

References

Aruoba, S B, F X Diebold, and C Scotti (2009), “Real-Time Measurement of Business Conditions”, Journal of Business & Economic Statistics, 27, 417-27

Banbura, M, D Giannone, and L Reichlin (2011), “Nowcasting”, ch. 7 in M P Clements and D F Hendry (eds), The Oxford Handbook of Economic Forecasting, Oxford University Press, 193-224

Clements, M P, and D F Hendry (1998), “Forecasting economic processes”, International Journal of Forecasting, 14, 111-31

Croushore, D, and T Stark (2001), “A real-time data set for macroeconomists”, Journal of Econometrics 105, 111-30

Elliott, G, and A Timmermann (2016), “Forecasting in Economics and Finance”, Annual Review of Economics, forthcoming

Hansen, P R, A Lunde, and J M Nason (2011), “The model confidence set”, Econometrica, 79, 453-97

Rossi, B, T Sekhposyan, and M Soupre (2016), “Understanding the sources of macroeconomic uncertainty”, working paper, Universitat Pompeu Fabra

Stock, J H, and M W Watson (1996), “Evidence on structural instability in macroeconomic time series relations”, Journal of Business & Economic Statistics, 14, 11-30
