
Model risk and the implications for risk management, macroprudential policy, and financial regulations

Risk forecasting is central to financial regulations, risk management, and macroprudential policy. This column raises concerns about the reliance on risk forecasting, since risk forecast models have high levels of model risk – especially when the models are needed the most, during crises. Policymakers should be wary of relying solely on such models. Formal model-risk analysis should be a part of the regulatory design process.

Risk forecasting is central to macroprudential policy, financial regulations, and the operations of financial institutions. The accuracy of risk forecast models – the subject of model risk analysis – should therefore be a key concern for the users of such models. Surprisingly, this does not appear to be the case. Both industry practice and regulatory guidance currently neglect the risk that the models themselves pose, even though the problem has long been noted in the literature (see for example Hendricks 1996 and Berkowitz and O'Brien 2002). Even so, the literature on model risk remains sparse, generally limiting itself to backtesting.

Our recent research (Danielsson et al. 2014) considers the accuracy of the most commonly accepted market risk forecast methodologies. We find that the model risk of existing market risk forecast methods is indeed substantial, especially during market turmoil and crises. These are presumably the times when accurate risk forecasts are most needed. This result applies equally to market risk management within financial institutions and to systemic risk forecast methods that depend on market data.

Model risk analysis

The most common way to assess the accuracy of risk forecast models is backtesting – a somewhat informal way to evaluate model risk.

But backtesting is a poor way to capture model risk. It depends heavily on assumptions about the statistical distribution of financial variables – assumptions we do not have enough data to verify. Moreover, in practice it tends to focus on simplistic criteria, such as the frequency of exceptions at a given quantile level, rather than on more complicated but potentially significant features such as volatility clustering.
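To make concrete what such an exception-frequency criterion checks, here is a minimal sketch in Python. The function, the simulated data, and the flat 99% VaR forecast are purely illustrative assumptions of ours, not from the paper:

```python
import numpy as np

def exception_rate(returns, var_forecasts):
    """Fraction of days on which the realised loss exceeded the VaR
    forecast. A simple exception-count backtest only asks whether this
    rate is close to the target probability (e.g. 1% for 99% VaR); it
    says nothing about whether the exceptions cluster in time."""
    exceptions = -returns > var_forecasts
    return exceptions.mean()

# Hypothetical fat-tailed daily returns against a flat VaR forecast.
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=4, size=1000)
var99 = np.full_like(returns, 0.025)  # constant 99% VaR forecast
print(f"exception rate: {exception_rate(returns, var99):.2%}")  # target: 1%
```

A model can pass such a test on average while still missing exactly the volatility-clustering behaviour that matters in a crisis.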

Crucially, backtesting does not effectively compare models. A simple way of doing so, without getting into statistical quagmires or data traps, is to look at the level of disagreement amongst the candidate models. We call this new method risk ratios: apply a range of common risk forecast methodologies to a particular asset on a given day, and then calculate the ratio of the maximum to the minimum risk forecast.

This provides a succinct way of capturing model risk: as long as the underlying models have passed the model evaluation criteria of the authorities and of financial institutions, they can all be considered reputable candidates for forecasting risk.

Suppose a true number representing the latent level of risk exists and is forecast by a number of equally good models. The risk ratio should then be close to one. When the risk ratio differs strongly from one, it captures the degree to which the candidate models disagree, providing a succinct measure of model risk.
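In symbols (the notation is ours, not taken from the paper): if on day t the n candidate models produce value-at-risk forecasts for the same asset, the risk ratio is

```latex
\[
\mathrm{RR}_t \;=\;
\frac{\max_{i=1,\dots,n} \mathrm{VaR}_t^{(i)}}
     {\min_{i=1,\dots,n} \mathrm{VaR}_t^{(i)}}
\;\ge\; 1 .
\]
```

Equally good models keep this ratio near one; values well above one flag disagreement, and hence model risk.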

Application to market-risk models

We compare six commonly used risk forecast methodologies: historical simulation, a moving average, an exponentially weighted moving average (EWMA), two variants of the popular GARCH model, and an extreme value theory model. The resulting risk ratios can be wildly temperamental, particularly during financial crises, often exceeding 10 under the Basel II market risk criterion, and much more under the proposed Basel III criterion.

This means that an analyst using two state-of-the-art risk forecast models, both of which have passed backtests, could find that one model estimates risk at $1 while the other gives $10 for exactly the same portfolio, with no way of discriminating between the two.
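As a concrete illustration, here is a minimal sketch of the risk ratio calculation using just two of the six candidate models – historical simulation and a RiskMetrics-style EWMA. The simulated data, the 99% probability level, and the decay factor λ = 0.94 are conventional illustrative choices of ours, not specifications from the paper:

```python
import numpy as np
from scipy.stats import norm

def hs_var(returns, p=0.01):
    """Historical-simulation VaR: the empirical p-quantile of past
    returns, reported as a positive loss number."""
    return -np.quantile(returns, p)

def ewma_var(returns, p=0.01, lam=0.94):
    """RiskMetrics-style EWMA VaR: exponentially weighted variance
    combined with a conditional-normality assumption."""
    s2 = returns[0] ** 2
    for r in returns[1:]:
        s2 = lam * s2 + (1 - lam) * r ** 2
    return -np.sqrt(s2) * norm.ppf(p)

# Hypothetical daily returns: a calm period followed by turmoil.
rng = np.random.default_rng(1)
returns = np.concatenate([rng.normal(0, 0.01, 750),   # calm
                          rng.normal(0, 0.04, 250)])  # turmoil

forecasts = {"HS": hs_var(returns), "EWMA": ewma_var(returns)}
rr = max(forecasts.values()) / min(forecasts.values())
print(forecasts, f"risk ratio: {rr:.2f}")
```

Because the EWMA reacts quickly to the recent volatility spike while historical simulation averages over the whole sample, two equally reputable models disagree and the risk ratio moves away from one – an effect that is far stronger in real crisis data.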

Implication for market risk regulations

The observation that risk forecast models perform well most of the time but tend to fail during turmoil and crises is not necessarily a serious problem for the models' original intended use – market risk management – because there the financial institution is concerned with managing day-to-day risk rather than tail risk or systemic risk.

However, the high levels of model risk should concern both practitioners and regulators. After all, the output of risk forecast models feeds into expensive decisions, be they portfolio allocations or the amount of capital to hold. Ultimately, this casts doubt on the advisability of relying too heavily on risk sensitivity in regulatory design.

Application to systemic risk models

A large class of systemic risk identification and forecast methods depends fundamentally on common market risk models and on market data. Hence, the empirical results on the performance of market risk models can be expected to apply equally to them.

We verify empirically that popular systemic risk models are indeed subject to model risk similar to that of market risk models: the forecasts not only depend strongly on the underlying market risk model, but the model risk of the market risk measures also appears to pass through to the systemic risk measures.

Consequently, the model risk of market-data-based systemic risk models is especially high during crisis periods. It is a particular cause for concern that statistical methods explicitly designed for systemic risk analysis suffer most from model risk at times of systemic turmoil. In other words, market-data-based systemic risk identification and forecast methods appear most likely to fail when they are needed the most.

Reason for poor performance

There are two reasons why these regulator-approved models perform poorly.

  • First, financial crises are rare, and so appear too seldom in the sample data.

This means that strong assumptions need to be made about the stochastic processes governing market prices – assumptions most likely to fail when the economy transitions from a calm period to a crisis. One can calibrate to a particular crisis – often the crisis of 2008 – but a single observation is unlikely to perform well out of sample. That crises recur does not mean they are homogeneous.

  • Second, financial models tend to assume that crises result from exogenous shocks – like an asteroid hitting the markets, as though market participants had nothing to do with it.

This is perhaps their most glaring flaw. As argued by Danielsson et al. (2009), risk is in fact endogenous, created by the interaction between market participants and their desire to bypass risk control systems. As risk takers and regulators learn over time, the price dynamics change, further frustrating risk forecasting.

Conclusion

Statistical risk forecast methods increasingly underpin both financial regulations and the internal management of risk in financial institutions. Somewhat surprisingly, given the pivotal role assigned to risk forecast models, the accuracy of such models has frequently been challenged, a prominent example being their failure to identify the buildup of risk prior to 2007.

Our formal research into the model risk of market risk models supports such criticism. Current state-of-the-art models are indeed subject to a significant degree of model risk, and that model risk is especially high during periods of market turmoil and financial crises.

The high levels of model risk inherent in existing market risk forecast models leave open the possibility that market risk is systematically under- or over-forecast in a way that cannot be detected. This can lead to costly mistakes, such as inappropriate levels of risk-taking and the miscalculation of risk-weighted capital.

The lesson for risk managers, regulators, and policymakers is that models need to be treated with skepticism. Formal analysis of model risk should be a part of the overall market risk regulatory design process.

Such an analysis should extend beyond the publication of the final version of regulations like Basel III, so that the fundamental market risk methodologies embedded in the regulatory process can be updated as the underlying risk forecast methodology improves. This should incorporate the choice of risk measures, probability levels, sample size, sample transformation, and tail event conditioning in addition to the underlying stochastic models.

Disclaimer: The views in this column are solely the responsibility of the authors and should not be interpreted as reflecting the views of the Board of Governors of the Federal Reserve System or of any other person associated with the Federal Reserve System.

Authors’ note: We thank the Economic and Social Research Council (UK) [grant number: ES/K002309/1].

References

Berkowitz, J and J O’Brien (2002), “How accurate are value-at-risk models at commercial banks?”, Journal of Finance, 57: 977–987.

Danielsson, J, K James, M Valenzuela, and I Zer (2014), “Model Risk of Risk Models”, Federal Reserve Board Finance and Economics Discussion Series, 2014-34. 

Danielsson, J, H S Shin, and J-P Zigrand (2009), “Modelling financial turmoil through endogenous risk”, VoxEU.org, 11 March. 

Hendricks, D (1996), “Evaluation of value-at-risk models using historical data”, Technical report, Federal Reserve Bank of New York Economic Policy Review, April.
