The International Monetary Fund’s Global Financial Stability Report released last month estimates the potential losses on US-originated credit assets held by banks and others at $2.2 trillion, up from $1.4 trillion in October 2008 (IMF 2009). The Bank of England estimates that mark-to-market losses have more than doubled since its previous report, to US$2.8 trillion (Bank of England 2008). Why is asset quality apparently deteriorating so rapidly? What will the ultimate losses be?

The Basel regulatory capital requirements categorise commercial banks’ financial risks into trading book and banking book exposures. First, we find that the losses reported to date stem primarily from the trading book, which is marked to market. While the models used might work for individual firms, in the aggregate they create systemic risk, so much of the reported loss is attributable to the dynamics of negative market feedback.

Second, banking book exposures are marked to rating, to delinquency, and to default, depending on the sophistication of the model. The predominant use of through-the-cycle models for the banking book is a major source of vulnerability. Deficiencies in these models mean that mortgages and commercial and industrial loans might not be adequately reserved, and the losses on assets not marked to market may be understated.

Finally, the financial architecture is concentrated with regard to both risk models and financial institutions. Valuations depend on a handful of providers with proprietary data and a few risk management models. Financial regulators have delegated responsibility for risk management in complex derivative markets to unvalidated models, creating a fragile situation that is subject to systemic risk.

Shock amplification

One obscure dimension of market dynamics is the collective use of inappropriate market information and flawed models. Jean-Claude Trichet, then Governor of the Banque de France and now president of the European Central Bank, articulated this problem very well. In his keynote address to the Federal Reserve Bank of Chicago conference on April 23, 2002, he noted,

"The impact of risk management techniques on market dynamics is particularly enlightening with regard to the question of asset price overshooting. Value-at-risk calculations have become a crucial element of the standard approach used by market participants to evaluate the risk inherent in their market activities and to set up exposure limits. In times of financial turmoil, the growing use of sophisticated risk management techniques by financial intermediaries might have had the paradoxical effect of amplifying the initial shock, exhausting liquidity, and contributing to contagion phenomena… When market players rely on converging risk evaluations, they tend to take the same decisions at the same time, thus amplifying the initial shock to prices and trading volumes." (Hunter et. al. 2003, emphasis added)

The downturn and volatility in the capital markets have dramatically increased value-at-risk and thus the capital requirements for trading activities. Coupled with the contraction of capital-at-risk and the reduction in leverage, the prices of risky assets may have been pushed below their fundamental values; leveraged markets are prone to overshoot in booms and to underprice in downturns. If this is true, the current losses in the trading book may be overestimated. If policy does not respond well (or is ineffective), the pessimistic view becomes self-reinforcing. Under the present “fair market value” regulatory regime, market-risk exposures are marked to market, and a large fraction of the losses reported to date relate to such exposures.
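To see how value-at-risk limits can amplify a shock, consider a minimal numerical sketch. The parametric (normal) one-day VaR and all the figures below are invented for illustration; this is not any regulator’s or institution’s actual formula.

    # Parametric one-day 99% VaR and the forced deleveraging a fixed VaR
    # limit implies when volatility spikes (all numbers are assumptions).
    from statistics import NormalDist

    Z99 = NormalDist().inv_cdf(0.99)  # 99% quantile of the standard normal

    def parametric_var(position, daily_vol):
        """One-day 99% value-at-risk of a position with normal returns."""
        return position * daily_vol * Z99

    position = 100.0                    # exposure, say $100 million
    calm_vol, crisis_vol = 0.01, 0.04   # daily volatility, calm vs. turmoil

    var_limit = parametric_var(position, calm_vol)  # limit set in calm times
    print(f"VaR in calm markets:     {var_limit:.1f}")
    print(f"VaR after the vol spike: {parametric_var(position, crisis_vol):.1f}")

    # To get back under the old limit, the position must shrink in proportion
    # to the volatility ratio -- every VaR-constrained trader sells into the
    # falling market at the same time, amplifying the initial shock.
    print(f"Position consistent with the old limit: {position * calm_vol / crisis_vol:.1f}")

Because every VaR-constrained institution faces the same arithmetic at the same time, the individual risk control becomes a collective sell order, which is precisely the convergence Trichet described.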

Troubles in modelling credit risk

A second set of problems relates to credit risk that is not marked to market. Under current regulations, credit risk exposures are marked to rating, delinquency, or default. Credit risk may be measured by either the through-the-cycle or the point-in-time approach (Rösch and Scheule 2005). Point-in-time models are forecasts that aim to predict the future level of losses, whereas through-the-cycle models average losses over past business cycles. Point-in-time models reflect reality much better and should form part of every sound risk measurement framework. The financial industry may be able to improve the predictive power of its models in several respects (a stylised sketch follows the list below):

  • Through-the-cycle approaches offer simplicity, while point-in-time models have so far been built on limited data. A risk model calibrated to the experience of several boom years may be inadequate for provisioning against credit losses in a downturn.
  • Regulators have accepted both through-the-cycle and point-in-time methods but often prefer through-the-cycle methods because they are more transparent and less cyclical. The result of using such an approach is obvious – the model overestimates losses in an economic boom and grossly underestimates them in a downturn. There have been two major downturns over the past three decades, in 1980-82 and 1990-93, yet there is nothing in models calibrated with data since 2000 to point to a downturn. As a result, through-the-cycle approaches did not predict the unexpected loss levels in senior CDO tranches. Clearly, we are now in the underprovisioned phase.
  • The credit rating agencies have propagated through-the-cycle models that exclude macroeconomic conditions from consideration and focus instead on fundamental idiosyncratic risk drivers.
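The gap between the two approaches can be made concrete with a stylised point-in-time adjustment based on the one-factor (Vasicek) model that underlies the Basel II internal ratings-based formula. The default probability and asset correlation below are assumed values chosen for illustration.

    # Point-in-time PD conditional on the state of the economy, in the
    # one-factor model; parameters are illustrative assumptions.
    from statistics import NormalDist

    N = NormalDist()

    def point_in_time_pd(ttc_pd, rho, z):
        """PD conditional on the systematic factor z (negative z = downturn)."""
        return N.cdf((N.inv_cdf(ttc_pd) - rho ** 0.5 * z) / (1 - rho) ** 0.5)

    ttc_pd = 0.01   # 1% through-the-cycle (average) default probability
    rho = 0.15      # assumed asset correlation

    # A typical year, a mild downturn, and a 1-in-100 downturn. The median
    # (z = 0) PD sits below the 1% average because losses are skewed; in the
    # severe downturn the conditional PD is roughly six times the average.
    for z in (0.0, -1.0, -2.33):
        print(f"z = {z:+.2f}: point-in-time PD = {point_in_time_pd(ttc_pd, rho, z):.2%}")

A through-the-cycle model reports 1% in every state of the world; a bank provisioning on that basis enters the downturn holding a fraction of the reserves it needs.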

A little understood problem is that the model-provision, financial-intermediation, and model-auditing industries are highly concentrated, leading to systemic risk. A few examples suffice: the small number of credit rating agencies for bond and structured finance issues, the growing market share of “too big to fail” financial institutions, and joint ventures in model construction designed to reduce costs. The problem is compounded by the use of similar quantitative frameworks, calibrated on similar loss experiences.

An anecdote illustrates this point. Some years ago the chief risk officer of a major U.S. bank presented the asset correlation matrix used by his institution. Another major financial institution at the event confirmed its use of the same matrix. While the institutions were fundamentally different in nature, they shared the same reputable consulting firm. Neither this firm’s model nor any other model has been formally validated. The oligopolistic structure was nurtured by the limited data availability and the propensity of financial institutions to outsource risk modelling. A similar situation prevails in the accounting industry, which is dominated by the “Big Four.” The public sector has abdicated too much authority to vested interests in the private sector.
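A toy Monte Carlo, with all parameters invented, illustrates why such a monoculture is dangerous. Suppose many banks hold distinct portfolios but set capital with the same shared model, and that model understates asset correlation (0.05 where the true value is 0.30).

    # Model monoculture: banks with different portfolios but one shared,
    # miscalibrated risk model breach their capital in the same years.
    import random
    from statistics import NormalDist

    N = NormalDist()
    random.seed(1)

    PD, MODEL_RHO, TRUE_RHO = 0.02, 0.05, 0.30   # assumed parameters
    BANKS, YEARS = 10, 2000

    def loss_rate(z, rho):
        """Portfolio default rate given the systematic factor z (Vasicek limit)."""
        return N.cdf((N.inv_cdf(PD) - rho ** 0.5 * z) / (1 - rho) ** 0.5)

    # Capital per bank: the 99.9th-percentile loss implied by the SHARED model.
    capital = loss_rate(N.inv_cdf(0.001), MODEL_RHO)

    bank_year_breaches, joint_crises = 0, 0
    for _ in range(YEARS):
        z_common = random.gauss(0, 1)   # economy-wide shock all banks feel
        breaches = sum(
            loss_rate(0.8 * z_common + 0.6 * random.gauss(0, 1), TRUE_RHO) > capital
            for _ in range(BANKS)
        )
        bank_year_breaches += breaches
        joint_crises += breaches >= BANKS // 2   # half the system fails at once

    print(f"capital per bank (meant to fail 0.1% of years): {capital:.1%}")
    print(f"breach frequency per bank-year: {bank_year_breaches / (YEARS * BANKS):.1%}")
    print(f"years when half the system breaches together: {joint_crises / YEARS:.1%}")

In this sketch the shared model error does not diversify away: individual breaches are far more frequent than intended, and they cluster in the same years, turning a modelling mistake into a system-wide event.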

Possible improvements

The following suggestions are intended to support the formulation of a better framework of global financial stability:

  • Existing regulations need to be revised. Morris Goldstein of the Peterson Institute for International Economics recently recommended suspending both the implementation of the internal ratings-based approach in Basel II and the use of credit ratings until better solutions are found (Goldstein 2008). In contrast, we believe that capital requirements are fundamental to the stability of financial systems but that the existing rules need updating, both for market-risk exposures (and off-balance-sheet activities) and to require appropriately specified point-in-time measurement of credit risk.
  • The recent spate of credit rating revisions suggests that the rating models are fundamentally flawed. Risk models should be examined, and only those that are deemed valid should be authorised. The resources of regulators are limited, and regulations should encourage model providers and financial institutions to publish their models so that external experts may assist in the validation process.
  • Deconcentration of risk models is another priority. This may involve creating a compulsory global warehouse for financial risk-related data (particularly regarding credit risk) and encouraging alternative modelling techniques. While limited data-sharing initiatives are reportedly being undertaken in Japan, they need to be far more extensive and systematic.
  • Regulatory arbitrage has transferred risks to off-balance-sheet special-purpose vehicles and hedge funds. This practice may have to be limited by homogenising rules across financial instruments and institutions as well as across industries and countries.
  • Finally, the structure of incentives in the financial industry, as well as in auxiliary support services, will have to be reformed and aligned with public interests.

References

Bank of England (2008), “Executive Summary and Introduction,” Financial Stability Report 24 (October): 5–6.

International Monetary Fund (2009), “Global Financial Stability Report,” GFSR Market Update, January 28.

Hunter, William Curt, George G. Kaufman, and Michael Pomerleano (eds.) (2003), Asset Price Bubbles: The Implications for Monetary, Regulatory, and International Policies, Cambridge, MA: MIT Press.

Rösch, D. and Harald Scheule (2005), “A Multifactor Approach for Systematic Default and Recovery Risk,” Journal of Fixed Income 15(2): 63–75.

Goldstein, Morris (2008), “The Global Financial Crisis,” presentation to the National Economists Club, December 18.

 
