
The unfortunate uselessness of most ‘state of the art’ academic monetary economics

Standard macroeconomic theory did not help foresee the crisis, nor has it helped understand it or craft solutions. This column argues that both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow the key questions about insolvency and illiquidity to be answered. They did not allow such questions to be asked. A new paradigm is needed.

The Monetary Policy Committee of the Bank of England, of which I was privileged to be a ‘founder’ external member during the years 1997-2000, contained, like its successor vintages of external and executive members, quite a strong representation of academic economists and other professional economists with serious technical training and backgrounds. This turned out to be a severe handicap when the central bank had to switch gears and change from being an inflation-targeting central bank under conditions of orderly financial markets to a financial stability-oriented central bank under conditions of widespread market illiquidity and funding illiquidity. Indeed, the typical graduate macroeconomics and monetary economics training received at Anglo-American universities during the past 30 years or so may have set back by decades serious investigations of aggregate economic behaviour and economic policy-relevant understanding. It was a privately and socially costly waste of time and other resources.

Most mainstream macroeconomic theoretical innovations since the 1970s (the New Classical rational expectations revolution associated with such names as Robert E. Lucas Jr., Edward Prescott, Thomas Sargent, Robert Barro, etc., and the New Keynesian theorising of Michael Woodford and many others) have turned out to be self-referential, inward-looking distractions at best. Research tended to be motivated by the internal logic, intellectual sunk capital and aesthetic puzzles of established research programmes rather than by a powerful desire to understand how the economy works - let alone how the economy works during times of stress and financial instability. So the economics profession was caught unprepared when the crisis struck.

Complete markets

The most influential New Classical and New Keynesian theorists all worked in what economists call a ‘complete markets paradigm’. In a world where there are markets for contingent claims trading that span all possible states of nature (all possible contingencies and outcomes), and in which intertemporal budget constraints are always satisfied by assumption, default, bankruptcy and insolvency are impossible. As a result, illiquidity - both funding illiquidity and market illiquidity - is also impossible, unless the guilt-ridden economic theorist imposes some unnatural (given the structure of the models he is working with), arbitrary friction(s) that make something called ‘money’ more liquid than everything else, for no good reason. The irony of modelling liquidity by imposing money as a constraint on trade was lost on the profession.
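To see why default cannot arise in such a world, consider a schematic Arrow-Debreu budget constraint (a textbook illustration, not a formula from the column). With a complete set of contingent claims priced at date 0, an agent with consumption $c(s^t)$ and endowment $y(s^t)$ in date-$t$ history $s^t$ faces the single lifetime constraint

\[ \sum_{t=0}^{\infty} \sum_{s^t} q_0(s^t)\,\bigl[c(s^t) - y(s^t)\bigr] \le 0 , \]

where $q_0(s^t)$ is the date-0 price of one unit of consumption delivered in history $s^t$. Because this one constraint is imposed across every possible contingency before time begins, there is no state of the world in which the agent’s obligations exceed its resources: insolvency, default and bankruptcy are ruled out by construction.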

Both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow questions about insolvency and illiquidity to be answered. They did not allow such questions to be asked.

It is clear that, when searching for an appropriate simplification to address the intractable mess of modern market economies, the starting point of ‘no markets’, that is, autarky or no trade, is a much better one than that of ‘complete markets’. Goods and services that are potentially tradable are indexed by time, place and state of nature or state of the world. Time is a continuous variable, meaning that for complete markets along the time dimension alone, there would have to be rather more markets for future delivery (infinitely many in any time interval, no matter how small) than you can shake a stick at. Location likewise is a continuous variable in a 3-dimensional space. Again rather too many markets. Add uncertainty (states of nature or states of the world), never mind private or asymmetric information, and ‘too many potential markets’, if I may ruin the wonderful quote from Amadeus attributed to Emperor Joseph II, comes to mind. If any market takes a finite amount of resources (however small) to function, complete markets would exhaust the resources of the universe.
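A back-of-the-envelope count (my illustration, not the column’s) makes the point. With $K$ physical commodities, $T$ delivery dates, $L$ locations and $S$ states of the world, complete markets require a separate market for each of the

\[ K \times T \times L \times S \]

date-location-state-indexed goods. Treat time and location as the continuous variables they are, and the required number of markets is not merely astronomically large but uncountably infinite - so any fixed resource cost per market, however small, is fatal.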

Beyond this simple ‘impossibility of complete markets’ proposition, there is the deeper point that the assumption of complete markets in most of the New Classical and New Keynesian macroeconomics assumes away the problem of contract enforcement. This problem is especially acute in trade over time or intertemporal trade, where the net value to each party to a contract of fulfilling the terms of the contract varies over time and can change sign. In a world with selfish, rational, opportunistic agents, able and willing to lie and deceive, only a small set of voluntary transactions will ever be observed, relative to the universe of all potentially feasible transactions.

The first set of voluntary exchange-based transactions we are likely to see consists of self-enforcing contracts - those based on long-term relationships, repeated interactions and trust. There are some of those, but not too many. The second set consists of those voluntarily-entered-into contracts that are not self-enforcing (say because interactions between the same sets of agents are infrequent and market participants have a degree of anonymity that prevents the use of reputation as a self-enforcement mechanism) but are instead enforced by some external agent or third party, often the state, sometimes the Mafia (sometimes it’s hard to tell who is who). Third-party enforcement of contracts is again often complex and costly, which is why it covers relatively few contracts. It requires that the terms of the contract and the contingencies it contains be observable and verifiable by the third party. Again, only a limited set of exchanges can be supported this way.

The conclusion, boys and girls, should be that trade - voluntary exchange - is the exception rather than the rule and that markets are inherently and hopelessly incomplete. Live with it and start from that fact. The benchmark is no trade - pre-Friday Robinson Crusoe autarky. For every good, service or financial instrument that plays a role in your ‘model of the world’, you should explain why a market for it exists - why it is traded at all. Perhaps we shall get somewhere this time.

The Auctioneer at the end of time

In both the New Classical and New Keynesian approaches to monetary theory (and to aggregative macroeconomics in general), the strongest version of the efficient markets hypothesis (EMH) was maintained. This is the hypothesis that asset prices aggregate and fully reflect all relevant fundamental information, and thus provide the proper signals for resource allocation. Even during the seventies, eighties, nineties and noughties before 2007, the manifest failure of the EMH in many key asset markets was obvious to virtually all those whose cognitive abilities had not been warped by a modern Anglo-American Ph.D. education. But most of the profession continued to swallow the EMH hook, line and sinker, although there were influential advocates of reason throughout, including James Tobin, Robert Shiller, George Akerlof, Hyman Minsky, Joseph Stiglitz and the behavioural approaches to finance. The influence of the heterodox approaches from within macroeconomics and from other fields of economics on mainstream macroeconomics - the New Classical and New Keynesian approaches - was, however, strictly limited.

In financial markets, and in asset markets, real and financial, in general, today’s asset price depends on the view market participants take of the likely future behaviour of asset prices. If today’s asset price depends on today’s anticipation of tomorrow’s price, and tomorrow’s price likewise depends on tomorrow’s expectation of the price the day after tomorrow, etc. ad nauseam, it is clear that today’s asset price depends in part on today’s anticipation of asset prices arbitrarily far into the future. Since there is no obvious finite terminal date for the universe (few macroeconomists study cosmology in their spare time), most economic models with rational asset pricing imply that today’s price depends in part on today’s anticipation of the asset price in the infinitely remote future.
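The recursion can be made explicit with a standard present-value pricing condition (a textbook formula, not one stated in the column). If $p_t$ is today’s price, $d_{t+1}$ tomorrow’s dividend and $r$ a constant required rate of return, then

\[ p_t = \frac{1}{1+r}\,E_t\bigl[d_{t+1} + p_{t+1}\bigr], \]

and iterating forward $T$ periods (using the law of iterated expectations) gives

\[ p_t = \sum_{j=1}^{T} (1+r)^{-j}\,E_t\,d_{t+j} \;+\; (1+r)^{-T}\,E_t\,p_{t+T}. \]

Letting $T$ go to infinity, today’s price contains the term $\lim_{T\to\infty}(1+r)^{-T}E_t\,p_{t+T}$ - precisely the expectation about the infinitely remote future referred to above.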

What can we say about the terminal behaviour of asset price expectations? The tools and techniques of dynamic mathematical optimisation imply that, when a mathematical programmer computes an optimal programme for some constrained dynamic optimisation problem he is trying to solve, it is a requirement of optimality that the influence of the infinitely distant future on the programmer’s criterion function today be zero.
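In a standard Ramsey-type problem (a textbook example, not one drawn from the column) - maximise $\sum_{t=0}^{\infty}\beta^{t}u(c_t)$ subject to $k_{t+1} = f(k_t) - c_t$ - that requirement takes the concrete form

\[ \lim_{T\to\infty} \beta^{T} u'(c_T)\,k_{T+1} = 0 , \]

which says that the discounted shadow value of the capital carried into the infinitely distant future must vanish along an optimal programme.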

And then a small miracle happens. An optimality criterion from a mathematical dynamic optimisation approach is transplanted, lock, stock and barrel, to the behaviour of long-term price expectations in a decentralised market economy. In the mathematical programming exercise it is clear where the terminal boundary condition in question comes from. The terminal boundary condition that the influence of the infinitely distant future on asset prices today vanishes, is a ‘transversality condition’ that is part of the necessary and sufficient conditions for an optimum. But in a decentralised market economy there is no mathematical programmer imposing the terminal boundary conditions to make sure everything will be all right.

The common practice of solving a dynamic general equilibrium model of a market economy (often a competitive one) by solving an associated programming problem - an optimisation problem - is evidence of the fatal confusion in the minds of much of the economics profession between shadow prices and market prices, and between transversality conditions, which are an integral part of the solution to an optimisation problem, and the long-term expectations that characterise the behaviour of decentralised asset markets. The efficient markets hypothesis assumes that there is a friendly auctioneer at the end of time - a God-like father figure - who makes sure that nothing untoward happens with long-term price expectations or (in a complete markets model) with the present discounted value of terminal asset stocks or financial wealth.

What this shows, not for the first time, is that models of the economy that incorporate the EMH - and this includes the complete markets core of the New Classical and New Keynesian macroeconomics - are not models of decentralised market economies, but models of a centrally planned economy.

The friendly auctioneer at the end of time, who ensures that the right terminal boundary conditions are imposed to preclude, for instance, rational speculative bubbles, is none other than the omniscient, omnipotent and benevolent central planner. No wonder modern macroeconomics is in such bad shape. The EMH is surely the most notable empirical fatality of the financial crisis. By implication, the complete markets macroeconomics of Lucas, Woodford et al. is the most prominent theoretical fatality. The future surely belongs to behavioural approaches relying on empirical studies of how market participants learn, form views about the future and change these views in response to changes in their environment, peer group effects etc. Confusing the equilibrium of a decentralised market economy, competitive or otherwise, with the outcome of a mathematical programming exercise should no longer be acceptable.

So, no Oikomenia, there is no pot of gold at the end of the rainbow, and no Auctioneer at the end of time.

Linearise and trivialise

If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave. So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved. This was achieved by completely stripping the model of its non-linearities and by effecting the transubstantiation of complex convolutions of random variables and non-linear mappings into well-behaved additive stochastic disturbances.

Those of us who have marvelled at the non-linear feedback loops between asset prices in illiquid markets and the funding illiquidity of financial institutions exposed to these asset prices through mark-to-market accounting, margin requirements, calls for additional collateral etc. will appreciate what is lost by this castration of the macroeconomic models. Threshold effects, critical mass, tipping points, non-linear accelerators - they are all out of the window. Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive.

Technically, the non-linear stochastic dynamic models were linearised (often log-linearised) at a deterministic (non-stochastic) steady state. The analysis was further restricted by only considering forms of randomness that would become trivially small in the neighbourhood of the deterministic steady state. Linear models with additive random shocks we can handle - almost!
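As an illustration of what this step involves (a standard textbook example, not a model from the column), take the consumption Euler equation

\[ 1 = \beta\,E_t\!\left[\left(\frac{c_{t+1}}{c_t}\right)^{-\sigma} R_{t+1}\right]. \]

Log-linearised around a deterministic steady state with $\beta\bar{R} = 1$, and writing hats for log-deviations from that steady state, it collapses to

\[ \hat{c}_t = E_t\,\hat{c}_{t+1} - \frac{1}{\sigma}\,E_t\,\hat{r}_{t+1}, \]

a linear relation in log-deviations - exactly the tractable form the profession knew how to handle, and exactly the form in which thresholds, tipping points and state-dependent feedbacks can no longer appear.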

Even this was not quite enough to get going, however. As pointed out earlier, models with forward-looking (rational) expectations of asset prices will be driven not just by conventional, engineering-type dynamic processes where the past drives the present and the future, but also in part by past and present anticipations of the future. When you linearise a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner. There is no ‘bounded instability’ in such models. The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out, in the linearised model, the explosive solution trajectories. What they were left with was something that, following an exogenous random disturbance, would return to the deterministic steady state pretty smartly. No L-shaped recessions. No processes of cumulative causation and bounded but persistent decline or expansion. Just nice V-shaped recessions.
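A minimal example shows the mechanism (my illustration, not the column’s). Consider the scalar forward-looking model

\[ x_t = \lambda\,E_t\,x_{t+1} + \varepsilon_t, \qquad |\lambda| < 1, \]

with $\varepsilon_t$ a white-noise shock. Its general solution is $x_t = \varepsilon_t + B_t$, where the ‘bubble’ component satisfies $E_t B_{t+1} = \lambda^{-1} B_t$ and therefore explodes in expectation. Ruling out explosive paths - the standard DSGE practice described above - selects the unique fundamental solution $x_t = \varepsilon_t$: every disturbance dies out immediately, and bounded-but-persistent cumulative processes simply cannot occur.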

There actually are approaches to economics that treat non-linearities seriously. Much of this work is numerical - analytical results of a policy-relevant nature are few and far between - but at least it attempts to address the problems as they are, rather than as we would like them to be, lest we be asked to venture outside the range of issues we can address with the existing toolkit.

The practice of removing all non-linearities and most of the interesting aspects of uncertainty from the models that were then let loose on actual numerical policy analysis was a major step backwards. I trust it has been relegated to the dustbin of history by now in those central banks that matter.

Conclusion

Charles Goodhart, who was fortunate enough not to encounter complete markets macroeconomics and monetary economics during his impressionable, formative years, but only after he had acquired some intellectual immunity, once said of the Dynamic Stochastic General Equilibrium approach, which for a while was the staple of central banks’ internal modelling: “It excludes everything I am interested in”. He was right. It excludes everything relevant to the pursuit of financial stability.

The Bank of England in 2007 faced the onset of the credit crunch with too much Robert Lucas, Michael Woodford and Robert Merton in its intellectual cupboard. A drastic but chaotic re-education took place and is continuing.

I believe that the Bank has by now shed the conventional wisdom of the typical macroeconomics training of the past few decades. In its place is an intellectual potpourri of factoids, partial theories, empirical regularities without firm theoretical foundations, hunches, intuitions and half-developed insights. It is not much, but knowing that you know nothing is the beginning of wisdom.

Editors’ Note: This was first posted on Buiter’s blog Maverecon on 3 February 2009.
