Artificial intelligence as a central banker

Artificial intelligence, such as the Bank of England Bot, is set to take over an increasing number of central bank functions. This column argues that the increased use of AI in central banking will bring significant cost and efficiency benefits, but also raise important concerns that are so far unresolved.

Artificial intelligence (AI) is increasingly useful for central banks. While it may be used only in low-level roles today, technological advances and cost savings will likely embed AI ever deeper into core central bank functions. Perhaps each central bank will have its own AI engine, maybe a future ‘BoB’ (the Bank of England Bot).

What will be the impact of BoB and its counterparts?

BoB could today, or soon, help with many central bank tasks, such as information gathering, data analysis, forecasting, risk management, financial supervision, and monetary policy analysis. 

The technology is mostly here; what prevents adoption are cultural, political, and legal factors. Perhaps most important is institutional inertia. However, the considerable cost savings from the use of AI will likely overcome most objections.

In some areas of central banking, BoB will be particularly valuable, such as in crisis response. If the central bank is facing a liquidity crisis and has hours or days to respond, speed of information gathering and analysis is critical. Having a standby AI engine that is expert in situational assessment is invaluable, freeing the human decision-makers from data crunching so they can make timely decisions, advised by BoB.

At the same time, the increased use of AI raises critical questions for economic policymakers (Agrawal et al. 2018). Within central banking, four questions are particularly important (Danielsson et al. 2019).

1. Procyclicality

BoB, when used for financial supervision, will favour homogeneous best-of-breed methodologies and standardised processes, imposing an increasingly homogeneous view of the world on market participants.

That amplifies the procyclical impact of the banks’ own AI engines, which all have the same objective: profit maximisation subject to constraints. ‘Better’ solutions are closer to the optimum and, consequently, closer to each other. 

The consequence is crowded trades because of crowded perception and action. When new information arrives, all AI engines, in both the private and public sectors, will update their models similarly. All will see risk in the same way, and the banks’ AI will want to buy/sell the same assets.
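
A minimal simulation makes the crowding mechanism concrete. The sketch below is purely illustrative - the assets, parameters, and mean-variance objective are assumptions, not any bank's actual system. Several banks estimate expected returns with some noise and feed them into the same optimiser; as the shared model gets 'better' (less estimation noise), their portfolios converge on the optimum and on each other.

```python
# Illustrative only: banks running the same 'best-of-breed' mean-variance
# optimiser on near-identical risk estimates end up holding near-identical
# portfolios, so news moves them all the same way - a crowded trade.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_banks = 10, 5

mu = rng.normal(0.05, 0.02, n_assets)                 # 'true' expected returns
A = rng.normal(size=(n_assets, n_assets))
sigma = A @ A.T / n_assets + 0.01 * np.eye(n_assets)  # shared covariance model

def optimal_weights(mu_hat, gamma=5.0):
    """Unconstrained mean-variance optimum: w = (1/gamma) * Sigma^{-1} mu."""
    w = np.linalg.solve(sigma, mu_hat) / gamma
    return w / np.abs(w).sum()                        # normalise gross exposure

# 'Better' models mean less estimation noise. As the noise shrinks, each
# bank's portfolio converges on the same optimum - and on every other bank's.
for noise in (0.02, 0.005, 0.001):
    ws = [optimal_weights(mu + rng.normal(0, noise, n_assets))
          for _ in range(n_banks)]
    min_corr = min(np.corrcoef(ws[i], ws[j])[0, 1]
                   for i in range(n_banks) for j in range(i + 1, n_banks))
    print(f"estimation noise {noise:.3f} -> min portfolio corr {min_corr:.2f}")
```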

The result is procyclicality - short-term stability and high profits but at the cost of increased systemic risk.

2. Unknown-unknowns 

One of the most challenging parts of the central bankers’ job is dealing with ‘unknown-unknowns’. Vulnerabilities, especially of the dangerous systemic type, tend to emerge on the boundaries between areas of responsibility - the silos. Subprime mortgages, for example, embedded products with hidden liquidity guarantees into structured credit, crossing multiple jurisdictions, agencies, institutional categories, and countries. These are the areas where humans and AI alike are least likely to look.

Current AI can easily be trained on events that have happened (‘known-knowns’). BoB can perhaps be trained on simulated scenarios (‘known-unknowns’).

However, our financial system is, for all practical purposes, infinitely complex. Not only that, but every action taken by the authorities and the private sector changes the system - the complexity of the financial system is endogenous, a consequence of Goodhart’s (1975) law: “[a]ny observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”. 
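
A toy example of Goodhart's law at work (all numbers are hypothetical, chosen only to illustrate the mechanism): a metric that predicts bank risk well in the data degrades as soon as it becomes a control target, because banks shift risk into channels the metric does not capture - and the tighter the control, the weaker the regularity.

```python
# Toy Goodhart's-law sketch (hypothetical numbers): a metric that predicts
# risk well degrades as a predictor once it becomes a control target,
# because banks shift risk into channels the metric does not see.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
visible = rng.uniform(0, 1, n)          # risk-taking the metric captures
hidden = rng.uniform(0, 1, n)           # risk-taking it does not
risk = 0.8 * visible + 0.2 * hidden     # each bank's desired total risk

print(f"no cap:  corr(metric, risk) = {np.corrcoef(visible, risk)[0, 1]:.2f}")

for cap in (0.5, 0.3, 0.1):
    # Banks comply with the cap on the reported metric, but rebuild their
    # desired total risk through the unmeasured channel.
    reported = np.minimum(visible, cap)
    displaced = np.maximum(visible - cap, 0)   # risk pushed out of sight
    total = 0.8 * reported + 0.2 * (hidden + 4 * displaced)  # equals risk
    print(f"cap {cap}: corr(metric, risk) = "
          f"{np.corrcoef(reported, total)[0, 1]:.2f}")
```

The tighter the cap, the less the reported metric says about actual risk, even though total risk in the system is unchanged.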

BoB will, by definition, miss the unknown-unknowns, just like its human counterparts. After all, the machine can only be trained on events that have already happened or are generated from simulations of fully specified model economies. For exactly the same reason, it will be very good at dealing with known-unknowns, much better than its human counterparts. Yet it is the unknown-unknowns that cause crises.

3. Trust

BoB will likely keep the financial system safe most of the time, probably much better than a purely human-staffed supervisory system, and we will consequently rely on and trust BoB ever more. That can undermine both contingency planning and preventative regulatory measures, while creating a false sense of security that may well culminate in a Minsky moment: perceived low risk that causes crises (Danielsson et al. 2018).

In the 1980s, an AI engine called EURISKO used a cute trick to defeat all its human competitors in a naval war game. It simply sank its own slowest ships so that its naval convoy became faster and more manoeuvrable than the competitors’, ensuring victory (for a list of similar examples, see Krakovna 2018).

This example crystallises the trust problems facing BoB. How do we know it will do the right thing? A human admiral doesn’t have to be told they can’t sink their own ships. They just know; it’s an ingrained part of their humanity. BoB has no humanity. If it is to act autonomously, humans will first have to fix its objectives. But a machine with fixed objectives, let loose on an infinitely complex environment, will behave in unexpected ways (Russell 2019). BoB will run into cases where it takes critical decisions in a way that no human would. Humans can adjust their objectives. BoB cannot.

How then can we trust BoB? Not in the same way we trust a human. Trust in human decision-making comes from a shared understanding of values and a shared understanding of the environment. BoB has no values, only objectives. And its understanding of the environment will not necessarily be intelligible to humans. Sure, we can run hypotheticals past BoB and observe its decisions, but we cannot easily ask for an explanation (Joseph 2019).

Does this mean we will only use AI in central banks for simple functions? Unlikely. Trust creeps up on us. We suspect most people would have baulked at managing their personal finances online 20 years ago. Five years ago, most would not have trusted self-driving cars. Today, we have no problem entrusting our lives to AI-flown aircraft and AI-controlled surgical robots. 

As AI proves its value to central banks, they will start trusting it, facilitating BoB’s career progression. After all, it will be seen as doing a great job much more cheaply than human central bankers.

Then, if a crisis happens and we see the AI doing something unacceptable - perhaps the central bank version of sinking its own slowest ships - we might want to hit the kill switch. Except it will no longer be that simple. Its growing reputation will have reduced the incentives for other contingency measures, and thereby our ability to interfere with BoB’s behaviour. Switching it off might endanger critical systems.

4. Optimising against the system

The final area of concern is how BoB would deal with ‘malicious actors’: those taking unacceptably high risk, those creating instability to profit, or even those whose main objective is to damage the financial system.

Here, BoB is at a disadvantage to its opponents’ AI engines. It faces what is, in effect, an infinitely complex computational problem, as it has to monitor and control the entire system.

The opponent only has to identify local loopholes that can be exploited, and so will always have the advantage. 

This advantage is amplified by AI’s intrinsic rationality: its objectives drive its actions, which makes BoB predictable and gives its adversaries an edge. Fixed objectives paired with a complex environment produce unexpected behaviour whether or not AI is involved. Rational behaviour within a well-defined environment, however, allows adversaries to reverse-engineer BoB’s objectives through repeated interactions.

Certainly, countermeasures already exist. The standard defence is for AI to react randomly in interactions with human beings or other AI, limiting their ability to game it. This mimics the natural defence provided by humans - they create randomness, nuance, and interpretation, which vary across individuals and time.

There are at least two reasons why such countermeasures would not work in practice for BoB.

First, randomised responses would have to be programmed into the central bank AI, which would be unacceptable except in special cases. Regulations and supervision need to be transparent and fair. 

Second, randomisation requires the AI designers to specify a distribution for BoB’s actions, and this distribution can be reverse-engineered as regulated entities observe repeated decisions.
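
A toy interaction shows how little protection randomisation buys (all numbers are hypothetical): a supervisor approves applications when a risk score plus Gaussian noise falls below a hidden threshold, and an adversary that can probe repeatedly recovers both the threshold and the noise scale from approval frequencies alone.

```python
# Toy illustration (hypothetical numbers): a supervisor randomises its
# decisions around a hidden risk threshold, but an adversary that can
# probe repeatedly recovers the threshold and the noise scale.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD, NOISE = 0.62, 0.05          # the supervisor's hidden parameters

def supervisor_approves(risk_score):
    """Approve if the noisy score falls below the hidden threshold."""
    return risk_score + rng.normal(0, NOISE) < THRESHOLD

# Adversary: probe a grid of risk scores, many times each, and record
# the empirical approval frequency at each probe point.
probes = np.linspace(0.4, 0.8, 41)
rates = np.array([np.mean([supervisor_approves(p) for _ in range(500)])
                  for p in probes])

# The approval rate crosses 50% at the threshold; for Gaussian noise the
# 84% and 16% rates sit roughly one noise-sd either side of it.
est_threshold = probes[np.argmin(np.abs(rates - 0.50))]
est_noise = (probes[np.argmin(np.abs(rates - 0.16))]
             - probes[np.argmin(np.abs(rates - 0.84))]) / 2

print(f"estimated threshold {est_threshold:.2f} (true {THRESHOLD})")
print(f"estimated noise sd  {est_noise:.2f} (true {NOISE})")
```

The same logic extends to richer decision rules: repeated observation of inputs and outcomes is enough to fit a surrogate model of the supervisor.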

BoB’s innate rationality, coupled with demands for transparency and fair play, puts it at a disadvantage when it is used for supervision. Interestingly, constructive ambiguity is accepted in monetary policy, so this may be less of a problem there.

Conclusion

AI will become very helpful to central banks. Microprudential AI - ‘micro BoB’ - will reduce costs and increase efficiency, help with crisis response, and be highly successful on 999 days out of 1,000. 

AI will be similarly beneficial to monetary policy, handling data collection and policy forecasting, and improving the information flow to the monetary policy committees at a much lower cost than with current infrastructure.

It is in macroprudential regulation and crisis management that AI raises the most critical questions. BoB will increase procyclicality and the likelihood of Minsky moments, and cannot be trusted. It will facilitate optimisation against the system.

Macro BoB needs a kill switch, but probably will not get one.

The use of AI for control purposes can increase systemic risk, reducing day-to-day volatility while fattening the tails.

Authors’ note: Any opinions and conclusions expressed herein are those of the authors and do not necessarily represent the views of the Bank of Canada. We thank the Economic and Social Research Council (UK) [grant number ES/K002309/1] and the Engineering and Physical Sciences Research Council (UK) [grant number EP/P031730/1] for their support.

References

Agrawal, A, J Gans and A Goldfarb (2018), “Economic policy for artificial intelligence”, VoxEU.org, 8 August.

Chakraborty, C and A Joseph (2017), “New machines for the Old Lady”, BankUnderground.co.uk, 10 November.

Danielsson, J, R Macrae and A Uthemann (2019), “Artificial intelligence and systemic risk”, SSRN.

Danielsson, J, M Valenzuela and I Zer (2018), “Low risk as a predictor of financial crises”, VoxEU.org, 26 March.

Goodhart, C (1975), “Problems of monetary management: The UK experience”, Papers in Monetary Economics, Reserve Bank of Australia.

Joseph, A (2019), “Opening the machine learning black box”, BankUnderground.co.uk, 24 May.

Krakovna, V (2018), “Specification gaming examples in AI”, blog post, 2 April. 

Russell, S (2019), Human Compatible, London: Allen Lane.
