The impact of machine learning and AI on the UK economy

David Bholat 02 July 2020


Cybersecurity. Climate change. And now coronavirus. 

Some of the most critical issues impacting humans today concern our interdependence with the non-human environment. 

A recent virtual event addressed another such issue: the potential impact machines, imbued with artificial intelligence, may have on the economy and the financial system. The event was organised by the Bank of England, in collaboration with CEPR and the Brevan Howard Centre for Financial Analysis at Imperial College. What follows is a summary of some of the recorded presentations. The full catalogue of videos is available on the Bank of England’s website.

The history and future of AI

In his presentation, Stuart Russell (University of California, Berkeley), author of the leading textbook on artificial intelligence (AI), gives a broad historical overview of the field since its emergence in the 1950s, followed by insight into more recent developments. The current AI epoch, beginning around 2010, has been built on the marriage of large datasets and powerful computers with algorithmic advances, notably deep learning. In deep learning, the relationship between an outcome variable (say, a financial crisis) and its predictors is learned not directly but through multiple nested functions that weight and transform the predictors (for example, into interaction terms and polynomials). The best functional form is learned from the data, often in highly opaque yet surprisingly effective ways. In part because of deep learning, computers now often exceed humans on tasks such as object recognition, which is important, for example, in the early detection of cancers.
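The idea of nested functions can be made concrete with a small sketch (an illustrative toy, not from Russell's presentation): each layer applies a weighted sum of its inputs followed by a nonlinear transformation, and composing layers lets the model capture interactions and higher-order terms that a single linear equation would miss.

```python
import math
import random

def layer(inputs, weights, biases):
    # one "nested function": a weighted sum of the inputs, then a nonlinearity
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)

# toy predictors for one observation (e.g. credit growth, yield-curve slope)
x = [0.8, -1.2]

# two layers with random weights: the output is a nested composition
# f2(f1(x)) of weighted, nonlinearly transformed predictors
w1 = [[random.gauss(0, 1) for _ in x] for _ in range(3)]
b1 = [0.0] * 3
h = layer(x, w1, b1)            # hidden representation

w2 = [[random.gauss(0, 1) for _ in h]]
b2 = [0.0]
out = layer(h, w2, b2)[0]       # a score in (-1, 1)

print(round(out, 3))
```

Training, omitted here, would adjust the weights so the composed function fits the data; each extra layer wraps another function around the previous ones.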

Looking to the future, Russell notes a number of issues on the horizon. Like many technologies, AI could be a mixed blessing. In the near term, we are likely to see a number of improved consumer goods, including highly intelligent computer assistants with advanced information extraction and question-answering capabilities. At the same time, AI-generated deepfakes and lethal autonomous weapons could further undermine social trust and destabilise political order. In the longer term, we need to avoid sleepwalking into superintelligence (Bostrom 2014), where AI has an absolute advantage over all forms of human intelligence. The risk is not so much that AI will pursue its own objectives, but that it will pursue its humanly defined objectives too single-mindedly. Instead, we need AI which is cognisant of the fact that human preferences are multiple, changing, and often difficult to articulate ex ante.

On the technical front, Russell envisions that future advances will come from combining deep learning with Bayesian reasoning to better leverage prior expert knowledge. Provocatively, he suggests Big Data could turn out to be the new ‘snake oil’. As algorithms improve, far less data will be needed for machines to learn, analogous to the way humans can often learn to recognise instances of an object after seeing just a single example.

‘Fourth Industrial Revolution’: Fact or fiction?

While some commentators claim that we are in the midst of a fourth Industrial Revolution, others, pointing to more than a decade of slow growth and weak productivity gains across advanced economies, argue that we are in an era of secular stagnation. How do we square this circle?

This question is at the heart of presentations organised under the title “The Fourth Industrial Revolution: Fact or Fiction?”. As an economic historian, Nick Crafts (University of Sussex) offers a unique, long-run perspective. According to Crafts, prior industrial revolutions have been premised on revolutions in the legal, organisational, political and technical conditions necessary for generating and disseminating new ideas. The first Industrial Revolution was arguably made possible by greater freedom of speech, the spread of the printing press and the postal service, as well as the establishment of learned societies. The second Industrial Revolution involved the routinisation of research and development within corporations, the professionalisation of the sciences at universities, and the beginnings of telecommunication. The third Industrial Revolution had information technologies at its core. 

Similarly, AI potentially represents the “invention of a new method of invention” (Cockburn et al. 2019). This is because AI can search for new patterns and combinations in high-dimensional spaces. Recently, for example, scientists at MIT discovered a new antibiotic by computationally combining chemical compounds at a scale unfeasible through physical experimentation.1

However, Crafts cautions against placing too much stock in AI on its own to deliver productivity improvements and economic growth. Like the great man theory of history, the great invention interpretation of economic history is wanting: in prior industrial revolutions, great inventions like the steam engine did not account for the majority of total-factor productivity growth. In brief, technologies like AI are endogenous developments rather than exogenous shocks. The most direct way to catalyse AI-driven economic growth may be to address the market power of incumbent firms and promote greater economic competition.

Labour market and real economy implications

Employment effects are perhaps the most frequently debated aspect of AI. In particular, many worry that widespread automation will lead to permanently higher levels of unemployment. These concerns are not new. Indeed, they have recurred at major turning points in the history of capitalism. For instance, Marx (1867) famously speculated that the ‘forces of production’ (capital goods) would become so powerfully productive that the dominant ‘relations of production’ (employment) would be rendered anachronistic, with displaced and impoverished workers agitating for a new mode of production and distribution.

The ‘relations of production’, however, have proven much stickier than Marx imagined, at least in the richest countries. As Alan Manning (London School of Economics) explains, this is in part because job losses in specific sectors have historically been counter-balanced by broad-based gains in aggregate real income as new technologies create higher-quality and lower-priced goods and services. Higher disposable incomes boost demand for new products, which in turn boosts labour demand in the sectors producing them. Overall, second-round gains have dominated first-round losses. Looking at our current situation, Manning observes that some of the direst predictions made over the past decade about the impact of automation on employment have not come to pass. Still, he cautions against complacency, arguing that there is a strong case for public policy to close the gap between the supply of digital skills and the demand for them, and to reduce income inequality.

The impact of machine learning on financial services

One sector where AI is making inroads is financial services. Indeed, a recent survey of UK financial firms revealed that the majority of respondents are already using machine learning (Bank of England and FCA 2019). Furthermore, there is keen industry interest in automating various operational aspects of financial services. These applications hold the promise of delivering efficiencies in a sector where some studies suggest unit costs have remained stable for very long periods of time (Philippon 2015).

Yet these efficiency gains may not immediately translate into increased profitability. On the contrary, the cost curve for firms making substantial investments in AI, and their supporting ecosystem, could be upward-sloping, at least in the short run. And if these strategic change projects fail, this amplifies the operational risks financial firms run (Bank for International Settlements 2018). In her presentation, Louise Herring (QuantumBlack) shares her insights into how firms can ensure a successful analytics transformation, drawing on her experience as a consultant for financial services companies where AI has been successfully embedded.

New competitive dynamics created by machine learning and AI

While many applications of AI in financial services may be beneficial, others may be less benign. In his presentation, Giacomo Calzolari (European University Institute and CEPR) details one of the potential downsides. 

Trading algorithms in financial markets have been around for decades. Historically, these algorithms operated according to a set of pre-specified rules: for example, to sell if the price of a security fell below a certain threshold. More recently, advances in reinforcement learning have enabled more parsimonious algorithms that are simply given the objective to maximise profits, and then learn over time how best to do so on the basis of observing data on bid-ask prices. As these types of algorithms become more common in financial markets, computational agents are essentially learning how other computational agents behave in order to account for this behaviour when optimising their own.

Calzolari reports what happens in a computationally simulated experiment when one agent is programmed to randomly undercut its competitors. The other agent responds by cutting its own price further. Ultimately, both agents earn lower profits, and over time they learn that it is best for their own profitability to set prices that are closely aligned with one another. This form of autonomously learned tacit collusion could become a key challenge for policymakers aiming to ensure financial markets are fair and effective.
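A stripped-down sketch conveys the learning mechanics (a toy, not Calzolari's actual experiment, which gives agents memory of past prices; that memory is what allows tacit collusion to emerge): two epsilon-greedy learners repeatedly choose prices from a small grid, and each learns the short-run value of undercutting the other.

```python
import random

random.seed(1)
PRICES = [1, 2, 3, 4, 5]                  # discrete price grid

def profits(p1, p2):
    # toy demand: the cheaper firm captures the whole market; ties split it
    if p1 < p2:
        return p1 * 1.0, 0.0
    if p2 < p1:
        return 0.0, p2 * 1.0
    return p1 * 0.5, p2 * 0.5

# one value estimate per own price (a stateless, bandit-style learner)
q1 = {p: 0.0 for p in PRICES}
q2 = {p: 0.0 for p in PRICES}

def choose(q, eps):
    if random.random() < eps:
        return random.choice(PRICES)      # explore a random price
    return max(q, key=q.get)              # exploit the best-looking price

ALPHA = 0.1                               # learning rate
for t in range(20000):
    eps = max(0.01, 1.0 - t / 10000)      # decaying exploration
    a1, a2 = choose(q1, eps), choose(q2, eps)
    r1, r2 = profits(a1, a2)
    q1[a1] += ALPHA * (r1 - q1[a1])       # update toward observed profit
    q2[a2] += ALPHA * (r2 - q2[a2])

print(max(q1, key=q1.get), max(q2, key=q2.get))
```

In this memoryless version the undercutting incentive pushes prices down the grid, illustrating the first half of the story; letting each agent condition its choice on the last observed prices is what allows richer simulations to learn the aligned, collusive pricing described above.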

Ethical and consumer conduct issues raised by AI

Imperfect competition induced by AI would hurt all consumers. But it is also possible that AI will benefit some consumers, while harming others. These potential distributional consequences are explored by Tarun Ramadorai (Imperial College and CEPR). 

Ramadorai’s research uses US mortgage data to explore what happens when defaults are predicted using a traditional (logit) model versus a machine learning model (a random forest). Variables in both models include borrower income and the loan-to-value ratio of the property purchased. The race of the borrower is not used to train either model but is used ex post to evaluate model decisions. Ramadorai reports that the random forest produces more pronounced predicted probabilities of default for Black and Hispanic borrowers than the logit model does. As machine learning models are adopted by firms, financial conduct regulators will need to monitor whether they remedy or reinforce historical inequities in the price and quantity of credit allocated to different segments of the borrowing public.
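The ex-post evaluation step can be sketched as follows (with entirely hypothetical numbers and group labels, not Ramadorai's data): the protected attribute is never an input to either model, but it is used afterwards to compare the two models' predicted default probabilities across groups.

```python
from statistics import mean

# hypothetical predicted default probabilities from two models for the
# same borrowers; 'group' was NOT an input to either model and is used
# only for this ex-post evaluation
borrowers = [
    {"group": "A", "logit": 0.10, "forest": 0.06},
    {"group": "A", "logit": 0.12, "forest": 0.08},
    {"group": "B", "logit": 0.14, "forest": 0.22},
    {"group": "B", "logit": 0.16, "forest": 0.26},
]

def avg_by_group(model):
    # average predicted default probability per group for one model
    groups = sorted({b["group"] for b in borrowers})
    return {g: mean(b[model] for b in borrowers if b["group"] == g)
            for g in groups}

for model in ("logit", "forest"):
    avgs = avg_by_group(model)
    gap = avgs["B"] - avgs["A"]
    print(model, {g: round(v, 2) for g, v in avgs.items()}, "gap:", round(gap, 2))
```

In these made-up numbers the 'forest' scores show a wider gap between groups than the 'logit' scores, which is the kind of amplification the presentation warns regulators to monitor.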

Machine learning, AI and financial stability

What about financial stability? As elsewhere, there is a healthy debate about the direction and magnitude of the effects of AI on financial stability. Some see machine learning as promoting financial stability by improving the predictive accuracy of firms’ risk models. At the same time, Jon Danielsson (London School of Economics) worries that if firms adopt the same best-performing models trained on similar data, this introduces a new set of positive cross-correlations into the financial system. Following a negative shock, firms could then all behave in the same way, selling off the same assets simultaneously and thereby amplifying the shock into a systemic crisis.

For Danielsson, the case for machine learning and AI in setting macro-prudential policy is particularly unclear. So far, AI has been most successful in games like Go, where the rules are fixed and the permissible moves by agents are extremely numerous but still finite. By contrast, the real economy is dynamic and full of ‘unknown unknowns’. Danielsson believes a computational macro-prudential agent, which he nicknames BoB (the Bank of England Bot), would perform poorly in situations without analogue in its training data. Humans, on the other hand, may be more agile under radical uncertainty. In other words, we might draw a distinction between thinking inside the (black) box and thinking outside the (black) box. Over time, AI may be able to learn the best way to calibrate existing policy instruments. But what AI is unlikely to devise is a set of new policy instruments in response to a crisis. And creating new policy instruments and frameworks is part of the art of central banking, as demonstrated during the Global Crisis when new regimes like resolution were developed.

Applying machine learning and AI in central banking and regulation

Whatever their limitations, machine learning models are still powerful additions to the central banking toolkit. They often outperform conventional models in prediction tasks because they can detect nonlinear patterns in data, and handle datasets with a large number of predictors. 

Gareth Ramsay (Bank of England) explains some of the ways the UK central bank is already using machine learning to support its objectives to maintain monetary and financial stability. For example, recent research related to the Bank’s price stability mandate uses sentiment indicators derived from newspaper text to forecast macroeconomic variables like inflation, output and employment; machine learning models generally outperform standard statistical ones (Kalamara et al. 2020). Turning to the Bank’s financial stability objective, recent research has used machine learning methods to predict the probability of financial crises two years in advance. If policymakers aimed to predict 80% of financial crises (limiting the false positive rate to 20%), then the best performing machine learning model reduces the error rate by 40% compared to a standard linear model (Bluwstein et al. 2020).
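The error-rate trade-off behind such claims can be sketched in a few lines (a toy with made-up scores, not the Bluwstein et al. data or method): given a model's predicted crisis probabilities, find the lowest threshold that catches a target share of true crises, then report the false positive rate that threshold implies.

```python
def fpr_at_recall(probs, labels, target_recall=0.8):
    """Lowest threshold achieving the target recall, and its false positive rate."""
    pairs = sorted(zip(probs, labels), reverse=True)   # highest scores first
    crises = sum(labels)
    calm = len(labels) - crises
    caught = flagged_calm = 0
    for p, y in pairs:
        if y == 1:
            caught += 1          # a crisis correctly flagged at threshold p
        else:
            flagged_calm += 1    # a calm episode falsely flagged
        if caught / crises >= target_recall:
            return p, flagged_calm / calm
    return 0.0, 1.0

# toy example: 5 crisis episodes (label 1) and 10 calm ones (label 0)
# with hypothetical predicted probabilities
probs  = [0.9, 0.8, 0.6, 0.5, 0.2, 0.7, 0.55, 0.4, 0.3,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

threshold, fpr = fpr_at_recall(probs, labels)
print(threshold, fpr)   # flags 80% of crises at threshold 0.5, FPR 0.2
```

Comparing the false positive rate (or overall error rate) of two models at the same recall target is one way to express the kind of error-rate reduction reported in the presentation.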

References

Bank of England and FCA (2019), “Machine learning in UK financial services”.

Bank for International Settlements (2018), “Implications of fintech developments for banks and bank supervisors”.

Bluwstein, K, M Buckmann, A Joseph, M Kang, S Kapadia, and Ö Simsek (2020), “Credit growth, the yield curve and financial crisis prediction: evidence from a machine learning approach,” Bank of England Staff Working Paper 848.

Bostrom, N (2014), Superintelligence: Paths, dangers, strategies, Oxford: Oxford University Press. 

Cockburn, I, R Henderson, and S Stern (2019), “The impact of artificial intelligence on innovation,” in Agrawal, A, J Gans, and A Goldfarb (eds), The economics of Artificial Intelligence: an agenda, Chicago: University of Chicago Press. 

Kalamara, E, A Turrell, C Redl, G Kapetanios, and S Kapadia (2020), “Making Text Count”, Bank of England Staff Working Paper 865.

Marx, K (1977[1867]), Capital Volume One, B Fowkes translator, Vintage. 

Philippon, T (2015), “Has the US finance industry become less efficient? On the theory and measurement of financial intermediation”, The American Economic Review 105(4): 1408-38.

Endnotes

1 "Powerful antibiotics discovered using machine learning for first time", The Guardian, 20 February 2020.


Senior Manager, Bank of England
