AI for international economists: Explosive growth in processing speed, Part 2 of 5

Richard Baldwin 11 December 2018




"It's amazing. You must try it". Can you remember when you discovered email? Or first heard about Facebook, or used WhatsApp for the first time? This sort of thing is happening to globalisation. 

In my last blog, I argued that the key trends in digitech can be summarised in four laws on the growth of computing power (Moore), the growth of data transmission (Gilder), and the growth in the usefulness of networks (Metcalfe and Varian). 

Taken together, these laws are transforming the world of work. They are bringing automation and globalisation to the service sector at a pace that is faster than policymakers and citizens realise. This is why it is important to understand them. 

The first – and by far the most famous – law that makes this explosive change possible was formulated by Gordon Moore. 

Moore’s Law

The semiconductor industry is filled with quirky misfits, but Moore – a co-founder of Intel, and so one of the pioneers who put the silicon into Silicon Valley – is not that type of character (see box). He'd agree that the Law is more important than the man. (Note: there are a number of different versions of it – even Moore himself altered it a couple of times. But the exact wording doesn’t matter as much as its implications).

Moore’s Law means that computers are constantly improving – and will continue to do so for the foreseeable future (which is five to ten years in this business). The most common formulation states that computing power doubles every 18 months. 

This growth rate has been sustained for about 50 years. The semiconductor industry’s latest flagship report predicts that it will continue until 2021 at least. 
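To get a feel for what sustained doubling means, here is a back-of-the-envelope sketch. It assumes the common 18-month doubling period quoted above; the exact period varies by formulation, so the numbers are illustrative, not precise.

```python
# Back-of-the-envelope: cumulative growth from doubling every 18 months.
# The 18-month period is the common formulation, not an exact constant.

def growth_factor(years, doubling_period_years=1.5):
    """Factor by which computing power grows over `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (10, 25, 50):
    print(f"{years} years -> roughly {growth_factor(years):,.0f}x")
```

Fifty years of doubling every 18 months multiplies computing power by roughly ten billion, which is why predictions that sounded absurd in 1965 came true.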

Gordon Moore 

For a man with more than $6 billion, the Presidential Medal of Freedom, and a famous law to his name, Gordon Moore was a slow starter. He was an indifferent student in high school and spent two years at the unglamorous San Jose State University before transferring to Berkeley, where he became the first member of his family to graduate from university. He earned a chemistry PhD at Caltech in just four years and soon started working on semiconductors under the guidance of William Shockley, co-inventor of the transistor.

Things did not go well at Shockley Semiconductor. Shockley was a difficult man to work for and his behaviour became increasingly erratic and autocratic after he won the 1956 Nobel Prize in Physics. Moore and seven other young researchers left to form their own company (Shockley called them the 'traitorous eight' and predicted they would fail). Each chipped in seed capital of $500, and with backing from Fairchild Camera and Instrument they started Fairchild Semiconductor Corporation in 1957. Moore was the R&D director. 

Moore published his famous Law in 1965 in an out-of-the-way magazine called Electronics. Here’s how he put it: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year…. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.” 

Even in this article, published more than half a century ago, Moore saw the future clearly. He knew better than most that integrated circuits – what we call computer chips – would revolutionise electronics by making computation faster and cheaper. He predicted that: 

“Integrated circuits will lead to such wonders as home computers, … automatic controls for automobiles, and personal portable communications equipment [and the] electronic wristwatch. … Integrated circuits will also switch telephone circuits and perform data processing.” 

In Understanding Moore's Law, David Brock points out that these claims sounded ridiculous in 1965. Computers were mostly room-sized mainframes, and IBM’s model cost more than $1 million in today’s dollars. Even smaller minicomputers were as expensive as houses. But Moore knew what exponential growth would do to costs and capacities, and he knew the physics and chemistry that would drive it. He understood technology and had the guts to draw what now seem to be obvious conclusions. 

Speculative to spectacular 

This rapid progress turns the unlikely into the ubiquitous. The first webcams, for example, were expensive and bulky. Thanks to Moore’s Law, they are now tiny and so cheap that they are standard in every laptop, tablet, and mobile phone. 

Moore’s Law is disruptive. It has helped make instant machine translation of spoken languages a reality. It makes machine learning commonplace, which in turn means firms can automate many tasks in the service sector. And it will help create the communications technology that will open service sector firms to global competition and global opportunities. The exponential growth in processing power is one of the key drivers of the globotics upheaval. 

It’s not really a Law

Moore’s Law is not a law of nature. You can even argue that it’s not really about technology. We can think of it as a commitment strategy for the electronics and software industries.

Companies invest millions of dollars to develop breakthrough software and telecommunication services that work on computer chips that don’t exist yet. Likewise, chipmakers invest hundreds of millions to design better chips in anticipation of the demand that will flow from next year's innovative software and telecommunication services.

Moore’s Law is a self-fulfilling prophecy. Chipmakers fulfil the promise of the Law by throwing increasing amounts of cash at the problem. It now takes 17 times more research hours to double processing speeds than it did in 1971. The sums at play are enormous. 

Nvidia, a specialist chipmaker, has spent more than $2 billion on research to develop a new chip that speeds up machine learning. Even in this world, that's a big bet. It would, for instance, pay half the price of a US Navy Nimitz Class nuclear aircraft carrier. And all this for a chip that makes machine learning 12 times faster. That only makes sense if the demand for faster chips is growing like mad.

This ‘Moore’s Law as coordinator’ thinking is formalised in the International Roadmap for Devices and Systems, formerly the International Technology Roadmap for Semiconductors, compiled by Paolo Gargini, a former executive at Intel, since 1991. The report, which aggregates information from industry associations in Europe, Japan, Taiwan and South Korea, is intended to coordinate expectations among the hundreds of manufacturers and suppliers. 

But will Moore’s Law go on forever? Obviously not.

Moore’s Law meets the Law of Diminishing Returns

Cramming more power on to a tiny computer chip is not easy. There are real questions as to whether computer chips can continue to double their power. "Moore's Law really is dead this time", claimed Peter Bright of Ars Technica in November 2016, "[it] has died at the age of 51 after an extended illness.” 

Six months later, Intel chief executive Brian Krzanich claimed that reports of the Law's death had been exaggerated:

“I’ve been in this industry for 34 years, and I’ve heard the death of Moore’s Law more times than anything else in my career. And I’m here today to really show you and tell you that Moore’s Law is alive and well and flourishing. I believe Moore’s Law will be alive well beyond my career, alive and well and kicking.”

But things can only get so small. The number of transistors per square inch has doubled approximately every 18 months since Richard Nixon was president. Can this go on forever? The answer must be no. 

The transistors in today’s microprocessors are about 14 nanometres wide. Bacteria are between 1,000 and 10,000 nanometres. The average virus is 100 nanometres, and individual atoms are a tenth of a nanometre. At that scale, the world becomes strange – quantum strange. 

In his 2015 report, Paolo Gargini wrote that, “even with super-aggressive efforts, we'll get to the 2–3 nanometre limit, where features are just 10 atoms across.” At that scale, electron behaviour is governed by quantum uncertainties that would make transistors hopelessly unreliable. Gargini guesses that this limit will be reached in the 2020s. 
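The same doubling arithmetic shows how little headroom is left. The sketch below takes the figures in the text (14 nanometre features today, a roughly 2 nanometre quantum floor) and assumes transistor density scales with the inverse square of feature size; it is a rough illustration, not an industry forecast.

```python
import math

# Rough headroom estimate: how many density doublings remain between
# today's 14 nm features and a ~2 nm quantum limit? Density scales as
# 1/(feature size)^2, so each doubling shrinks features by sqrt(2).

feature_now_nm = 14.0
feature_limit_nm = 2.0

density_gain = (feature_now_nm / feature_limit_nm) ** 2   # remaining density headroom
doublings_left = math.log2(density_gain)                  # doublings to reach the limit
years_left = doublings_left * 1.5                         # at 18 months per doubling

print(f"~{doublings_left:.1f} doublings left, roughly {years_left:.0f} years")
```

Under these assumptions there are only five or six doublings left, arriving within about a decade of Gargini's 2015 report, which squares with his guess that the limit will be reached in the 2020s.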

Even this physical limit need not stop computers getting faster, cheaper and smaller. There are more ways to boost computer power than the ‘more Moore’ route. Gargini points out that engineers are evolving from 2D to 3D chips. In the not-too-distant future, the physical limits may be repealed using quantum computing. These computers draw on the wonderful properties of quantum physics, and their quantum bits can be in many states at the same time. 

Another way forward is what Gargini called ‘more than Moore’. This means making chips that are optimised for a specific task rather than general computing. The 'more Moore' route is like training one athlete to win gold medals in every sport. The new approach is to train different athletes, some for sprinting and others for weightlifting. Nvidia's $2 billion chip is a good example of the ‘more than Moore’ strategy. It is specifically designed for machine learning, teaching computers to recognise patterns such as faces or speech. 

Using the cloud to relax constraints

There are other ways to get around physical limits on squeezing more transistors on to a chip. A common one is to substitute transmission for local processing. This is what iPhones do with Siri, the digital assistant.

Siri works best connected to the internet because iPhones don’t yet have the computing power to handle the job. Your voice data is compressed, sent to Apple’s computers in the cloud and processed for understanding, which is matched to an answer, which is then whizzed back to your iPhone for Siri to deliver. And all in a fraction of a second. 

This brings us to the next law – Gilder’s Law – which argues that transmission capacity grows three times faster than processing power. That’s what my next post is about.

Read the next blog in the series here.

Read the previous blog in the series here.



Topics:  Productivity and Innovation

Tags:  Moore's Law, processing power

Professor of International Economics at The Graduate Institute, Geneva; Founder & Editor-in-Chief of VoxEU; ex-President of CEPR

