Joshua Gans, 10 June 2018

Philosophers have speculated that an AI given a mundane task such as making paperclips might cause an apocalypse by learning to divert ever-increasing resources to that task, and eventually learning to resist our attempts to turn it off. But this column argues that, to do so, the paperclip-making AI would need to create another AI capable of acquiring power both over humans and over itself, and it would therefore self-regulate to prevent this outcome. Humans who create AIs with the explicit goal of acquiring power may pose a greater existential threat.
