As the Hollywood blockbuster Transcendence debuts this weekend with Johnny Depp, Morgan Freeman and clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake ever.
Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
The potential benefits are huge: everything that civilization has to offer is a product of human intelligence. We cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, for example, world militaries are considering autonomous weapon systems that can choose and eliminate their own targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasized by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.
Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it may play out differently than in the movie: as Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence." One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us -- not only scientists, industrialists and generals -- should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.
______________________
Stephen Hawking is Director of Research at the Department of Applied Mathematics and Theoretical Physics at Cambridge and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity. Stuart Russell is a computer science professor at Berkeley and co-author of "Artificial Intelligence: A Modern Approach." Max Tegmark is a physics professor at M.I.T. and the author of "Our Mathematical Universe." Frank Wilczek is a physics professor at M.I.T. and a 2004 Nobel laureate for his work on the strong nuclear force.