Artificial Intelligence and the long-term future of humanity
with Prof. Stuart Russell
Saturday September 15, 2018 at 11:00 AM
100 Genetics and Plant Biology, UC Berkeley
In recent years, the news media have been full of dire warnings from well-known figures such as Stephen Hawking and Elon Musk about the risk that AI poses to the human race. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, Professor Russell will argue instead that a fundamental reorientation of the field is required to avoid the existential risks that AI might otherwise create. Other risks, such as progressive enfeeblement, seem harder to address.
Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI.

He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum’s Council on AI and Robotics. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book “Artificial Intelligence: A Modern Approach” (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in over 1300 universities in 118 countries.

His research covers a wide range of topics in artificial intelligence, including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.