The existential risk from artificial general intelligence is the hypothesis that dramatic progress in artificial intelligence (AI) could someday result in human extinction or some other unrecoverable global catastrophe.
The core argument is that the human species currently dominates other species because the human brain has distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", this new superintelligence could become powerful and difficult for humans to control.
By way of example, just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
The severity of different AI risk scenarios is widely debated and rests on a number of unresolved questions about future progress in computer science. Two sources of concern are that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise, and that controlling a superintelligent machine (or even instilling it with human-compatible values) may be a harder problem than naïvely supposed.
Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook, cites the possibility that an AI system's learning function "may cause it to evolve into a system with unintended behavior" as the most serious existential risk from AI technology. Citing major advances in the field and the potential for enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence stated:
The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.