| David A. McAllester | |
|---|---|
| Born | May 30, 1956, United States |
| Fields | Computer science, artificial intelligence, machine learning |
| Institutions | Toyota Technological Institute at Chicago |
| Alma mater | MIT |
| Doctoral advisor | Gerald Sussman |
| Doctoral students | Robert Givan, Jr. |
| Known for | Artificial intelligence |
| Notable awards | AAAI Classic Paper Award (2010); International Conference on Logic Programming Test of Time Award (2014) |
David A. McAllester (born May 30, 1956) is Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago. He received his B.S., M.S., and Ph.D. degrees from the Massachusetts Institute of Technology in 1978, 1979, and 1987 respectively. His Ph.D. was supervised by Gerald Sussman. He served on the faculty of Cornell University for the 1987–1988 academic year and on the faculty of MIT from 1988 to 1995. He was a member of technical staff at AT&T Labs-Research from 1995 to 2002. He has been a fellow of the American Association for Artificial Intelligence (AAAI) since 1997, and has authored over 100 refereed publications.
Professor McAllester's research areas include machine learning theory, the theory of programming languages, automated reasoning, AI planning, computer game playing (computer chess), and computational linguistics. A 1991 paper on AI planning proved to be one of the most influential papers of the decade in that area. A 1993 paper on computer game algorithms influenced the design of the algorithms used in the Deep Blue system that defeated Garry Kasparov. A 1998 paper on machine learning theory introduced PAC-Bayesian theorems which combine Bayesian and non-Bayesian methods. His plans for future research are focused on the integration of semantics into statistical approaches to computational linguistics.
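The PAC-Bayesian theorems mentioned above bound the generalization error of a randomized classifier. One commonly cited form (a sketch in standard notation; the symbols and exact constants here are assumptions, since the article itself does not state the bound) says that, with probability at least $1-\delta$ over an i.i.d. sample of size $m$, the bound holds simultaneously for every "posterior" distribution $Q$ over hypotheses, relative to a fixed "prior" $P$:

```latex
% PAC-Bayesian generalization bound (one common form; constants vary by version).
% err(h): true error of hypothesis h;  \widehat{err}(h): empirical error on the sample.
\mathop{\mathbb{E}}_{h \sim Q}\left[\mathrm{err}(h)\right]
  \;\le\;
\mathop{\mathbb{E}}_{h \sim Q}\left[\widehat{\mathrm{err}}(h)\right]
  \;+\;
\sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(m/\delta)}{2(m-1)}}
```

The bound is "Bayesian" in that a prior $P$ and posterior $Q$ appear explicitly through the KL-divergence penalty, yet "non-Bayesian" in that it holds for any data distribution, with no assumption that the prior is correct.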
The Toyota Technological Institute at Chicago, where he holds these positions, is an accredited research institute closely affiliated with the University of Chicago.
McAllester has voiced concerns about the potential dangers of artificial intelligence, telling the Pittsburgh Tribune-Review that it is inevitable that fully automated intelligent machines will be able to design and build smarter, better versions of themselves, an event known as the Singularity. The Singularity would enable machines to become infinitely intelligent, and would pose an "incredibly dangerous scenario". McAllester estimates a 10 percent probability of the Singularity occurring within 25 years, and a 90 percent probability of it occurring within 75 years. He served on the AAAI Presidential Panel on Long-Term AI Futures in 2009, and considers the dangers of superintelligent AI worth taking seriously.