Michael I. Jordan | |
---|---|
Born | February 25, 1956 |
Residence | Berkeley, CA |
Institutions | University of California, Berkeley; University of California, San Diego; Massachusetts Institute of Technology |
Alma mater | University of California, San Diego |
Thesis | The Learning of Representations for Sequential Performance (1985) |
Doctoral advisors | David Rumelhart; Donald Norman |
Known for | Latent Dirichlet allocation |
Notable awards | Member of the U.S. National Academy of Sciences; AAAI Fellow (2002); Rumelhart Prize (2015); IJCAI Award for Research Excellence (2016) |
Michael Irwin Jordan (born February 25, 1956) is an American scientist, a professor at the University of California, Berkeley, and a leading researcher in machine learning, statistics, and artificial intelligence.
Jordan received his BS magna cum laude in Psychology from Louisiana State University in 1978, his MS in Mathematics from Arizona State University in 1980, and his PhD in Cognitive Science from the University of California, San Diego in 1985. At UC San Diego, Jordan was a student of David Rumelhart and a member of the Parallel Distributed Processing (PDP) research group in the 1980s.
Jordan is currently a full professor at the University of California, Berkeley, where his appointment is split between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences (EECS). He was a professor at MIT from 1988 to 1998.
In the 1980s, Jordan began developing recurrent neural networks as a cognitive model. In recent years, however, his work has been driven less by a cognitive perspective and more by the perspective of traditional statistics.
He popularised Bayesian networks in the machine learning community and is known for pointing out links between machine learning and statistics. Jordan was also prominent in the formalisation of variational methods for approximate inference and the popularisation of the expectation-maximization algorithm in machine learning.
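As a brief illustration of the variational approach (a standard textbook formulation rather than anything taken from this article): for a model with observations x and latent variables z, exact posterior inference is replaced by maximizing a tractable lower bound on the log marginal likelihood over an approximating distribution q,

\[ \log p(x) \;\ge\; \mathbb{E}_{q(z)}\!\left[ \log p(x, z) - \log q(z) \right], \]

where the gap equals the Kullback-Leibler divergence \( \mathrm{KL}\!\left( q(z) \,\|\, p(z \mid x) \right) \), so tightening the bound over q also improves the approximation to the posterior.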
In 2001, Jordan and others resigned from the editorial board of the journal Machine Learning. In a public letter, they argued for less restrictive access to published research and pledged their support for a new open-access journal, the Journal of Machine Learning Research (JMLR), which had been created by Leslie Kaelbling to support the evolution of the field of machine learning.