AIXI ['ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000, and the results below are proved in Hutter's 2005 book Universal Artificial Intelligence.
AIXI is a reinforcement learning agent; it maximizes the expected total reward received from the environment. Intuitively, it simultaneously considers every computable hypothesis. In each time step, it looks at every possible program and evaluates how much reward that program generates depending on the next action taken. The promised rewards are then weighted by the subjective belief that this program constitutes the true environment. This belief is computed from the length of the program: longer programs are considered less likely, in line with Occam's razor. AIXI then selects the action with the highest expected total reward in this weighted sum over all programs, as illustrated in the sketch below.
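As a concrete, drastically simplified illustration of this decision rule, the following sketch replaces the set of all computable programs with two hand-written toy environments and evaluates the resulting prior-weighted expectimax exactly. Every name in it (env_constant, env_alternating, the bit lengths, the binary action set) is an invented stand-in; the actual AIXI agent ranges over all programs on a universal Turing machine and is incomputable.

```python
# A toy sketch of AIXI's decision rule, NOT Hutter's incomputable agent:
# the set of all computable environment programs is replaced by two
# hand-written hypotheses, and the expectimax is evaluated exactly.

ACTIONS = [0, 1]  # assumed binary action space

def env_constant(history, action):
    """Stand-in for a short program: action 1 always earns reward 1."""
    return {(0, 1.0) if action == 1 else (0, 0.0): 1.0}

def env_alternating(history, action):
    """Stand-in for a longer program: switching actions earns reward 1."""
    prev_action = history[-1][0] if history else 0
    rewarded = action != prev_action
    return {(0, 1.0) if rewarded else (0, 0.0): 1.0}

# (program, assumed description length in bits) -> Occam prior 2^-length
HYPOTHESES = [(env_constant, 3), (env_alternating, 5)]
PRIOR = {q: 2.0 ** -length for q, length in HYPOTHESES}

def value(q, history, horizon):
    """Expectimax value under environment q: max over actions of the
    expected (reward + future value), down to the given horizon."""
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:
        v = 0.0
        for (obs, rew), p in q(history, a).items():
            v += p * (rew + value(q, history + [(a, obs, rew)], horizon - 1))
        best = max(best, v)
    return best

def aixi_action(horizon):
    """First action of the prior-weighted expectimax (empty history, so
    the prior weights need no conditioning on past percepts)."""
    def action_value(a):
        total = 0.0
        for q, w in PRIOR.items():
            for (obs, rew), p in q([], a).items():
                total += w * p * (rew + value(q, [(a, obs, rew)], horizon - 1))
        return total
    return max(ACTIONS, key=action_value)

print(aixi_action(horizon=3))  # the action AIXI's rule prefers a priori
```

The `2.0 ** -length` weights implement the Occam prior described above, and the alternation between maximizing over actions and averaging over percepts mirrors the expectimax structure made precise in the formula below.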
The AIXI agent interacts sequentially with some environment $\mu$, which is stochastic and unknown to AIXI. In step $t$, the agent outputs an action $a_t$, and the environment responds with an observation $o_t$ and a reward $r_t$, distributed according to the conditional probability $\mu(o_t r_t \mid a_1 o_1 r_1 \ldots a_{t-1} o_{t-1} r_{t-1} a_t)$. This cycle then repeats for step $t+1$. The agent tries to maximize the cumulative future reward $r_t + r_{t+1} + \cdots + r_m$ over a fixed lifetime $m$.
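With this notation in place, AIXI's action choice at time $t$ can be written as the expectimax expression from Hutter's definition, where $U$ denotes a universal Turing machine, $q$ ranges over its programs, and $\mathrm{length}(q)$ is the length of program $q$ in bits:

$$
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[\, r_t + \cdots + r_m \,\bigr] \sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)}
$$

The alternating maxima and sums expand the tree of possible future actions and percepts, while the innermost sum assigns each percept sequence the combined weight $2^{-\mathrm{length}(q)}$ of all programs $q$ that generate it on $U$ given the action sequence, which is precisely the Occam-style weighting described above.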