Action model learning


Action model learning (sometimes abbreviated action learning) is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as the input for automated planners.
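
To make the idea concrete, the following is a minimal sketch in Python (rather than an actual action description language) of the kind of precondition/effect knowledge such a representation captures; the class, predicate, and action names are illustrative assumptions, not part of any standard formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionModel:
    """STRIPS-style description of one action: what must hold before
    execution and how the world state changes afterwards."""
    name: str
    preconditions: frozenset  # literals that must be true to execute
    add_effects: frozenset    # literals made true by the action
    del_effects: frozenset    # literals made false by the action

    def applicable(self, state: frozenset) -> bool:
        # The action can be executed only when all preconditions hold.
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        # Successor state: remove delete effects, then add add effects.
        return (state - self.del_effects) | self.add_effects

# Hypothetical example: picking up a block in a Blocks World-like domain.
pick_up_a = ActionModel(
    name="pick-up(a)",
    preconditions=frozenset({"clear(a)", "on-table(a)", "hand-empty"}),
    add_effects=frozenset({"holding(a)"}),
    del_effects=frozenset({"clear(a)", "on-table(a)", "hand-empty"}),
)
```

A planner consuming such a model only needs the applicability test and the successor-state function; action model learning is concerned with inducing the precondition and effect sets themselves.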

Learning action models is important when goals change. When an agent has been acting for a while, it can use its accumulated knowledge about actions in the domain to make better decisions. Thus, learning action models differs from reinforcement learning: it enables reasoning about actions instead of expensive trials in the world. Action model learning is a form of inductive reasoning, where new knowledge is generated from the agent's observations. It differs from standard supervised learning in that correct input/output pairs are never presented, nor are imprecise action models explicitly corrected.

The usual motivation for action model learning is that manually specifying action models for planners is often a difficult, time-consuming, and error-prone task, especially in complex environments.

Given a training set E consisting of examples e = (s, a, s'), where s and s' are observations of the world state from two consecutive time steps and a is the action instance observed between them, the goal of action model learning in general is to construct an action model ⟨D, P⟩, where D is a description of the domain dynamics in an action description formalism such as STRIPS, ADL or PDDL, and P is a probability function defined over the elements of D. However, many state-of-the-art action learning methods assume determinism and do not induce P. In addition to determinism, individual methods differ in how they deal with other attributes of the domain (e.g. partial observability or sensory noise).
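
As a deliberately simplified illustration of this setting, the sketch below assumes determinism and full observability and induces STRIPS-like add effects, delete effects, and approximate preconditions from observed (s, a, s') transitions. The function name, state encoding, and toy observations are hypothetical; published learning methods are considerably more sophisticated.

```python
from collections import defaultdict

def learn_action_models(examples):
    """Induce a deterministic, STRIPS-like action model from fully
    observable transitions. `examples` is an iterable of (s, a, s_next)
    triples, where states are frozensets of ground literals and `a` is
    an action name.

    Preconditions are approximated as the literals common to every
    observed pre-state; add/delete effects as the literals gained/lost
    across the transition."""
    pre, adds, dels = {}, defaultdict(set), defaultdict(set)
    for s, a, s_next in examples:
        # Preconditions: intersect all states in which the action was executed.
        pre[a] = s if a not in pre else pre[a] & s
        # Effects: literals that appeared or disappeared in the transition.
        adds[a] |= s_next - s
        dels[a] |= s - s_next
    return {a: {"pre": pre[a], "add": adds[a], "del": dels[a]} for a in pre}

# Toy usage with a single hypothetical Blocks World observation.
s0 = frozenset({"clear(a)", "on-table(a)", "hand-empty"})
s1 = frozenset({"holding(a)"})
model = learn_action_models([(s0, "pick-up(a)", s1)])
print(model["pick-up(a)"])
```

With more examples per action, the intersected preconditions shrink toward the true ones, while the effect sets stabilize; handling noise or partial observability requires probabilistic or logical machinery beyond this sketch.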

