Stopping rule


In probability theory, in particular in the study of stochastic processes, a stopping time (also Markov time) is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time.

Stopping times occur in decision theory, and the optional stopping theorem is an important result in this context. Stopping times are also frequently applied in mathematical proofs to “tame the continuum of time”, as Chung put it in his book (1982).

A stopping time with respect to a sequence of random variables X1, X2, ... is a random variable τ with values in {1, 2, ...} and the property that for each t ∈ {1, 2, ...}, the occurrence or non-occurrence of the event {τ = t} depends only on the values of X1, X2, ..., Xt. In some cases, the definition requires that Pr(τ < ∞) = 1, i.e. that τ be almost surely finite, although in other cases this requirement is omitted.
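
The discrete-time definition can be made concrete with a short simulation. The following sketch (the walk, the threshold a, and the function name are illustrative choices, not from the article) computes the first-passage time τ = min{t : |X1 + ... + Xt| ≥ a} of a simple random walk; the rule decides whether to stop at time t by looking only at X1, ..., Xt, so τ is a stopping time in the sense just defined:

    import random

    def first_passage_time(a, max_steps=10_000, seed=0):
        """First time t with |S_t| >= a for a simple random walk, else None."""
        rng = random.Random(seed)
        s = 0                              # S_t = X_1 + ... + X_t
        for t in range(1, max_steps + 1):
            s += rng.choice((-1, 1))       # observe X_t at time t
            if abs(s) >= a:                # decision uses only X_1, ..., X_t
                return t
        return None                        # tau not reached within the horizon

    print(first_passage_time(10))          # almost surely finite; typically of order a**2 steps

By contrast, the time at which the walk attains its maximum over the first 10,000 steps is not a stopping time: deciding whether that time equals t requires knowledge of the entire future path.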

Another, more general definition, used for stochastic processes in continuous time, may be given in terms of a filtration: Let (I, ≤) be an ordered index set (often I = [0, ∞) or a compact subset thereof, thought of as the set of possible "times"), and let (Ω, F, {Ft : t ∈ I}, P) be a filtered probability space, i.e. a probability space equipped with a filtration of σ-algebras. Then a random variable τ : Ω → I is called a stopping time if {ω ∈ Ω : τ(ω) ≤ t} ∈ Ft for all t in I. Often, to avoid confusion, we call it an Ft-stopping time and explicitly specify the filtration. Speaking intuitively, for τ to be a stopping time, it should be possible to decide whether or not {τ ≤ t} has occurred on the basis of the knowledge of Ft; i.e., the event {τ ≤ t} is Ft-measurable.
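
As a standard illustration (a sketch not spelled out in this excerpt; the set A and the adaptedness assumption are supplied here), the first hitting time of a measurable set A by a process X adapted to the filtration is a stopping time, at least in discrete time I = {0, 1, 2, ...}:

    \tau_A(\omega) = \inf\{\, t \in I : X_t(\omega) \in A \,\}, \qquad
    \{\tau_A \le t\} = \bigcup_{s \le t} \{X_s \in A\} \in \mathcal{F}_t,

with the convention inf ∅ = ∞. The displayed union is countable in discrete time, which is what makes the event measurable; in continuous time the same conclusion requires extra regularity of the paths and the filtration.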

