Doob's martingale convergence theorems


In mathematics – specifically, in the theory of stochastic processes – Doob's martingale convergence theorems are a collection of results on the long-time limits of supermartingales, named after the American mathematician Joseph L. Doob.

In the following, (Ω, F, P) will be a probability space equipped with a filtration (Ft)t ≥ 0, and N : [0, +∞) × Ω → R will be a right-continuous supermartingale with respect to this filtration; in other words, for all 0 ≤ s ≤ t < +∞,

Ns ≥ E[Nt | Fs]   P-almost surely.
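In discrete time the defining inequality can be checked by simulation. The following minimal Python sketch is not part of the article; the biased random walk, the seed and the sample sizes are arbitrary illustrative choices. Taking expectations in Ns ≥ E[Nt | Fs] shows that t ↦ E[Nt] is non-increasing for a supermartingale, and the sketch verifies this unconditional consequence by Monte Carlo.

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps = 100_000, 50
    # each step is -1 with probability 0.55 and +1 with probability 0.45 (mean -0.1),
    # so the partial sums form a discrete-time supermartingale
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps), p=[0.55, 0.45])
    N = np.cumsum(steps, axis=1)        # N_1, ..., N_50 along each simulated path (N_0 = 0)
    means = N.mean(axis=0)              # Monte Carlo estimates of E[N_t]
    print(means[[0, 9, 24, 49]])        # roughly -0.1, -1.0, -2.5, -5.0: non-increasing in t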

Doob's first martingale convergence theorem provides a sufficient condition for the random variables Nt to have a limit as t → +∞ in a pointwise sense, i.e. for each ω in the sample space Ω individually.

For t ≥ 0, let Nt− = max(−Nt, 0) denote the negative part of Nt, and suppose that

sup { E[Nt−] : t > 0 } < +∞.

Then the pointwise limit

N∞(ω) = lim(t → +∞) Nt(ω)

exists and is finite for P-almost all ω ∈ Ω.
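As a concrete discrete-time analogue (not taken from the article; the urn contents, seed and path count are arbitrary choices), the fraction of red balls in a Pólya urn is a martingale with values in [0, 1], so Nt− is bounded and the hypothesis above holds trivially. The Python sketch below shows each simulated path settling to its own random limit, which is exactly the pointwise convergence the theorem guarantees.

    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_draws = 8, 5000
    red = np.ones(n_paths)              # start every urn with 1 red and 1 blue ball
    total = 2.0 * np.ones(n_paths)
    history = []
    for _ in range(n_draws):
        drew_red = rng.random(n_paths) < red / total   # draw uniformly, add a same-colour ball
        red += drew_red
        total += 1.0
        history.append(red / total)
    history = np.array(history)
    print(history[-1].round(3))                        # each path settles to its own random limit
    print(np.abs(history[-1] - history[-500]).max())   # late fluctuations are already small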

Note that the convergence in Doob's first martingale convergence theorem is pointwise, not uniform, and is unrelated to convergence in mean square, or indeed in any Lp space. In order to obtain convergence in L1 (i.e., convergence in mean), one requires uniform integrability of the random variables Nt. By Markov's inequality, convergence in L1 implies convergence in probability and convergence in distribution.
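A standard counterexample showing why uniform integrability cannot be dropped (not taken from the article; the Euler discretisation, horizon and seed below are arbitrary) is the exponential martingale Nt = exp(Bt − t/2) built from a Brownian motion B: it converges to 0 almost surely, while E[Nt] = 1 for every t, so it cannot converge in L1.

    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps, horizon = 5, 10_000, 50.0
    dt = horizon / n_steps
    B = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)
    t = dt * np.arange(1, n_steps + 1)
    N = np.exp(B - t / 2)               # N_t = exp(B_t - t/2); E[N_t] = 1 for every t
    print(N[:, -1])                     # every simulated path is already very close to 0

The expectation stays at 1 because rare paths on which Bt keeps pace with t/2 carry enormous values of Nt; this heavy upper tail is exactly the failure of uniform integrability.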

The following are equivalent:

- the family of random variables (Nt)t > 0 is uniformly integrable, i.e. the supremum over t > 0 of E[ |Nt| 1{|Nt| > C} ] tends to 0 as C → +∞;
- there exists an integrable random variable N∞ ∈ L1(Ω, P; R) such that Nt → N∞ as t → +∞ both P-almost surely and in L1(Ω, P; R), i.e. E[ |Nt − N∞| ] → 0 as t → +∞.
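A discrete-time illustration of the uniformly integrable case (not from the article; the coin-flip construction, the names X and Nn, and the sample sizes are ad hoc choices) is the closed martingale Nn = E[X | Fn], where X is the total number of heads in 20 fair coin flips and Fn records the first n flips. The sketch below estimates E|Nn − X| by Monte Carlo and watches it shrink to 0, in line with the L1 convergence in the second equivalence.

    import numpy as np

    rng = np.random.default_rng(3)
    n_paths, n_flips = 200_000, 20
    flips = rng.integers(0, 2, size=(n_paths, n_flips)).astype(float)
    X = flips.sum(axis=1)                               # total number of heads
    n = np.arange(1, n_flips + 1)
    N = np.cumsum(flips, axis=1) + (n_flips - n) * 0.5  # N_n = E[X | first n flips]
    l1_error = np.abs(N - X[:, None]).mean(axis=0)      # Monte Carlo estimate of E|N_n - X|
    print(l1_error[[0, 4, 9, 19]])                      # shrinks to 0 as n increases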

Let M : [0, +∞) × Ω → R be a continuous martingale such that

sup { E[ |Mt|p ] : t > 0 } < +∞

for some p > 1. Then there exists a random variable M∞ ∈ Lp(Ω, P; R) such that Mt → M∞ as t → +∞ both P-almost surely and in Lp(Ω, P; R).
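A discrete-time analogue with p = 2 (not from the article; the construction, seed and truncation are illustrative choices) is the martingale Mn = Σk ≤ n ξk/k with i.i.d. random signs ξk, which satisfies sup n E[Mn2] = Σk 1/k2 < +∞ and therefore converges almost surely and in L2. The sketch checks both the L2 bound and the smallness of the tail increments.

    import numpy as np

    rng = np.random.default_rng(4)
    n_paths, n_steps = 10_000, 2000
    xi = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))       # i.i.d. random signs
    M = np.cumsum(xi / np.arange(1, n_steps + 1), axis=1)       # M_n = sum_{k<=n} xi_k / k
    print((M[:, -1] ** 2).mean())                 # approx. sum_{k<=2000} 1/k^2 ~ 1.644 < pi^2/6
    print(((M[:, -1] - M[:, 999]) ** 2).mean())   # approx. sum_{1000<k<=2000} 1/k^2 ~ 5e-4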

