In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.
While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables X_n satisfying

    \sqrt{n}\,[X_n - \theta] \;\xrightarrow{D}\; \mathcal{N}(0, \sigma^2),

where θ and σ² are finite-valued constants and \xrightarrow{D} denotes convergence in distribution, then

    \sqrt{n}\,[g(X_n) - g(\theta)] \;\xrightarrow{D}\; \mathcal{N}\!\left(0, \sigma^2 \,[g'(\theta)]^2\right)
for any function g such that g′(θ) exists and is non-zero.
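The statement above can be checked numerically. The following sketch (an illustration, not part of the original text) draws i.i.d. Exponential(1) samples, so that θ = 1 and σ² = 1, applies g(x) = x², and compares the simulated variance of g(X_n) with the delta-method prediction [g′(θ)]²σ²/n = 4/n:

```python
import numpy as np

# Illustrative check of the delta method.
# X_i ~ Exponential(1): theta = E[X_i] = 1, sigma^2 = Var(X_i) = 1.
# With g(x) = x^2, the delta method predicts
#   Var(g(X_n)) ~= [g'(theta)]^2 * sigma^2 / n = 4 / n.
rng = np.random.default_rng(0)
n, reps = 1_000, 20_000

samples = rng.exponential(scale=1.0, size=(reps, n))
xbar = samples.mean(axis=1)   # reps independent realizations of X_n
g_of_xbar = xbar ** 2         # apply g

empirical_var = g_of_xbar.var()
delta_var = 4.0 / n           # delta-method approximation

print(f"empirical variance:    {empirical_var:.5f}")
print(f"delta-method variance: {delta_var:.5f}")
```

The two variances agree closely for large n, which is exactly the asymptotic claim: the fluctuation of g(X_n) around g(θ) is governed by the slope g′(θ).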
Demonstration of this result is fairly straightforward under the assumption that g′(θ) is continuous. To begin, we use the mean value theorem (i.e.: the first order approximation of a Taylor series using Taylor's theorem):

    g(X_n) = g(\theta) + g'(\tilde{\theta}_n)\,(X_n - \theta),

where \tilde{\theta}_n lies between X_n and θ. Note that since X_n \xrightarrow{P} \theta and |\tilde{\theta}_n - \theta| < |X_n - \theta|, it must be that \tilde{\theta}_n \xrightarrow{P} \theta, and since g′(θ) is continuous, applying the continuous mapping theorem yields

    g'(\tilde{\theta}_n) \;\xrightarrow{P}\; g'(\theta).
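To finish the sketch, one more standard step is needed: rearranging the mean value expansion and applying Slutsky's theorem (combining a convergence in probability with a convergence in distribution) gives the stated limit:

```latex
% Rearranging g(X_n) = g(\theta) + g'(\tilde{\theta}_n)(X_n - \theta) and
% scaling by \sqrt{n}:
\sqrt{n}\,\bigl[g(X_n) - g(\theta)\bigr]
  = g'(\tilde{\theta}_n)\,\sqrt{n}\,(X_n - \theta)
% Since g'(\tilde{\theta}_n) \xrightarrow{P} g'(\theta) and
% \sqrt{n}(X_n - \theta) \xrightarrow{D} \mathcal{N}(0,\sigma^2),
% Slutsky's theorem yields
  \;\xrightarrow{D}\; \mathcal{N}\!\left(0, \sigma^2\,[g'(\theta)]^2\right).
```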