Stein's paradox


Stein's example (also known as Stein's phenomenon or Stein's paradox), in decision theory and estimation theory, is the observation that when three or more parameters are estimated simultaneously, there exist combined estimators that are more accurate on average (that is, that have lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955.

An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in channel estimation in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.

The following is perhaps the simplest form of the paradox, the special case in which the number of observations is equal to (rather than greater than) the number of parameters to be estimated. Let θ be a vector consisting of n ≥ 3 unknown parameters. To estimate these parameters, a single measurement X_i is performed for each parameter θ_i, resulting in a vector X of length n. Suppose the measurements are independent Gaussian random variables with mean θ and variance 1, i.e.,

    X ~ N(θ, I_n),

or equivalently X_i ~ N(θ_i, 1) independently for i = 1, …, n.

Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate.
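As a concreteness check, the short simulation below (a minimal sketch in Python with NumPy; the seed, dimension, and parameter values are arbitrary illustrative choices, not from the original text) verifies that each measurement X_i has unit mean squared error about its θ_i, so every parameter really is estimated with the same accuracy:

    import numpy as np

    rng = np.random.default_rng(0)   # arbitrary seed for reproducibility
    n, trials = 5, 200_000           # n parameters, Monte Carlo repetitions
    theta = rng.normal(size=n)       # an arbitrary fixed true parameter vector

    # One measurement per parameter and per trial: X_i ~ N(theta_i, 1), independent
    X = rng.normal(loc=theta, scale=1.0, size=(trials, n))

    # Empirical per-parameter mean squared error; E[(X_i - theta_i)^2] = 1 for each i
    print(np.mean((X - theta) ** 2, axis=0))   # every entry is close to 1.0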

Under such conditions, it is most intuitive (and most common) to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as

    θ̂ = X,

i.e., θ̂_i = X_i for each i.

