In statistics, Fisher consistency, named after Ronald Fisher, is a desirable property of an estimator asserting that if the estimator were calculated using the entire population rather than a sample, the true value of the estimated parameter would be obtained.
Suppose we have a statistical sample X1, ..., Xn where each Xi follows a cumulative distribution Fθ which depends on an unknown parameter θ. If an estimator of θ based on the sample can be represented as a functional of the empirical distribution function F̂n:

    θ̂ = T(F̂n),

the estimator is said to be Fisher consistent if:

    T(Fθ) = θ.
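As a concrete illustration, take θ to be the mean of Fθ and T the mean functional T(F) = ∫ x dF(x); then T(F̂n) is the sample mean and T(Fθ) = θ, so the sample mean is Fisher consistent for the population mean. The following Python sketch (the names T, values and probs are purely illustrative) applies the same functional to both the empirical and the true distribution:

    import numpy as np

    def T(support, weights):
        # The mean functional T(F) = ∫ x dF(x) for a distribution
        # with finite support and the given point masses.
        return float(np.dot(support, weights))

    # A discrete distribution F_theta with known mean theta.
    values = np.array([1.0, 2.0, 5.0])
    probs = np.array([0.2, 0.5, 0.3])
    theta = T(values, probs)                  # T(F_theta) = theta = 2.7

    # Draw a sample and apply the same functional to F̂n: every
    # observation carries mass 1/n, so T(F̂n) is the sample mean.
    rng = np.random.default_rng(0)
    x = rng.choice(values, size=1000, p=probs)
    support, counts = np.unique(x, return_counts=True)
    estimate = T(support, counts / counts.sum())

    print(theta)     # 2.7: the value obtained from the whole population
    print(estimate)  # close to 2.7, equal only in the limit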
As long as the Xi are exchangeable, an estimator T defined in terms of the Xi can be converted into an estimator T′ defined in terms of F̂n by averaging T over all permutations of the data. The resulting estimator will have the same expected value as T and its variance will be no larger than that of T.
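A minimal sketch of this symmetrization, assuming T is some order-dependent function of the sample (the weighted mean chosen here is arbitrary, purely for illustration); the averaged estimator T′ depends on the data only through F̂n:

    import itertools
    import numpy as np

    def T(x):
        # A deliberately order-dependent estimator of the mean: a weighted
        # average that puts more weight on later observations.
        w = np.arange(1, len(x) + 1, dtype=float)
        return float(np.dot(w, x) / w.sum())

    def T_prime(x):
        # Symmetrized estimator: average T over all permutations of the
        # data.  The result depends on x only through its empirical
        # distribution F̂n.
        return float(np.mean([T(np.array(p)) for p in itertools.permutations(x)]))

    x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
    print(T(x))              # depends on the ordering of the observations
    print(T_prime(x))        # 2.8, the sample mean
    print(T_prime(x[::-1]))  # same 2.8: T' is invariant to reordering

In this example the permutation average recovers the ordinary sample mean, which is a function of F̂n alone.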
If the strong law of large numbers can be applied, the empirical distribution functions F̂n converge pointwise to Fθ, allowing us to express Fisher consistency as a limit: the estimator is Fisher consistent if

    T(F̂n) → T(Fθ) = θ as n → ∞.
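A quick simulation of this limit, again taking T to be the mean functional (so continuity of T poses no difficulty) and an exponential Fθ chosen only for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    theta = 2.0  # true mean of F_theta (an exponential distribution here)

    # T(F̂n) for the mean functional is the sample mean; as n grows,
    # F̂n converges to F_theta and T(F̂n) approaches T(F_theta) = theta.
    for n in (10, 100, 10_000, 1_000_000):
        x = rng.exponential(scale=theta, size=n)
        print(n, x.mean())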
Suppose our sample is obtained from a finite population Z1, ..., Zm. We can represent our sample of size n in terms of the proportion of the sample ni / n taking on each value in the population. Writing our estimator of θ as T(n1 / n, ..., nm / n), the population analogue of the estimator is T(p1, ..., pm), where pi = P(X = Zi). Thus we have Fisher consistency if T(p1, ..., pm) = θ.
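A sketch of this finite-population formulation, once more taking T to be the mean written directly in terms of the proportions (the population Z and probabilities p below are invented for illustration):

    import numpy as np

    Z = np.array([0.0, 1.0, 3.0])   # finite population of values Z1, ..., Zm
    p = np.array([0.5, 0.3, 0.2])   # p_i = P(X = Z_i)

    def T(proportions):
        # Estimator written as a function of the class proportions: here,
        # the population values weighted by those proportions (the mean).
        return float(np.dot(Z, proportions))

    theta = T(p)  # the population analogue T(p1, ..., pm)

    rng = np.random.default_rng(2)
    x = rng.choice(Z, size=500, p=p)
    counts = np.array([(x == z).sum() for z in Z])
    estimate = T(counts / counts.sum())  # T(n1/n, ..., nm/n)

    print(theta)     # 0.9: applying T to the true proportions recovers theta
    print(estimate)  # close to 0.9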