High-dimensional data


In statistical theory, the field of high-dimensional statistics studies data whose dimension is larger than those considered in classical multivariate analysis. High-dimensional statistics relies on the theory of random vectors. In many applications, the dimension of the data vectors may exceed the sample size.

Traditionally, statistical inference considers a probability model for a population and treats the observed data as a sample from that population. For many problems, the estimates of the population characteristics ("parameters") can be substantially refined (in theory) as the sample size increases toward infinity. A traditional requirement of estimators is consistency, that is, convergence to the unknown true value of the parameter.
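As a concrete sketch (an illustration added here, not part of the original article), the sample mean is a consistent estimator of a population mean: its error shrinks as the sample size grows.

```python
# Minimal illustration of consistency: the sample mean converges to the
# true population mean as n grows (law of large numbers).
import numpy as np

rng = np.random.default_rng(0)
true_mean = 2.0

errors = {}
for n in (10, 1_000, 100_000):
    sample = rng.normal(loc=true_mean, scale=1.0, size=n)
    errors[n] = abs(sample.mean() - true_mean)
    print(f"n = {n:>7}: |estimate - true mean| = {errors[n]:.4f}")
```

With standard deviation 1, the estimation error is of order 1/sqrt(n), so by n = 100,000 the estimate is typically within a few thousandths of the true value.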

In 1968, A. N. Kolmogorov proposed a different setting of statistical problems and a different asymptotic regime, in which the dimension of the variables p increases along with the sample size n so that the ratio p/n tends to a constant. It was called the "increasing dimension asymptotics" or "the Kolmogorov asymptotics". Kolmogorov's approach makes it possible to isolate many principal terms of error probabilities and of standard measures of the quality of estimators (quality functions) for large p and n.
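The following simulation (a sketch added for illustration, not taken from Kolmogorov's work) shows why this regime behaves differently from the classical one. For data with identity covariance, every population eigenvalue equals 1; when p is fixed and n grows, the sample covariance recovers this, but when p/n tends to a constant c, random-matrix theory predicts the largest sample eigenvalue converges to (1 + sqrt(c))^2 instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_eigenvalue(n, p):
    # n samples of a p-dimensional standard normal: true covariance = I,
    # so every population eigenvalue equals 1.
    X = rng.standard_normal((n, p))
    S = X.T @ X / n                      # sample covariance matrix
    return np.linalg.eigvalsh(S)[-1]     # largest sample eigenvalue

# Classical regime: p fixed, n large -> top eigenvalue close to 1.
classical = top_eigenvalue(100_000, 5)
# Kolmogorov regime: p/n = 1/2 -> top eigenvalue near (1 + sqrt(0.5))**2 ≈ 2.91.
kolmogorov = top_eigenvalue(2_000, 1_000)

print(f"p fixed, n large:   {classical:.3f}")
print(f"p/n -> 1/2:         {kolmogorov:.3f}")
```

The bias in the second case does not vanish as n grows, which is why classical fixed-p asymptotics give misleading answers when p and n grow together.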

Recently, researchers have become interested in even larger dimension cases, where the dimension may grow much faster than the sample size. These cases emerge from the need to extract meaningful information in many different application areas. In these cases, some interesting results have been found. For example, Student's t-test calibration may be invalid when the dimension grows too quickly relative to the sample size. For details, see also the Šidák correction for infinitely many t-tests.
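A short simulation (added here as an illustration; it uses z-tests rather than t-tests for simplicity) shows the multiple-testing problem behind the Šidák correction: testing many hypotheses at level 0.05 each makes at least one false rejection nearly certain, while the Šidák-adjusted per-test level 1 - (1 - alpha)^(1/m) restores the intended family-wise error rate for independent tests.

```python
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(2)
m, alpha, trials = 100, 0.05, 5_000

# Šidák-corrected per-test level for m independent tests.
alpha_sidak = 1 - (1 - alpha) ** (1 / m)
z_raw = NormalDist().inv_cdf(1 - alpha / 2)          # uncorrected threshold
z_sidak = NormalDist().inv_cdf(1 - alpha_sidak / 2)  # corrected threshold

# Simulate m independent test statistics per trial, all nulls true.
Z = rng.standard_normal((trials, m))
fwer_raw = np.mean((np.abs(Z) > z_raw).any(axis=1))
fwer_sidak = np.mean((np.abs(Z) > z_sidak).any(axis=1))

print(f"uncorrected family-wise error rate ≈ {fwer_raw:.3f}")
print(f"Šidák-corrected family-wise error rate ≈ {fwer_sidak:.3f}")
```

With m = 100 independent tests, the uncorrected family-wise error rate is about 1 - 0.95^100 ≈ 0.994, while the corrected rate stays near the nominal 0.05.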
