Kuiper's test


Kuiper's test is used in statistics to test whether a given distribution, or family of distributions, is contradicted by evidence from a sample of data. It is named after the Dutch mathematician Nicolaas Kuiper.

Kuiper's test is closely related to the better-known Kolmogorov–Smirnov test (or K-S test, as it is often called). As with the K-S test, the discrepancy statistics D+ and D− represent the absolute sizes of the most positive and most negative differences between the two cumulative distribution functions being compared. The trick with Kuiper's test is to use the quantity D+ + D− as the test statistic. This small change makes Kuiper's test as sensitive in the tails as at the median, and also makes it invariant under cyclic transformations of the independent variable. The Anderson–Darling test is another test that provides equal sensitivity in the tails as at the median, but it does not provide the cyclic invariance.

This invariance under cyclic transformations makes Kuiper's test invaluable when testing for cyclic variations by time of year, day of the week, or time of day, and more generally for testing the fit of, and differences between, circular probability distributions.
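
To illustrate the cyclic invariance, here is a minimal, self-contained Python sketch (the function name kuiper_v_uniform and the uniform-on-[0, 1) null hypothesis are assumptions of this example, not taken from the text). It computes the statistic V, defined formally below, for a sample of circular data and again after shifting the origin of the circle; the two values agree up to floating-point rounding.

    import numpy as np

    def kuiper_v_uniform(theta):
        """Kuiper's V for values in [0, 1) tested against the uniform distribution."""
        x = np.sort(np.mod(theta, 1.0))       # wrap onto the circle and order
        n = len(x)
        i = np.arange(1, n + 1)
        d_plus = np.max(i / n - x)            # most positive deviation, D+
        d_minus = np.max(x - (i - 1) / n)     # most negative deviation, D-
        return d_plus + d_minus               # V = D+ + D-

    rng = np.random.default_rng(1)
    times = rng.random(500)                   # e.g. times of day, rescaled to [0, 1)
    shifted = np.mod(times + 0.37, 1.0)       # same data, different (arbitrary) origin
    print(kuiper_v_uniform(times), kuiper_v_uniform(shifted))   # equal up to rounding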

The test statistic, V, for Kuiper's test is defined as follows. Let F be the continuous cumulative distribution function which is to be the null hypothesis. Denote by x_i (i = 1, ..., n) the sample of data, which are independent realisations of random variables having F as their distribution function, arranged in increasing order. Then define

    D+ = max_{1 ≤ i ≤ n} [ i/n − F(x_i) ],
    D− = max_{1 ≤ i ≤ n} [ F(x_i) − (i − 1)/n ],

and finally,

    V = D+ + D−.

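As a concrete illustration of these definitions, the following Python sketch computes V for a sample tested against a fully specified continuous distribution (the function name kuiper_statistic is illustrative, not from any particular library; the standard normal is used here purely as an example of F).

    import numpy as np
    from scipy.stats import norm

    def kuiper_statistic(data, cdf):
        """Kuiper's V for a sample against a hypothesised continuous CDF."""
        x = np.sort(np.asarray(data))         # arrange the sample in increasing order
        n = len(x)
        z = cdf(x)                            # F(x_i) for the ordered sample
        i = np.arange(1, n + 1)
        d_plus = np.max(i / n - z)            # D+
        d_minus = np.max(z - (i - 1) / n)     # D-
        return d_plus + d_minus               # V = D+ + D-

    # Example: test a simulated sample against the standard normal distribution.
    rng = np.random.default_rng(0)
    sample = rng.normal(size=200)
    V = kuiper_statistic(sample, norm.cdf)
    print(V)
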
Tables for the critical points of the test statistic are available; these include cases where the distribution being tested is not fully known, so that parameters of the family of distributions have to be estimated from the data.
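
When the distribution under the null hypothesis is fully specified, a common alternative to consulting tables is an asymptotic series for the upper tail probability of V, combined with a finite-sample correction factor. Neither the series nor the correction appears in the text above, so the following sketch should be read as an approximation adopted for illustration rather than as the tabulated critical values.

    import numpy as np

    def kuiper_pvalue(v, n, terms=100):
        """Approximate P(V > v) under the null hypothesis using an asymptotic
        series with a finite-n correction (an assumption of this sketch)."""
        lam = (np.sqrt(n) + 0.155 + 0.24 / np.sqrt(n)) * v
        if lam < 0.4:
            return 1.0                        # series is unreliable here; p is essentially 1
        j = np.arange(1, terms + 1)
        p = 2.0 * np.sum((4.0 * j**2 * lam**2 - 1.0) * np.exp(-2.0 * j**2 * lam**2))
        return float(min(max(p, 0.0), 1.0))   # clip numerical noise into [0, 1]

    # Continuing the previous sketch: kuiper_pvalue(V, n=200) gives the
    # approximate significance level of the observed statistic.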

