In probability theory and statistics, covariance is a measure of how much two variables change together, and the covariance function, or kernel, describes the spatial or temporal covariance of a random process or field. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x, y) gives the covariance of the values of the random field at the two locations x and y:

C(x, y) = Cov(Z(x), Z(y)) = E[(Z(x) − E[Z(x)]) (Z(y) − E[Z(y)])].
The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept, except that x and y refer to locations in time rather than in space) and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross-covariance between two different variables at different locations, Cov(Z(x1), Y(x2))).
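To make the definition concrete, the covariance at two fixed locations can be estimated from repeated realizations of the field. The following Python sketch is illustrative only: the exponential kernel (defined later in this section), the locations, and the sample size are assumptions chosen for the example, not part of the definition.

```python
import numpy as np

# Illustrative kernel (an assumption for this example): the exponential
# covariance C(x, y) = exp(-|x - y| / V), introduced later in the section.
def exponential_cov(x, y, V=1.0):
    return np.exp(-np.abs(x - y) / V)

rng = np.random.default_rng(0)
locations = np.array([0.0, 0.3, 1.5])                  # x1, x2, x3 in D
C = exponential_cov(locations[:, None], locations[None, :])

# Many realizations of a zero-mean Gaussian field at these locations.
samples = rng.multivariate_normal(np.zeros(3), C, size=100_000)

# The empirical covariance of Z(x1) and Z(x2) approaches C(x1, x2).
print(np.cov(samples[:, 0], samples[:, 1])[0, 1])      # ~ exp(-0.3) ~ 0.741
print(C[0, 1])
```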
For locations x1, x2, …, xN ∈ D, the variance of every linear combination

X = w1 Z(x1) + w2 Z(x2) + … + wN Z(xN) = Σi wi Z(xi)

can be computed as

Var(X) = Σi Σj wi wj C(xi, xj).
A function C(x, y) is a valid covariance function if and only if this variance is non-negative for all possible choices of N and weights w1, …, wN. A function with this property is called positive definite.
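In matrix form this says Var(X) = wᵀ C w, where C is the N×N matrix with entries C(xi, xj), so validity is exactly positive (semi-)definiteness of every such matrix. A minimal numerical sketch of the check, again assuming the exponential kernel for illustration:

```python
import numpy as np

def exponential_cov(x, y, V=1.0):
    # Assumed example of a valid covariance function.
    return np.exp(-np.abs(x - y) / V)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=50)               # N = 50 arbitrary locations
C = exponential_cov(x[:, None], x[None, :])       # matrix of C(xi, xj)

# Var(X) = sum_i sum_j wi wj C(xi, xj) = w @ C @ w must never be negative.
for _ in range(1_000):
    w = rng.normal(size=50)
    assert w @ C @ w >= -1e-9                     # tolerance for round-off

# Equivalent check: all eigenvalues of C are non-negative (up to round-off).
print(np.linalg.eigvalsh(C).min())
```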
In the case of a weakly stationary random field, where

C(xi, xj) = C(xi + h, xj + h)

for any lag h, the covariance function can be represented by a one-parameter function

Cs(h) = C(0, h) = C(x, x + h),

which is called a covariogram and also a covariance function. Implicitly, the C(xi, xj) can be computed from Cs(h) by:

C(x, y) = Cs(y − x).
The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem.
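Bochner's theorem states that a continuous stationary covariance function is positive definite exactly when it is the Fourier transform of a non-negative finite measure, i.e. when its spectral density is non-negative. The sketch below is a rough numerical illustration of that criterion, not a proof; the lag grid, the truncation, and the exponential covariogram are all assumptions.

```python
import numpy as np

def covariogram(h, V=1.0):
    # Assumed example: exponential covariogram Cs(h) = exp(-|h| / V).
    return np.exp(-np.abs(h) / V)

# Sample Cs on a wide, symmetric grid of lags and inspect its discrete
# Fourier transform, a crude stand-in for the spectral measure.
h = np.linspace(-50.0, 50.0, 4001)
spectrum = np.fft.fft(np.fft.ifftshift(covariogram(h)))

# For a real, even covariogram the spectrum is real; markedly negative
# values would flag an invalid covariance function. Here it stays positive.
print(spectrum.real.min())                 # > 0 up to round-off
print(np.abs(spectrum.imag).max())         # ~ 0
```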
A simple stationary parametric covariance function is the "exponential covariance function"

C(d) = exp(−d/V),

where V is a scaling parameter and d = d(x, y) is the distance between the two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential covariance function"

C(d) = exp(−(d/V)^2)

is a stationary covariance function with smooth sample paths.
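The difference shows up directly in simulated sample paths. The sketch below draws one zero-mean Gaussian path per kernel via a Cholesky factor of the covariance matrix; the grid, the choice V = 1, and the diagonal jitter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 400)
d = np.abs(x[:, None] - x[None, :])        # pairwise distances d(x, y)

V = 1.0
C_exp = np.exp(-d / V)                     # exponential covariance
C_sq = np.exp(-((d / V) ** 2))             # squared exponential covariance

# Small diagonal jitter keeps the Cholesky factorization numerically stable.
jitter = 1e-8 * np.eye(len(x))
z = rng.normal(size=len(x))                # shared noise makes paths comparable

path_exp = np.linalg.cholesky(C_exp + jitter) @ z   # jagged, non-smooth path
path_sq = np.linalg.cholesky(C_sq + jitter) @ z     # visibly smooth path
```

Plotting path_exp against path_sq makes the contrast immediate: the exponential path is continuous but jagged at every scale, while the squared exponential path varies smoothly.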