
Scale-space implementation

The linear scale-space representation of an N-dimensional continuous signal,

$$f_C(x_1, x_2, \ldots, x_N),$$

is obtained by convolving f_C with an N-dimensional Gaussian kernel:

$$g_N(x_1, x_2, \ldots, x_N;\, t) = \frac{1}{(2\pi t)^{N/2}}\, e^{-(x_1^2 + \cdots + x_N^2)/(2t)}.$$

In other words:

$$L(x_1, x_2, \ldots, x_N;\, t) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_C(x_1 - u_1, \ldots, x_N - u_N)\, g_N(u_1, \ldots, u_N;\, t)\, du_1 \cdots du_N.$$

However, for implementation purposes this definition is impractical, since it is continuous. When applying the scale-space concept to a discrete signal f_D, different approaches can be taken. This article is a brief summary of some of the most frequently used methods.

Using the separability property of the Gaussian kernel,

$$g_N(x_1, x_2, \ldots, x_N;\, t) = G(x_1;\, t)\, G(x_2;\, t) \cdots G(x_N;\, t),$$

the N-dimensional convolution operation can be decomposed into a set of separable smoothing steps with a one-dimensional Gaussian kernel G along each dimension,

$$L(x_1, \ldots, x_N;\, t) = \int_{-\infty}^{\infty} G(u_1;\, t) \left( \cdots \int_{-\infty}^{\infty} G(u_N;\, t)\, f_C(x_1 - u_1, \ldots, x_N - u_N)\, du_N \cdots \right) du_1,$$

where

$$G(x;\, t) = \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/(2t)},$$

and the standard deviation of the Gaussian σ is related to the scale parameter t according to t = σ².

Separability will be assumed in all that follows, even when the kernel is not exactly Gaussian, since separation of the dimensions is the most practical way to implement multidimensional smoothing, especially at larger scales. Therefore, the rest of the article focuses on the one-dimensional case.
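
As an illustration of this separable scheme, here is a minimal NumPy/SciPy sketch; the function name smooth_separable and the boundary handling mode='reflect' are choices of this sketch, not prescribed by the theory:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_separable(f, t):
    """Gaussian scale-space smoothing of an N-dimensional array by a
    cascade of one-dimensional convolutions, one along each axis,
    exploiting the separability of the Gaussian kernel (t = sigma**2)."""
    sigma = np.sqrt(t)
    L = np.asarray(f, dtype=float)
    for axis in range(L.ndim):
        L = gaussian_filter1d(L, sigma=sigma, axis=axis, mode='reflect')
    return L

# Example: smoothing a 2-D signal at scale t = 4 (sigma = 2).
image = np.random.rand(128, 128)
L = smooth_separable(image, t=4.0)
```

For an N-dimensional signal and a kernel of length 2M + 1, the cascade costs about N(2M + 1) operations per sample instead of (2M + 1)^N for a direct N-dimensional convolution, which is why separation is the practical choice at larger scales.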

When implementing the one-dimensional smoothing step in practice, the presumably simplest approach is to convolve the discrete signal f_D with a sampled Gaussian kernel:

$$L(x;\, t) = \sum_{n=-\infty}^{\infty} f_D(x - n)\, G(n;\, t),$$

where

$$G(n;\, t) = \frac{1}{\sqrt{2\pi t}}\, e^{-n^2/(2t)}$$

(with t = σ²), which in turn is truncated at the ends to give a filter with finite impulse response

$$L(x;\, t) = \sum_{n=-M}^{M} f_D(x - n)\, G(n;\, t)$$

for M chosen sufficiently large (see error function) such that

$$2 \sum_{n=M+1}^{\infty} G(n;\, t) \le 2 \int_{M}^{\infty} \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/(2t)}\, dx = \operatorname{erfc}\!\left(\frac{M}{\sqrt{2t}}\right) < \varepsilon.$$

A common choice is to set M to a constant C times the standard deviation of the Gaussian kernel,

$$M = C\sigma = C\sqrt{t},$$

where C is often chosen somewhere between 3 and 6.
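
To make the truncation rule concrete, the following sketch builds the sampled, truncated kernel and evaluates the erfc tail bound for a few values of C; the function names are illustrative only:

```python
import numpy as np
from scipy.special import erfc

def sampled_gaussian_kernel(t, C=4.0):
    """Sampled Gaussian G(n; t) = exp(-n**2 / (2 t)) / sqrt(2 pi t),
    truncated to |n| <= M with M = C * sigma and t = sigma**2."""
    sigma = np.sqrt(t)
    M = int(np.ceil(C * sigma))
    n = np.arange(-M, M + 1)
    return np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def tail_bound(C):
    """Upper bound erfc(M / sqrt(2 t)) on the neglected tail mass;
    with M = C * sigma this reduces to erfc(C / sqrt(2))."""
    return erfc(C / np.sqrt(2.0))

for C in (3, 4, 5, 6):
    print(C, tail_bound(C))   # ~2.7e-3, ~6.3e-5, ~5.7e-7, ~2.0e-9
```

The printed bounds show why C between 3 and 6 spans the useful range: C = 3 already keeps the neglected tail mass below about 0.3 percent, while C = 6 pushes it down to the order of 10⁻⁹.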

Using the sampled Gaussian kernel can, however, lead to implementation problems, in particular when computing higher-order derivatives at finer scales by applying sampled derivatives of Gaussian kernels. When accuracy and robustness are primary design criteria, alternative implementation approaches should therefore be considered.
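
As a rough numerical illustration of this effect (not taken from the article), one can check the first discrete moment of the sampled first-derivative-of-Gaussian kernel, which equals −1 for the continuous operator; the helper name below is hypothetical:

```python
import numpy as np

def sampled_gauss_derivative(t, C=6.0):
    """Sampled first derivative of the Gaussian,
    G'(n; t) = -(n / t) * G(n; t), truncated at M = C * sigma."""
    sigma = np.sqrt(t)
    M = int(np.ceil(C * sigma)) + 1
    n = np.arange(-M, M + 1)
    g = np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return n, -(n / t) * g

# The continuous moment is the integral of x * G'(x; t) dx = -1; the
# sampled kernel matches it at coarse scales but deviates at fine ones:
for t in (0.25, 0.5, 1.0, 4.0):
    n, dg = sampled_gauss_derivative(t)
    print(t, np.sum(n * dg))   # ~-0.87 at t = 0.25, ~-1.00 at t >= 1
```

At t = 0.25 the sampled derivative kernel underestimates the moment by roughly 13 percent, which is the kind of fine-scale inaccuracy that motivates the alternative implementations discussed elsewhere in the article.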

For small values of ε (10⁻⁶ to 10⁻⁸) the errors introduced by truncating the Gaussian are usually negligible. For larger values of ε, however, there are many better alternatives to a rectangular window function. For example, for a given number of points, a Hamming window, Blackman window, or Kaiser window will do less damage to the spectral and other properties of the Gaussian than a simple truncation will. Notwithstanding this, since the Gaussian kernel decreases rapidly at the tails, the main recommendation is still to use a sufficiently small value of ε so that the truncation effects are no longer important.
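
A minimal sketch of such a tapered truncation, here with a Hamming window (np.blackman could be substituted directly; np.kaiser additionally needs a shape parameter beta, so it would require a small wrapper):

```python
import numpy as np

def windowed_gaussian(t, M, window=np.hamming):
    """Sampled Gaussian truncated to |n| <= M and tapered by a window
    function, instead of a plain rectangular cut-off."""
    n = np.arange(-M, M + 1)
    g = np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    gw = g * window(2 * M + 1)   # symmetric window of the same length
    return gw / gw.sum()         # renormalize so the kernel sums to one

kernel = windowed_gaussian(t=4.0, M=6, window=np.blackman)
```

The renormalization step keeps the smoothing kernel mass equal to one, so the tapering trades a small change in kernel shape for better-behaved spectral properties than a hard cut-off.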

