Parameter estimation


Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.

For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters.
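As an illustrative sketch of this voter example (all numbers here are hypothetical, chosen only for the demonstration), the sample proportion serves as the estimator of the unknown population proportion:

```python
import random

random.seed(0)

# Hypothetical true proportion; in practice this is the unknown parameter.
TRUE_P = 0.53

# Draw a small random sample of voters (1 = votes for the candidate).
sample = [1 if random.random() < TRUE_P else 0 for _ in range(1000)]

# The sample proportion is the natural estimator of the population proportion.
p_hat = sum(sample) / len(sample)

# Its standard error quantifies the random variability of the estimate.
se = (p_hat * (1 - p_hat) / len(sample)) ** 0.5
print(f"estimate: {p_hat:.3f} +/- {1.96 * se:.3f} (approx. 95% interval half-width)")
```

The estimate differs from the true proportion because the sample is random; a larger sample shrinks the standard error at rate 1/sqrt(n).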

Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit time of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so the transit time must be estimated.

In estimation theory, two approaches are generally considered: the probabilistic approach, which assumes that the measured data are random with a probability distribution that depends on the parameters of interest, and the set-membership approach, which assumes that the measured data vector belongs to a set that depends on the parameter vector.

For example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal. Without randomness, or noise, the problem would be deterministic and estimation would not be needed.
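The classic instance of this signal-plus-noise situation is estimating an unknown DC level in white Gaussian noise, x[n] = A + w[n]; the values of A and the noise level below are assumed purely for illustration. Without the noise term, a single measurement would reveal A exactly; with it, A must be estimated.

```python
import random

random.seed(2)

# Hypothetical model: x[n] = A + w[n], with A an unknown DC level and
# w[n] white Gaussian noise of standard deviation sigma.
A = 1.7
sigma = 0.5
x = [A + random.gauss(0, sigma) for _ in range(500)]

# In this model the sample mean is the maximum-likelihood estimator of A.
A_hat = sum(x) / len(x)
print(f"A_hat = {A_hat:.3f}")
```

The estimator's standard deviation is sigma/sqrt(N), so with 500 samples the estimate is typically within a few hundredths of the true level.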

To build a model, several statistical "ingredients" need to be known. These are needed to ensure the estimator has some mathematical tractability.

The first is a set of statistical samples taken from a random vector (RV) of size N. Put into a vector,

    x = [ x[0], x[1], ..., x[N-1] ]^T .

Secondly, there are the corresponding M parameters,

    θ = [ θ_1, θ_2, ..., θ_M ]^T ,

which need to be established with their continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf),

    p(x | θ) .

It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability

    π(θ) .
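A minimal sketch of the Bayesian case, using the conjugate Beta-Bernoulli pair (all prior pseudo-counts and observed counts below are assumed for illustration): the parameter θ, a proportion, is itself given a Beta prior, and observing the data turns it into a Beta posterior in closed form.

```python
# Hypothetical prior on the proportion theta: Beta(a, b).
a, b = 2.0, 2.0          # assumed prior pseudo-counts

# Observed data: k successes in n Bernoulli trials (assumed numbers).
k, n = 34, 50

# Conjugacy: the posterior is Beta(a + k, b + n - k).
post_a, post_b = a + k, b + (n - k)

# The posterior mean is a common Bayesian point estimate of theta.
posterior_mean = post_a / (post_a + post_b)
print(f"posterior mean: {posterior_mean:.3f}")  # -> 0.667
```

Note how the prior pseudo-counts pull the estimate slightly away from the raw sample proportion k/n = 0.68 toward the prior mean of 0.5.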
