
Quasi-likelihood


In statistics, quasi-likelihood estimation is one way of allowing for overdispersion, that is, greater variability in the data than would be expected from the statistical model used. It is most often used with models for count data or grouped binary data, i.e. data that would otherwise be modelled using the Poisson or binomial distribution.

The term quasi-likelihood function was introduced by Robert Wedderburn in 1974 to describe a function that has similar properties to the log-likelihood function, except that a quasi-likelihood function need not be the log-likelihood corresponding to any actual probability distribution. Quasi-likelihood models can be fitted using a straightforward extension of the algorithms used to fit generalized linear models.
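A sketch of the standard construction may help (notation as in the generalized linear model literature; here V is the variance function and the dispersion parameter is written as phi): the quasi-likelihood for a single observation y with mean mu is defined through an integral of the standardized residual,

```latex
Q(\mu; y) \;=\; \int_{y}^{\mu} \frac{y - t}{\phi\, V(t)}\, \mathrm{d}t,
\qquad
\frac{\partial Q}{\partial \mu} \;=\; \frac{y - \mu}{\phi\, V(\mu)} .
```

The derivative, the quasi-score, shares the key properties of a true score function: it has expectation zero, and its variance equals the negative of its expected derivative. These are the properties the fitting algorithms for generalized linear models rely on, which is why they carry over unchanged.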

Instead of specifying a probability distribution for the data, only a relationship between the mean and the variance is specified in the form of a variance function giving the variance as a function of the mean. Generally, this function is allowed to include a multiplicative factor known as the overdispersion parameter or scale parameter that is estimated from the data. Most commonly, the variance function is of a form such that fixing the overdispersion parameter at unity results in the variance-mean relationship of an actual probability distribution such as the binomial or Poisson. (For formulae, see the binomial data example and count data example under generalized linear models.)
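For the quasi-Poisson case described above, the variance function is V(mu) = phi * mu, and phi is commonly estimated by the Pearson chi-square statistic divided by the residual degrees of freedom. A minimal intercept-only sketch, with simulated data and all names purely illustrative (none of this is from the article):

```python
import math
import random

random.seed(42)

def rpois(lam):
    """Draw one Poisson(lam) variate (Knuth's method; fine for modest lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

# Simulate overdispersed counts as a gamma mixture of Poissons,
# so that Var(y) exceeds E(y) (hypothetical data for illustration).
mu_true = 5.0
y = [rpois(random.gammavariate(2.0, mu_true / 2.0)) for _ in range(2000)]

# Intercept-only quasi-Poisson fit: the mean estimate is the sample mean;
# the assumed variance function is V(mu) = phi * mu.
n = len(y)
mu_hat = sum(y) / n

# Estimate phi by the Pearson statistic over residual degrees of freedom.
phi_hat = sum((yi - mu_hat) ** 2 / mu_hat for yi in y) / (n - 1)

print(f"mu_hat = {mu_hat:.2f}, phi_hat = {phi_hat:.2f}")
```

Fixing phi at 1 recovers the ordinary Poisson variance; a fitted phi well above 1, as with this mixture data, is exactly the extra-Poisson variability that the quasi-likelihood approach absorbs into the standard errors rather than into the mean model.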

Random-effects models, and more generally mixed models (hierarchical models), provide an alternative way to fit data exhibiting overdispersion using fully specified probability models. However, these methods often become complex and computationally intensive when fitted to binary or count data. Quasi-likelihood methods have the advantage of relative computational simplicity, speed and robustness, as they can make use of the more straightforward algorithms developed to fit generalized linear models.

