In probability theory, the de Moivre–Laplace theorem, which is a special case of the central limit theorem, states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions. In particular, the theorem shows that the probability mass function of the random number of "successes" observed in a series of n independent Bernoulli trials, each having probability p of success (a binomial distribution with n trials), converges to the probability density function of the normal distribution with mean np and standard deviation √(np(1−p)), as n grows large, assuming p is not 0 or 1.
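For a concrete sense of this approximation, the following Python sketch compares the exact binomial mass with the corresponding normal density at a few points near the mean np (the parameters n, p, and the values of k are arbitrary illustrative choices):

```python
import math

def binom_pmf(k, n, p):
    """Exact binomial probability P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Arbitrary illustrative parameters: 100 trials with success probability 0.5.
n, p = 100, 0.5
mu = n * p                          # mean np
sigma = math.sqrt(n * p * (1 - p))  # standard deviation sqrt(np(1-p))

# Compare the exact binomial mass with the normal density near the mean.
for k in (45, 50, 55):
    exact = binom_pmf(k, n, p)
    approx = normal_pdf(k, mu, sigma)
    print(f"k={k}: binomial={exact:.6f}  normal={approx:.6f}  ratio={exact/approx:.4f}")
```

The ratios printed are close to 1, reflecting how good the approximation already is for moderate n when k is near np.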
The theorem appeared in the second edition of The Doctrine of Chances by Abraham de Moivre, published in 1738. Although de Moivre did not use the term "Bernoulli trials", he wrote about the probability distribution of the number of times "heads" appears when a coin is tossed 3600 times.
This is one derivation of the particular Gaussian function used in the normal distribution.
As n grows large, for k in the neighborhood of np we can approximate

\[
\binom{n}{k}\, p^{k} q^{n-k} \simeq \frac{1}{\sqrt{2\pi npq}}\, e^{-\frac{(k-np)^{2}}{2npq}}, \qquad p+q=1,\ p, q > 0,
\]
in the sense that the ratio of the left-hand side to the right-hand side converges to 1 as n → ∞.
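A simple numerical way to see this convergence (parameter choices here are arbitrary) is to fix x, take k ≈ np + x√(npq) for each n, and watch the ratio of the two sides approach 1 as n increases. The Python sketch below works in log space, via math.lgamma, to avoid floating-point underflow for large n:

```python
import math

def log_binom_pmf(k, n, p):
    """log P(X = k) for X ~ Binomial(n, p), via lgamma to avoid underflow for large n."""
    q = 1 - p
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(q))

def log_normal_pdf_at(k, n, p):
    """log of the approximating normal density with mean np and variance npq, at k."""
    q = 1 - p
    return -((k - n * p) ** 2) / (2 * n * p * q) - 0.5 * math.log(2 * math.pi * n * p * q)

# Keep k one standard deviation above the mean (x = 1) and let n grow:
# the ratio of the binomial mass to the normal density should approach 1.
x, p = 1.0, 0.5
for n in (100, 1_000, 10_000, 100_000):
    k = round(n * p + x * math.sqrt(n * p * (1 - p)))
    ratio = math.exp(log_binom_pmf(k, n, p) - log_normal_pdf_at(k, n, p))
    print(f"n = {n:>7}: ratio = {ratio:.6f}")
```

Keeping k a fixed number of standard deviations from the mean, as done here, anticipates the substitution introduced next.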
Note that k cannot be held fixed: a fixed k would quickly fall outside the range of interest as n → ∞. What is needed is to let k vary with n while remaining a fixed number of standard deviations from the mean, so that it always corresponds to the same point on the standard normal distribution. We can do this by defining