
Kahan summation algorithm


In numerical analysis, the Kahan summation algorithm (also known as compensated summation) significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors).

In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk). With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.
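
This difference can be observed directly. The sketch below (Python; the helper names naive_sum and kahan_sum are illustrative, and single precision is used only so that roundoff becomes visible at modest n) compares both approaches against a double-precision reference; kahan_sum follows the pseudocode given below.

    import numpy as np

    def naive_sum(values):
        # Plain left-to-right summation; every addition rounds to float32.
        s = np.float32(0.0)
        for x in values:
            s = np.float32(s + x)
        return s

    def kahan_sum(values):
        # Compensated summation: c carries the roundoff lost at each step.
        s = np.float32(0.0)
        c = np.float32(0.0)
        for x in values:
            y = np.float32(x - c)        # apply the stored correction to the next term
            t = np.float32(s + y)        # big + small: low-order digits of y are lost here
            c = np.float32((t - s) - y)  # recover (the negation of) what was just lost
            s = t
        return s

    rng = np.random.default_rng(0)
    values = rng.random(10**6).astype(np.float32)
    reference = sum(float(x) for x in values)  # accumulate in double precision
    print(abs(naive_sum(values) - reference))  # grows with the length of the input
    print(abs(kahan_sum(values) - reference))  # bounded independently of the length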

The algorithm is attributed to William Kahan. Similar, earlier techniques include Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although it was first documented around the same time), and delta-sigma modulation, which integrates the error rather than just summing it.

In pseudocode, the algorithm is:
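
    function KahanSum(input)
        var sum = 0.0              // Prepare the accumulator.
        var c = 0.0                // A running compensation for lost low-order bits.
        for i = 1 to input.length do
            var y = input[i] - c   // c is zero the first time around.
            var t = sum + y        // Alas, sum is big, y small, so low-order digits of y are lost.
            c = (t - sum) - y      // (t - sum) cancels the high-order part of y; subtracting y recovers the negative of y's lost low part.
            sum = t                // Algebraically, c should always be zero. Beware overly aggressive optimizing compilers!
        next i                     // Next time around, the lost low part will be added to y in a fresh attempt.
        return sum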

This example will be given in decimal. Computers typically use binary arithmetic, but the principle being illustrated is the same. Suppose we are using six-digit decimal floating-point arithmetic, sum has attained the value 10000.0, and the next two values of input[i] are 3.14159 and 2.71828. The exact result is 10005.85987, which rounds to 10005.9. With plain summation, each incoming value would be aligned with sum, and many low-order digits would be lost (by truncation or rounding). The first result, after rounding, would be 10003.1. The second result would be 10005.81828 before rounding and 10005.8 after rounding, so the sum ends up as 10005.8 rather than the correctly rounded 10005.9: the digits discarded at each step cannot be recovered by later additions. With compensated summation, the correction term captures those discarded digits, and the final result is 10005.9.
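
The same six-digit arithmetic can be reproduced with Python's decimal module (a sketch: setting the context precision to 6 makes every operation round to six significant digits, emulating the example above), showing how the compensation recovers the correctly rounded result:

    from decimal import Decimal, getcontext

    getcontext().prec = 6        # every arithmetic result rounds to six significant digits

    inputs = [Decimal("3.14159"), Decimal("2.71828")]

    # Plain summation: the digits lost at each rounding are gone for good.
    naive = Decimal("10000.0")
    for x in inputs:
        naive = naive + x
    print(naive)                 # 10005.8

    # Compensated summation, following the pseudocode above.
    total = Decimal("10000.0")
    c = Decimal("0")
    for x in inputs:
        y = x - c                # first pass: 3.14159; second pass: 2.75987
        t = total + y            # first pass: 10003.14159 rounds to 10003.1
        c = (t - total) - y      # first pass: 3.1 - 3.14159 = -0.04159, the negated lost part
        total = t
    print(total)                 # 10005.9, the correctly rounded exact sum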

