Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956.
Muller's method is based on the secant method, which constructs at every iteration a line through two points on the graph of f. Muller's method instead uses three points, constructs the parabola through these three points, and takes the intersection of the parabola with the x-axis (the root of the parabola closest to the most recent approximation) to be the next approximation.
Muller's method is a recursive method that generates a new approximation of the root ξ of f at each iteration. Starting with the three initial values x0, x−1 and x−2, the first iteration calculates the first approximation x1, the second iteration calculates the second approximation x2, and so on: the kth iteration generates the approximation xk. Each iteration takes as input the last three generated approximations and the value of f at these approximations; that is, the kth iteration takes as input the values xk-1, xk-2 and xk-3 together with the function values f(xk-1), f(xk-2) and f(xk-3). The approximation xk is calculated as follows.
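To make this bookkeeping concrete, the following Python sketch shows the outer loop of the method. The function name muller, the helper muller_step (the per-iteration update, written out after the parabola is derived below), and the tolerance and iteration cap are illustrative choices rather than part of the method itself.

    def muller(f, x0, x_m1, x_m2, tol=1e-12, max_iter=100):
        # The window of the three most recent approximations, oldest first;
        # it starts as (x_-2, x_-1, x_0) and plays the role of
        # (x_{k-3}, x_{k-2}, x_{k-1}) in the k-th iteration.
        a, b, c = x_m2, x_m1, x0
        for _ in range(max_iter):
            x_next = muller_step(f, a, b, c)  # compute x_k from the last three iterates
            if abs(x_next - c) < tol:         # stop once the update stalls
                return x_next
            a, b, c = b, c, x_next            # slide the three-point window forward
        raise RuntimeError("Muller's method did not converge")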
A parabola yk(x) is constructed which goes through the three points (xk-1, f(xk-1)), (xk-2, f(xk-2)) and (xk-3, f(xk-3)). When written in the Newton form, yk(x) is

    yk(x) = f(xk-1) + f[xk-1, xk-2](x − xk-1) + f[xk-1, xk-2, xk-3](x − xk-1)(x − xk-2),
where f[xk-1, xk-2] and f[xk-1, xk-2, xk-3] denote divided differences. This can be rewritten as

    yk(x) = f(xk-1) + w(x − xk-1) + f[xk-1, xk-2, xk-3](x − xk-1)²,

where the coefficient of the linear term is

    w = f[xk-1, xk-2] + f[xk-1, xk-3] − f[xk-2, xk-3].
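Continuing the sketch above, the hypothetical helper muller_step can be written directly from these divided differences. It takes the root of the parabola closest to xk-1 as the next approximation, evaluated with the cancellation-avoiding form of the quadratic formula; complex arithmetic is used because Muller's method can produce complex iterates even when f is real.

    import cmath

    def muller_step(f, a, b, c):
        # One iteration: a, b, c stand for x_{k-3}, x_{k-2}, x_{k-1}.
        fa, fb, fc = f(a), f(b), f(c)
        # First divided differences f[x_{k-1}, x_{k-2}], f[x_{k-1}, x_{k-3}],
        # f[x_{k-2}, x_{k-3}].
        d_cb = (fc - fb) / (c - b)
        d_ca = (fc - fa) / (c - a)
        d_ba = (fb - fa) / (b - a)
        # Second divided difference f[x_{k-1}, x_{k-2}, x_{k-3}].
        d2 = (d_cb - d_ba) / (c - a)
        # Coefficient of the linear term in the rewritten parabola.
        w = d_cb + d_ca - d_ba
        # Root of the parabola closest to x_{k-1}: the sign of the square root
        # is chosen to maximise the magnitude of the denominator, which avoids
        # cancellation.  cmath.sqrt keeps the step well defined when the
        # discriminant is negative.
        s = cmath.sqrt(w * w - 4 * fc * d2)
        denom = w + s if abs(w + s) >= abs(w - s) else w - s
        return c - 2 * fc / denom

For example, muller(lambda x: x**3 - x - 2, 2.0, 1.0, 0.5) converges in a handful of iterations to the real root near 1.5214; because of cmath the result is returned as a complex number, whose imaginary part is zero or negligibly small when the root is real.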