Newton's method in optimization


In calculus, Newton's method is an iterative method for finding the roots of a differentiable function f (i.e. solutions to the equation f(x) = 0). In optimization, Newton's method is applied to the derivative f ′ of a twice-differentiable function f to find the roots of the derivative (solutions to f ′(x) = 0), also known as the stationary points of f.

In the one-dimensional problem, Newton's method attempts to construct a sequence xn from an initial guess x0 that converges towards some value x* satisfying f ′(x*)=0. This x* is a stationary point of f.

The second-order Taylor expansion fT(x) of f around xn is:

    fT(xn + Δx) ≈ f(xn) + f ′(xn) Δx + ½ f ″(xn) Δx².

We want to find Δx such that xn + Δx is a stationary point, so we solve the equation that sets the derivative of this expression with respect to Δx equal to zero:

    d/d(Δx) [ f(xn) + f ′(xn) Δx + ½ f ″(xn) Δx² ] = f ′(xn) + f ″(xn) Δx = 0.

For the value of Δx = −f ′(xn) / f ″(xn), which is the solution of this equation, it can be hoped that xn+1 = xn + Δx = xn − f ′(xn) / f ″(xn) will be closer to a stationary point x*. Provided that f is a twice-differentiable function and other technical conditions are satisfied, the sequence x1, x2, … will converge to a point x* satisfying f ′(x*) = 0.
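
A minimal Python sketch of this one-dimensional iteration follows; the example function f(x) = x⁴ − 3x³ + 2, the starting point x0 = 3, and the stopping tolerance are arbitrary illustrative choices, not part of the method itself.

    def newton_optimize_1d(f_prime, f_double_prime, x0, tol=1e-10, max_iter=100):
        """Find a stationary point of f by applying Newton's method to f'."""
        x = x0
        for _ in range(max_iter):
            # Newton update: x_{n+1} = x_n - f'(x_n) / f''(x_n)
            x_new = x - f_prime(x) / f_double_prime(x)
            if abs(x_new - x) < tol:  # stop when the step is negligible
                return x_new
            x = x_new
        return x

    # Illustrative example: f(x) = x**4 - 3*x**3 + 2, so
    # f'(x) = 4x^3 - 9x^2 and f''(x) = 12x^2 - 18x.
    # Starting from x0 = 3, the iterates converge to the stationary point x* = 9/4.
    f_prime = lambda x: 4 * x**3 - 9 * x**2
    f_double_prime = lambda x: 12 * x**2 - 18 * x
    print(newton_optimize_1d(f_prime, f_double_prime, x0=3.0))  # ≈ 2.25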
