Backpropagation


Backpropagation is a method used in artificial neural networks to calculate the error contribution of each neuron after a batch of data (in image recognition, multiple images) is processed. An enclosing optimization algorithm then uses these per-neuron error contributions to adjust the neurons' weights, completing one learning step for that batch.
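
As a concrete illustration, the sketch below (not taken from the article) performs one such step for a tiny fully connected network with a single hidden layer, sigmoid activations, and a squared-error loss; the layer sizes, random data, and learning rate are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Illustrative batch: 4 inputs with 3 features, and their desired outputs.
    X = rng.normal(size=(4, 3))
    y = rng.normal(size=(4, 1))

    # Randomly initialised weights for a 3 -> 5 -> 1 network.
    W1 = rng.normal(size=(3, 5))
    W2 = rng.normal(size=(5, 1))

    # Forward pass: compute each layer's activations.
    h = sigmoid(X @ W1)          # hidden activations
    y_hat = sigmoid(h @ W2)      # network output

    # Backward pass: each delta is a neuron's contribution to the output error.
    delta_out = (y_hat - y) * y_hat * (1 - y_hat)    # output-layer error
    delta_hidden = (delta_out @ W2.T) * h * (1 - h)  # error propagated back

    # The enclosing optimizer (here: plain gradient descent) adjusts the weights.
    learning_rate = 0.1
    W2 -= learning_rate * h.T @ delta_out
    W1 -= learning_rate * X.T @ delta_hidden

The deltas play the role of the per-neuron error contributions described above; the final two lines are the weight adjustment carried out by the enclosing optimizer.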

Technically, backpropagation calculates the gradient of the loss function with respect to the network's weights. It is commonly used within the gradient descent optimization algorithm. It is also called the backward propagation of errors, because the error is calculated at the output and distributed backwards through the network's layers.
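
A minimal sketch of that pairing, assuming a single linear neuron with a mean squared error loss and synthetic data, shows the gradient computed at each step feeding the gradient descent update w <- w - eta * grad:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))          # 20 illustrative examples, 3 features
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w                        # known desired outputs

    w = np.zeros(3)
    eta = 0.1                             # learning rate

    for step in range(200):
        y_hat = X @ w                     # forward pass
        grad = X.T @ (y_hat - y) / len(X) # gradient of the mean squared error
        w -= eta * grad                   # gradient descent update

    print(w)  # approaches true_w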

Backpropagation requires a known, desired output for each input value, so it is considered a supervised learning method (although it is also used in some unsupervised networks such as autoencoders).

Backpropagation is a generalization of the delta rule to multi-layered feedforward networks, made possible by using the chain rule to iteratively compute gradients for each layer. The backpropagation algorithm has been repeatedly rediscovered and is a special case of a more general technique called automatic differentiation in reverse accumulation mode. It is closely related to the Gauss–Newton algorithm, and is part of continuing research in neural backpropagation.
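
To make the connection to reverse accumulation concrete, the sketch below differentiates a small composite function by hand: a forward sweep stores the intermediate values, then a reverse sweep applies the chain rule from the output back to the inputs. The function f(x, y) = sin(x*y) + x^2 and all names are purely illustrative.

    import math

    def f_and_grad(x, y):
        # Forward sweep: record intermediate values.
        a = x * y          # a = x*y
        b = math.sin(a)    # b = sin(a)
        c = x * x          # c = x^2
        f = b + c          # f = b + c

        # Reverse sweep: propagate df/d(node) from the output back to the inputs.
        df_db = 1.0
        df_dc = 1.0
        df_da = df_db * math.cos(a)        # chain rule through sin
        df_dx = df_da * y + df_dc * 2 * x  # x feeds both a and c
        df_dy = df_da * x
        return f, (df_dx, df_dy)

    # Quick finite-difference check of the reverse-mode gradient.
    x, y, eps = 0.7, -1.3, 1e-6
    f0, (gx, gy) = f_and_grad(x, y)
    print(gx, (f_and_grad(x + eps, y)[0] - f0) / eps)
    print(gy, (f_and_grad(x, y + eps)[0] - f0) / eps)

Backpropagation applies exactly this pattern layer by layer, which is why it falls out as a special case of reverse-mode automatic differentiation.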

Backpropagation can be used with any gradient-based optimizer, such as L-BFGS or truncated Newton.
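
As a sketch of that flexibility, the example below feeds an analytically computed gradient (the role backpropagation plays for a real network) into SciPy's L-BFGS-B optimizer; the single-neuron model and the data are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 4))
    y = X @ np.array([2.0, -1.0, 0.0, 0.5])

    def loss_and_grad(w):
        residual = X @ w - y
        loss = 0.5 * np.mean(residual ** 2)
        grad = X.T @ residual / len(X)   # the gradient backpropagation supplies
        return loss, grad

    result = minimize(loss_and_grad, x0=np.zeros(4), jac=True, method="L-BFGS-B")
    print(result.x)  # recovers the weights that generated y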

Backpropagation is sometimes loosely referred to as deep learning, a term used to describe neural networks with more than one hidden layer (layers not dedicated to input or output).

The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct outputs. An example would be a classification task, where the input is an image of an animal and the correct output is the name of the animal.

