The multiplicative weights update method is a meta-algorithm. It is an algorithmic technique which "maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution". It was discovered repeatedly in very diverse fields such as machine learning (AdaBoost, Winnow, Hedge), optimization (solving LPs), theoretical computer science (devising fast algorithms for LPs and SDPs), and game theory.
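The quoted description can be stated in a few lines of code. The following is a minimal sketch in Python; the function name, the learning rate eta, and the assumption that costs lie in [-1, 1] are illustrative choices rather than part of any particular source formulation:

```python
import numpy as np

def multiplicative_weights(costs, eta=0.1):
    """Run the multiplicative weights update on a given cost sequence.

    costs is a T x n array; costs[t, i] is the cost of element i in
    round t, assumed to lie in [-1, 1].  Returns the sequence of
    distributions maintained over the n elements.
    """
    n = costs.shape[1]
    weights = np.ones(n)                         # start with uniform weights
    history = []
    for cost in costs:
        distribution = weights / weights.sum()   # distribution handed to the "other algorithm"
        history.append(distribution)
        weights = weights * (1 - eta * cost)     # multiply mass by factors based on the feedback
    return history
```

With a suitably chosen eta, the average of these maintained distributions incurs a cost close to that of the best fixed element in hindsight, which is the guarantee the meta-algorithm's applications rely on.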
"Multiplicative weights" implies the iterative rule used in algorithms derived from the Multiplicative Weight Update Method. It is given with different names in the different fields where it was discovered or rediscovered.
The earliest known version of this technique was an algorithm named "Fictitious Play", which was proposed in game theory in the early 1950s. Grigoriadis and Khachiyan applied a randomized variant of "Fictitious Play" to solve two-player zero-sum games efficiently using the multiplicative weights algorithm. In this case, the player allocates higher weight to the actions that had a better outcome and chooses their strategy relying on these weights. In machine learning, Littlestone applied the earliest form of the multiplicative weights update rule in his famous Winnow Algorithm, which is similar to Minsky and Papert's earlier perceptron learning algorithm. Later, he generalized the Winnow Algorithm to the Weighted Majority Algorithm. Freund and Schapire followed his steps and generalized the Winnow Algorithm in the form of the Hedge Algorithm.
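To illustrate the game-theoretic use described above, the sketch below shows a row player who assigns weights to actions and plays a mixed strategy proportional to them. The Hedge-style exponential update, the best-responding opponent, and losses scaled to [0, 1] are assumptions made for this example only:

```python
import numpy as np

def solve_zero_sum(payoff, rounds=1000, eta=0.1):
    """Row player using multiplicative (Hedge-style) weights in a zero-sum game.

    payoff[i, j] is the row player's loss in [0, 1] when row i meets
    column j.  The column opponent best-responds every round.
    """
    n_rows, _ = payoff.shape
    weights = np.ones(n_rows)
    average_strategy = np.zeros(n_rows)
    for _ in range(rounds):
        strategy = weights / weights.sum()         # mixed strategy proportional to current weights
        average_strategy += strategy / rounds
        col = int(np.argmax(strategy @ payoff))    # opponent picks the most damaging column
        losses = payoff[:, col]                    # loss every row action would have suffered
        weights = weights * np.exp(-eta * losses)  # down-weight actions that did badly
    return average_strategy                        # approaches an approximate minimax strategy
```

For a suitable choice of eta and enough rounds, the averaged strategy returned here is an approximate minimax strategy for the row player.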
The multiplicative weights algorithm is also widely applied in computational geometry, for example in Clarkson's algorithm for linear programming (LP) with a bounded number of variables in linear time. Later, Bronnimann and Goodrich employed analogous methods to find set covers for hypergraphs with small VC dimension.