In economics, a Taylor rule is a reduced-form approximation of the responsiveness of the nominal interest rate, as set by the central bank, to changes in inflation, output, or other economic conditions. In particular, the rule describes how, for each one-percent increase in inflation, the central bank tends to raise the nominal interest rate by more than one percentage point. This aspect of the rule is often called the Taylor principle. While such rules may serve as concise, descriptive proxies for central bank policy, they are not explicitly prescriptively considered by central banks when setting nominal rates.
The rule was first proposed in 1993 by John B. Taylor, and simultaneously by Dale W. Henderson and Warwick McKibbin. It is intended to foster price stability and full employment by systematically reducing uncertainty and increasing the credibility of future actions by the central bank. It may also avoid the inefficiencies of time inconsistency that arise from the exercise of discretionary policy. The Taylor rule synthesized, and provided a compromise between, competing schools of economic thought in a language devoid of rhetorical passion. Although many issues remain unresolved and views still differ about how the Taylor rule can best be applied in practice, research shows that the rule has advanced the practice of central banking.
According to Taylor's original version of the rule, the nominal interest rate should respond to divergences of actual inflation rates from target inflation rates and of actual Gross Domestic Product (GDP) from potential GDP:

$$i_t = \pi_t + r_t^* + a_\pi (\pi_t - \pi_t^*) + a_y (y_t - \bar{y}_t)$$

In this equation, $i_t$ is the target short-term nominal interest rate (e.g. the federal funds rate in the US, the Bank of England base rate in the UK), $\pi_t$ is the rate of inflation as measured by the GDP deflator, $\pi_t^*$ is the desired rate of inflation, $r_t^*$ is the assumed equilibrium real interest rate, $y_t$ is the logarithm of real GDP, and $\bar{y}_t$ is the logarithm of potential output, as determined by a linear trend. The coefficients $a_\pi$ and $a_y$ govern the strength of the policy response; Taylor's 1993 paper proposed setting both to 0.5.
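Because the rule is a simple linear formula, it can be illustrated directly in code. The following is a minimal sketch in Python, assuming Taylor's 1993 coefficients of 0.5 and, purely for illustration, a 2% inflation target and a 2% equilibrium real rate; the function name and example figures are hypothetical, not part of the original rule.

```python
def taylor_rate(inflation, target_inflation=2.0, real_rate=2.0,
                output_gap=0.0, a_pi=0.5, a_y=0.5):
    """Target nominal interest rate under Taylor's original (1993) rule.

    All rates are in percent; output_gap approximates
    100 * (log real GDP - log potential GDP).
    """
    return (inflation + real_rate
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)

# With output at potential, a 1-point rise in inflation raises the
# prescribed nominal rate by 1.5 points (the Taylor principle).
print(taylor_rate(inflation=2.0))  # 4.0
print(taylor_rate(inflation=3.0))  # 5.5
```

Note that because inflation enters both directly and through the inflation-gap term, the prescribed nominal rate rises by $1 + a_\pi$ points for each one-point rise in inflation, which is the more-than-one-for-one response described above.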