
Adaptive control


Adaptive control, also known as adjustable control, is the control method used by a controller that must adapt to a controlled system whose parameters vary or are initially uncertain. For example, as an aircraft flies, its mass slowly decreases as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control differs from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters; robust control guarantees that the control law need not be changed if the changes stay within given bounds, whereas adaptive control is concerned with control laws that change themselves.

The foundation of adaptive control is parameter estimation. Common estimation methods include recursive least squares and gradient descent. Both provide update laws that modify the parameter estimates in real time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and to establish convergence criteria (typically requiring persistent excitation). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
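
As an illustration, the following sketch shows a normalized gradient-descent update law of the kind described above, under the assumption of a plant that is linear in its parameters, y = θᵀφ; all names and numerical values are illustrative, not taken from this article:

    import numpy as np

    def gradient_update(theta_hat, phi, y, gamma):
        """One step of a normalized gradient update law for the model y = theta^T phi."""
        error = y - theta_hat @ phi          # prediction error
        # Normalization bounds the step size when the regressor is large,
        # which improves the robustness of the estimator.
        return theta_hat + gamma * error * phi / (1.0 + phi @ phi)

    # Example: estimate the parameters of y = 2*u + 3 from noisy measurements.
    rng = np.random.default_rng(0)
    theta_true = np.array([2.0, 3.0])
    theta_hat = np.zeros(2)
    for _ in range(2000):
        u = rng.uniform(-1.0, 1.0)           # a persistently exciting input
        phi = np.array([u, 1.0])
        y = theta_true @ phi + 0.01 * rng.standard_normal()
        theta_hat = gradient_update(theta_hat, phi, y, gamma=0.5)
    print(theta_hat)                         # approaches [2, 3]

Recursive least squares would replace the scalar gain gamma with a covariance matrix updated at every step, typically converging faster at a higher computational cost.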

In general one should distinguish between feedforward and feedback adaptive control, as well as between direct and indirect methods.

Direct methods are those in which the estimated parameters are used directly in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate the required controller parameters.
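
As a minimal sketch of the indirect (certainty-equivalence) approach, assume a first-order discrete-time plant y[k+1] = a·y[k] + b·u[k] with unknown a and b: the plant parameters are estimated online and the controller gain is then computed from the estimates, whereas a direct scheme would update the controller gains themselves. The plant, gains, and numbers below are illustrative assumptions:

    import numpy as np

    # Indirect adaptive control of the unknown first-order plant
    #   y[k+1] = a*y[k] + b*u[k]
    # The parameters are estimated online, and the controller gain is then
    # computed from the estimates (certainty equivalence) so that the closed
    # loop behaves like y[k+1] = a_m*y[k] + (1 - a_m)*r once the estimates settle.
    a, b = 0.9, 0.5                      # true plant parameters (unknown to the controller)
    a_m = 0.3                            # desired closed-loop pole
    theta_hat = np.array([0.0, 0.2])     # estimates [a_hat, b_hat]
    gamma = 0.5                          # adaptation gain
    y, r = 0.0, 1.0                      # plant output and constant reference

    for k in range(300):
        a_hat, b_hat = theta_hat
        # Indirect step: the controller gain is recomputed from the estimates.
        u = ((a_m - a_hat) * y + (1.0 - a_m) * r) / b_hat
        y_next = a * y + b * u           # true (unknown) plant response
        # Normalized gradient update of the parameter estimates.
        phi = np.array([y, u])
        e = y_next - theta_hat @ phi     # prediction error
        theta_hat = theta_hat + gamma * e * phi / (1.0 + phi @ phi)
        theta_hat[1] = max(theta_hat[1], 0.1)   # projection: keep b_hat away from zero
        y = y_next

    # y tracks r; without a persistently exciting reference the estimates
    # need not converge to the true (a, b).
    print(theta_hat, y)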

There are several broad categories of feedback adaptive control; the exact classification can vary.

Some special topics in adaptive control can be introduced as well.

Adaptive control has also been merged with intelligent techniques such as fuzzy logic and neural networks, and new terms such as fuzzy adaptive control have been coined.

When designing adaptive control systems, special consideration of convergence and robustness issues is necessary. Lyapunov stability is typically used to derive control adaptation laws and to show convergence.
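
As an illustration of how a Lyapunov argument produces an adaptation law, consider the classical scalar model-reference example; the plant, reference model, and symbols below are standard textbook assumptions rather than material from this article:

    % Plant with unknown a, b (sign of b known) and stable reference model:
    \dot{x} = a x + b u, \qquad \dot{x}_m = a_m x_m + b_m r, \quad a_m < 0
    % Control law with adjustable gains, tracking error, and gain errors
    % relative to the ideal gains k_x^* = (a_m - a)/b and k_r^* = b_m/b:
    u = k_x x + k_r r, \qquad e = x - x_m, \qquad
    \tilde{k}_x = k_x - k_x^*, \quad \tilde{k}_r = k_r - k_r^*
    % Error dynamics and Lyapunov candidate:
    \dot{e} = a_m e + b\,(\tilde{k}_x x + \tilde{k}_r r), \qquad
    V = \tfrac{1}{2} e^2 + \tfrac{|b|}{2\gamma}\,(\tilde{k}_x^2 + \tilde{k}_r^2), \quad \gamma > 0
    % Choosing the adaptation laws
    \dot{k}_x = -\gamma\,\operatorname{sgn}(b)\, e\, x, \qquad
    \dot{k}_r = -\gamma\,\operatorname{sgn}(b)\, e\, r
    % makes the cross terms cancel, leaving \dot{V} = a_m e^2 \le 0, so e is
    % bounded and, by Barbalat's lemma, e \to 0 for bounded reference signals.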

In general, adaptive control has a broad range of typical applications.

Usually these methods adapt the controllers to both the process statics and dynamics. In special cases the adaptation can be limited to the static behavior alone, leading to adaptive control based on characteristic curves for the steady states or to extremum value control, which optimizes the steady state. Hence, there are several ways to apply adaptive control algorithms.
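
As a sketch of the extremum value (often called extremum-seeking) idea, the perturbation-based scheme below dithers the input of an assumed static map, high-pass filters and demodulates the measured output, and slowly pushes the operating point toward the optimum; the map J and all constants are illustrative assumptions:

    import numpy as np

    # Perturbation-based extremum-seeking sketch: J(u) = 5 - (u - 2)^2 has its
    # maximum at u* = 2, which the scheme does not know in advance.
    def J(u):
        return 5.0 - (u - 2.0) ** 2

    dt = 0.01
    omega = 5.0              # dither frequency (rad/s)
    amp = 0.2                # dither amplitude
    k = 0.8                  # adaptation gain
    u_hat = -1.0             # initial estimate of the optimal input
    lp = J(u_hat)            # low-pass filter state (removes the slowly varying part of J)

    for n in range(30000):
        t = n * dt
        dither = amp * np.sin(omega * t)
        y = J(u_hat + dither)                     # measured steady-state output
        lp += dt * (y - lp)                       # low-pass; (y - lp) is the high-passed output
        grad_est = (y - lp) * np.sin(omega * t)   # demodulation, proportional to dJ/du
        u_hat += dt * k * grad_est                # slow gradient ascent toward the optimum

    print(u_hat)   # close to the optimizer u* = 2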

