Extremum Seeking (ES) is a model-free optimization algorithm, usable online or offline, that approximates gradient descent. ES estimates the gradient of a system by adding a sinusoidal perturbation to the system input, then high-pass filtering, demodulating, and low-pass filtering an objective function value computed from the system output.
This project contains three implementations of extremum seeking, designed to plug into simulations or real-world applications: 1D-ES, 2D-ES, and ND-ES (multi-dimensional). Multiple ES instances of any type can be used in conjunction. Jupyter notebooks contain several simulation examples of the ES implementations on static and dynamic systems.
This section covers the operation of a single 1D-ES algorithm.
ES approximates gradient descent by steering its optimizer estimate, $\hat{\theta}$, along an estimate of the objective function's gradient.
ES operates as follows:
1. The system input is formed by adding a sinusoidal perturbation, $a\sin(\omega t)$, to the optimizer estimate (setpoint), $\hat{\theta}$.
2. The system input, $\theta = \hat{\theta} + a\sin(\omega t)$, is applied to the system.
3. Either a central entity computes the objective function value, $y = f(\theta)$, from the system output, or the system reports the value directly.
4. A high-pass filter removes low-frequency content from $y$, isolating the response to the perturbation.
5. This value is demodulated by multiplying by $\sin(\omega t)$.
6. A low-pass filter removes high-frequency content from this value, giving the gradient estimate, $\hat{g}$.
7. The gradient estimate is integrated back into the setpoint (optimizer estimate), updating $\hat{\theta}$.
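The steps above can be sketched as a single discrete-time loop. This is an illustrative sketch with forward-Euler filters on an assumed static objective $f(x) = (x - 2)^2$; all parameter values and variable names are example choices, not the project's API.

```python
import math

f = lambda x: (x - 2.0) ** 2  # example objective (an assumption)

a, omega = 0.2, 10.0          # perturbation amplitude and frequency
omega_h, omega_l = 5.0, 5.0   # high-/low-pass filter cutoffs
k = -2.0                      # integrator gain (negative: minimize)
dt, theta_hat = 1e-3, 0.0     # time step and initial setpoint
eta = grad = 0.0              # filter states

for i in range(200_000):
    t = i * dt
    u = theta_hat + a * math.sin(omega * t)      # 1-2. perturbed input
    y = f(u)                                     # 3. objective value
    eta += dt * omega_h * (y - eta)              # 4. high-pass filter state
    rho = y - eta                                #    high-passed objective
    sigma = rho * math.sin(omega * t)            # 5. demodulate
    grad += dt * omega_l * (sigma - grad)        # 6. low-pass -> gradient estimate
    theta_hat += dt * k * grad                   # 7. integrate into setpoint

print(theta_hat)  # should settle near the minimizer at 2
```

Note that ES never evaluates a derivative of `f`: the gradient information comes entirely from correlating the perturbation with the measured objective.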
This section describes the parameters of an instance of 1D-ES with sinusoidal perturbation.
The perturbation frequency, $\omega$, sets how quickly ES probes the objective function. For dynamic systems, it is typically chosen slow relative to the system dynamics so the system can track the perturbation.
The perturbation amplitude, $a$, sets the size of the region probed around the setpoint. Larger amplitudes give a stronger gradient signal but a larger steady-state oscillation.
The integrator gain, $k$, scales how aggressively the gradient estimate updates the setpoint; its sign selects between minimization and maximization.
The high-pass filter frequency, $\omega_h$, determines what frequency content passes through the high-pass filter. This value is typically set below the perturbation frequency, $\omega$, so the perturbation response passes through while slower trends are removed.
The low-pass filter frequency, $\omega_l$, determines what frequency content passes through the low-pass filter. The low-pass filter attenuates content in the demodulated signal, such as the ripple at twice the perturbation frequency. This value is typically set below $\omega$.
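The filter-cutoff guidance above can be checked numerically. The sketch below evaluates first-order filter magnitudes at the relevant frequencies, assuming cutoffs of half the perturbation frequency (an illustrative choice, not a prescription): the high-pass filter passes the perturbation response while the low-pass filter attenuates the demodulation ripple at $2\omega$.

```python
import math

omega = 10.0                 # perturbation frequency (rad/s), example value
omega_h = omega_l = 5.0      # filter cutoffs set below omega

hpf = lambda w: w / math.hypot(w, omega_h)        # |s/(s+w_h)| at s = jw
lpf = lambda w: omega_l / math.hypot(w, omega_l)  # |w_l/(s+w_l)| at s = jw

print(f"HPF gain at omega:   {hpf(omega):.2f}")     # 0.89: perturbation mostly passes
print(f"HPF gain near DC:    {hpf(0.1):.2f}")       # 0.02: slow content removed
print(f"LPF gain at 2*omega: {lpf(2 * omega):.2f}") # 0.24: ripple attenuated
```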
This section details the inner workings of the ES algorithm.
Figure: 1D-ES block diagram with element descriptions. Figure: 1D-ES block diagram with element equations.

Here, we define the signal at each stage of the loop.
The ES algorithm receives the objective function value, $y = f(\theta)$, computed from the system output.
The objective function value, $y$, is evaluated at the perturbed input.
Continuous representation: $y(t) = f\big(\hat{\theta}(t) + a\sin(\omega t)\big)$
A pertinent value is the high-pass filter state, $\eta$, which tracks the low-frequency content of $y$.
Continuous representation: $\dot{\eta} = \omega_h\,(y - \eta)$
The high-pass filtered objective function value, $\rho = y - \eta$, is demodulated by multiplying by the perturbation sinusoid.
Continuous representation: $\sigma(t) = \big(y(t) - \eta(t)\big)\sin(\omega t)$
The demodulated value passes through a low-pass filter to remove sinusoidal and other high-frequency content, giving an estimate of the gradient of the objective function, $\hat{g}$.
Continuous representation: $\dot{\hat{g}} = \omega_l\,(\sigma - \hat{g})$
The ES algorithm then integrates its gradient estimate, scaled by a gain, $k$, into its setpoint.
Continuous representation: $\dot{\hat{\theta}} = k\,\hat{g}$
The ES algorithm adds the perturbation to its setpoint to produce the next system input.
Continuous representation: $\theta(t) = \hat{\theta}(t) + a\sin(\omega t)$
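The continuous-time stages above can be discretized (here with forward Euler) and gathered into a small stateful class. This is a sketch under that discretization; the class name, method signature, and parameter names are illustrative, not the project's actual API.

```python
import math

class ES1D:
    """Illustrative forward-Euler discretization of the 1D-ES loop."""

    def __init__(self, a, omega, omega_h, omega_l, k, dt, theta_hat0=0.0):
        self.a, self.omega, self.k, self.dt = a, omega, k, dt
        self.omega_h, self.omega_l = omega_h, omega_l
        self.theta_hat = theta_hat0
        self.eta = 0.0   # high-pass filter state
        self.grad = 0.0  # gradient estimate (low-pass filter state)
        self.t = 0.0

    def update(self, y):
        """Consume one objective value y; return the next system input."""
        s = math.sin(self.omega * self.t)
        self.eta += self.dt * self.omega_h * (y - self.eta)   # eta' = wh (y - eta)
        rho = y - self.eta                                    # high-passed objective
        sigma = rho * s                                       # demodulated value
        self.grad += self.dt * self.omega_l * (sigma - self.grad)  # g' = wl (sigma - g)
        self.theta_hat += self.dt * self.k * self.grad        # theta_hat' = k g
        self.t += self.dt
        return self.theta_hat + self.a * math.sin(self.omega * self.t)
```

Calling `update(y)` once per sample closes the loop: the caller applies the returned input to its system, measures the objective, and feeds it back in on the next call.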
This section gives a derivation of how the gradient is estimated by ES.
Here, we take advantage of a first-order Taylor expansion, where: $f(x + \delta) \approx f(x) + \frac{df}{dx}\Big|_{x}\,\delta$
Using a first-order Taylor expansion, the objective function can be expressed in two parts, a portion due to the setpoint and a portion due to the perturbation: $y = f\big(\hat{\theta} + a\sin(\omega t)\big) \approx f(\hat{\theta}) + \frac{\partial f}{\partial \theta}\Big|_{\hat{\theta}}\, a\sin(\omega t)$
The high-pass filter removes slow, low-frequency content, such as the portion of the objective function due to the setpoint: $\rho \approx \frac{\partial f}{\partial \theta}\Big|_{\hat{\theta}}\, a\sin(\omega t)$
This signal is demodulated by multiplying by $\sin(\omega t)$: $\sigma = \rho\sin(\omega t) \approx \frac{\partial f}{\partial \theta}\Big|_{\hat{\theta}}\, a\sin^2(\omega t) = \frac{\partial f}{\partial \theta}\Big|_{\hat{\theta}}\, \frac{a}{2}\big(1 - \cos(2\omega t)\big)$
The low-pass filter removes the sinusoidal component from the demodulated signal, leaving an estimate of the gradient of the objective function with respect to the setpoint: $\hat{g} \approx \frac{a}{2}\,\frac{\partial f}{\partial \theta}\Big|_{\hat{\theta}}$
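The derivation can be checked numerically: averaging the demodulated perturbation response over one period recovers $\frac{a}{2} f'(\hat{\theta})$. The sketch below uses an ideal average in place of the low-pass filter and an example objective, $f(x) = e^x + x^2$, chosen purely for illustration.

```python
import math

f = lambda x: math.exp(x) + x ** 2      # example objective (an assumption)
fprime = lambda x: math.exp(x) + 2 * x  # its analytic derivative, for comparison

theta_hat, a, omega = 0.5, 0.05, 10.0
period, n = 2 * math.pi / omega, 10_000

avg = 0.0
for i in range(n):
    t = i * period / n
    rho = f(theta_hat + a * math.sin(omega * t)) - f(theta_hat)  # ideal high-passed part
    avg += rho * math.sin(omega * t) / n                         # demodulate and average

print(avg, 0.5 * a * fprime(theta_hat))  # the two values nearly agree
```

The small residual difference comes from the higher-order Taylor terms the first-order derivation neglects; it shrinks as the perturbation amplitude $a$ decreases.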