A post by Ettore Fincato, PhD student on the Compass programme.
This post provides an introduction to Gradient Methods in Stochastic Optimisation. This class of algorithms is the foundation of my current research work with Prof. Christophe Andrieu and Dr. Mathieu Gerber, and finds applications in a great variety of topics, such as regression estimation, support vector machines, and convolutional neural networks.
Below is a simulation by Emilien Dupont (https://emiliendupont.github.io/) showing two trajectories of an optimisation process on a time-varying function. It nicely illustrates the main idea behind the algorithms we will look at: using the (stochastic) gradient of a (random) function to iteratively reach the optimum.
Stochastic Optimisation
Stochastic optimisation was introduced by [1], and its aim is to find a scheme for solving equations of the form $h(\theta) = 0$ given "noisy" measurements of $h$ [2].
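To make this concrete, here is a minimal sketch of a Robbins–Monro-type recursion on a toy problem; the function $h(\theta) = \theta - 2$ (root at $\theta = 2$), the Gaussian noise and the step sizes $1/n$ are all illustrative choices of mine, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_h(theta):
    # Noisy measurement of h(theta) = theta - 2 (root at theta = 2);
    # the toy function and the noise level are illustrative choices.
    return (theta - 2.0) + rng.normal(scale=0.5)

theta = 0.0
for n in range(1, 10_001):
    step = 1.0 / n                  # Robbins-Monro step sizes: their sum diverges, the sum of squares converges
    theta -= step * noisy_h(theta)  # move against the noisy measurement
print(theta)  # approximately 2, the root of h
```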
In the simplest deterministic framework, one can fully determine the analytical form of $F$, and knows that it is differentiable and admits a unique minimum $\theta^\star$ – hence the problem
\[
\min_{\theta \in \mathbb{R}^d} F(\theta)
\]
is well defined and solved by any $\theta^\star$ satisfying $\nabla F(\theta^\star) = 0$.
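As a point of comparison, a deterministic gradient descent with an exactly known gradient might look as follows; the toy objective $F(\theta) = (\theta - 3)^2$ and the learning rate are invented for illustration.

```python
# Deterministic gradient descent on F(theta) = (theta - 3)^2,
# whose gradient 2 * (theta - 3) is known in closed form.
def grad_F(theta):
    return 2.0 * (theta - 3.0)

theta = 0.0
for _ in range(200):
    theta -= 0.1 * grad_F(theta)  # exact gradient step
print(theta)  # approximately 3, the unique minimiser
```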
On the other hand, one may not be able to fully determine $F$ because the experiment is corrupted by random noise. In such cases, it is common to identify this noise with a random variable, say $W$, to consider an unbiased estimator $f(\theta, W)$ s.t. $\mathbb{E}_W[f(\theta, W)] = F(\theta)$, and to rewrite the problem as
\[
\min_{\theta \in \mathbb{R}^d} \mathbb{E}_W[f(\theta, W)].
\]
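A minimal stochastic gradient sketch of this reformulated problem, assuming the toy choice $f(\theta, W) = (\theta - W)^2$ with $W \sim \mathcal{N}(2, 1)$, so that $\mathbb{E}_W[f(\theta, W)]$ is minimised at $\theta^\star = 2$:

```python
import numpy as np

rng = np.random.default_rng(1)

def stoch_grad(theta):
    # Unbiased estimate of the gradient of F(theta) = E[(theta - W)^2] with W ~ N(2, 1);
    # its expectation is 2 * (theta - 2), so the minimiser is theta* = 2.
    w = rng.normal(loc=2.0, scale=1.0)
    return 2.0 * (theta - w)

theta = 0.0
for n in range(1, 10_001):
    theta -= (1.0 / n) * stoch_grad(theta)  # stochastic gradient step with decreasing step sizes
print(theta)  # approximately 2
```

Only noisy evaluations of the gradient are used, yet the decreasing step sizes average the noise out over the iterations, which is exactly the mechanism the algorithms below rely on.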