Henrik Hult: On large deviations for stochastic approximations
2022-11-09, 15:15-16:00
Albano, Cramer room
Henrik Hult (KTH)
Abstract: Stochastic approximation is a general and useful random iterative root-finding algorithm originating from the work of Robbins and Monro in the 1950s. Many popular training algorithms in machine learning can be formulated as stochastic approximations, including stochastic gradient descent, reinforcement learning, contrastive divergence, adaptive MCMC, and various extended ensemble methods such as Wang-Landau and accelerated weight histograms. In this talk we will present ongoing work on large deviations for stochastic approximations and provide a new representation of the rate function. We will discuss an interpretation in which learning algorithms can forget, and show how the rate function reveals the way this forgetting occurs. The talk is based on joint work with Adam Lindhe, Pierre Nyquist and Guo-Jhen Wu.
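To make the setting concrete, here is a minimal sketch of the classical Robbins-Monro iteration the abstract refers to: given noisy observations of a function h, the scheme x_{n+1} = x_n - a_n Y_n, with step sizes a_n summing to infinity while their squares sum to something finite, converges to a root of h. The function names, the toy target h(x) = x - 2, and the step-size choice a_n = 1/(n+1) below are illustrative assumptions, not from the talk.

```python
import random

def robbins_monro(noisy_h, x0, n_steps=5000):
    """Robbins-Monro iteration x_{n+1} = x_n - a_n * Y_n, where Y_n is a
    noisy observation of h(x_n) and a_n = 1/(n+1) satisfies the usual
    conditions: sum a_n = infinity, sum a_n^2 < infinity."""
    x = x0
    for n in range(n_steps):
        a_n = 1.0 / (n + 1)   # decreasing step size
        x -= a_n * noisy_h(x)
    return x

# Toy example (illustrative): find the root of h(x) = x - 2
# from evaluations corrupted by Gaussian noise.
random.seed(0)
root = robbins_monro(lambda x: (x - 2.0) + random.gauss(0.0, 1.0), x0=0.0)
```

Replacing noisy_h with a noisy gradient of a loss function recovers stochastic gradient descent, which is one way the algorithms listed in the abstract fit this framework.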