
Yura Malitsky: Adaptive Gradient Descent without Descent

Abstract: In this talk I will present some recent results for the most classical optimization method: gradient descent. We will show that a simple zero-cost rule is sufficient to completely automate gradient descent. The method adapts to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. The presentation is based on joint work with K. Mishchenko, see https://arxiv.org/abs/1910.09529.
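For a sense of what the "zero-cost rule" looks like in practice, here is a minimal Python sketch of the adaptive step-size rule from the cited paper (Algorithm 1 in arXiv:1910.09529): the step size is taken as the minimum of a controlled growth term and a local inverse-curvature estimate computed from the last two iterates. The function names and the quadratic test problem are illustrative, not from the talk.

```python
import math

def adgd(grad, x0, lam0=1e-6, iters=200):
    """Adaptive gradient descent without descent (sketch of arXiv:1910.09529, Alg. 1).

    `grad` maps a point (list of floats) to its gradient. No Lipschitz
    constant or line search is needed: the step size is set from the
    observed local curvature, at no extra gradient evaluations.
    """
    x_prev, g_prev = list(x0), grad(x0)
    # One tiny warm-up step so that two iterates are available.
    x = [xi - lam0 * gi for xi, gi in zip(x_prev, g_prev)]
    lam_prev, theta = lam0, float("inf")
    for _ in range(iters):
        g = grad(x)
        dx = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_prev)))
        dg = math.sqrt(sum((a - b) ** 2 for a, b in zip(g, g_prev)))
        # lambda_k = min{ sqrt(1 + theta_{k-1}) * lambda_{k-1},
        #                 ||x_k - x_{k-1}|| / (2 ||grad_k - grad_{k-1}||) }
        lam = math.sqrt(1 + theta) * lam_prev
        if dg > 0:
            lam = min(lam, dx / (2 * dg))
        theta = lam / lam_prev
        x_prev, g_prev, lam_prev = x, g, lam
        x = [xi - lam * gi for xi, gi in zip(x, g)]
    return x

# Illustrative test problem: f(x) = x1^2 + 1.5 * x2^2, minimized at the origin.
grad_f = lambda x: [2.0 * x[0], 3.0 * x[1]]
x_star = adgd(grad_f, [5.0, -3.0])
```

Note that the rule uses only quantities already computed by gradient descent (the last iterate and gradient), which is why it adds essentially zero cost per iteration.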

Time: Fri 2021-10-15, 11:00–12:00

Location: Seminar room 3721

Language: English

Participating: Yura Malitsky, Linköping University


The seminar will also be available via Zoom: https://kth-se.zoom.us/j/63658381373

Page responsible: Per Enqvist
Belongs to: Stockholm Mathematics Centre
Last changed: Oct 06, 2021