SOCN Graduate School course announcement:
Stochastic variance-reduced optimization algorithms and applications to federated learning
Laurent Condat (KAUST)
October 7-9, UCLouvain, Louvain-la-Neuve
Stochastic optimization algorithms such as Stochastic Gradient Descent (SGD) are at the heart of the tremendous success of large-scale machine learning and artificial intelligence. By replacing exact gradients with cheap randomized estimates, stochastic algorithms scale far beyond the intrinsic limitations of deterministic algorithms. In a new set of slides prepared for this course, I will present the latest developments in stochastic algorithms with variance reduction, i.e., mechanisms that reduce or even cancel the random fluctuations introduced by these estimates. I will cover well-suited algorithms, some with optimal complexity, for various classes of problems, including convex and nonconvex settings as well as distributed and federated optimization.
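To give a concrete flavor of the variance-reduction idea, here is a minimal NumPy sketch of SVRG, one classical variance-reduced method, applied to a small least-squares problem. The problem data, step size, and iteration counts are illustrative assumptions, not material from the course.

```python
import numpy as np

# Minimal SVRG sketch for the finite-sum problem min_x (1/n) sum_i f_i(x),
# with quadratic components f_i(x) = 0.5 * (a_i^T x - b_i)^2. The data,
# step size, and iteration counts are hypothetical, for illustration only.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the single component f_i at x.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    # Exact gradient of the average objective (what plain SGD avoids computing).
    return A.T @ (A @ x - b) / n

def svrg(x, step=0.01, epochs=30, inner=2 * n):
    for _ in range(epochs):
        x_ref = x.copy()          # snapshot of the current iterate
        g_ref = full_grad(x_ref)  # one full gradient per epoch, at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # Control-variate estimate: unbiased, with a variance that
            # vanishes as x and x_ref both approach the minimizer.
            g = grad_i(x, i) - grad_i(x_ref, i) + g_ref
            x = x - step * g
    return x

x = svrg(np.zeros(d))
print("gradient norm at output:", np.linalg.norm(full_grad(x)))
```

The key line is the control-variate update g = grad_i(x, i) - grad_i(x_ref, i) + g_ref: it remains an unbiased estimate of the gradient, yet its variance shrinks to zero as the iterates approach the solution, which is what allows a constant step size and, on smooth strongly convex problems, linear convergence where plain SGD would oscillate.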