BEGIN:VCALENDAR
VERSION:2.0
PRODID:ECMLPKDD-MB
BEGIN:VEVENT
DTSTAMP:20180826T190000Z
UID:_ecmlpkdd_617
DTSTART;TZID=Europe/Dublin:20180913T150000
DTEND;TZID=Europe/Dublin:20180913T152000
LOCATION:Hogan Mezz 2
TRANSP:TRANSPARENT
SEQUENCE:1
DESCRIPTION:We consider the stochastic optimization of finite sums over
  a Riemannian manifold where the functions are smooth and convex. We
  present MASAGA\, an extension of the stochastic average gradient
  variant SAGA on Riemannian manifolds. SAGA is a variance-reduction
  technique that typically outperforms methods that rely on expensive
  full-gradient calculations\, such as the stochastic variance-reduced
  gradient method. We show that MASAGA achieves a linear convergence
  rate with uniform sampling\, and we further show that MASAGA achieves
  a faster convergence rate with non-uniform sampling. Our experiments
  show that MASAGA is faster than the recent Riemannian stochastic
  gradient descent algorithm for the classic problem of finding the
  leading eigenvector corresponding to the maximum eigenvalue.
SUMMARY:MASAGA: A Linearly-Convergent Stochastic First-Order Method for
  Optimization on Manifolds
CLASS:PUBLIC
END:VEVENT
END:VCALENDAR