BEGIN:VCALENDAR
VERSION:2.0
PRODID:ECMLPKDD-MB
BEGIN:VEVENT
DTSTAMP:20180826T190000Z
UID:_ecmlpkdd_406
DTSTART;TZID=Europe/Dublin:20180911T112000
DTEND;TZID=Europe/Dublin:20180911T114000
LOCATION:Hogan Mezz 1
TRANSP:TRANSPARENT
SEQUENCE:1
DESCRIPTION:Kernel methods are a popular choice for classification
 problems\, but computing the quadratic kernel matrix becomes
 infeasible for large-scale learning tasks. To circumvent this
 problem\, the Nyström method\, which approximates the kernel matrix
 from only a small sample of its rows and columns\, has been
 proposed. Other techniques for speeding up kernel learning include
 stochastic first-order optimization and conditioning. We introduce
 Nyström-SGD\, a learning algorithm that trains kernel classifiers by
 minimizing a convex loss function with conditioned stochastic
 gradient descent while exploiting the low-rank structure of a
 Nyström kernel approximation. Our experiments suggest that
 Nyström-SGD enables us to rapidly train high-accuracy classifiers
 for large-scale classification tasks.
SUMMARY:Nyström-SGD: Fast Learning of Kernel-Classifiers with
 Conditioned Stochastic Gradient Descent
CLASS:PUBLIC
END:VEVENT
END:VCALENDAR