BEGIN:VCALENDAR
VERSION:2.0
PRODID:ECMLPKDD-MB
BEGIN:VEVENT
DTSTAMP:20180826T190000Z
UID:_ecmlpkdd_MACH-D-17-00376
DTSTART;TZID=Europe/Dublin:20180913T160000
DTEND;TZID=Europe/Dublin:20180913T162000
LOCATION:Hogan Mezz 2
TRANSP:TRANSPARENT
SEQUENCE:1
DESCRIPTION:In this paper, we provide two new stable online algorithms for the prediction problem in reinforcement learning, i.e., estimating the value function of a Markov reward process in the model-free setting, using a linear function approximation architecture with memory and computation costs that scale quadratically in the size of the feature set. The algorithms employ a multi-timescale stochastic approximation variant of the popular cross-entropy (CE) optimization method, a model-based search method for finding the global optimum of a real-valued function. We prove convergence of the algorithms using the ODE method and supplement the theoretical results with experimental comparisons. The algorithms perform consistently well on many RL benchmark problems, demonstrating their competitiveness with least-squares and other state-of-the-art algorithms in terms of computational efficiency, accuracy and stability.
SUMMARY:An Online Prediction Algorithm for Reinforcement Learning with Linear Function Approximation using Cross Entropy Method
CLASS:PUBLIC
END:VEVENT
END:VCALENDAR