Finite-Sample Analysis of LSTD

Alessandro Lazaric 1, Mohammad Ghavamzadeh 1, Remi Munos 1
1 SEQUEL (Sequential Learning), LIFL - Laboratoire d'Informatique Fondamentale de Lille, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal, INRIA Lille - Nord Europe
Abstract: In this paper, we consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning algorithm. We report a finite-sample analysis of LSTD. We first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds when the Markov chain possesses a stationary distribution and is $\beta$-mixing.
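For context, here is a minimal sketch of the batch LSTD estimator the abstract analyzes, assuming linear value-function approximation: given transitions (s_t, r_t, s_{t+1}) collected by following the fixed policy and a feature map phi, LSTD solves the linear system A theta = b with A = sum_t phi(s_t)(phi(s_t) - gamma * phi(s_{t+1}))^T and b = sum_t phi(s_t) r_t. The function name and the small ridge term `reg` (added so A stays invertible on short trajectories) are illustrative choices, not part of the paper.

```python
import numpy as np

def lstd(features, rewards, next_features, gamma=0.95, reg=1e-6):
    """Least-squares temporal-difference (LSTD) policy evaluation.

    features      : (T, d) array, phi(s_t) for states visited by the policy
    rewards       : (T,)   array of observed rewards
    next_features : (T, d) array, phi(s_{t+1}) for successor states
    gamma         : discount factor
    reg           : small ridge term keeping A invertible on short runs

    Returns theta such that V(s) is approximated by phi(s) . theta.
    """
    d = features.shape[1]
    # A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T
    A = features.T @ (features - gamma * next_features) + reg * np.eye(d)
    # b = sum_t phi(s_t) * r_t
    b = features.T @ rewards
    return np.linalg.solve(A, b)

# Usage on illustrative random data (1000 transitions, 8 features):
rng = np.random.default_rng(0)
phi = rng.normal(size=(1000, 8))
phi_next = rng.normal(size=(1000, 8))
theta = lstd(phi, rng.normal(size=1000), phi_next, gamma=0.9)
```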
Document type: Conference paper
ICML - 27th International Conference on Machine Learning, Jun 2010, Haifa, Israel. pp. 615-622, 2010


https://hal.inria.fr/inria-00482189
Contributor: Mohammad Ghavamzadeh
Submitted on: Sunday, May 9, 2010 - 20:42:27
Last modified on: Sunday, September 1, 2013 - 14:10:44

File

lstd-tech.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00482189, version 1

Citation

Alessandro Lazaric, Mohammad Ghavamzadeh, Remi Munos. Finite-Sample Analysis of LSTD. ICML - 27th International Conference on Machine Learning, Jun 2010, Haifa, Israel. pp. 615-622, 2010. <inria-00482189>
