
- Mohammad Gheshlaghi Azar, Hilbert J. Kappen
- Journal of Machine Learning Research
- 2012

In this paper, we propose a novel policy iteration method, called dynamic policy programming (DPP), to estimate the optimal policy in infinite-horizon Markov decision processes. DPP is an incremental algorithm that forces a gradual change in the policy update. This allows us to prove finite-iteration and asymptotic ℓ∞-norm performance-loss bounds in the…
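The gradual preference update described in this abstract can be sketched in a few lines. The toy example below is an assumption-laden illustration: the soft-max temperature η, the two-state dynamics, and the iteration count are ours, not details from the paper.

```python
import numpy as np

def boltzmann_avg(psi_s, eta):
    # Boltzmann-weighted average of the action preferences in one state
    w = np.exp(eta * (psi_s - psi_s.max()))
    return (w / w.sum()) @ psi_s

def dpp(P, R, gamma=0.9, eta=5.0, iters=200):
    # P[s, a, s'] transition probabilities, R[s, a] rewards
    S, A = R.shape
    psi = np.zeros((S, A))        # action preferences
    for _ in range(iters):
        m = np.array([boltzmann_avg(psi[s], eta) for s in range(S)])
        # gradual preference update: add each action's soft advantage
        psi = psi + R + gamma * (P @ m) - m[:, None]
    return psi.argmax(axis=1)     # greedy policy from the final preferences

# hypothetical 2-state MDP: action a moves deterministically to state a,
# and only action 1 taken in state 1 pays reward
P = np.zeros((2, 2, 2))
P[:, 0, 0] = 1.0
P[:, 1, 1] = 1.0
R = np.array([[0.0, 0.0], [0.0, 1.0]])
policy = dpp(P, R)
```

In this sketch the preferences of suboptimal actions drift downward while those of optimal actions stabilize, so the arg-max policy becomes optimal.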

- Mohammad Gheshlaghi Azar, Vicenç Gómez, Hilbert J. Kappen
- AISTATS
- 2011

In this paper, we consider the problem of planning in infinite-horizon discounted-reward Markov decision problems. We propose a novel iterative method, called dynamic policy programming (DPP), which updates the parametrized policy by a Bellman-like iteration. For the discrete state-action case, we establish sup-norm loss bounds for the performance of the…

In this paper we consider the problem of online stochastic optimization of a locally smooth function under bandit feedback. We introduce the high-confidence tree (HCT) algorithm, a novel anytime X-armed bandit algorithm, and derive regret bounds matching the performance of the existing state of the art in terms of dependency on the number of steps and smoothness…

- Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill
- ECML/PKDD
- 2013

In some reinforcement learning problems an agent may be provided with a set of input policies, perhaps learned from prior experience or provided by advisors. We present the reinforcement learning with policy advice (RLPA) algorithm, which leverages this input set and learns to use the best policy in the set for the reinforcement learning task at hand. We prove…
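As a loose illustration of learning to use the best policy in a given set, one can treat each input policy as a bandit arm and select by an upper confidence bound on its empirical episodic return. This is a stand-in for illustration only, not the RLPA algorithm itself, whose selection scheme and regret analysis differ; the return model below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_policy_by_ucb(episode_return, n_policies, episodes):
    # treat each input policy as a bandit arm; run the one with the highest
    # upper confidence bound on its empirical episodic return
    counts = np.zeros(n_policies)
    means = np.zeros(n_policies)
    for t in range(1, episodes + 1):
        ucb = means + np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1.0))
        ucb[counts == 0] = np.inf        # run every policy at least once
        i = int(np.argmax(ucb))
        r = episode_return(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
    return int(np.argmax(counts))        # most-played policy wins

# hypothetical returns: policy 2 is clearly the best of the three inputs
true_means = [0.1, 0.2, 0.9]
best = best_policy_by_ucb(lambda i: rng.normal(true_means[i], 0.1), 3, 400)
```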

We introduce a new convergent variant of Q-learning, called speedy Q-learning (SQL), in order to address the problem of slow convergence in the standard form of the Q-learning algorithm. We prove a PAC bound on the performance of SQL, which shows that only T = O(log(1/δ) · ε⁻² · (1 − γ)⁻⁴) steps are required for the SQL algorithm to converge to an ε-optimal…
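A minimal sketch of the speedy two-term update, with the exact Bellman operator standing in for the sampled one (the sampled version is what the PAC bound analyses); the toy MDP and iteration count are assumptions for illustration.

```python
import numpy as np

def bellman(Q, P, R, gamma):
    # exact Bellman optimality operator; the paper's algorithm replaces this
    # expectation with sampled next states from a generative model
    return R + gamma * (P @ Q.max(axis=1))

def speedy_q(P, R, gamma=0.9, iters=2000):
    S, A = R.shape
    Q_prev, Q = np.zeros((S, A)), np.zeros((S, A))
    for k in range(iters):
        alpha = 1.0 / (k + 1)
        TQ_prev, TQ = bellman(Q_prev, P, R, gamma), bellman(Q, P, R, gamma)
        # the "speedy" update: a small step toward T(Q_prev) plus an
        # aggressive correction along T(Q) - T(Q_prev)
        Q_prev, Q = Q, Q + alpha * (TQ_prev - Q) + (1 - alpha) * (TQ - TQ_prev)
    return Q

# hypothetical 2-state MDP: action a moves to state a; reward 1 for (s=1, a=1)
P = np.zeros((2, 2, 2))
P[:, 0, 0] = 1.0
P[:, 1, 1] = 1.0
R = np.array([[0.0, 0.0], [0.0, 1.0]])
Q = speedy_q(P, R)   # the optimal Q for this toy MDP is [[8.1, 9], [8.1, 10]]
```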

We consider the problem of learning the optimal action-value function in discounted-reward Markov decision processes (MDPs). We prove a new PAC bound on the sample-complexity of the model-based value iteration algorithm in the presence of a generative model, which indicates that for an MDP with N state-action pairs and discount factor γ ∈ [0, 1) only…
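The model-based scheme can be sketched as: sample next states from the generative model, build the empirical transition model, and run value iteration on it. In the toy below the dynamics are deterministic, so the empirical model is exact and only the planning half is exercised; the function names and constants are illustrative, and the paper's analysis concerns how the per-pair sample count n controls the model error.

```python
import numpy as np

def model_based_q(sample_next, R, S, A, gamma=0.9, n=2000, iters=200):
    # build the empirical transition model from n generative-model samples
    # per state-action pair, then plan on it with exact value iteration
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for s2 in sample_next(s, a, n):
                P_hat[s, a, s2] += 1.0 / n
    Q = np.zeros((S, A))
    for _ in range(iters):
        Q = R + gamma * (P_hat @ Q.max(axis=1))
    return Q

def sample_next(s, a, n):
    # toy deterministic dynamics: action a always leads to state a,
    # so the empirical model coincides with the true one
    return [a] * n

R = np.array([[0.0, 0.0], [0.0, 1.0]])   # reward 1 only for (s=1, a=1)
Q = model_based_q(sample_next, R, S=2, A=2)
```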

- Mohammad Gheshlaghi Azar, Rémi Munos, Hilbert J. Kappen
- Machine Learning
- 2013

We consider the problems of learning the optimal action-value function and the optimal policy in discounted-reward Markov decision processes (MDPs). We prove new PAC bounds on the sample-complexity of two well-known model-based reinforcement learning (RL) algorithms in the presence of a generative model of the MDP: value iteration and policy iteration. The…

- Mohammad Gheshlaghi Azar, Ian Osband, Rémi Munos
- ICML
- 2017

We consider the problem of efficient exploration in finite-horizon MDPs. We show that an optimistic modification to model-based value iteration can achieve a regret bound of O(√(HSAT) + H²S²A + H√T), where H is the time horizon, S the number of states, A the number of actions, and T the time elapsed. This result improves over the best previously known bound…
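A simplified sketch of optimism via exploration bonuses in finite-horizon value iteration; the Hoeffding-style bonus below is a common textbook form, not the sharper Bernstein-style bonus the paper uses to reach the √(HSAT) rate, and the empirical model here is a hypothetical toy.

```python
import numpy as np

def optimistic_vi(N, P_hat, R, H, c=1.0, delta=0.05):
    # finite-horizon value iteration on the empirical model P_hat, with a
    # count-based exploration bonus added to the reward; values are capped
    # at the horizon H since per-step rewards lie in [0, 1]
    S, A = R.shape
    V = np.zeros((H + 1, S))
    Q = np.zeros((H, S, A))
    bonus = c * H * np.sqrt(np.log(S * A * H / delta) / np.maximum(N, 1.0))
    for h in range(H - 1, -1, -1):
        Q[h] = np.minimum(R + bonus + P_hat @ V[h + 1], H)
        V[h] = Q[h].max(axis=1)
    return V

# hypothetical 2-state, 2-action empirical model with 10 samples per pair
N = np.full((2, 2), 10.0)
P_hat = np.zeros((2, 2, 2))
P_hat[:, 0, 0] = 1.0
P_hat[:, 1, 1] = 1.0
R = np.array([[0.0, 0.0], [0.0, 1.0]])
V_opt = optimistic_vi(N, P_hat, R, H=5)          # with exploration bonus
V_emp = optimistic_vi(N, P_hat, R, H=5, c=0.0)   # plain empirical planning
```

The bonus makes every value estimate at least as large as the plain empirical one, which is the optimism property driving exploration.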