
- Pierre Geurts, Damien Ernst, Louis Wehenkel
- Machine Learning
- 2006

This paper proposes a new tree-based ensemble method for supervised classification and regression problems. It essentially consists of strongly randomizing both the attribute and the cut-point choice while splitting a tree node. In the extreme case, it builds totally randomized trees whose structures are independent of the output values of the learning sample. The…

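As a rough illustration of the split-randomization idea described above (a sketch, not the paper's implementation; the function names here are invented for the example), the node-splitting rule of extremely randomized trees can be written as: draw K candidate attributes, draw one uniformly random cut-point per attribute, and keep the highest-scoring split.

```python
import numpy as np

def variance_reduction(y, mask):
    # score of a split: reduction in output variance (regression setting)
    n = len(y)
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return np.var(y) - (len(left) * np.var(left) + len(right) * np.var(right)) / n

def pick_extra_trees_split(X, y, K, rng):
    """Extra-Trees-style node split: for K randomly chosen attributes, draw
    ONE uniformly random cut-point each, then keep the best-scoring split."""
    n_features = X.shape[1]
    attrs = rng.choice(n_features, size=min(K, n_features), replace=False)
    best = None
    for a in attrs:
        lo, hi = X[:, a].min(), X[:, a].max()
        if lo == hi:
            continue
        cut = rng.uniform(lo, hi)          # random cut-point, not optimized
        score = variance_reduction(y, X[:, a] < cut)
        if best is None or score > best[2]:
            best = (a, cut, score)
    return best  # (attribute, cut_point, score) or None

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 2] > 0).astype(float)            # only attribute 2 is informative
split = pick_extra_trees_split(X, y, K=5, rng=rng)
```

Unlike classical random forests, no search over cut-points takes place: the only optimization left is the choice among the K randomly drawn splits, which is what makes the method fast and, in the K = 1 limit, totally randomized.
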
- Damien Ernst, Pierre Geurts, Louis Wehenkel
- Journal of Machine Learning Research
- 2005

Reinforcement learning aims to determine an optimal control policy from interaction with a system or from observations gathered from a system. In batch mode, it can be achieved by approximating the so-called Q-function based on a set of four-tuples (x_t, u_t, r_t, x_{t+1}) where x_t denotes the system state at time t, u_t the control action taken, r_t the…

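The batch-mode scheme sketched in this abstract (fitted Q iteration) can be illustrated on a toy problem; this is a minimal sketch under assumed details (tabular "regressor", toy chain MDP), not the paper's tree-based implementation:

```python
import numpy as np
from collections import defaultdict

def tabular_fit(inputs, targets):
    # exact-match "regressor": average target per (state, action) pair
    table = defaultdict(list)
    for (x, u), t in zip(inputs, targets):
        table[(x, u)].append(t)
    means = {k: float(np.mean(v)) for k, v in table.items()}
    return lambda x, u: means.get((x, u), 0.0)

def fitted_q_iteration(transitions, actions, gamma, n_iters, fit):
    # batch-mode iteration on a sample of four-tuples (x, u, r, x_next):
    # each pass regresses Q_k(x, u) on r + gamma * max_a Q_{k-1}(x_next, a)
    Q = lambda x, u: 0.0                      # Q_0 identically zero
    for _ in range(n_iters):
        inputs = [(x, u) for (x, u, r, xn) in transitions]
        targets = [r + gamma * max(Q(xn, a) for a in actions)
                   for (x, u, r, xn) in transitions]
        Q = fit(inputs, targets)
    return Q

# toy deterministic chain: states 0..3, actions -1/+1, reward 1 on reaching state 3
actions = (-1, 1)
transitions = []
for x in range(4):
    for u in actions:
        xn = min(max(x + u, 0), 3)
        transitions.append((x, u, float(xn == 3), xn))

Q = fitted_q_iteration(transitions, actions, gamma=0.9, n_iters=10, fit=tabular_fit)
```

In the paper's setting the tabular fit is replaced by a supervised regressor (e.g. tree-based ensembles), which is what lets the method handle continuous state spaces from a fixed batch of four-tuples.
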
This paper addresses the problem of computing optimal structured treatment interruption strategies for HIV infected patients. We show that reinforcement learning may be useful to extract such strategies directly from clinical data, without the need for an accurate mathematical model of HIV infection dynamics. To support our claims, we report simulation…

Reinforcement learning is a promising paradigm for learning optimal control. We consider policy iteration (PI) algorithms for reinforcement learning, which iteratively evaluate and improve control policies. State-of-the-art least-squares techniques for policy evaluation are sample-efficient and have relaxed convergence requirements. However, they are…

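The least-squares policy-evaluation step mentioned above can be illustrated with a minimal LSTD sketch (a toy example under assumed details, not the paper's algorithm): given samples generated under the evaluated policy, it solves a small linear system for the value-function weights.

```python
import numpy as np

def lstd(samples, phi, gamma, n_features):
    # LSTD policy evaluation: solve A w = b with
    #   A = sum phi(x) (phi(x) - gamma phi(x'))^T,   b = sum phi(x) r
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for x, r, x_next in samples:
        f, fn = phi(x), phi(x_next)
        A += np.outer(f, f - gamma * fn)
        b += f * r
    return np.linalg.solve(A, b)

# toy 2-state chain under a fixed policy: 0 -> 1 (reward 0), 1 -> 1 (reward 1)
phi = lambda x: np.eye(2)[x]          # one-hot features make LSTD exact here
w = lstd([(0, 0.0, 1), (1, 1.0, 1)], phi, gamma=0.9, n_features=2)
# w approximates V: V(1) = 1 / (1 - 0.9) = 10 and V(0) = 0.9 * V(1) = 9
```

Because the whole batch enters one linear solve, each sample is reused at every evaluation, which is the sample-efficiency property the abstract refers to.
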
In this paper we explain how to design intelligent agents able to process the information acquired from interaction with a system in order to learn a good control policy, and we show how the methodology can be applied to control devices designed to damp electrical power oscillations. The control problem is formalized as a discrete-time optimal control problem and the…

- Raphaël Fonteneau, Susan A. Murphy, Louis Wehenkel, Damien Ernst
- AISTATS
- 2010

We propose an algorithm for estimating the finite-horizon expected return of a closed loop control policy from an a priori given (off-policy) sample of one-step transitions. It averages cumulated rewards along a set of "broken trajectories" made of one-step transitions selected from the sample on the basis of the control policy. Under some Lipschitz…

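The broken-trajectory idea can be sketched as follows (a simplified illustration under assumed details, e.g. an L1 distance in state-action space and scalar states; the function name is invented for this example): repeatedly rebuild artificial trajectories by picking, at each step, the unused sample transition closest to the current state and policy action, and average the cumulated rewards.

```python
import numpy as np

def mfmc_estimate(sample, policy, x0, horizon, n_traj):
    # average cumulated reward over "broken trajectories" rebuilt from
    # one-step transitions (x, u, r, x_next); each transition used at most once
    used, returns = set(), []
    for _ in range(n_traj):
        x, total = x0, 0.0
        for t in range(horizon):
            u = policy(t, x)
            # nearest unused transition in (state, action) space
            best, best_d = None, float("inf")
            for i, (xs, us, r, xn) in enumerate(sample):
                d = abs(xs - x) + abs(us - u)
                if i not in used and d < best_d:
                    best, best_d = i, d
            _, _, r, xn = sample[best]
            used.add(best)
            total += r          # take the sample's reward ...
            x = xn              # ... and jump to the sample's next state
        returns.append(total)
    return float(np.mean(returns))

# toy deterministic system x' = x + u with reward x + u; policy always u = 1
sample = [(0, 1, 1.0, 1), (1, 1, 2.0, 2), (2, 1, 3.0, 3)]
est = mfmc_estimate(sample, policy=lambda t, x: 1, x0=0, horizon=2, n_traj=1)
```

Discarding each transition after use is what makes the rebuilt trajectories behave like independent rollouts, which underlies the Lipschitz-based guarantees the abstract alludes to.
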
- Raphaël Fonteneau, Susan A. Murphy, Louis Wehenkel, Damien Ernst
- Annals OR
- 2013

In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control…

- Damien Ernst, Mevludin Glavic, Florin Capitanescu, Louis Wehenkel
- IEEE Trans. Systems, Man, and Cybernetics, Part B
- 2009

This paper compares reinforcement learning (RL) with model predictive control (MPC) in a unified framework and reports experimental results of their application to the synthesis of a controller for a nonlinear and deterministic electrical power oscillations damping problem. Both families of methods are based on the formulation of the control problem as a…

In this paper, we explore how a computational approach to learning from interactions, called reinforcement learning (RL), can be applied to control power systems. We describe some challenges in power system control and discuss how some of those challenges could be met by using these RL methods. The difficulties associated with their application to control…