Kevin Van Vaerenbergh

A common approach when applying reinforcement learning to control problems is to first learn a policy on an approximate model of the plant, whose behavior can be explored quickly and safely in simulation, and then deploy the obtained policy on the actual plant. Here we follow this approach to learn to engage a…
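As a rough illustration of this learn-in-simulation, deploy-on-the-plant pattern, the sketch below trains a tabular Q-learning policy on a crude stand-in plant model and then freezes it into a lookup policy. The SimulatedPlant dynamics and the learning settings are illustrative assumptions, not the authors' actual model or algorithm.

```python
# Hedged sketch: learn a policy on an approximate plant model in simulation,
# then freeze it for execution on the real plant. All names and dynamics here
# are illustrative assumptions.
import random
from collections import defaultdict

class SimulatedPlant:
    """Crude approximate model of the plant: a small discrete chain."""
    def __init__(self, n_states=10):
        self.n_states = n_states
        self.state = 0
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        # action 0 = hold, action 1 = advance (with some simulated noise)
        if action == 1 and random.random() < 0.9:
            self.state = min(self.state + 1, self.n_states - 1)
        done = self.state == self.n_states - 1
        reward = 1.0 if done else -0.01
        return self.state, reward, done

def train_in_simulation(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(lambda: [0.0, 0.0])           # Q-values for the 2 actions
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda i: q[s][i])
            s2, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    # Freeze the learned behavior into a simple state -> action lookup.
    return {s: max((0, 1), key=lambda i: v[i]) for s, v in q.items()}

policy = train_in_simulation(SimulatedPlant())
# The frozen policy would then drive the actual plant,
# e.g. action = policy.get(measured_state, default_action).
```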
Many industrial problems are inherently multi-objective and require special attention to find different trade-off solutions. Typical multi-objective approaches compute a scalarization of the different objectives and then optimize the resulting problem with a single-objective optimization method. Several scalarization techniques are known in the literature…
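To make the scalarization step concrete, the following sketch applies a linear (weighted-sum) scalarization, one common scalarization technique, and hands the scalarized problem to a plain single-objective random search. The two toy objectives and the weight vectors are illustrative assumptions.

```python
# Hedged sketch of scalarizing a multi-objective problem so that a
# single-objective optimizer can be reused. Objectives and weights are made up.
import random

def objectives(x):
    # Two conflicting toy objectives, both to be minimized.
    f1 = (x - 1.0) ** 2          # best near x = 1
    f2 = (x + 1.0) ** 2          # best near x = -1
    return f1, f2

def linear_scalarization(fs, weights):
    # Weighted sum: different weight vectors trace out different trade-offs.
    return sum(w * f for w, f in zip(weights, fs))

def random_search(weights, iterations=1000):
    best_x, best_val = None, float("inf")
    for _ in range(iterations):
        x = random.uniform(-2.0, 2.0)
        val = linear_scalarization(objectives(x), weights)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Sweeping the weights recovers different trade-off solutions.
for w1 in (0.1, 0.5, 0.9):
    print(w1, round(random_search((w1, 1.0 - w1)), 3))
```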
Heating a home is an energy-consuming task. Most thermostats are programmed to turn on the heating at a particular time in order to reach and maintain a predefined target temperature. A lot of energy is wasted when these thermostats are not configured optimally, since most of them do not take energy consumption into account but are only concerned…
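A minimal toy model of such a schedule-driven thermostat is sketched below: a fixed occupancy schedule sets the target temperature and a hysteresis band switches the heater, with a crude energy proxy accumulated along the way. The schedule, temperatures, and first-order heat-loss dynamics are illustrative assumptions, not data or models from the paper.

```python
# Hedged toy model of a schedule-driven thermostat with hysteresis.
# All numbers and the heat-loss model are illustrative assumptions.

def target_for_hour(hour):
    # Scheduled setpoint: 20 C during occupied hours, 15 C otherwise.
    return 20.0 if 7 <= hour < 22 else 15.0

def simulate_day(t_outside=5.0, t_inside=15.0, band=0.5):
    heater_on = False
    energy = 0.0
    for step in range(24 * 60):                    # one-minute steps
        target = target_for_hour(step // 60)
        # Hysteresis: on below target - band, off above target + band.
        if t_inside < target - band:
            heater_on = True
        elif t_inside > target + band:
            heater_on = False
        heating = 0.10 if heater_on else 0.0       # degrees gained per minute
        loss = 0.002 * (t_inside - t_outside)      # heat loss to the outside
        t_inside += heating - loss
        energy += heating                          # crude proxy for energy use
    return energy

print("energy proxy:", round(simulate_day(), 1))
```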
In most existing motion control algorithms, a reference trajectory is tracked based on a continuous measurement of the system’s response. In many industrial applications, however, it is either not possible or too expensive to install sensors that measure the system’s output over the complete stroke: instead, the motion can only be detected at certain…
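For contrast with that limited-sensing setting, the sketch below shows the conventional scheme the paper refers to: a PD feedback law that tracks a reference trajectory and needs the position measured at every sample. The double-integrator plant and the gains are illustrative assumptions.

```python
# Hedged sketch of conventional reference-trajectory tracking with continuous
# feedback (the setting contrasted in the abstract). Plant and gains are made up.

def reference(t):
    # Desired position: a simple ramp-and-hold stroke.
    return min(t, 1.0)

def track(duration=2.0, dt=0.001, kp=400.0, kd=40.0):
    pos, vel = 0.0, 0.0
    t = 0.0
    while t < duration:
        error = reference(t) - pos          # requires pos measured every sample
        force = kp * error - kd * vel       # PD feedback law
        vel += force * dt                   # double-integrator plant
        pos += vel * dt
        t += dt
    return pos

print("final position:", round(track(), 3))   # settles near the 1.0 setpoint
```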
Many standard optimization algorithms focus on optimizing a single, scalar feedback signal. However, real-life optimization problems often require the simultaneous optimization of more than one objective. In this paper, we propose a multi-objective extension to the standard X-armed bandit problem. As the feedback signal is now vector-valued, the goal…
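With vector-valued feedback there is no single "best" value, so comparisons are typically made via Pareto dominance. The short sketch below shows such a dominance check and a Pareto-front filter; the mean-reward vectors are made up for the example and are not tied to the paper's algorithm.

```python
# Hedged illustration of comparing vector-valued feedback via Pareto dominance,
# the kind of ordering a multi-objective bandit needs. Reward vectors are made up.

def dominates(u, v):
    """u Pareto-dominates v: at least as good everywhere, strictly better once."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(points):
    """Keep the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

mean_rewards = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
print(pareto_front(mean_rewards))   # (0.4, 0.4) is dominated by (0.5, 0.5)
```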