
- Julien Audiffren, Hachem Kadri
- ACML
- 2013

We study the stability properties of nonlinear multi-task regression in reproducing kernel Hilbert spaces with operator-valued kernels. Such kernels, a.k.a. multi-task kernels, are appropriate for learning problems with nonscalar outputs like multi-task learning and structured output prediction. We show that multi-task kernel regression algorithms are uniformly…

- Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, Julien Audiffren
- Journal of Machine Learning Research
- 2016

In this paper we consider the problems of supervised classification and regression in the case where attributes and labels are functions: each data point is represented by a set of functions, and the label is also a function. We focus on the use of reproducing kernel Hilbert space theory to learn from such functional data. Basic concepts and properties of…

- Julien Audiffren, Hachem Kadri
- ArXiv
- 2013

We consider the problem of learning a vector-valued function f in an online learning setting. The function f is assumed to lie in a reproducing kernel Hilbert space of operator-valued kernels. We describe two online algorithms for learning f while taking into account the output structure. A first contribution is an algorithm, ONORMA, that extends the standard…
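As a rough illustration of this online setting, the sketch below implements a NORMA-style stochastic update for vector-valued outputs, using a scalar Gaussian kernel times the identity as a separable operator-valued kernel. The class, parameter names, and simplifications are illustrative assumptions, not the ONORMA algorithm as specified in the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Scalar Gaussian kernel between two input points."""
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

class OnlineVectorKernelRegressor:
    """NORMA-style online learner for vector-valued outputs.

    Uses k(x, z) * Identity as a separable operator-valued kernel,
    so each support point carries a vector coefficient. This is a
    hypothetical simplification for illustration only.
    """

    def __init__(self, eta=0.1, lam=0.01, sigma=1.0):
        self.eta, self.lam, self.sigma = eta, lam, sigma
        self.centers, self.coefs = [], []  # support points, vector coefficients

    def predict(self, x):
        if not self.centers:
            return None
        return sum(c * gaussian_kernel(x, z, self.sigma)
                   for z, c in zip(self.centers, self.coefs))

    def update(self, x, y):
        pred = self.predict(x)
        if pred is None:
            pred = np.zeros_like(y)
        # Shrink old coefficients (regularization term of the gradient step),
        # then add the new point with a coefficient from the squared-loss gradient.
        self.coefs = [(1 - self.eta * self.lam) * c for c in self.coefs]
        self.centers.append(np.asarray(x, dtype=float))
        self.coefs.append(-self.eta * (pred - np.asarray(y, dtype=float)))
```

Each update costs one prediction over the current support set, so the model grows with the stream; practical variants bound the number of support points.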

We consider the accumulation of deleterious mutations in an asexual population, a phenomenon known as Muller’s ratchet, using the continuous-time model proposed in [4]. We show that for any parameter λ > 0 (the rate at which mutations occur), for any α > 0 (the toxicity of the mutations) and for any size N > 0 of the population, the ratchet clicks a.s. in…

A popular approach to apprenticeship learning (AL) is to formulate it as an inverse reinforcement learning (IRL) problem. The MaxEnt-IRL algorithm successfully integrates the maximum entropy principle into IRL and unlike its predecessors, it resolves the ambiguity arising from the fact that a possibly large number of policies could match the expert’s…

- Julien Audiffren, Liva Ralaivola
- NIPS
- 2015

We study the restless bandit problem where arms are associated with stationary φ-mixing processes and where rewards are therefore dependent: the question that arises from this setting is that of carefully recovering some independence by ‘ignoring’ the values of some rewards. As we shall see, the bandit problem we tackle requires us to address the…

- Julien Audiffren, Emile Contal
- Sensors
- 2016

During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory-grade force platforms. However, the WBB suffers some limitations, such as lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform…
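The inconsistent sampling rate mentioned above is commonly handled by resampling the signal onto a uniform time grid. The helper below is a generic linear-interpolation sketch (the function name and parameters are illustrative), not the correction method studied in the paper.

```python
import numpy as np

def resample_uniform(timestamps, values, target_hz):
    """Linearly interpolate an irregularly sampled 1-D signal onto a
    uniform time grid running at `target_hz` samples per second.

    A generic resampling sketch; more careful schemes (e.g. band-limited
    interpolation) may be preferable for real force-platform data.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    values = np.asarray(values, dtype=float)
    # Uniform grid spanning the recorded interval.
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / target_hz)
    return t_uniform, np.interp(t_uniform, timestamps, values)
```

For a signal that is linear in time, linear interpolation is exact, which gives a simple sanity check; for postural sway data the interpolation error depends on the jitter relative to the signal's bandwidth.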

- Julien Audiffren, Hachem Kadri
- ArXiv
- 2014

The purpose of this paper is to introduce a concept of equivalence between machine learning algorithms. We define two notions of algorithmic equivalence, namely, weak and strong equivalence. These notions are of paramount importance for identifying when learning properties from one learning algorithm can be transferred to another. Using regularized kernel…

- Julien Audiffren, Liva Ralaivola
- 2016

We address the problem of dueling bandits defined on partially ordered sets, or posets. In this setting, arms may not be comparable, and there may be several (incomparable) optimal arms. We propose an algorithm, UnchainedBandits, that efficiently finds the set of optimal arms of any poset even when pairs of comparable arms cannot be distinguished from pairs…
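The notion of optimality used here, the set of maximal elements of a poset, can be sketched with a brute-force pass over a comparison oracle. The oracle `better` and the function name are hypothetical, and this is not the UnchainedBandits algorithm, which must cope with noisy, possibly indistinguishable comparisons.

```python
def maximal_elements(arms, better):
    """Return the arms that no other arm strictly dominates.

    `better(a, b)` is a hypothetical noiseless oracle returning True iff
    a is strictly preferred to b; for incomparable pairs it returns
    False in both directions, so both arms can survive as optimal.
    """
    return [a for a in arms
            if not any(better(b, a) for b in arms if b is not a)]
```

With two incomparable chains, every chain contributes its top element, which is exactly why the set of optimal arms of a poset need not be a singleton.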

- Julien Audiffren, Liva Ralaivola
- ArXiv
- 2014

We study the bandit problem where arms are associated with stationary φ-mixing processes and where rewards are therefore dependent: the question that arises from this setting is that of recovering some independence by ignoring the value of some rewards. As we shall see, the bandit problem we tackle requires us to address the…