
This thesis broadly addresses machine learning algorithms, with particular attention to large databases. After formulating the learning problem mathematically, we present several important learning algorithms, in particular Multi Layer Perceptrons and Mixtures of Experts, as well as…
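The Multi Layer Perceptrons named in the abstract above can be sketched in a few lines of plain Python. This is a generic illustration of the model family, not the thesis's implementation; the 2-3-1 layer sizes, tanh activation, and random weights are assumptions made for the sketch.

```python
import math
import random

def mlp_forward(x, weights, biases):
    """Forward pass of a multi layer perceptron: each layer applies an
    affine map followed by a tanh non-linearity."""
    a = x
    for W, b in zip(weights, biases):
        a = [math.tanh(sum(w * ai for w, ai in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

# A tiny 2-3-1 network with random weights (illustrative only).
random.seed(0)
weights = [
    [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)],  # layer 1: 2 -> 3
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)],  # layer 2: 3 -> 1
]
biases = [[0.0, 0.0, 0.0], [0.0]]
y = mlp_forward([0.5, -0.2], weights, biases)
print(y)
```

Because tanh saturates at ±1, every hidden and output activation stays in the open interval (-1, 1) regardless of the weights.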

- Ludovic Denoyer, Patrick Gallinari
- SIGIR Forum
- 2006

Wikipedia is a well-known, free-content, multilingual encyclopedia written collaboratively by contributors around the world. Anybody can edit an article using a wiki markup language that offers a simplified alternative to HTML. The encyclopedia comprises millions of articles in many languages.

- Antoine Bordes, Léon Bottou, Patrick Gallinari
- Journal of Machine Learning Research
- 2009

The SGD-QN algorithm is a stochastic gradient descent algorithm that makes careful use of second-order information and splits the parameter update into independently scheduled components. Thanks to this design, SGD-QN iterates nearly as fast as a first-order stochastic gradient descent but requires fewer iterations to achieve the same accuracy. This…
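The core idea above — stochastic gradient steps rescaled per coordinate by a diagonal second-order estimate — can be sketched on a least-squares toy problem. The fixed per-feature curvature estimate below is a simplification of SGD-QN's scheduled updates, and the learning rate and data are assumptions for the sketch.

```python
import random

def sgd_diag(data, n_features, epochs=20, lr=0.1):
    """Stochastic gradient descent with diagonal (per-coordinate) scaling,
    in the spirit of second-order SGD variants such as SGD-QN.
    Fits w for a least-squares model y ~ w . x."""
    w = [0.0] * n_features
    # Per-coordinate curvature estimate: E[x_i^2], the diagonal of the
    # Hessian for squared loss (a simplification of SGD-QN's scheme).
    diag = [sum(x[i] ** 2 for x, _ in data) / len(data) + 1e-8
            for i in range(n_features)]
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for i in range(n_features):
                w[i] -= lr * err * x[i] / diag[i]  # rescaled gradient step
    return w

# Noise-free toy data generated from w* = (2, -1); the fit approaches it.
random.seed(1)
data = [([random.uniform(-1, 1), random.uniform(-1, 1)], 0.0) for _ in range(200)]
data = [(x, 2.0 * x[0] - 1.0 * x[1]) for x, _ in data]
w = sgd_diag(data, 2)
print(w)
```

Dividing each coordinate's step by its curvature estimate equalizes progress across features of different scales, which is what lets such methods reach a given accuracy in fewer iterations than plain first-order SGD.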

- Antoine Bordes, Léon Bottou, Patrick Gallinari, Jason Weston
- ICML
- 2007

Optimization algorithms for large margin multiclass recognizers are often too costly to handle ambitious problems with structured outputs and exponential numbers of classes. Optimization algorithms that rely on the full gradient are not effective because, unlike the solution, the gradient is not sparse and is very large. The LaRank algorithm sidesteps this…

- Nicolas Usunier, David Buffoni, Patrick Gallinari
- ICML
- 2009

In ranking with the pairwise classification approach, the loss associated with a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval, where we prefer ranked lists with high precision at the top of the list. We propose to optimize a larger class of loss functions for ranking,…
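The inadequacy described above — the mean pairwise loss being blind to where in the list the errors occur — can be shown directly. The 0-1 pairwise loss and the toy score/label vectors below are assumptions made for the sketch.

```python
def pairwise_loss(scores, labels):
    """Mean 0-1 pairwise classification loss over all (relevant, irrelevant)
    pairs: a pair counts as an error when the irrelevant item outscores
    (or ties) the relevant one."""
    pairs = [(i, j) for i, li in enumerate(labels) if li == 1
             for j, lj in enumerate(labels) if lj == 0]
    errors = sum(1 for i, j in pairs if scores[i] <= scores[j])
    return errors / len(pairs)

scores = [4, 3, 2, 1]    # items listed in ranked (descending-score) order
labels_a = [0, 1, 1, 0]  # top item irrelevant: precision@1 = 0
labels_b = [1, 0, 0, 1]  # top item relevant:   precision@1 = 1
print(pairwise_loss(scores, labels_a), pairwise_loss(scores, labels_b))
```

Both rankings incur exactly two misordered pairs out of four, so the mean pairwise loss is 0.5 for each, even though only the second list puts a relevant item on top — the failure mode that motivates top-weighted ranking losses.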

- Yann Guermeur, Christophe Geourjon, Patrick Gallinari, Gilbert Deléage
- Bioinformatics
- 1999

MOTIVATION
In many fields of pattern recognition, combination has proved effective at increasing the generalization performance of individual prediction methods. Numerous systems based on different principles have been developed for protein secondary structure prediction. Finding better ensemble methods for this task may thus become crucial. Furthermore,…
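A minimal illustration of combining predictors is per-position majority vote, one of the simplest ensemble schemes; the three toy secondary-structure predictors below are assumptions for the sketch, not the methods combined in the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-residue predictions from several methods: for each
    position, output the most frequent class (H = helix, E = strand,
    C = coil)."""
    combined = []
    for votes in zip(*predictions):
        combined.append(Counter(votes).most_common(1)[0][0])
    return "".join(combined)

# Three hypothetical predictors disagreeing at positions 2 and 3;
# the vote recovers a single consensus string.
p1 = "HHHHCCEE"
p2 = "HHHCCCEE"
p3 = "HHCHCCEE"
print(majority_vote([p1, p2, p3]))  # "HHHHCCEE"
```

Even this crude combiner corrects isolated errors of individual predictors whenever the other two agree, which is the intuition behind more refined ensemble methods for this task.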

Features gathered from the observation of a phenomenon are not all equally informative: some of them may be noisy, correlated or irrelevant. Feature selection aims at selecting a feature set that is relevant for a given task. This problem is complex and remains an important issue in many domains. In the field of neural networks, feature selection has been…
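A small sketch of filter-style feature selection: score each feature by its absolute correlation with the target and keep the top-n. Scoring by Pearson correlation is one simple choice among many, an assumption for this sketch rather than the neural-network-based method discussed here.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(X, y, n):
    """Filter-style selection: rank features by |corr(feature, target)|
    and keep the n highest-scoring ones."""
    n_feat = len(X[0])
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: -scores[j])[:n]

# Feature 0 determines the target, feature 1 is a noisy copy of it,
# feature 2 is pure noise: selection should keep 0 and 1 and drop 2.
random.seed(0)
X = [[random.random(), 0.0, random.random()] for _ in range(100)]
for row in X:
    row[1] = row[0] + random.gauss(0, 0.5)
y = [row[0] for row in X]
print(select_features(X, y, 2))
```

Such filter methods score features independently of any learner, which makes them cheap but blind to interactions between features — the gap that wrapper and embedded (e.g. neural-network-based) selection methods try to close.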

- Ioannis Partalas, Aris Kosmopoulos, +6 authors Patrick Gallinari
- ArXiv
- 2015

LSHTC is a series of challenges that aims to assess the performance of classification systems in large-scale classification involving a large number of classes (up to hundreds of thousands). This paper describes the datasets that have been released along the LSHTC series. The paper details the construction of the datasets and the design of the tracks, as well as…

We address the problem of designing surrogate losses for learning scoring functions in the context of label ranking. We extend to ranking problems a notion of order-preserving losses previously introduced for multiclass classification, and show that these losses lead to consistent formulations with respect to a family of ranking evaluation metrics. An…

- Tautvydas Cibas, Françoise Fogelman-Soulié, Patrick Gallinari, Sarunas Raudys
- Neurocomputing
- 1996

1. Introduction

Neural Networks (NN) are used in quite a variety of real-world applications, where one can usually measure a potentially large number N of variables X_i; probably not all X_i are equally informative: some should even be considered as noise to be eliminated. If one could select n << N "best" variables X_i, then one could reduce the amount of…