
- Ludovic Denoyer, Patrick Gallinari
- SIGIR Forum
- 2006

Wikipedia is a well-known, free-content, multilingual encyclopedia written collaboratively by contributors around the world. Anybody can edit an article using a wiki markup language that offers a simplified alternative to HTML. This encyclopedia is composed of millions of articles in different languages.

- Antoine Bordes, Léon Bottou, Patrick Gallinari
- Journal of Machine Learning Research
- 2009

The SGD-QN algorithm is a stochastic gradient descent algorithm that makes careful use of second-order information and splits the parameter update into independently scheduled components. Thanks to this design, SGD-QN iterates nearly as fast as a first-order stochastic gradient descent but requires fewer iterations to achieve the same accuracy. This…
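The core idea the abstract describes, stochastic gradient steps rescaled per coordinate by a diagonal curvature estimate, can be sketched on a toy least-squares problem. This is a simplified illustration of diagonal second-order scaling, not the actual SGD-QN update, and the step size and curvature estimate are assumptions for the example:

```python
import numpy as np

def sgd_diagonal(X, y, lr=0.1, epochs=50, seed=0):
    """Toy SGD for least squares with a per-coordinate (diagonal)
    learning-rate rescaling -- an illustration of the idea behind
    second-order stochastic gradient methods, NOT the SGD-QN algorithm
    itself."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    # crude diagonal curvature estimate: E[x_j^2] per coordinate
    diag_h = X.var(axis=0) + X.mean(axis=0) ** 2 + 1e-8
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = (X[i] @ w - y[i]) * X[i]   # per-example gradient
            w -= lr * g / diag_h           # rescale each coordinate
    return w
```

The rescaling makes a single step size work across coordinates with very different scales, which is one motivation for using second-order information in stochastic methods.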

- Antoine Bordes, Léon Bottou, Patrick Gallinari, Jason Weston
- ICML
- 2007

Optimization algorithms for large margin multiclass recognizers are often too costly to handle ambitious problems with structured outputs and exponential numbers of classes. Optimization algorithms that rely on the full gradient are not effective because, unlike the solution, the gradient is not sparse and is very large. The LaRank algorithm sidesteps this…

Features gathered from the observation of a phenomenon are not all equally informative: some of them may be noisy, correlated or irrelevant. Feature selection aims at selecting a feature set that is relevant for a given task. This problem is complex and remains an important issue in many domains. In the field of neural networks, feature selection has been…

- Tautvydas Cibas, Françoise Fogelman-Soulié, Patrick Gallinari, Sarunas Raudys
- Neurocomputing
- 1996

1. Introduction. Neural Networks (NN) are used in quite a variety of real-world applications, where one can usually measure a potentially large number N of variables X_i; probably not all X_i are equally informative: some should even be considered as noise to be eliminated. If one could select n << N "best" variables X_i, then one could reduce the amount of…
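The "select n << N best variables" setting above can be illustrated with the simplest filter-style baseline: score each variable by its absolute correlation with the target and keep the top n. This is a generic baseline for contrast, not the pruning-based neural-network selection method the paper studies:

```python
import numpy as np

def select_top_features(X, y, n):
    """Minimal filter-style feature selection: score each of the N
    columns of X by |correlation with the target y| and keep the n
    highest-scoring ones (indices returned in ascending order)."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(scores)[::-1][:n]
    return np.sort(keep)
```

Filter methods like this ignore interactions between variables, which is precisely why wrapper and embedded (e.g. network-pruning) approaches such as the one discussed here were developed.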

- Nicolas Usunier, David Buffoni, Patrick Gallinari
- ICML
- 2009

In ranking with the pairwise classification approach, the loss associated to a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval where we prefer ranked lists with high precision on the top of the list. We propose to optimize a larger class of loss functions for ranking,…
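The contrast the abstract draws can be made concrete: the plain pairwise loss counts every misordered pair equally, while a top-heavy variant applies decreasing weights so mistakes near the top of the list cost more. The weighted version below is a sketch of that idea, not the paper's exact formulation:

```python
import numpy as np

def pairwise_loss(scores, labels):
    """Mean pairwise 0/1 loss over (relevant, irrelevant) pairs:
    the fraction of pairs ranked in the wrong order."""
    rel = scores[labels == 1]
    irr = scores[labels == 0]
    return np.mean([r <= i for r in rel for i in irr])

def weighted_rank_loss(scores, labels, weights):
    """Illustrative top-heavy loss: for each relevant item, count the
    irrelevant items scored above it and sum decreasing weights over
    them, so errors at the top of the list are penalized more.
    A sketch of the 'larger class of losses' idea, not the paper's
    exact construction."""
    rel = scores[labels == 1]
    irr = scores[labels == 0]
    total = 0.0
    for r in rel:
        k = int(np.sum(irr >= r))       # negatives ranked above r
        total += np.sum(weights[:k])    # decreasing weights => top-heavy
    return total / len(rel)
```

With decreasing `weights`, pushing a relevant item from rank 1 to rank 2 costs more than pushing it from rank 10 to rank 11, which the mean pairwise loss cannot express.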

- Ioannis Partalas, Aris Kosmopoulos, +6 authors Patrick Gallinari
- ArXiv
- 2015

LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification with a large number of classes (up to hundreds of thousands). This paper describes the datasets that have been released along the LSHTC series. The paper details the construction of the datasets and the design of the tracks as well as…

We address the problem of designing surrogate losses for learning scoring functions in the context of label ranking. We extend to ranking problems a notion of order-preserving losses previously introduced for multiclass classification, and show that these losses lead to consistent formulations with respect to a family of ranking evaluation metrics. An…

- Ludovic Denoyer, Patrick Gallinari
- Inf. Process. Manage.
- 2004

Recently, a new community has started to emerge around the development of new information retrieval methods for searching and analyzing semi-structured and XML-like documents. The goal is to handle both content and structural information, and to deal with different types of information content (text, image, etc.). We consider here the task of structured…

This paper presents an approach to multilabel classification (MLC) with a large number of labels. Our approach is a reduction to binary classification in which label sets are represented by low dimensional binary vectors. This representation follows the principle of Bloom filters, a space-efficient data structure originally designed for approximate…
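The representation the abstract refers to can be sketched directly: each label sets k bits (via k hash functions) in an m-bit vector, compressing a large label space into a short binary code with no false negatives but possible false positives. The hash construction and parameters below are illustrative assumptions; the paper's learning and decoding scheme is not reproduced:

```python
import hashlib

def encode_labels(label_set, m=32, k=2):
    """Bloom-filter-style encoding of a label set: each label sets k
    bits of an m-bit vector, so a label space of arbitrary size maps
    to a fixed-length binary code."""
    bits = [0] * m
    for label in label_set:
        for i in range(k):
            h = int(hashlib.md5(f"{label}:{i}".encode()).hexdigest(), 16)
            bits[h % m] = 1
    return bits

def maybe_contains(bits, label, m=32, k=2):
    """Membership test on the code: never a false negative, but
    false positives are possible once many bits are set."""
    for i in range(k):
        h = int(hashlib.md5(f"{label}:{i}".encode()).hexdigest(), 16)
        if bits[h % m] == 0:
            return False
    return True
```

The appeal for MLC is that predicting the m bits is a fixed-size binary problem regardless of how many labels exist; the cost is the false-positive rate, which grows with the number of labels encoded per instance.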