- Aurélien Bellet, Amaury Habrard, Marc Sebban
- ArXiv
- 2013

The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a… (More)

- Yuan Shi, Aurélien Bellet, Fei Sha
- AAAI
- 2014

We propose a new approach for metric learning by framing it as learning a sparse combination of locally discriminative metrics that are inexpensive to generate from the training data. This flexible framework allows us to naturally derive formulations for global, multi-task and local metric learning. The resulting algorithms have several advantages over… (More)
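A "sparse combination of locally discriminative metrics" can be pictured as a weighted sum of base metrics under a sparse, non-negative weight vector. The following toy sketch is illustrative only (the base metrics and names are my own assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(3)

# Three base metrics, each a Euclidean distance after a random linear map.
Ls = [rng.normal(size=(2, 2)) for _ in range(3)]
metrics = [lambda x, y, L=L: float(np.linalg.norm(L @ (x - y))) for L in Ls]

# Sparse, non-negative weights: the second base metric is dropped entirely.
w = np.array([0.7, 0.0, 0.3])

def combined(x, y):
    # A non-negative combination of (pseudo-)metrics is itself a pseudo-metric.
    return sum(wi * m(x, y) for wi, m in zip(w, metrics))

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
```

Because the weights are non-negative, properties such as symmetry and the triangle inequality carry over from the base metrics to the combination.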

In this paper, we investigate how to scale up kernel methods to take on large-scale problems, on which deep neural networks have been prevailing. To this end, we leverage existing techniques and develop new ones. These techniques include approximating kernel functions with features derived from random projections, parallel training of kernel models with 100… (More)
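The "features derived from random projections" mentioned here are in the spirit of random Fourier features (Rahimi & Recht): an explicit feature map whose inner products approximate an RBF kernel. A minimal sketch, with all parameter names and values my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 5, 2000, 0.5  # input dim, number of random features, kernel width

# For the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), sample
# W ~ N(0, 2*gamma*I) and b ~ Uniform[0, 2*pi]; then z(x)^T z(y) ≈ k(x, y).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = z(x) @ z(y)
```

The payoff is that a linear model trained on `z(x)` approximates a kernel machine, so standard large-scale linear solvers (and parallel training) apply directly.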

- Aurélien Bellet, Amaury Habrard, Marc Sebban
- ICML
- 2012

In recent years, the crucial importance of metrics in machine learning algorithms has led to increasing interest in optimizing distance and similarity functions. Most of the state of the art focuses on learning Mahalanobis distances (which must satisfy a positive semi-definiteness constraint) for use in a local k-NN algorithm. However, no theoretical… (More)
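A common way to handle the positive semi-definiteness constraint mentioned here is to parameterize the Mahalanobis matrix as M = Lᵀ L, which is PSD by construction since vᵀ Lᵀ L v = ‖L v‖² ≥ 0 for every v. A brief illustrative sketch (not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(3, 3))
M = L.T @ L  # PSD by construction: all eigenvalues are non-negative

def d_M(x, y):
    # d_M(x, y) = sqrt((x - y)^T M (x - y)), i.e. the ordinary
    # Euclidean distance after applying the linear map L.
    return float(np.linalg.norm(L @ (x - y)))
```

This view also explains why Mahalanobis metric learning is often described as learning a linear transformation of the input space.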

- Aurélien Bellet, Amaury Habrard
- Neurocomputing
- 2015

Metric learning has attracted a lot of interest over the last decade, but little work has been done about the generalization ability of such methods. In this paper, we address this issue by proposing an adaptation of the notion of algorithmic robustness, previously introduced by Xu and Mannor in classic supervised learning, to derive generalization bounds… (More)

Learning sparse combinations is a frequent theme in machine learning. In this paper, we study its associated optimization problem in the distributed setting where the elements to be combined are not centrally located but spread over a network. We address the key challenges of balancing communication costs and optimization errors. To this end, we propose a… (More)

- Aurélien Bellet, Amaury Habrard, Marc Sebban
- Machine Learning
- 2012

Similarity functions are a fundamental component of many learning algorithms. When dealing with string or tree-structured data, measures based on the edit distance are widely used, and there exist a few methods for learning them from data. However, these methods offer no theoretical guarantee as to the generalization ability and discriminative power of the… (More)

- Aurélien Bellet, Amaury Habrard, Marc Sebban
- Metric Learning
- 2015

- Igor Colin, Aurélien Bellet, Joseph Salmon, Stéphan Clémençon
- ICML
- 2016

In decentralized networks (of sensors, connected objects, etc.), there is an important need for efficient algorithms to optimize a global cost function, for instance to learn a global model from the local data collected by each computing unit. In this paper, we address the problem of decentralized minimization of pairwise functions of the data points, where… (More)
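Decentralized algorithms like this one typically build on a gossip-style communication primitive in which nodes repeatedly average values with their neighbours. A minimal sketch of synchronous gossip averaging on a ring (illustrative only, not the paper's pairwise dual-averaging algorithm; topology and constants are my own assumptions):

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
x = rng.normal(size=n)  # one local value per node
target = x.mean()       # the network-wide quantity we want every node to reach

# Mixing matrix W for a ring: each node keeps 1/2 of its own value and
# takes 1/4 from each neighbour. W is symmetric and doubly stochastic,
# so iterating x <- W x drives every entry to the global mean.
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

for _ in range(500):
    x = W @ x
```

Each multiplication by W uses only neighbour-to-neighbour communication, which is what makes the scheme fully decentralized.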

- Kuan Liu, Aurélien Bellet, Fei Sha
- AISTATS
- 2015

A good measure of similarity between data points is crucial to many tasks in machine learning. Similarity and metric learning methods learn such measures automatically from data, but they do not scale well with respect to the dimensionality of the data. In this paper, we propose a method that can efficiently learn similarity measures from high-dimensional sparse… (More)