
- Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf
- 2014 IEEE Conference on Computer Vision and…
- 2014

In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network.…
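The four-stage pipeline in the abstract can be pictured as a composition of functions. The sketch below is a toy illustration only: the detector, aligner, and one-layer "representation" are hypothetical stand-ins, not the paper's 3D alignment or nine-layer network.

```python
import numpy as np

def detect(image):
    # Hypothetical detector: return a face crop (here, just the center square).
    h, w = image.shape[:2]
    s = min(h, w)
    return image[(h - s) // 2:(h + s) // 2, (w - s) // 2:(w + s) // 2]

def align(face):
    # Stand-in for the 3D-model-based piecewise affine alignment: subsample
    # the crop to a fixed 8x8 grid so every face reaches a canonical frame.
    step = max(1, face.shape[0] // 8)
    return face[::step, ::step][:8, :8]

def represent(aligned, weights):
    # Stand-in for the deep network: a single linear layer plus ReLU.
    return np.maximum(aligned.flatten() @ weights, 0.0)

def classify(feat_a, feat_b, threshold=0.5):
    # Face verification by cosine similarity of the two representations.
    sim = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-8)
    return sim > threshold

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))
img = rng.random((32, 40))
feat = represent(align(detect(img)), W)  # detect => align => represent
same = classify(feat, feat)              # an image always matches itself
```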

- Jeffrey Dean, Gregory S. Corrado, +9 authors Andrew Y. Ng
- NIPS
- 2012

Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize…
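The core data-parallel idea can be sketched at toy scale: each "worker" computes a gradient on its own shard of the data, and a central parameter vector is updated with the averaged gradient. This is a minimal illustration of the parameter-server pattern only, not the DistBelief framework or its API.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                        # noiseless linear targets

def shard_gradient(w, Xs, ys):
    # Least-squares gradient computed on one worker's shard of the data.
    return 2 * Xs.T @ (Xs @ w - ys) / len(ys)

w = np.zeros(3)
shards = np.array_split(np.arange(100), 4)   # 4 equal-size "workers"
for step in range(200):
    grads = [shard_gradient(w, X[idx], y[idx]) for idx in shards]
    w -= 0.05 * np.mean(grads, axis=0)       # parameter-server style averaging
```

Because the shards are equal-sized, averaging the shard gradients reproduces full-batch gradient descent exactly; in an asynchronous system the updates would instead arrive stale and out of order.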

- Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, Yann LeCun
- 2009 IEEE 12th International Conference on…
- 2009

In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or…
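A single stage of the filter bank → non-linearity → pooling pattern can be written in a few lines of numpy. The random filters below play the role of "hard-wired" (non-learned) filters; the rectification and pooling choices are illustrative assumptions.

```python
import numpy as np

def extract_features(image, filters, pool=2):
    fh, fw = filters.shape[1:]
    H = image.shape[0] - fh + 1
    W = image.shape[1] - fw + 1
    maps = np.empty((len(filters), H, W))
    for k, f in enumerate(filters):          # filter bank (valid convolution)
        for i in range(H):
            for j in range(W):
                maps[k, i, j] = np.sum(image[i:i+fh, j:j+fw] * f)
    maps = np.abs(maps)                      # rectifying non-linearity
    Hp, Wp = H // pool, W // pool            # non-overlapping average pooling
    pooled = maps[:, :Hp*pool, :Wp*pool].reshape(len(filters), Hp, pool, Wp, pool)
    return pooled.mean(axis=(2, 4))

rng = np.random.default_rng(2)
img = rng.random((10, 10))
bank = rng.standard_normal((4, 3, 3))        # 4 hard-wired 3x3 filters
feats = extract_features(img, bank)          # shape: (filters, pooled H, pooled W)
```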

- Quoc V. Le, Marc'Aurelio Ranzato, +5 authors Andrew Y. Ng
- 2013 IEEE International Conference on Acoustics…
- 2012

We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel…

We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while…
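The encoder/decoder shape described above can be sketched directly: a linear encoder, a sparsifying non-linearity that pushes the code toward a quasi-binary pattern, and a linear decoder reconstructing the input. The steep-sigmoid sparsifier and all dimensions here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sparsify(z, steepness=5.0, bias=2.0):
    # Steep logistic: weakly driven units land near 0, strongly driven near 1,
    # so the output code is approximately binary and mostly off.
    return 1.0 / (1.0 + np.exp(-steepness * (z - bias)))

rng = np.random.default_rng(3)
We = rng.standard_normal((32, 16))   # overcomplete encoder: 32 codes for 16 input dims
Wd = rng.standard_normal((16, 32))   # linear decoder
x = rng.standard_normal(16)

code = sparsify(We @ x)              # quasi-binary sparse code
recon = Wd @ code                    # decoder output
loss = np.sum((recon - x) ** 2)      # the distance an optimal code would minimize
```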

- Marc'Aurelio Ranzato, Fu Jie Huang, Y-Lan Boureau, Yann LeCun
- 2007 IEEE Conference on Computer Vision and…
- 2007

We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A…
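The shift invariance claimed above comes directly from max pooling within adjacent windows: moving a filter response by one pixel inside a pooling window leaves the pooled output unchanged. A minimal sketch, with illustrative window sizes:

```python
import numpy as np

def max_pool_sigmoid(fmap, pool=4):
    Hp, Wp = fmap.shape[0] // pool, fmap.shape[1] // pool
    windows = fmap[:Hp*pool, :Wp*pool].reshape(Hp, pool, Wp, pool)
    pooled = windows.max(axis=(1, 3))          # max within adjacent windows
    return 1.0 / (1.0 + np.exp(-pooled))       # point-wise sigmoid

fmap = np.zeros((8, 8))
fmap[1, 1] = 5.0                               # one strong filter response
shifted = np.zeros((8, 8))
shifted[2, 2] = 5.0                            # same response, shifted by one pixel

# Both peaks fall in the same 4x4 pooling window, so the outputs agree.
same_output = np.allclose(max_pool_sigmoid(fmap), max_pool_sigmoid(shifted))
```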

- Marc'Aurelio Ranzato, Geoffrey E. Hinton
- 2010 IEEE Computer Society Conference on Computer…
- 2010

Learning a generative model of natural images is a useful way of extracting features that capture interesting regularities. Previous work on learning such models has focused on methods in which the latent features are used to determine the mean and variance of each pixel independently, or on methods in which the hidden units determine the covariance matrix…

- Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba
- ArXiv
- 2015

Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as…
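The train/test discrepancy ("exposure bias") fits in a toy picture: during training the model is scored on predicting the next word given the *true* prefix, but during generation it must condition on its *own* earlier predictions, so one early error compounds. The bigram lookup below is a hypothetical stand-in for a language model.

```python
# A toy "language model": a bigram table learned from one training sentence.
bigram = {"the": "cat", "cat": "sat", "sat": "down"}

def predict_next(word):
    return bigram.get(word, "<unk>")  # falls apart off the training path

# Teacher-forced evaluation: every step conditions on a gold prefix.
gold = ["the", "cat", "sat", "down"]
teacher_forced = [predict_next(w) for w in gold[:-1]]

# Free-running generation: each step feeds back the previous prediction,
# so a single unseen start word derails the entire sequence.
generated, word = [], "dog"
for _ in range(3):
    word = predict_next(word)
    generated.append(word)
```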

- Marc'Aurelio Ranzato, Y-Lan Boureau, Yann LeCun
- NIPS
- 2007

Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the representation to have certain desirable properties (e.g.…

- Koray Kavukcuoglu, Marc'Aurelio Ranzato, Yann LeCun
- ArXiv
- 2008

Adaptive sparse coding methods learn a possibly overcomplete set of basis functions, such that natural image patches can be reconstructed by linearly combining a small subset of these bases. The applicability of these methods to visual object recognition tasks has been limited because of the prohibitive cost of the optimization algorithms required to…
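The prohibitive cost the abstract refers to is iterative sparse inference. Below, codes for a fixed overcomplete dictionary are inferred by ISTA (iterative soft-thresholding, a standard proximal-gradient method); the idea in this line of work is to train a fast feed-forward predictor to mimic such codes, replacing the loop at recognition time. Sizes, the penalty weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ista(x, D, lam=0.05, steps=200):
    # Minimize 0.5*||x - D z||^2 + lam*||z||_1 by proximal gradient descent.
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
    return z

rng = np.random.default_rng(4)
D = rng.standard_normal((8, 16))               # overcomplete basis: 16 atoms, dim 8
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
z_true = np.zeros(16)
z_true[[2, 9]] = [1.5, -2.0]                   # a 2-sparse ground-truth code
x = D @ z_true

z = ista(x, D)                                 # the slow, iterative inference step
nonzeros = np.count_nonzero(z)                 # soft-thresholding zeroes most entries
```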