Deep Subspace Clustering Networks
TLDR
The key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering.
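The self-expressiveness property says each latent code can be written as a linear combination of the *other* codes, with combinations concentrated within the same subspace. A minimal numpy sketch (the variable names and toy data are illustrative, not the paper's network):

```python
import numpy as np

def self_expressive_reconstruction(Z, C):
    """Reconstruct each latent code as a combination of the other codes:
    Z_hat = C @ Z, with a zero diagonal so no code reconstructs itself."""
    assert np.allclose(np.diag(C), 0), "no self-reconstruction allowed"
    return C @ Z

# Toy latent codes all lying in one 1-D subspace, so each row is an
# exact multiple of another row.
Z = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
C = np.array([[0.0, 0.5, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 1.5, 0.0]])
Z_hat = self_expressive_reconstruction(Z, C)
# reconstruction is exact because all codes share one subspace
```

In the paper, the entries of such a matrix C are the trainable weights of the self-expressive layer, and C is later used to build an affinity for spectral clustering.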
Context-Aware Crowd Counting
TLDR
This paper introduces an end-to-end trainable deep architecture that combines features obtained using multiple receptive field sizes and learns the importance of each such feature at each image location, which yields an algorithm that outperforms state-of-the-art crowd counting methods, especially when perspective effects are strong.
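The core operation, combining multi-scale features with per-location learned importance, can be sketched as a softmax-weighted sum over scales at each pixel (a hedged simplification, not the paper's exact architecture):

```python
import numpy as np

def combine_scales(features, logits):
    """Combine feature maps computed with different receptive field sizes
    using per-location importance weights (softmax over scales at each
    pixel). Shapes: features and logits are (num_scales, H, W)."""
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * features).sum(axis=0)

rng = np.random.RandomState(0)
features = rng.rand(3, 4, 4)              # three scales on a 4x4 map
uniform = combine_scales(features, np.zeros((3, 4, 4)))
# with uniform logits every scale gets weight 1/3, i.e. a plain average
```

Learning the logits per location is what lets the model pick a larger receptive field where perspective makes people appear smaller.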
Unsupervised Domain Adaptation by Domain Invariant Projection
TLDR
This paper learns a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized and demonstrates the effectiveness of the approach on the task of visual object recognition.
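The distribution distance being minimized is an MMD-style statistic between the projected source and target samples. A minimal sketch of the criterion with a linear kernel (the paper optimizes this over a learned projection; the kernel choice here is a simplification):

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    """Squared Maximum Mean Discrepancy with a linear kernel: the squared
    distance between the empirical means of source and target samples."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

Xs = np.array([[0.0, 0.0], [2.0, 2.0]])   # source mean (1, 1)
Xt = np.array([[1.0, 1.0], [1.0, 1.0]])   # target mean (1, 1)
print(linear_mmd2(Xs, Xt))  # -> 0.0: identical means, no discrepancy
```

Shifting the target samples increases the statistic, which is the signal the learned projection is trained to suppress.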
Learning Trajectory Dependencies for Human Motion Prediction
TLDR
A simple feed-forward deep network for motion prediction that accounts for both temporal smoothness and spatial dependencies among human body joints, together with a new graph convolutional network designed to learn the graph connectivity automatically.
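The building block is a graph convolution over body joints in which the adjacency itself is a trainable parameter. A hedged sketch of one such step (variable names are illustrative):

```python
import numpy as np

def graph_conv(H, A, W):
    """One graph-convolution step: H' = A @ H @ W.
    H holds per-joint features (joints x feat_in), A is the joint-to-joint
    adjacency, and W a feature transform. Treating A as a free, trainable
    parameter is what lets the connectivity be learned rather than fixed
    by the kinematic skeleton."""
    return A @ H @ W

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
identity_out = graph_conv(H, np.eye(3), np.eye(2))
# identity adjacency and weights leave the joint features unchanged
```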
Learning to Find Good Correspondences
TLDR
A novel normalization technique, called Context Normalization, is introduced, which allows the network to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences.
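Context Normalization standardizes each feature channel across the whole set of correspondences, so every point "sees" set-level statistics while still being processed independently. A minimal sketch:

```python
import numpy as np

def context_norm(x, eps=1e-8):
    """Normalize each feature channel across the set of correspondences:
    subtract the per-channel mean and divide by the per-channel std.
    x has shape (num_correspondences, num_channels)."""
    mu = x.mean(axis=0, keepdims=True)
    sigma = x.std(axis=0, keepdims=True)
    return (x - mu) / (sigma + eps)

x = np.random.RandomState(0).randn(100, 4) * 3.0 + 5.0
y = context_norm(x)
# every channel now has near-zero mean and unit variance, and permuting
# the correspondences just permutes the output rows identically
```

The permutation equivariance follows because the mean and std are symmetric functions of the set, which is what makes the network order-invariant overall.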
Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices
TLDR
To encode the geometry of the manifold in the mapping, a family of provably positive definite kernels on the Riemannian manifold of SPD matrices is introduced; these kernels are derived from the Gaussian kernel but exploit different metrics on the manifold.
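One member of such a family is a Gaussian kernel built on the log-Euclidean metric, k(X, Y) = exp(-||log(X) - log(Y)||_F^2 / (2σ²)), where log is the matrix logarithm. A sketch (σ and the toy matrices are illustrative):

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition,
    which stays real-valued because the eigenvalues are positive."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian kernel with the log-Euclidean distance between SPD
    matrices: exp(-||log(X) - log(Y)||_F^2 / (2 sigma^2))."""
    d2 = np.sum((spd_log(X) - spd_log(Y)) ** 2)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # an SPD matrix
B = np.eye(2)
```

Mapping through the matrix logarithm respects the curved geometry of the SPD cone instead of treating the matrices as flat Euclidean vectors.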
Learning the Number of Neurons in Deep Networks
TLDR
This paper proposes to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron, and shows that this approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.
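The regularizer is a group lasso whose groups are the weights of individual neurons, so the penalty drives entire neurons, not just single weights, to zero. A minimal sketch (λ is illustrative):

```python
import numpy as np

def group_sparsity_penalty(W, lam=1e-3):
    """Group-lasso regularizer where each group is one neuron's incoming
    weights (one row of W): lam * sum_i ||W[i]||_2. Because the L2 norm
    is applied per row, minimizing it zeroes out whole rows, i.e. prunes
    whole neurons."""
    return float(lam * np.sum(np.linalg.norm(W, axis=1)))

W = np.array([[3.0, 4.0],    # active neuron, ||.||_2 = 5
              [0.0, 0.0]])   # pruned neuron contributes nothing
print(group_sparsity_penalty(W, lam=1.0))  # -> 5.0
```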
Convex Optimization for Deformable Surface 3-D Tracking
TLDR
This work represents surfaces as triangulated meshes and, assuming the pose in the first frame to be known, disallows large changes of edge orientation between consecutive frames, which is a generally applicable constraint when tracking surfaces in a 25 frames-per-second video sequence.
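The per-edge constraint amounts to bounding the angle between an edge's direction in consecutive frames. A hedged sketch of checking that bound (the 10-degree threshold is illustrative; at 25 fps inter-frame edge rotations are small, which is what makes the constraint broadly applicable):

```python
import numpy as np

def edge_orientation_ok(e_prev, e_curr, max_angle_deg=10.0):
    """Check that a mesh edge's 3-D direction changes by at most
    max_angle_deg between two consecutive frames."""
    cos = np.dot(e_prev, e_curr) / (
        np.linalg.norm(e_prev) * np.linalg.norm(e_curr))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return bool(angle <= max_angle_deg)

e0 = np.array([1.0, 0.0, 0.0])
small = np.array([1.0, 0.05, 0.0])   # ~2.9 degrees away: allowed
large = np.array([0.0, 1.0, 0.0])    # 90 degrees away: rejected
```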
Factorized Latent Spaces with Structured Sparsity
TLDR
This paper shows that structured sparsity allows us to address the multi-view learning problem by alternately solving two convex optimization problems, and that the resulting factorized latent spaces generalize over existing approaches in that they allow latent dimensions to be shared between any subset of the views rather than only between all views.
Linear Local Models for Monocular Reconstruction of Deformable Surfaces
  • M. Salzmann, P. Fua
  • Mathematics, Computer Science
  • IEEE Transactions on Pattern Analysis and Machine…
  • 1 May 2011
TLDR
This paper replaces global models with linear local models for surface patches, which can be assembled to represent arbitrary surface shapes as long as they are made of the same material.