Publications
Geodesic flow kernel for unsupervised domain adaptation
TLDR
This paper proposes a new kernel-based method that takes advantage of low-dimensional structures that are intrinsic to many vision datasets, and introduces a metric that reliably measures the adaptability between a pair of source and target domains.
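As a rough illustration of the subspace geometry this line of work builds on, the sketch below computes the principal angles between source and target PCA subspaces; both the geodesic flow construction and domain-adaptability measures are derived from this geometry. The helper names and the toy data are mine, not the paper's.

```python
import numpy as np

def pca_basis(X, dim):
    # Columns of the returned matrix span the top-`dim` principal subspace.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T                      # (features, dim)

def principal_angles(Bs, Bt):
    # Singular values of Bs^T Bt are the cosines of the principal angles.
    s = np.linalg.svd(Bs.T @ Bt, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Illustrative usage with random stand-in source/target data.
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(200, 50)), rng.normal(size=(150, 50))
angles = principal_angles(pca_basis(Xs, 10), pca_basis(Xt, 10))
print("principal angles:", np.round(angles, 3))
```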
Marginalized Denoising Autoencoders for Domain Adaptation
TLDR
The mSDA approach marginalizes out the noise and thus does not require stochastic gradient descent or other optimization algorithms to learn its parameters; in fact, they are computed in closed form, which speeds up SDAs by two orders of magnitude.
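A minimal NumPy sketch of one marginalized denoising layer following the published closed-form solution; the variable names and the stacking loop are illustrative.

```python
import numpy as np

def mda_layer(X, p=0.5):
    """One marginalized denoising autoencoder layer, solved in closed form.

    X: (features, n_samples); p: feature corruption probability.
    Returns the learned mapping W and the tanh hidden representation.
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])                # append a bias row
    q = np.concatenate([np.full(d, 1.0 - p), [1.0]])    # bias is never corrupted
    S = Xb @ Xb.T
    Q = S * np.outer(q, q)
    np.fill_diagonal(Q, q * np.diag(S))
    P = S[:d, :] * q[np.newaxis, :]
    W = P @ np.linalg.inv(Q + 1e-5 * np.eye(d + 1))     # small ridge for stability
    return W, np.tanh(W @ Xb)

# Stacking: feed each layer's hidden output into the next layer.
rng = np.random.default_rng(0)
H = rng.normal(size=(100, 500))
for _ in range(3):
    _, H = mda_layer(H, p=0.5)
```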
Shallow Parsing with Conditional Random Fields
TLDR
This work shows how to train a conditional random field to achieve performance as good as any reported base noun-phrase chunking method on the CoNLL task, and better than any reported single model.
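The sketch below shows how a linear-chain CRF for BIO noun-phrase chunking might be set up using the third-party sklearn-crfsuite package; this is a stand-in, not the paper's original implementation, and the features and toy data are purely illustrative.

```python
import sklearn_crfsuite

def word_features(sent, i):
    # Minimal per-token features: the word, its POS tag, and neighboring tags.
    word, pos = sent[i]
    return {
        "word.lower": word.lower(),
        "pos": pos,
        "prev_pos": sent[i - 1][1] if i > 0 else "BOS",
        "next_pos": sent[i + 1][1] if i < len(sent) - 1 else "EOS",
    }

# One toy training sentence: (word, POS) pairs with BIO noun-phrase labels.
sent = [("He", "PRP"), ("reckons", "VBZ"), ("the", "DT"), ("deficit", "NN")]
labels = ["B-NP", "O", "B-NP", "I-NP"]

X = [[word_features(sent, i) for i in range(len(sent))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```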
Synthesized Classifiers for Zero-Shot Learning
TLDR
This work introduces a set of "phantom" object classes whose coordinates live in both the semantic space and the model space and demonstrates superior accuracy of this approach over the state of the art on four benchmark datasets for zero-shot learning.
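A small sketch of the synthesis step as described above: each real class's classifier is a semantic-similarity-weighted combination of phantom classifiers. The Gaussian similarity, dimensions, and names below are assumptions; the paper learns the phantom coordinates and classifiers jointly.

```python
import numpy as np

def synthesize_classifiers(A, B, V, sigma=1.0):
    """Synthesize per-class classifier weights from phantom classifiers.

    A: (n_classes, sem_dim) semantic embeddings of real classes.
    B: (n_phantom, sem_dim) phantom-class coordinates in the semantic space.
    V: (n_phantom, feat_dim) phantom classifiers in the model (feature) space.
    """
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # squared distances
    S = np.exp(-d2 / (2 * sigma ** 2))
    S /= S.sum(axis=1, keepdims=True)                      # normalized weights
    return S @ V                                           # (n_classes, feat_dim)

# Unseen classes only need semantic embeddings: reuse the same phantoms.
rng = np.random.default_rng(0)
B, V = rng.normal(size=(10, 300)), rng.normal(size=(10, 2048))
W_unseen = synthesize_classifiers(rng.normal(size=(5, 300)), B, V)
```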
Video Summarization with Long Short-Term Memory
TLDR
Long Short-Term Memory (LSTM), a special type of recurrent neural network, is used to model the variable-range dependencies entailed in the task of video summarization, and summarization is further improved by reducing the discrepancies in statistical properties across annotated video datasets.
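A minimal PyTorch sketch of the kind of bidirectional LSTM frame-importance scorer this approach builds on; the feature dimension, hidden size, and sigmoid output head are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    """Bidirectional LSTM that predicts a per-frame importance score."""

    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, frames):                 # frames: (batch, time, feat_dim)
        h, _ = self.lstm(frames)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # (batch, time) scores

scores = FrameScorer()(torch.randn(2, 120, 1024))         # two 120-frame clips
```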
Actor-Attention-Critic for Multi-Agent Reinforcement Learning
TLDR
This work presents an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism to select relevant information for each agent at every timestep; this enables more effective and scalable learning in complex multi-agent environments compared to recent approaches.
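A condensed PyTorch sketch of the centralized attention-critic idea: each agent's value attends over the other agents' encoded observation-action pairs. Layer sizes, the single attention head, and the diagonal masking are my simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    """Centralized critic where each agent attends over the other agents."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.encode = nn.Linear(obs_dim + act_dim, hidden)
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        self.q_head = nn.Linear(2 * hidden, 1)

    def forward(self, obs, act):               # (batch, n_agents, ...)
        e = torch.relu(self.encode(torch.cat([obs, act], dim=-1)))
        q, k, v = self.query(e), self.key(e), self.value(e)
        scores = q @ k.transpose(1, 2) / k.shape[-1] ** 0.5
        # Mask the diagonal so each agent attends only to the other agents.
        scores = scores.masked_fill(torch.eye(obs.shape[1], dtype=torch.bool),
                                    float("-inf"))
        ctx = torch.softmax(scores, dim=-1) @ v
        return self.q_head(torch.cat([e, ctx], dim=-1)).squeeze(-1)  # per-agent Q

q_vals = AttentionCritic(obs_dim=10, act_dim=4)(torch.randn(8, 3, 10),
                                                torch.randn(8, 3, 4))
```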
Learning a kernel matrix for nonlinear dimensionality reduction
TLDR
This work investigates how to learn a kernel matrix for high dimensional data that lies on or near a low dimensional manifold and shows how to discover a mapping that "unfolds" the underlying manifold from which the data was sampled.
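The kernel matrix here is learned by a semidefinite program that maximizes variance while preserving local distances among neighbors. The sketch below writes that program with cvxpy as a stand-in solver interface; the neighborhood size and data are placeholders.

```python
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

X = np.random.default_rng(0).normal(size=(30, 3))
G = kneighbors_graph(X, n_neighbors=4).toarray()
D2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)

n = X.shape[0]
K = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(K) == 0]                 # center the embedding
for i in range(n):
    for j in range(n):
        if G[i, j]:                            # preserve local distances exactly
            constraints.append(K[i, i] + K[j, j] - 2 * K[i, j] == D2[i, j])

# Maximizing the trace "unfolds" the manifold subject to the local constraints.
cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()

# Low-dimensional coordinates from the top eigenvectors of the learned kernel.
w, V = np.linalg.eigh(K.value)
Y = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))
```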
An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild
TLDR
It is shown that there is a large gap between the performance of existing approaches and the performance limit of GZSL, suggesting that improving the quality of class semantic embeddings is vital to improving ZSL.
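One common way to probe this seen/unseen trade-off is a calibrated-stacking-style score adjustment that down-weights seen-class scores at prediction time; a tiny sketch, with argument names of my choosing:

```python
import numpy as np

def calibrated_predict(scores, seen_mask, gamma):
    """Subtract gamma from seen-class scores before taking the argmax.

    scores: (n_samples, n_classes) compatibility scores over all classes.
    seen_mask: (n_classes,) with 1.0 for seen classes, 0.0 for unseen.
    """
    return (scores - gamma * seen_mask).argmax(axis=1)

# Sweeping gamma trades seen-class accuracy against unseen-class accuracy;
# the area under the resulting seen-vs-unseen accuracy curve summarizes how
# well a method covers all operating points.
preds = calibrated_predict(np.random.rand(4, 6),
                           np.array([1, 1, 1, 0, 0, 0.0]), gamma=0.5)
```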
Deformable Spatial Pyramid Matching for Fast Dense Correspondences
TLDR
This work introduces a fast deformable spatial pyramid (DSP) matching algorithm for computing dense pixel correspondences that simultaneously regularizes match consistency at multiple spatial extents, ranging from the entire image to coarse grid cells to every single pixel.
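As a very rough, one-dimensional illustration of the kind of coupled energy involved, the sketch below lets each grid cell pay a descriptor-matching data cost plus a penalty for deviating from a shared root offset, found by brute force. Real DSP uses 2-D offsets over image, grid, and pixel levels with an efficient optimizer; the data and names here are made up.

```python
import numpy as np

def dsp_match(descA, descB, offsets, alpha=0.5):
    """descA: (n_cells, d) descriptors of A's cells at positions 0..n_cells-1.
    descB: (n_positions, d) descriptors of B along the same axis."""
    n_cells = descA.shape[0]
    best = (np.inf, None)
    for t_root in offsets:                       # root node: whole-image offset
        total = 0.0
        for i in range(n_cells):                 # leaf nodes: per-cell offsets
            costs = [np.abs(descA[i] - descB[i + t]).sum() + alpha * abs(t - t_root)
                     for t in offsets if 0 <= i + t < len(descB)]
            total += min(costs)
        if total < best[0]:
            best = (total, t_root)
    return best                                   # (energy, best root offset)

rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 16)), rng.normal(size=(12, 16))
print(dsp_match(A, B, offsets=range(0, 5)))
```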
Few-Shot Learning via Embedding Adaptation With Set-to-Set Functions
TLDR
This paper proposes a novel approach to adapt the instance embeddings to the target classification task with a set-to-set function, yielding embeddings that are both task-specific and discriminative.
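A compact sketch of the adapt-then-classify idea: the support embeddings are transformed jointly by a set-to-set function (here a single Transformer encoder layer, one possible instantiation), and queries are assigned to the nearest adapted class prototype. Dimensions and helper names are illustrative.

```python
import torch
import torch.nn as nn

class SetToSetAdapter(nn.Module):
    """Adapt support embeddings jointly via self-attention over the set."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                  batch_first=True)

    def forward(self, support):                  # (n_way * n_shot, dim)
        return self.encoder(support.unsqueeze(0)).squeeze(0)

def classify(query, adapted_support, support_labels, n_way):
    # Nearest-prototype classification in the adapted embedding space.
    protos = torch.stack([adapted_support[support_labels == c].mean(0)
                          for c in range(n_way)])
    return torch.cdist(query, protos).argmin(dim=1)

dim, n_way, n_shot = 64, 5, 1
support = torch.randn(n_way * n_shot, dim)
labels = torch.arange(n_way).repeat_interleave(n_shot)
preds = classify(torch.randn(10, dim), SetToSetAdapter(dim)(support), labels, n_way)
```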