A Closer Look at Few-shot Classification
TLDR
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical with deeper backbones, and that a baseline method with standard fine-tuning compares favorably against other state-of-the-art few-shot learning algorithms.
On Convergence and Stability of GANs
TLDR
This work proposes studying GAN training dynamics as regret minimization, in contrast to the popular view of consistently minimizing a divergence between the real and generated distributions, and shows that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
Self-Monitoring Navigation Agent via Auxiliary Progress Estimation
TLDR
A self-monitoring agent with two complementary components: (1) a visual-textual co-grounding module that locates the instructions completed in the past, the instruction required for the next action, and the next moving direction from surrounding images, and (2) a progress monitor that ensures the grounded instruction correctly reflects the navigation progress.
Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data
TLDR
This work builds on the popular ODIN method, proposing two strategies, a decomposed confidence score and a modified input pre-processing method, that free it from the need to tune on OoD data while improving its OoD detection performance.
Learning to cluster in order to Transfer across domains and tasks
TLDR
A novel method for transfer learning across domains and tasks, formulated as a problem of learning to cluster, with state-of-the-art results on the challenging cross-task problem, applied to Omniglot and ImageNet.
Temporal Attentive Alignment for Large-Scale Video Domain Adaptation
TLDR
This work proposes the Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets.
Multi-class Classification without Multi-class Labels
This work presents a new strategy for multi-class classification that requires no class-specific labels, instead leveraging pairwise similarity between examples, which is a weaker form of …
Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines
TLDR
The results provide an understanding of the relative difficulty of the scenarios and show that simple baselines (Adagrad, L2 regularization, and naive rehearsal strategies) can, surprisingly, achieve performance similar to current mainstream methods.
The Regretful Agent: Heuristic-Aided Navigation Through Progress Estimation
TLDR
This paper proposes using a progress monitor developed in prior work as a learnable heuristic for search, along with two modules incorporated into an end-to-end architecture that significantly outperforms current state-of-the-art methods using greedy action selection.
How to Train Your DRAGAN
TLDR
This paper introduces regret minimization as a technique for reaching equilibrium in games, uses it to explain the success of simultaneous gradient descent in GANs, and develops DRAGAN, an algorithm that is fast, simple to implement, and achieves competitive performance in a stable fashion.