Corpus ID: 57573751

Transductive Zero-Shot Learning with Visual Structure Constraint

@inproceedings{Wan2019TransductiveZL,
  title={Transductive Zero-Shot Learning with Visual Structure Constraint},
  author={Ziyu Wan and Dongdong Chen and Yan Li and Xingguang Yan and Junge Zhang and Yizhou Yu and Jing Liao},
  booktitle={NeurIPS},
  year={2019}
}
Zero-Shot Learning (ZSL) aims to recognize objects of unseen classes, whose instances may not have been seen during training. [...] Key Method: Based on the observation that the visual features of test instances can be separated into different clusters, we propose a visual structure constraint on class centers for transductive ZSL, to improve the generality of the projection function (i.e., to alleviate the above domain shift problem).
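To make the key method concrete, below is a minimal sketch of one way to realize such a visual structure constraint, assuming k-means cluster centers of the unlabeled test features as surrogates for the unknown unseen class centers and a one-to-one bipartite matching against the projected class centers; all function and variable names are illustrative, not the authors' code.

```python
# Minimal sketch of a bipartite-matching visual structure constraint.
# Assumes k-means cluster centers approximate the true unseen class centers.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def visual_structure_loss(projected_centers, unseen_features):
    """projected_centers: (K, d) unseen class centers obtained by projecting
    semantic embeddings into the visual feature space.
    unseen_features: (m, d) visual features of the unlabeled test instances.
    """
    K = projected_centers.shape[0]
    # Step 1: cluster the unlabeled test features; their cluster centers
    # serve as estimates of the (unknown) true unseen class centers.
    kmeans = KMeans(n_clusters=K, n_init=10).fit(unseen_features)
    cluster_centers = kmeans.cluster_centers_                  # (K, d)

    # Step 2: pairwise Euclidean distances between the two sets of centers.
    cost = np.linalg.norm(
        projected_centers[:, None, :] - cluster_centers[None, :, :], axis=-1)

    # Step 3: Hungarian matching; the summed cost of the optimal one-to-one
    # assignment is the structure term added to the projection loss.
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()
```

The same skeleton accommodates softer alignments (e.g., a Chamfer or Wasserstein distance between the two center sets) by swapping out Step 3.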
Citations

Visual Structure Constraint for Transductive Zero-Shot Learning in the Wild
TLDR: This work proposes a new visual structure constraint on class centers for transductive ZSL to improve the generality of the projection function, along with two new training strategies to handle data in the wild, where the test dataset may contain many unrelated images.
Attribute-Induced Bias Eliminating for Transductive Zero-Shot Learning
TLDR: A novel Attribute-Induced Bias Eliminating (AIBE) module for transductive ZSL is proposed, which reduces the semantic bias between seen and unseen categories; an unseen semantic alignment constraint is further designed to align the visual and semantic spaces in an unsupervised manner.
Deep transductive network for generalized zero shot learning
TLDR: A novel explainable Deep Transductive Network (DTN) is proposed for Generalized ZSL (GZSL), trained on both labeled seen data and unlabeled unseen data and subsequently tested on both seen and unseen classes.
Transductive Zero-Shot Learning by Decoupled Feature Generation
TLDR: This paper trains an unconditional generator to capture the complexity of the distribution of visual data alone, and subsequently pairs it with a conditional generator devoted to enriching that prior knowledge with the semantic content of the class embeddings, demonstrating superiority over related state-of-the-art paradigms.
Transductive Zero-Shot Learning using Cross-Modal CycleGAN
TLDR: A new model for transductive ZSL based upon CycleGAN jointly projects images onto their seen class labels with a supervised objective and aligns unseen class labels and visual exemplars with adversarial and cycle-consistency objectives; the Cross-Modal CycleGAN model (CM-GAN) obtains state-of-the-art results on the ImageNet T-ZSL task.
Structure-Aware Feature Generation for Zero-Shot Learning
TLDR: This paper introduces a novel structure-aware feature generation scheme, termed SA-GAN, to explicitly account for the topological structure in learning both the latent space and the generative networks, and introduces a constraint loss to preserve the initial geometric structure when learning a discriminative latent space.
Domain-aware Stacked AutoEncoders for zero-shot learning
TLDR: A novel model named Domain-aware Stacked AutoEncoders (DaSAE) is proposed, consisting of two interactive stacked auto-encoders that learn domain-aware projections to adapt the source and target domains respectively.
Zero-VAE-GAN: Generating Unseen Features for Generalized and Transductive Zero-Shot Learning
R. Gao, X. Hou, +5 authors, L. Shao. IEEE Transactions on Image Processing, 2020.
TLDR: A joint generative model coupling a variational autoencoder and a generative adversarial network, called Zero-VAE-GAN, is proposed to generate high-quality unseen features, and an adversarial categorization network is incorporated into the joint framework to enhance class-level discriminability.
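As a rough illustration of the feature-generation paradigm this entry shares with several of the works above, the sketch below shows a bare-bones conditional generator that synthesizes visual features for unseen classes from their semantic embeddings; the architecture and names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative conditional feature generator (not the Zero-VAE-GAN code):
# map a class embedding plus noise to a synthetic visual feature, so that an
# ordinary classifier can then be trained on synthetic unseen-class features.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, sem_dim, noise_dim, feat_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, feat_dim),
            nn.ReLU(),  # CNN features (e.g., ResNet pool5) are non-negative
        )

    def forward(self, semantics, noise):
        # Condition on the class embedding by simple concatenation.
        return self.net(torch.cat([semantics, noise], dim=1))

# Usage sketch: z = torch.randn(batch, noise_dim)
# fake_features = generator(unseen_class_embeddings, z)
```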
Semantic-Guided Multi-Attention Localization for Zero-Shot Learning
TLDR: A semantic-guided multi-attention localization model is proposed that automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations, improving the state-of-the-art results by a considerable margin.
Enhancing Generalized Zero-Shot Learning via Adversarial Visual-Semantic Interaction
TLDR: This work proposes a new two-level joint maximization idea that augments the generative network with an inference network during training, helping the model capture the several modes of the data and generate features that better represent the underlying data distribution.

References

Showing 1-10 of 60 references
Transductive Unbiased Embedding for Zero-Shot Learning
TLDR: This paper proposes a straightforward yet effective method named Quasi-Fully Supervised Learning (QFSL) to alleviate the bias problem in zero-shot learning, outperforming existing state-of-the-art approaches by a large margin.
Synthesized Classifiers for Zero-Shot Learning
TLDR: This work introduces a set of "phantom" object classes whose coordinates live in both the semantic space and the model space, and demonstrates superior accuracy of this approach over the state of the art on four benchmark datasets for zero-shot learning.
Zero-Shot Learning via Semantic Similarity Embedding
In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided. The goal during test time is to accurately predict the class label of an unseen target domain instance.
Semantic Autoencoder for Zero-Shot Learning
TLDR: This work presents a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE), which significantly outperforms existing ZSL models with the additional benefit of lower computational cost, and beats the state of the art when the SAE is applied to the supervised clustering problem.
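Since the SAE's appeal is its low computational cost, a minimal sketch may help: assuming the standard objective min_W ||X - W^T S||^2 + lambda ||W X - S||^2 (X: visual features, S: semantic embeddings), the optimum reduces to a Sylvester equation; the helper name below is ours, not the authors'.

```python
# Minimal sketch of the SAE closed-form fit via a Sylvester equation,
# assuming the objective min_W ||X - W^T S||^2 + lam * ||W X - S||^2.
import numpy as np
from scipy.linalg import solve_sylvester

def fit_sae(X, S, lam=0.2):
    """X: (d, n) visual features; S: (k, n) semantic embeddings.
    Solves A W + W B = C with A = S S^T, B = lam * X X^T,
    C = (1 + lam) * S X^T, giving the (k, d) encoder W;
    the decoder is simply W^T.
    """
    A = S @ S.T
    B = lam * (X @ X.T)
    C = (1.0 + lam) * (S @ X.T)
    return solve_sylvester(A, B, C)
```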
Generalized Zero-Shot Learning with Deep Calibration Network
TLDR: This paper proposes a novel Deep Calibration Network (DCN) approach towards the generalized zero-shot learning paradigm, which enables simultaneous calibration of deep networks on the confidence of source classes and the uncertainty of target classes.
Zero-Shot Learning via Joint Latent Similarity Embedding
TLDR: A joint discriminative learning framework based on dictionary learning is developed to jointly learn the parameters of the model for both domains, ultimately leading to a class-independent classifier that shows a 4.90% improvement over the state of the art in accuracy, averaged across four benchmark datasets.
Zero-Shot Recognition Using Dual Visual-Semantic Mapping Paths
TLDR: A novel framework for zero-shot recognition is proposed that can not only apply prior semantic knowledge to infer the underlying semantic manifold in the image feature space, but also generate an optimized semantic embedding space, which enhances the transfer ability of the visual-semantic mapping to unseen classes.
Transductive Zero-Shot Recognition via Shared Model Space Learning
TLDR: The results demonstrate that the proposed SMS can significantly outperform related state-of-the-art approaches, which validates its efficacy for the ZSR task.
Transductive Multi-view Embedding for Zero-Shot Recognition and Annotation
TLDR: This paper proposes a novel framework, transductive multi-view embedding, that rectifies the projection shift between the auxiliary and target domains, exploits the complementarity of multiple semantic representations, achieves state-of-the-art recognition results on image and video benchmark datasets, and enables novel cross-view annotation tasks.
Semantic-Guided Multi-Attention Localization for Zero-Shot Learning
TLDR: A semantic-guided multi-attention localization model is proposed that automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations, improving the state-of-the-art results by a considerable margin.