Corpus ID: 235755164

Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation

@article{Zhao2021NovelVC,
  title={Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation},
  author={Bingchen Zhao and K. Han},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.03358}
}
In this paper, we tackle the problem of novel visual category discovery, i.e., grouping unlabelled images from new classes into different semantic partitions by leveraging a labelled dataset that contains images from other different but relevant categories. This is a more realistic and challenging setting than conventional semi-supervised learning. We propose a two-branch learning framework for this problem, with one branch focusing on local part-level information and the other branch focusing… 
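The ranking-statistics idea behind the framework can be sketched in a few lines: two unlabelled samples are pseudo-labelled as belonging to the same class when the indices of their top-k feature dimensions coincide. This is a minimal illustration, not the paper's implementation; the helper name and the value of k are assumptions.

```python
import numpy as np

def rank_stats_pseudo_label(f1, f2, k=3):
    """Pairwise pseudo-label via ranking statistics: treat two samples
    as the same class iff the index sets of their top-k feature
    dimensions agree (hypothetical helper, toy feature size)."""
    top1 = set(np.argsort(-f1)[:k])
    top2 = set(np.argsort(-f2)[:k])
    return 1 if top1 == top2 else 0

a = np.array([0.9, 0.1, 0.8, 0.7, 0.0])
b = np.array([0.8, 0.0, 0.9, 0.6, 0.1])
print(rank_stats_pseudo_label(a, b))  # same top-3 dims {0, 2, 3} -> 1
```

In the paper's dual setting this comparison is applied at both a global level and a local part level; the sketch above shows only the core pairwise test.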

A Simple Parametric Classification Baseline for Generalized Category Discovery

It is concluded that the less discriminative representations and unreliable pseudo-labelling strategy are key factors that make parametric classifiers lag behind non-parametric ones.

Modeling Inter-Class and Intra-Class Constraints in Novel Class Discovery

This paper proposes to model both inter-class and intra-class constraints in NCD based on the symmetric Kullback-Leibler divergence (sKLD), and presents an intra-class sKLD constraint that explicitly constrains the relationship between samples and their augmentations while also stabilising the training process.
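The sKLD at the heart of both constraints is simply the symmetrised KL divergence between two class-probability distributions; a minimal sketch (the function name and epsilon smoothing are my own, not the paper's):

```python
import numpy as np

def skld(p, q, eps=1e-8):
    """Symmetric Kullback-Leibler divergence: KL(p||q) + KL(q||p).
    eps avoids log(0) on sparse distributions (assumed smoothing)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

Symmetry is the point: unlike plain KL, the constraint penalises disagreement between a sample and its augmentation in both directions.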

Open-world Contrastive Learning

This paper introduces a new learning framework, open-world contrastive learning (OpenCon), which tackles the challenges of learning compact representations for both known and novel classes, and facilitates novelty discovery along the way.

XCon: Learning with Experts for Fine-grained Category Discovery

A novel method called Expert-Contrastive Learning (XCon) is presented to help the model mine useful information from the images by partitioning the dataset into sub-datasets using k-means clustering and then performing contrastive learning on each of the sub-datasets to learn fine-grained discriminative features.
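The partitioning step can be sketched with a tiny NumPy k-means over frozen features; the helper name and settings are assumptions, and the subsequent per-partition contrastive stage is omitted:

```python
import numpy as np

def partition_dataset(features, k=2, iters=10, seed=0):
    """Split a dataset into k sub-datasets by k-means on frozen
    features, as a first step before per-partition contrastive
    learning (minimal NumPy k-means; hypothetical helper)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        # recompute centers from current assignments
        for j in range(k):
            if np.any(assign == j):
                centers[j] = features[assign == j].mean(axis=0)
    return [np.where(assign == j)[0] for j in range(k)]
```

Each returned index set would then define one "expert" sub-dataset on which contrastive pairs are drawn.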

Spacing Loss for Discovering Novel Categories

This work characterizes existing NCD approaches as single-stage or two-stage methods, based on whether they require access to labeled and unlabeled data together while discovering new classes, and devises a simple yet powerful loss function that enforces separability in the latent space using cues from multi-dimensional scaling, referred to as Spacing Loss.

Learning to Discover and Detect Objects

A two-stage object detection network, Region-based NCDL, uses a region proposal network to localize object candidates and is trained to classify each candidate either as one of the known classes seen in the source dataset or as one of an extended set of novel classes, with a long-tail distribution constraint on the class assignments.

Grow and Merge: A Unified Framework for Continuous Categories Discovery

A framework of Grow and Merge that works by alternating between a growing phase and a merging phase that increases the diversity of features through a continuous self-supervised learning for effective category mining and merges the grown model with a static one to ensure satisfying performance for known classes is developed.

Discovering Novel Categories in SAR Images in Open Set Conditions

A multi-stage approach is proposed to effectively pick out images belonging to new classes in another unlabelled dataset and then cluster them into the correct number of novel categories, for Synthetic Aperture Radar (SAR) images under open-set conditions.

A Closer Look at Novel Class Discovery from the Labeled Set

This paper proposes and substantiates the hypothesis that NCD could benefit more from a labeled set with a large degree of semantic similarity to the unlabeled set, and introduces a mathematical definition for quantifying the semantic similarity between labeled and unlabeled sets.

MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation

  • Lingtong Kong, J. Yang
  • Computer Science
    IEEE Transactions on Circuits and Systems for Video Technology
  • 2022
This work proposes a novel mutual distillation framework to transfer reliable knowledge back and forth between the teacher and student networks for alternate improvement and achieves state-of-the-art real-time accuracy and generalization ability on challenging benchmarks.
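A confidence-masked distillation loss captures the "reliable knowledge" idea in miniature; this is a hedged sketch, not MDFlow's actual loss (the threshold, the L1 penalty, and the function name are my assumptions):

```python
import numpy as np

def masked_distill_loss(student_pred, teacher_pred, conf, tau=0.5):
    """Distill only where the teacher is reliable: mask the per-pixel
    L1 difference by a confidence map thresholded at tau, then average
    over the retained pixels (hypothetical helper)."""
    mask = (conf > tau).astype(float)
    denom = mask.sum() + 1e-8  # avoid division by zero when nothing passes
    return float((mask * np.abs(student_pred - teacher_pred)).sum() / denom)
```

In a mutual scheme, the same masked loss would be applied in both directions, with teacher and student roles alternating between training phases.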

References

SHOWING 1-10 OF 60 REFERENCES

ImageNet: A large-scale hierarchical image database

A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.

Open-World Semi-Supervised Learning

Despite solving the harder task, ORCA outperforms semi-supervised methods on seen classes, as well as novel class discovery methods on novel classes, achieving 7% and 151% improvements on seen and novel classes respectively in the ImageNet dataset.

SEED: Self-supervised Distillation For Visual Representation

This paper proposes a new learning paradigm, named SElf-SupErvised Distillation (SEED), where a larger network is leveraged to transfer its representational knowledge into a smaller architecture in a self-supervised fashion, and shows that SEED dramatically boosts the performance of small networks on downstream tasks.

Momentum Contrast for Unsupervised Visual Representation Learning

We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a…
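The two mechanisms named in the abstract, the dictionary-as-queue and the momentum-updated key encoder, can be sketched as follows (toy sizes; MoCo applies the momentum update per-parameter across a deep network):

```python
from collections import deque
import numpy as np

def momentum_update(theta_k, theta_q, m=0.999):
    """MoCo-style momentum update: the key encoder's parameters track
    the query encoder's slowly, theta_k <- m*theta_k + (1-m)*theta_q."""
    return m * theta_k + (1.0 - m) * theta_q

class MoCoQueue:
    """Minimal sketch of the dictionary-as-queue: new key embeddings
    are enqueued each step and the oldest fall off, keeping a fixed-size
    pool of negatives decoupled from the batch size (toy capacity)."""
    def __init__(self, size=4):
        self.keys = deque(maxlen=size)

    def enqueue(self, batch_keys):
        for k in batch_keys:
            self.keys.append(np.asarray(k, dtype=float))

    def negatives(self):
        return np.stack(list(self.keys))
```

The queue lets the dictionary be much larger than a mini-batch, while the momentum update keeps the keys in it encoded consistently.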

Learning Deep Features for Discriminative Localization

In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability…
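The localization ability comes from class activation mapping: the final convolutional feature maps are weighted by the classifier weights of one class and summed over channels. A minimal sketch with assumed shapes (C channels of H x W maps, a K x C classifier weight matrix):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class activation map (CAM): weight each channel of the final
    conv feature maps (C, H, W) by that class's row of the classifier
    weights (K, C), sum over channels, and normalise to [0, 1]."""
    w = fc_weights[class_idx]                     # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)   # contract C -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

High values in the returned map indicate the spatial regions that drove the prediction for that class.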

3D Object Representations for Fine-Grained Categorization

This paper lifts two state-of-the-art 2D object representations to 3D, on the level of both local feature appearance and location, and shows their efficacy for estimating 3D geometry from images via ultra-wide baseline matching and 3D reconstruction.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Caltech-UCSD Birds 200

Caltech-UCSD Birds 200 (CUB-200) is a challenging image dataset annotated with 200 bird species. It was created to enable the study of subordinate categorization, which is not possible with other…

Automatically Discovering and Learning New Visual Categories with Ranking Statistics

This work suggests that the common approach of bootstrapping an image representation using the labeled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data.

Learning to Discover Novel Visual Categories via Deep Transfer Clustering

The problem of discovering novel object categories in an image collection is considered, and Deep Embedded Clustering is extended to a transfer learning setting, and the algorithm is improved by introducing a representation bottleneck, temporal ensembling, and consistency.
...