Giant Panda Identification

  • Le Wang, Rizhi Ding, Yuanhao Zhai, Qilin Zhang, Wei Tang, Nanning Zheng, Gang Hua
  • Published 4 February 2021
  • Computer Science
  • IEEE Transactions on Image Processing
The lack of automatic tools to identify giant pandas makes it hard to keep track of and manage them in wildlife conservation missions. This paper introduces a new Giant Panda Identification (GPID) task, which aims to identify each individual panda from an image. Though related to the human re-identification and animal classification problems, GPID is extraordinarily challenging due to subtle visual differences between pandas and cluttered global information. In this paper, we… 

Fine-Grained Butterfly Recognition via Peer Learning Network with Distribution-Aware Penalty Mechanism

A peer learning network with a distribution-aware penalty mechanism is proposed to learn discriminative feature representations and to mitigate the bias and variance problems of the long-tailed distribution, addressing fine-grained species recognition.

Fine-Grained Giant Panda Identification

The Feature-Fusion Convolutional Neural Network with Patch Detector (FFCNN-PD) algorithm is proposed, which exploits the discriminative local patches and builds a hierarchical representation generated by fusing both global and local features.

Giant Panda Face Recognition Using Small Dataset

A panda face recognition algorithm, which includes alignment, large feature set extraction, and matching, is proposed and evaluated on a dataset consisting of 163 images, and the experimental results are encouraging.

Feature Pyramid Networks for Object Detection

This paper exploits the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost and achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles.

HyperNet: Towards Accurate Region Proposal Generation and Joint Object Detection

This paper presents a deep hierarchical network, namely HyperNet, for handling region proposal generation and object detection jointly, primarily based on an elaborately designed Hyper Feature which aggregates hierarchical feature maps first and then compresses them into a uniform space.

Holistically-Nested Edge Detection

  • Saining Xie, Z. Tu
  • Computer Science
  • 2015 IEEE International Conference on Computer Vision (ICCV)
  • 2015
HED performs image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets, and automatically learns rich hierarchical representations that are important in order to resolve the challenging ambiguity in edge and object boundary detection.

Multiple Granularity Descriptors for Fine-Grained Categorization

This work leverages the fact that a subordinate-level object already has other labels in its ontology tree to train a series of CNN-based classifiers, each specialized at one grain level, which outperforms state-of-the-art algorithms, including those requiring strong labels.

Scalable Person Re-identification: A Benchmark

As a minor contribution, inspired by recent advances in large-scale image search, an unsupervised Bag-of-Words descriptor is proposed that yields competitive accuracy on the VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.

AutoBD: Automated Bi-Level Description for Scalable Fine-Grained Visual Categorization

A robust and discriminative visual description named Automated Bi-level Description (AutoBD) is proposed, which only requires the image-level labels of training images and does not need any annotations for testing images, making AutoBD suitable for large-scale visual categorization.

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.