B-CNN: Branch Convolutional Neural Network for Hierarchical Classification
@article{Zhu2017BCNNBC, title={B-CNN: Branch Convolutional Neural Network for Hierarchical Classification}, author={Xinqi Zhu and Michael Bain}, journal={ArXiv}, year={2017}, volume={abs/1709.09890} }
Convolutional Neural Network (CNN) image classifiers are traditionally designed as a stack of sequential convolutional layers with a single output layer. This paper proposes the Branch Convolutional Neural Network (B-CNN): a B-CNN model outputs multiple predictions, ordered from coarse to fine, along the concatenated convolutional layers, corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output.
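The branch idea lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering (not the authors' released code) of a network with three output heads attached at increasing depths; the layer widths and the 2/7/20-way label hierarchy are illustrative assumptions only.

```python
# Minimal sketch of a B-CNN-style network: branch output layers are attached
# at increasing depths of a shared convolutional trunk, one per taxonomy level.
# All sizes and the 2/7/20-way hierarchy are illustrative assumptions.
import torch
import torch.nn as nn

class BranchCNN(nn.Module):
    def __init__(self, n_coarse=2, n_medium=7, n_fine=20):
        super().__init__()
        # Shared trunk, split into stages so branch heads can tap intermediate features.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        # One output head per level of the label hierarchy (coarse -> fine).
        self.head_coarse = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_coarse))
        self.head_medium = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_medium))
        self.head_fine = nn.Sequential(nn.Flatten(), nn.Linear(256, n_fine))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Predictions are ordered coarse -> fine, mirroring the class taxonomy.
        return self.head_coarse(f1), self.head_medium(f2), self.head_fine(f3)

if __name__ == "__main__":
    model = BranchCNN()
    coarse, medium, fine = model(torch.randn(4, 3, 32, 32))
    # In training, one cross-entropy loss per head is typically combined with
    # per-level weights (the paper shifts emphasis from coarse to fine over time).
    print(coarse.shape, medium.shape, fine.shape)  # (4, 2) (4, 7) (4, 20)
```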
95 Citations
CNN with coarse-to-fine layer for hierarchical classification
- Computer Science, IET Comput. Vis.
- 2018
A novel hierarchical CNN architecture with a proposed coarse-to-fine layer on top of a generic CNN, inspired by the Bayesian equation, which can be optimised by standard stochastic gradient descent.
Combined Convolutional and Recurrent Neural Networks for Hierarchical Classification of Images
- Computer Science, 2020 IEEE International Conference on Big Data (Big Data)
- 2020
Hierarchical classification models are proposed that combine a CNN to extract hierarchical representations of images with an RNN or sequence-to-sequence model to capture the hierarchical tree of classes.
Hierarchical Auxiliary Learning
- Computer Science, Mach. Learn. Sci. Technol.
- 2020
This paper introduces an auxiliary block into a neural network, which generates auxiliary scores used as additional information for the final classification/recognition over a reasonable number of classes, and adds the auxiliary block between the last residual block and the fully-connected output layer of the ResNet.
Grafting heterogeneous neural networks for a hierarchical object classification
- Computer Science, IEEE Access
- 2022
This work proposes a strategy that allows the merging of heterogeneous CNNs by following a hierarchical approach in which the information extracted by first-level networks can be fed back at any location into second-level networks, and eliminates the computational redundancy induced by the recalculation of low-level features.
Semantic Hierarchy-based Convolutional Neural Networks for Image Classification
- Computer Science, 2020 International Joint Conference on Neural Networks (IJCNN)
- 2020
Three variations of hierarchical topologies of Convolutional Neural Networks, two of which are original proposals introduced by this work, were tested to assess their impact on image classification problems; the results suggest that providing semantic hierarchies can improve fine-level accuracy of CNNs.
CF-CNN: Coarse-to-Fine Convolutional Neural Network
- Computer Science, Applied Sciences
- 2021
The proposed CF-CNN uses a disjoint grouping method that first creates class groups with hierarchical associations and then assigns a new label to each class belonging to a group, so that each class acquires multiple labels.
Visual Tree Convolutional Neural Network in Image Classification
- Computer Science, 2018 24th International Conference on Pattern Recognition (ICPR)
- 2018
A Confusion Visual Tree, built from confused semantic-level information, is proposed to identify the confused categories and lead the CNN training procedure to pay more attention to these categories.
Condition-CNN: A hierarchical multi-label fashion image classification model
- Computer Science, Expert Syst. Appl.
- 2021
Hierarchical bilinear convolutional neural network for image classification
- Computer Science, IET Comput. Vis.
- 2021
A multi-task learning framework, named Hierarchical Bilinear Convolutional Neural Network (HB-CNN), is developed by seamlessly integrating CNNs with multi-task learning over hierarchical visual concept structures, using tree-structured labels as supervision to hierarchically train multiple branch networks.
A Mask-based Output Layer for Multi-level Hierarchical Classification
- Computer Science, Proceedings of the 31st ACM International Conference on Information & Knowledge Management
- 2022
A model-agnostic output layer that embeds the taxonomy and can be combined with any model is proposed; it improves several multi-level hierarchical classification models across various performance metrics.
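To make the taxonomy-in-the-output-layer idea concrete (in the spirit of the mask-based output layer cited above, though not its exact formulation), the hypothetical snippet below masks fine-level logits so that only children of the predicted coarse class remain eligible; the toy taxonomy matrix and class counts are assumptions for illustration.

```python
# Hypothetical sketch, not the cited paper's method: restrict fine-level logits
# to the children of the predicted coarse class using a binary taxonomy matrix
# M of shape (n_coarse, n_fine), where M[c, f] = 1 iff f is a child of c.
import torch

n_coarse, n_fine = 3, 8
M = torch.zeros(n_coarse, n_fine)   # toy taxonomy
M[0, 0:3] = 1
M[1, 3:6] = 1
M[2, 6:8] = 1

def masked_fine_logits(coarse_logits, fine_logits, taxonomy):
    # Pick the most likely coarse class per example, then knock out every fine
    # logit whose class does not belong to that coarse class.
    coarse_pred = coarse_logits.argmax(dim=1)      # (batch,)
    mask = taxonomy[coarse_pred]                   # (batch, n_fine)
    return fine_logits.masked_fill(mask == 0, float("-inf"))

coarse_logits = torch.randn(4, n_coarse)
fine_logits = torch.randn(4, n_fine)
print(masked_fine_logits(coarse_logits, fine_logits, M).softmax(dim=1))
```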
References
Showing 1-10 of 36 references
HD-CNN: Hierarchical Deep Convolutional Neural Networks for Large Scale Visual Recognition
- Computer Science, 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015
This paper introduces hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a two-level category hierarchy and achieves state-of-the-art results on both the CIFAR-100 and large-scale ImageNet 1000-class benchmark datasets.
Striving for Simplicity: The All Convolutional Net
- Computer Science, ICLR
- 2015
It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
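As a quick illustration of the replacement described in this reference (illustrative only, not the paper's exact architecture), a 2x2 max-pooling step and a stride-2 convolution produce the same downsampling, with the latter learning its filter:

```python
# Sketch: a max-pooling downsampling step replaced by a strided convolution.
# Channel count and input size are arbitrary assumptions.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)  # (batch, channels, H, W)

pooled = nn.MaxPool2d(kernel_size=2)(x)                              # -> (1, 64, 16, 16)
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)(x)   # -> (1, 64, 16, 16)

print(pooled.shape, strided.shape)  # both halve the spatial resolution
```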
Network In Network
- Computer Science, ICLR
- 2014
With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
ImageNet classification with deep convolutional neural networks
- Computer Science, Commun. ACM
- 2012
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Discriminative Transfer Learning with Tree-based Priors
- Computer Science, NIPS
- 2013
This work proposes a method for improving the classification performance of high-capacity classifiers by discovering similar classes and transferring knowledge among them; the method organizes the classes into a tree hierarchy and includes an algorithm for learning the underlying tree structure.
Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree
- Computer Science, AISTATS
- 2016
The proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures.
Learning Multiple Layers of Features from Tiny Images
- Computer Science
- 2009
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Deep Residual Learning for Image Recognition
- Computer Science, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2018
This work addresses the task of semantic image segmentation with deep learning and proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, improving the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
Very Deep Convolutional Networks for Large-Scale Image Recognition
- Computer Science, ICLR
- 2015
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.