Collaborative Layer-Wise Discriminative Learning in Deep Neural Networks

@article{Jin2016CollaborativeLD,
  title={Collaborative Layer-Wise Discriminative Learning in Deep Neural Networks},
  author={Xiaojie Jin and Yunpeng Chen and Jian Dong and Jiashi Feng and Shuicheng Yan},
  journal={ArXiv},
  year={2016},
  volume={abs/1607.05440}
}
Intermediate features at different layers of a deep neural network are known to be discriminative for visual patterns of different complexities. However, most existing works ignore such cross-layer heterogeneities when classifying samples of different complexities. For example, if a training sample has already been correctly classified at a specific layer with high confidence, we argue that it is unnecessary to enforce the remaining layers to classify this sample correctly, and a better strategy is to…
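
A minimal sketch of this idea, assuming a PyTorch backbone that exposes per-layer features and a hypothetical confidence threshold tau (neither the gating rule nor the threshold is the paper's exact formulation): once an earlier classifier labels a sample correctly with confidence above tau, deeper classifiers stop incurring loss on it.

```python
# Sketch: layer-wise losses gated by earlier-layer confidence.
# Assumes `feats` is a list of per-layer features from a backbone and
# `heads` is a matching list of linear classifiers; tau is hypothetical.
import torch
import torch.nn.functional as F

def collaborative_loss(feats, heads, labels, tau=0.9):
    active = torch.ones(labels.size(0), dtype=torch.bool, device=labels.device)
    total = 0.0
    for f, head in zip(feats, heads):
        logits = head(f)
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        total = total + (per_sample * active.float()).mean()
        with torch.no_grad():
            probs = F.softmax(logits, dim=1)
            conf, pred = probs.max(dim=1)
            # Samples already classified correctly with high confidence
            # stop contributing loss at deeper layers.
            active = active & ~((pred == labels) & (conf > tau))
    return total
```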

Weakly-supervised Discriminative Patch Learning via CNN for Fine-grained Recognition

This work designs a novel asymmetric two-stream network architecture with supervision on convolutional filters and a non-random layer initialization, achieving state-of-the-art results on two publicly available fine-grained recognition datasets.

Learning a Discriminative Filter Bank Within a CNN for Fine-Grained Recognition

This work shows that mid-level representation learning can be enhanced within the CNN framework, by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations.

Learn to Combine Modalities in Multimodal Deep Learning

A novel deep neural network based technique that multiplicatively combines information from different source modalities to better capture cross-modal signal correlations and demonstrates the effectiveness of the proposed technique by presenting empirical results on three multimodal classification tasks from different domains.
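
As a rough illustration of multiplicative combination (the dimensions and the exact fusion rule below are assumptions, not the paper's specification), each modality can be projected to a shared space and fused by element-wise product rather than concatenation:

```python
import torch
import torch.nn as nn

class MultiplicativeFusion(nn.Module):
    """Sketch: project each modality to a shared space, then combine
    multiplicatively (element-wise product) to capture cross-modal
    correlations. Dimensions are illustrative assumptions."""
    def __init__(self, dims, hidden=128, n_classes=10):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, modalities):
        hs = [torch.tanh(p(x)) for p, x in zip(self.proj, modalities)]
        fused = hs[0]
        for h in hs[1:]:
            fused = fused * h  # multiplicative combination
        return self.classifier(fused)
```

Compared with additive or concatenative fusion, the product suppresses a prediction unless the modalities agree, which is one way such a scheme can capture cross-modal correlations.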

Iterative neural networks for adaptive inference on resource-constrained devices

This work proposes a new architecture, which replaces the sequential layers with an iterative structure where weights are reused multiple times for a single input image, reducing the storage requirements drastically.
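
A minimal sketch of the weight-reuse idea (the block design and iteration count are illustrative assumptions): a single convolutional block is applied several times in sequence, so its parameters are stored once instead of once per layer.

```python
import torch.nn as nn

class IterativeBlock(nn.Module):
    """Sketch: one conv block whose weights are reused n_iters times,
    replacing a stack of distinct layers and shrinking storage."""
    def __init__(self, channels=64, n_iters=4):
        super().__init__()
        self.n_iters = n_iters
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        for _ in range(self.n_iters):
            x = self.act(self.bn(self.conv(x)))  # same weights each pass
        return x
```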

Learning Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection

This work builds upon existing state-of-the-art object detection systems and proposes a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance.

Multi-Path Feedback Recurrent Neural Networks for Scene Parsing

This paper considers the scene parsing problem and proposes a novel Multi-Path Feedback recurrent neural network (MPF-RNN), which can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse.

Multi-level Factorisation Net for Person Re-identification

Multi-Level Factorisation Net (MLFN), a novel network architecture that factorises the visual appearance of a person into latent discriminative factors at multiple semantic levels without manual annotation, achieves state-of-the-art results on three Re-ID datasets, as well as compelling results on the general object categorisation CIFAR-100 dataset.

StackNet-DenVIS: a multi-layer perceptron stacked ensembling approach for COVID-19 detection using X-ray images

This research presents StackNet-DenVIS, a classifier model designed to act as a screening step before the existing swab tests for COVID-19 detection, built using a novel approach that incorporates Transfer Learning and Stacked Generalization.

Fair contrastive pre-training for geographic images

This work considers fairness risks in land-cover semantic segmentation built on representations pre-trained with contrastive self-supervised learning, achieving improved fairness results and outperforming state-of-the-art methods in terms of the precision-fairness trade-off.

References

Showing 1-10 of 52 references

Training Deeper Convolutional Networks with Deep Supervision

A simple rule of thumb is formulated to determine where auxiliary supervision branches should be added after intermediate layers in order to train deeper networks.
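
In code, this amounts to attaching small auxiliary classifiers after selected intermediate layers and summing their down-weighted losses with the final loss; the branch placement and the 0.3 weight below are illustrative assumptions:

```python
import torch.nn.functional as F

def deeply_supervised_loss(final_logits, aux_logits_list, labels, aux_weight=0.3):
    """Sketch: final classification loss plus down-weighted losses from
    auxiliary branches attached after intermediate layers. The 0.3
    weight is an assumption, not the paper's prescribed value."""
    loss = F.cross_entropy(final_logits, labels)
    for aux_logits in aux_logits_list:
        loss = loss + aux_weight * F.cross_entropy(aux_logits, labels)
    return loss
```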

Learning Discriminative Features via Label Consistent Neural Network

This work proposes a supervised feature learning approach, Label Consistent Neural Network, which enforces direct supervision on late hidden layers in a novel way: a label-consistency regularization called the "discriminative representation error" loss is introduced for late hidden layers and combined with the classification error loss to form the overall objective function.
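
A hedged sketch of such a combined objective, assuming learnable per-class target codes for the late hidden features (the paper's actual construction of the targets may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelConsistentLoss(nn.Module):
    """Sketch: pull late-hidden-layer features toward a learnable
    per-class target code, in addition to the classification loss."""
    def __init__(self, n_classes, feat_dim, alpha=0.1):
        super().__init__()
        self.targets = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.alpha = alpha  # illustrative trade-off weight

    def forward(self, feats, logits, labels):
        cls_loss = F.cross_entropy(logits, labels)
        rep_err = F.mse_loss(feats, self.targets[labels])  # "discriminative representation error"
        return cls_loss + self.alpha * rep_err
```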

Deeply-Supervised Nets

The proposed deeply-supervised nets (DSN) method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent, and extends techniques from stochastic gradient methods to analyze the algorithm.

Stochastic Pooling for Regularization of Deep Convolutional Neural Networks

We introduce a simple and effective method for regularizing large convolutional neural networks: the conventional deterministic pooling operations are replaced with a stochastic procedure that randomly picks the activation within each pooling region according to a multinomial distribution given by the activities within that region.
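
A sketch of the training-time pooling step for non-overlapping 2x2 regions (the multinomial sampling follows the description above; the tensor plumbing is a simplified assumption):

```python
import torch
import torch.nn.functional as F

def stochastic_pool2x2(x):
    """Sketch: within each 2x2 region, sample one activation with
    probability proportional to its value, instead of taking the
    deterministic max or mean. Expects x >= 0 (e.g. post-ReLU) and
    even spatial dimensions."""
    b, c, h, w = x.shape
    patches = F.unfold(x, kernel_size=2, stride=2)           # (b, c*4, L)
    patches = patches.view(b, c, 4, -1).permute(0, 1, 3, 2)  # (b, c, L, 4)
    weights = patches.clamp_min(0) + 1e-12                   # avoid all-zero rows
    probs = weights / weights.sum(dim=-1, keepdim=True)
    idx = torch.multinomial(probs.reshape(-1, 4), 1)         # one sample per region
    picked = patches.reshape(-1, 4).gather(1, idx)
    return picked.view(b, c, h // 2, w // 2)
```

At test time the paper uses a deterministic, probability-weighted average instead of sampling; the sketch above covers only the training behavior.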

Going deeper with convolutions

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).

Discriminative Transfer Learning with Tree-based Priors

This work proposes a method for improving the classification performance of high-capacity classifiers by discovering similar classes and transferring knowledge among them: the classes are organized into a tree hierarchy, and an algorithm is proposed for learning the underlying tree structure.

Network In Network

With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
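
A brief sketch of such a global-average-pooling classifier head (the channel count is an assumption): a 1x1 convolution emits one feature map per class, and each map is averaged to a single score, with no fully connected layer:

```python
import torch.nn as nn

class GAPHead(nn.Module):
    """Sketch: 1x1 conv maps features to one map per class, then global
    average pooling turns each map into a class score (no FC layer)."""
    def __init__(self, in_channels=192, n_classes=10):
        super().__init__()
        self.score_maps = nn.Conv2d(in_channels, n_classes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        return self.pool(self.score_maps(x)).flatten(1)  # (batch, n_classes)
```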

Reducing the Dimensionality of Data with Neural Networks

This work describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

Learning Activation Functions to Improve Deep Neural Networks

A novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent is designed, achieving state-of-the-art performance on CIFAR-10, CIFAR-100, and a benchmark from high-energy physics involving Higgs boson decay modes.
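
The learned activation has the piecewise linear form h(x) = max(0, x) + Σ_s a_s · max(0, −x + b_s), with a_s and b_s trained by gradient descent. A compact sketch, sharing the parameters per channel rather than per neuron for simplicity:

```python
import torch
import torch.nn as nn

class APLUnit(nn.Module):
    """Sketch of an adaptive piecewise linear activation:
    h(x) = relu(x) + sum_s a_s * relu(-x + b_s),
    with a_s, b_s learned by gradient descent. Parameters are shared
    per channel here, a simplification of the per-neuron original."""
    def __init__(self, channels, n_segments=2):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(n_segments, channels, 1, 1))
        self.b = nn.Parameter(torch.zeros(n_segments, channels, 1, 1))

    def forward(self, x):
        out = torch.relu(x)
        for s in range(self.a.size(0)):
            out = out + self.a[s] * torch.relu(-x + self.b[s])
        return out
```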
...