Corpus ID: 14927886

Deep Collaborative Learning for Visual Recognition

@article{Wang2017DeepCL,
  title={Deep Collaborative Learning for Visual Recognition},
  author={Yan Wang and Lingxi Xie and Ya Zhang and Wenjun Zhang and Alan Loddon Yuille},
  journal={ArXiv},
  year={2017},
  volume={abs/1703.01229}
}
Deep neural networks are playing an important role in state-of-the-art visual recognition. To represent high-level visual concepts, modern networks are equipped with large convolutional layers, which use a large number of filters and contribute significantly to model complexity. For example, more than half of the weights of AlexNet are stored in the first fully-connected layer (4,096 filters). We formulate the function of a convolutional layer as learning a large visual vocabulary, and propose… 
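
To make the weight-concentration claim concrete, the sketch below tallies AlexNet's weights per layer, assuming the standard layer shapes from the 2012 paper (they are not restated in the abstract above); biases are ignored.

# Rough weight tally for AlexNet (standard layer shapes, biases ignored;
# conv2/4/5 counts reflect the two-GPU grouped convolutions).
# This is only a sanity check of the "more than half of the weights" claim.
layers = {
    "conv1": 11 * 11 * 3 * 96,
    "conv2": 5 * 5 * 48 * 256,     # grouped: 48 input channels per group
    "conv3": 3 * 3 * 256 * 384,
    "conv4": 3 * 3 * 192 * 384,    # grouped
    "conv5": 3 * 3 * 192 * 256,    # grouped
    "fc6":   6 * 6 * 256 * 4096,   # first fully-connected layer (4,096 filters)
    "fc7":   4096 * 4096,
    "fc8":   4096 * 1000,
}
total = sum(layers.values())
for name, n in layers.items():
    print(f"{name}: {n:>12,} ({n / total:5.1%})")
print(f"total: {total:>12,}")
# fc6 alone accounts for roughly 62% of the weights.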

Citations

LC-DECAL: Label Consistent Deep Collaborative Learning for Face Recognition

The proposed Label Consistent Deep Collaborative Learning (LC-DECAL) framework makes use of label consistency, transfer learning, ensemble learning, and co-training for training a deep neural network for the target domain.

SORT: Second-Order Response Transform for Visual Recognition

A novel approach named Second-Order Response Transform (SORT) appends an element-wise product transform to the linear sum of a two-branch network module, augmenting the family of transform operations and increasing the nonlinearity of the network.
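
A minimal sketch of the transform described above, assuming branch outputs of identical shape; the paper's exact formulation (e.g., any normalization of the product term) may differ.

import numpy as np

def sort_transform(x1, x2):
    """Second-order response transform: augment the two-branch sum
    x1 + x2 with the element-wise product x1 * x2. A sketch of the
    operation in the summary, not the full network module."""
    return x1 + x2 + x1 * x2

# Toy usage: two branch outputs with identical shapes.
x1 = np.random.randn(4, 8).astype(np.float32)
x2 = np.random.randn(4, 8).astype(np.float32)
y = sort_transform(x1, x2)
print(y.shape)  # (4, 8)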

Self-Supervised and Collaborative Learning

This work presents a collaborative learning framework built upon transfer learning and co-training, incorporates label consistency into the proposed framework to learn discriminative features, introduces a knowledge transfer framework that combines the knowledge of multiple models trained via self-supervised learning to train a supervised network, and proposes a Generative Adversarial Network (GAN) based approach.

References

Showing 1-10 of 52 references

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

DeCAF, an open-source implementation of deep convolutional activation features, is released along with all associated network parameters to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
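
A minimal sketch of the recipe the summary describes: treat activations from a pretrained network as fixed features and train a simple linear classifier on top. Random vectors stand in for real activations so the sketch is self-contained, and the classifier choice (logistic regression) is an assumption.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder for activations taken from a layer of a pretrained CNN
# (DeCAF used layers of an AlexNet-style network); random vectors stand
# in for real features so the sketch runs on its own.
rng = np.random.default_rng(0)
train_feats, train_labels = rng.standard_normal((200, 4096)), rng.integers(0, 5, 200)
test_feats, test_labels = rng.standard_normal((50, 4096)), rng.integers(0, 5, 50)

# A simple linear classifier on top of fixed deep activations.
clf = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
print("accuracy:", clf.score(test_feats, test_labels))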

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
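
The design principle summarized above, stacking very small 3x3 filters rather than using one large filter, can be checked with a quick receptive-field and weight count; the channel width C is an arbitrary placeholder.

# Two stacked 3x3 convolutions cover a 5x5 receptive field, three cover 7x7,
# while using fewer weights than a single large filter (C input and output
# channels assumed throughout; biases ignored).
C = 256

def stacked_3x3_params(n_layers):
    return n_layers * (3 * 3 * C * C)

def single_filter_params(k):
    return k * k * C * C

print("two 3x3  :", stacked_3x3_params(2), "vs one 5x5:", single_filter_params(5))
print("three 3x3:", stacked_3x3_params(3), "vs one 7x7:", single_filter_params(7))
# 18*C^2 < 25*C^2 and 27*C^2 < 49*C^2, with extra non-linearities in between.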

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
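
A minimal sketch of the dropout regularization mentioned above, in the inverted-dropout form common today (the original paper instead rescales activations at test time): each unit is zeroed with probability p during training.

import numpy as np

def dropout(x, p=0.5, training=True, rng=np.random.default_rng()):
    """Inverted dropout: zero each unit with probability p during training
    and rescale survivors so the expected activation is unchanged at test time."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

x = np.ones((2, 6))
print(dropout(x, p=0.5))           # roughly half the entries zeroed, rest scaled to 2.0
print(dropout(x, training=False))  # identity at test time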

Recurrent convolutional neural network for object recognition

  • Ming Liang, Xiaolin Hu · 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of the tested datasets and demonstrates the advantage of a recurrent structure over a purely feed-forward structure for object recognition.
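
One way to see the parameter economy of the recurrent structure noted above: a recurrent convolutional layer reuses the same kernels at every unrolled time step, so its weight count does not grow with the number of steps. The sizes below are arbitrary placeholders.

# Weight counts for a recurrent convolutional layer (RCL) unrolled T times
# versus a plain stack of T distinct convolutional layers (biases ignored).
C, k, T = 128, 3, 3

feedforward_kernel = k * k * C * C          # input-to-state weights (shared over steps)
recurrent_kernel   = k * k * C * C          # state-to-state weights (shared over steps)
rcl_params = feedforward_kernel + recurrent_kernel

plain_stack_params = T * (k * k * C * C)    # T independent conv layers

print("RCL (any T):", rcl_params)
print(f"plain stack of {T} convs:", plain_stack_params)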

Network In Network

With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
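
A minimal sketch of the global-average-pooling head described above, assuming the final convolutional stage emits one feature map per class: each map is averaged to a single score and no fully connected layer is used.

import numpy as np

def gap_head(feature_maps):
    """Global average pooling head: feature_maps has shape (classes, H, W);
    each map is averaged to one score, then softmax gives class probabilities."""
    scores = feature_maps.mean(axis=(1, 2))
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Toy usage: 10 class-specific feature maps of size 8x8.
maps = np.random.randn(10, 8, 8)
probs = gap_head(maps)
print(probs.shape, probs.sum())  # (10,) 1.0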

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
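
A minimal sketch of the residual idea: a block computes a residual function F(x) and adds the input back through an identity shortcut, giving F(x) + x. The fully connected toy block below stands in for the paper's convolutional blocks.

import numpy as np

def residual_block(x, w1, w2):
    """y = F(x) + x, with F a small two-layer transform (toy fully connected
    stand-in for the paper's convolutional residual blocks)."""
    h = np.maximum(x @ w1, 0.0)   # ReLU
    f = h @ w2
    return f + x                  # identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64))
w1 = rng.standard_normal((64, 64)) * 0.01
w2 = rng.standard_normal((64, 64)) * 0.01
print(residual_block(x, w1, w2).shape)  # (4, 64)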

Going deeper with convolutions

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree

The proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures.
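
A minimal sketch of the "mixed" variant summarized above: a scalar interpolates between max and average pooling over each window. In the paper the scalar is learned (and made input-dependent in the gated and tree variants); here it is a hand-set placeholder.

import numpy as np

def mixed_pool(x, a=0.6, k=2):
    """Mixed pooling over non-overlapping k x k windows of a 2-D map:
    a * max-pool + (1 - a) * average-pool. In the paper a is learned;
    here it is fixed by hand."""
    H, W = x.shape
    x = x[: H - H % k, : W - W % k].reshape(H // k, k, W // k, k)
    return a * x.max(axis=(1, 3)) + (1 - a) * x.mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(mixed_pool(x))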

Bilinear CNN Models for Fine-Grained Visual Recognition

We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using outer product at each location of the image and pooled to obtain an image descriptor.
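
A minimal sketch of the bilinear operation described above: the two extractors' feature vectors are combined with an outer product at each location and sum-pooled into a descriptor. Shapes are arbitrary placeholders, and the paper's signed square-root and L2 normalization steps are omitted.

import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling: feat_a has shape (L, Da) and feat_b shape (L, Db),
    one row per spatial location. The outer products over locations are
    summed into a Da x Db descriptor, which is then flattened."""
    desc = feat_a.T @ feat_b          # equivalent to summing per-location outer products
    return desc.reshape(-1)

# Toy usage: 49 locations (e.g. a 7x7 map) from two feature extractors.
fa = np.random.randn(49, 32)
fb = np.random.randn(49, 64)
print(bilinear_pool(fa, fb).shape)    # (2048,)
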
...