Neural Dataset Generality

@article{Venkatesan2016NeuralDG,
  title={Neural Dataset Generality},
  author={Ragav Venkatesan and Vijetha Gattupalli and Baoxin Li},
  journal={ArXiv},
  year={2016},
  volume={abs/1605.04369}
}
Often the filters learned by Convolutional Neural Networks (CNNs) from different datasets appear similar, and this is especially prominent in the first few layers. This similarity of filters is exploited for transfer learning, and several studies have analysed such transferability of features. It is also used as an initialization technique, either for different tasks on the same dataset or for the same task on similar datasets. Off-the-shelf CNN features have capitalized on this…
