Corpus ID: 221949079

G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling

@article{Chakraborty2020GSimCLRS,
  title={G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling},
  author={S. Chakraborty and Aritra Roy Gosthipaty and Sayak Paul},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.12007}
}
In the realm of computer vision, it is evident that deep neural networks perform better in a supervised setting with a large amount of labeled data. The representations learned with supervision are not only of high quality but also help the model enhance its accuracy. However, collecting and annotating a large dataset is costly and time-consuming. To avoid this, a great deal of research has gone into unsupervised visual representation learning, especially in…
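As context for the abstract, G-SimCLR builds on SimCLR's contrastive objective, the normalized temperature-scaled cross-entropy (NT-Xent) loss over pairs of augmented views. A minimal NumPy sketch of that loss is shown below; the function name, shapes, and temperature value are illustrative assumptions, not code from the paper:

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent loss as used in SimCLR-style contrastive learning.

    z_a, z_b: (N, D) embeddings of two augmented views of the same N images.
    Each sample's positive is its other view; all remaining 2N - 2 samples
    in the batch act as negatives.
    """
    n = z_a.shape[0]
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = np.concatenate([z_a, z_b], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                           # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                        # exclude self-similarity
    # Index of the positive pair: view b of image i sits at row i + n, and vice versa.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy over each row's similarities, targeting the positive entry.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

The paper's "guided" twist is on the batching side: a pretrained classifier's pseudo labels are used so that images with the same pseudo label are kept out of the same batch, preventing semantically similar images from being treated as negatives by this loss.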
