Corpus ID: 195750826

Supervise Thyself: Examining Self-Supervised Representations in Interactive Environments

@article{Racah2019SuperviseTE,
  title={Supervise Thyself: Examining Self-Supervised Representations in Interactive Environments},
  author={Evan Racah and C. Pal},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.11951}
}
  • Evan Racah, C. Pal · Published 2019 · Mathematics, Computer Science · ArXiv
  • Self-supervised methods, wherein an agent learns representations solely by observing the results of its actions, are crucial in environments that do not provide a dense reward signal or labels. In most cases, such methods are used for pretraining or as auxiliary tasks for "downstream" tasks such as control, exploration, or imitation learning. However, it is not clear which method's representations best capture meaningful features of the environment, and which are best suited for which…
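To make the family of methods the abstract refers to concrete, below is a minimal sketch (not the paper's own method) of one common pretext task in interactive environments: inverse dynamics, where the model predicts the action taken between two consecutive observations, forcing the encoder to capture controllable features. The architecture, input shapes, and hyperparameters are illustrative assumptions in PyTorch.

```python
# Toy inverse-dynamics pretext task: given (o_t, o_{t+1}), predict the action a_t.
# All shapes, layer sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small conv net mapping an observation frame to a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(feat_dim)  # infers flattened conv size lazily

    def forward(self, x):
        return self.fc(self.conv(x))

class InverseDynamicsHead(nn.Module):
    """Predicts the discrete action from the features of (o_t, o_{t+1})."""
    def __init__(self, feat_dim=64, n_actions=6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, f_t, f_tp1):
        return self.mlp(torch.cat([f_t, f_tp1], dim=1))

encoder, head = Encoder(), InverseDynamicsHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: consecutive frames and the action taken between them.
obs_t = torch.randn(8, 3, 42, 42)
obs_tp1 = torch.randn(8, 3, 42, 42)
actions = torch.randint(0, 6, (8,))

logits = head(encoder(obs_t), encoder(obs_tp1))
loss = loss_fn(logits, actions)
opt.zero_grad()
loss.backward()
opt.step()
```

In a setup like this, only the learned encoder would typically be carried over to downstream control, exploration, or imitation tasks; the pretext head is discarded after pretraining.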
