Corpus ID: 229340136

O2A: One-shot Observational learning with Action vectors

@inproceedings{Pauly2018O2AOO,
  title={O2A: One-shot Observational learning with Action vectors},
  author={Leo Pauly and Wisdom C. Agboh and David C. Hogg and Raul Fuentes},
  year={2018}
}
We present O2A, a novel method for learning to perform robotic manipulation tasks from a single (one-shot) third-person demonstration video. To our knowledge, this is the first time this has been done from a single demonstration. The key novelty lies in pre-training a feature extractor that creates a perceptual representation of actions, which we call 'action vectors'. The action vectors are extracted using a 3D-CNN model pre-trained as an action classifier on a generic action dataset. The distance…
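The abstract describes mapping a video to a fixed-length 'action vector' with a pre-trained 3D-CNN and comparing videos by the distance between their vectors. The sketch below illustrates only that interface; the `extract_action_vector` function, the random-projection stand-in for the 3D-CNN, and all dimensions are hypothetical placeholders, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pre-trained 3D-CNN: a fixed random projection over
# flattened frames. In the paper the extractor is a 3D-CNN pre-trained
# as an action classifier on a generic action dataset; this toy version
# only shows the interface (video clip -> action vector).
FEATURE_DIM = 128

def extract_action_vector(clip: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Map a video clip (frames, H, W, C) to an L2-normalised action vector."""
    feats = clip.reshape(clip.shape[0], -1) @ proj  # per-frame features
    v = feats.mean(axis=0)                          # temporal pooling
    return v / (np.linalg.norm(v) + 1e-8)           # L2-normalise

frames, h, w, c = 16, 32, 32, 3
proj = rng.standard_normal((h * w * c, FEATURE_DIM))

demo = rng.random((frames, h, w, c))     # third-person demonstration video
attempt = rng.random((frames, h, w, c))  # robot's current attempt

v_demo = extract_action_vector(demo, proj)
v_attempt = extract_action_vector(attempt, proj)

# Distance between action vectors: a smaller distance means the attempt
# looks more like the demonstrated action.
distance = np.linalg.norm(v_demo - v_attempt)
print(float(distance))
```

In this reading, the distance would serve as a perceptual similarity signal between the demonstration and the robot's own execution.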