Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention

@inproceedings{Bertasius2017UsingCE,
  title={Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention},
  author={Gedas Bertasius and Jianbo Shi},
  booktitle={2017 IEEE International Conference on Computer Vision Workshops (ICCVW)},
  year={2017},
  pages={2355--2363}
}
We present a first-person method for cooperative basketball intention prediction: we predict with whom the camera wearer will cooperate in the near future from unlabeled first-person images. This is a challenging task that requires inferring the camera wearer's visual attention and decoding the social cues of other players. Our key observation is that a first-person view provides strong cues to infer the camera wearer's momentary visual attention and his/her intentions. We exploit this…
Citations (2)


EgoVQA - An Egocentric Video Question Answering Benchmark Dataset
  • Chenyou Fan
  • 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019
Leveraging the Present to Anticipate the Future in Videos

References

Showing 1–10 of 51 references
Predicting Behaviors of Basketball Players from First Person Videos
Am I a Baller? Basketball Performance Assessment from First-Person Videos
Going Deeper into First-Person Activity Recognition
First Person Action-Object Detection with EgoNet
Action Recognition in the Presence of One Egocentric and Multiple Static Cameras
Predicting Important Objects for Egocentric Video Summarization
Delving into egocentric actions
Fast unsupervised ego-action learning for first-person sports videos
Detecting Engagement in Egocentric Video
First Person Action Recognition Using Deep Learned Descriptors