Commonly Uncommon: Semantic Sparsity in Situation Recognition

@inproceedings{Yatskar2017CommonlyUS,
  title={Commonly Uncommon: Semantic Sparsity in Situation Recognition},
  author={Mark Yatskar and Vicente Ordonez and Luke Zettlemoyer and Ali Farhadi},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={6335--6344}
}
Semantic sparsity is a common challenge in structured visual classification problems: when the output space is complex, the vast majority of possible predictions are rarely, if ever, seen in the training set. This paper studies semantic sparsity in situation recognition, the task of producing structured summaries of what is happening in images, including activities, objects, and the roles objects play within the activity. For this problem, we find empirically that most substructures required…


    Citations

    Publications citing this paper (showing 1-10 of 21 citations):

    • Mixture-Kernel Graph Attention Network for Situation Recognition (highly influenced)
    • Graph neural network for situation recognition (highly influenced)
    • Grounded Situation Recognition
    • Weakly Supervised Visual Semantic Parsing
    • Situation Recognition with Graph Neural Networks (highly influenced)
    • Automatic generation of composite image descriptions (Chang Liu, Armin Shmilovici, Mark Last; 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), 2017) (highly influenced)
    • MovieGraphs: Towards Understanding Human-Centric Situations from Videos
    • Unsupervised and Semi-Supervised Image Classification With Weak Semantic Consistency
    • LaSO: Label-Set Operations Networks for Multi-Label Few-Shot Learning
    • Recurrent Models for Situation Recognition (highly influenced)


    CITATION STATISTICS

    • 6 highly influenced citations

    • Averaged 5 citations per year from 2018 through 2020
