Corpus ID: 238857235

Automatic Modeling of Social Concepts Evoked by Art Images as Multimodal Frames

@article{Pandiani2021AutomaticMO,
  title={Automatic Modeling of Social Concepts Evoked by Art Images as Multimodal Frames},
  author={Delfina Sol Martinez Pandiani and Valentina Presutti},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.07420}
}
Social concepts referring to non-physical objects, such as revolution, violence, or friendship, are powerful tools to describe, index, and query the content of visual data, including ever-growing collections of art images from the Cultural Heritage (CH) field. While much progress has been made towards complete image understanding in computer vision, automatic detection of social concepts evoked by images is still a challenge. This is partly due to the well-known semantic gap problem, worsened for…


References

Showing 1-10 of 21 references
Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations
The Visual Genome dataset is presented, which contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects, and represents the densest and largest dataset of image descriptions, objects, attributes, relationships, and question-answer pairs.
Distant viewing: analyzing large visual corpora
In this article, we establish a methodological and theoretical framework for the study of large collections of visual materials. Our framework, distant viewing, is distinguished from other approaches…
A Comparative Approach between Different Computer Vision Tools, Including Commercial and Open-source, for Improving Cultural Image Access and Analysis
This pilot study presents an approach for testing different commercial and open-source computer vision tools on a set of selected cultural food images from the Europeana collection with regard to producing relevant concepts; preliminary results show that the quality of the generated concepts matters as much as the quantitative output.
Integrating Knowledge and Reasoning in Image Understanding
This work presents a brief survey of a few representative reasoning mechanisms, knowledge integration methods and their corresponding image understanding applications developed by various groups of researchers, approaching the problem from a variety of angles.
Varieties of abstract concepts: development, use and representation in the brain
The theme issue provides an integrated theoretical account that highlights the importance of language, sociality and inner processes for abstract concepts, and that offers new methodological tools to investigate them.
Building semantic memory from embodied and distributional language experience.
This work synthesizes several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered as inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so are experience-based representations endemic to linguistic processing.
Multimodal big data affective analytics: A comprehensive survey using text, audio, visual and physiological signals
The paper includes extensive reviews on different frameworks and categories for state-of-the-art techniques, critical analysis of their performances, and discussions of their applications, trends and future directions to serve as guidelines for readers towards this emerging research area.
Social Roles and their Descriptions
This paper establishes a general formal framework for developing a foundational ontology of socially constructed entities, in the broadest sense of this notion, and further contributes to understanding the ontological nature of roles.
Representation of Concepts as Frames
Concepts can be represented as frames, i.e., recursive attribute-value structures. Frames assign unique values to attributes. Concepts can be classified into four groups with respect to both… (a minimal illustrative sketch of such a frame structure follows this reference list).
DeepSentiBank: Visual Sentiment Concept Classification with Deep Convolutional Neural Networks
Performance evaluation shows that the newly trained deep CNN model, SentiBank 2.0 (also called DeepSentiBank), significantly improves both annotation accuracy and retrieval performance compared to its predecessors, which mainly use binary SVM classification models.
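
Since the "Representation of Concepts as Frames" entry above describes frames as recursive attribute-value structures that assign a unique value to each attribute, a minimal Python sketch of that idea may help. It is an illustrative assumption only, not the formalism of that paper nor the multimodal frames proposed in the surveyed article; the class, method, and attribute names (Frame, get, agent, setting, valence) are hypothetical.

```python
# A minimal sketch of a frame as a recursive attribute-value structure:
# every attribute is assigned exactly one value, and a value may itself
# be a frame. Class, method, and attribute names here are illustrative
# assumptions, not the formalism of the cited paper.
from typing import Dict, Union

Value = Union[str, "Frame"]  # atomic value or nested frame


class Frame:
    def __init__(self, **attributes: Value) -> None:
        # Dict keys are unique, which enforces one value per attribute.
        self.attributes: Dict[str, Value] = dict(attributes)

    def get(self, path: str) -> Value:
        """Follow a dotted attribute path, e.g. 'agent.goal'."""
        node: Value = self
        for attr in path.split("."):
            if not isinstance(node, Frame):
                raise KeyError(f"'{attr}' reached an atomic value")
            node = node.attributes[attr]
        return node


# Hypothetical frame for the social concept "revolution".
revolution = Frame(
    agent=Frame(role="collective", goal="political change"),
    setting="public space",
    valence="conflict",
)
print(revolution.get("agent.goal"))  # -> political change
```

Nesting frames as attribute values is what makes the structure recursive; under the same assumption, a multimodal extension could attach visual evidence as additional attribute values.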