Multimodal Analogical Reasoning over Knowledge Graphs

@article{Zhang2022MultimodalAR,
  title={Multimodal Analogical Reasoning over Knowledge Graphs},
  author={Ningyu Zhang and Lei Li and Xiang Chen and Xiaozhuan Liang and Shumin Deng and Huajun Chen},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.00312}
}
Analogical reasoning is fundamental to human cognition and holds an important place in various fields. However, previous studies mainly focus on single-modal analogical reasoning and neglect to take advantage of structured knowledge. Notably, research in cognitive psychology has demonstrated that information from multimodal sources yields more powerful cognitive transfer than single-modality sources. To this end, we introduce the new task of multimodal analogical reasoning over…
