Language (Re)modelling: Towards Embodied Language Understanding

@article{Tamari2020LanguageT,
  title={Language (Re)modelling: Towards Embodied Language Understanding},
  author={Ronen Tamari and Cheng Shani and Tom Hope and Miriam R. L. Petruck and Omri Abend and Dafna Shahaf},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.00311}
}
While natural language understanding (NLU) is advancing rapidly, today’s technology differs from human-like language understanding in fundamental ways, notably in its inferior efficiency, interpretability, and generalization. This work proposes an approach to representation and learning based on the tenets of embodied cognitive linguistics (ECL). According to ECL, natural language is inherently executable (like programming languages), driven by mental simulation and metaphoric mappings over…
4 Citations
A Generative Symbolic Model for More General Natural Language Understanding and Reasoning
A new fully symbolic Bayesian model of semantic parsing and reasoning is presented; it is fully interpretable, designed specifically with generality in mind, and therefore provides a clearer path for future research to expand its capabilities.
A Systematic Survey of Text Worlds as Embodied Natural Language Environments
This systematic survey outlines recent developments in tooling, environments, and agent modeling for Text Worlds, examining recent trends in knowledge graphs, common-sense reasoning, and transfer of Text World performance to higher-fidelity environments, as well as near-term development targets that would make Text Worlds an attractive general research paradigm for natural language processing.
It’s the Meaning That Counts: The State of the Art in NLP and Semantics
This work reviews the state of computational semantics in NLP and investigates how different lines of inquiry reflect distinct understandings of semantics and prioritize different layers of linguistic meaning.
(Re)construing Meaning in NLP
This paper engages with an idea largely absent from discussions of meaning in natural language understanding—namely, that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed.

References

Showing 1-10 of 119 references
Ingredients of intelligence: From classic debates to an engineering roadmap
This response covers three main dimensions of disagreement: nature versus nurture, coherent theories versus theory fragments, and symbolic versus sub-symbolic representations in artificial intelligence and cognitive science.
Conceptual Alignment: How Brains Achieve Mutual Understanding
The evidence suggests that communicators and addressees achieve mutual understanding by using the same computational procedures, implemented in the same neuronal substrate, and operating over temporal scales independent of the signals' occurrences.
On Making Reading Comprehension More Comprehensive
This work justifies a question answering approach to reading comprehension and describes the various kinds of questions one might use to more fully test a system’s comprehension of a passage, moving beyond questions that only probe local predicate-argument structures.
Question Answering is a Format; When is it Useful?
It is argued that question answering should be considered a format which is sometimes useful for studying particular phenomena, not a phenomenon or task in itself.
Learning to activate logic rules for textual reasoning
A novel reasoning model learns to activate logic rules explicitly via deep reinforcement learning; it takes the form of Memory Networks but features a special memory that stores relational tuples, mimicking the "Image Schema" in human cognitive activities.
The Consciousness Prior
A new prior is proposed for learning representations of high-level concepts of the kind humans manipulate with language, inspired by cognitive neuroscience theories of consciousness, that makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in a form similar to facts and rules.
Simpler Context-Dependent Logical Forms via Model Projections
This work considers the task of learning a context-dependent mapping from utterances to denotations, and performs successive projections of the full model onto simpler models that operate over equivalence classes of logical forms.
The emulation theory of representation: Motor control, imagery, and perception
  • R. Grush
  • Behavioral and Brain Sciences, 2004
The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain, including reasoning, theory of mind phenomena, and language.
Ecological Semantics: Programming Environments for Situated Language Understanding
It is argued that models must begin to understand and program in the language of affordances, both for online, situated discourse comprehension and for large-scale, offline common-sense knowledge mining, in an environment-oriented ecological semantics.