Probabilistic models of language processing and acquisition

Nick Chater and Christopher D. Manning, Trends in Cognitive Sciences


Hierarchical probabilistic model of language acquisition

This thesis proposes an unsupervised computational model of language acquisition through visual grounding that takes advantage of probabilistic Bayesian models, which, alongside neural networks, are one of the main tools used in computational cognitive modeling.

Structures and distributions in morphology learning

This thesis investigates the acquisition of morphology and develops a cognitively oriented computational framework for studying language acquisition that consists of four components: the linguistic representation, the statistical distribution of the input data, the observed behavior of the human learner, and the performance of the learning algorithm.

Connectionist perspectives on language learning, representation and processing.

It is argued that connectionist models can capture many important characteristics of how language is learned, represented, and processed, while also providing new insights into the source of these behavioral patterns.

A computational model of the emergence of early constructions

This thesis explores and formalizes the view that grammar learning is driven by meaningful language use in context, and presents a computational model in which all aspects of the language learning problem are reformulated in line with these assumptions.

Cognitive Biases, Linguistic Universals, and Constraint-Based Grammar Learning

This study illustrates how ideas from the domains of linguistic theory, computational cognitive science, and machine learning can be synthesized into a model of language learning in which biases range in strength from hard to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals.

Computational Modeling of Human Language Acquisition

Children are a source of inspiration for any such study of language learnability: they learn language with ease, and their acquired knowledge of language is flexible and robust.

Statistical models of learning and using semantic representations

This research suggests that general principles of computation over structured knowledge representations illuminate how people make sense of the world around them, and may lead to machines that think more like people do.

Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

A series of artificial language learning experiments demonstrates that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented, suggesting that people use information about the way in which linguistic input is sampled to guide their learning.

Introduction to the Special Issue: Parsimony and Redundancy in Models of Language

The present special issue does not aim to present the arguments for and against the two epistemological stances, nor to weigh the evidence supporting either; instead, it conceives of linguistic knowledge as induced from experience.

A probabilistic constraints approach to language acquisition and processing

Language Acquisition and Use: Learning and Applying Probabilistic Constraints

An alternative view, emerging from studies of statistical and probabilistic aspects of language, connectionist models, and the learning capacities of infants, retains the idea that innate capacities constrain language learning but calls into question whether these capacities include knowledge of grammatical structure.

Formal models of language learning

Statistical Models of Language Learning and Use

This paper summarizes recent work on developing statistical models of language that are compatible with the kinds of linguistic structures posited by current linguistic theories, and provides a high-level overview of both the general approach and the methods developed.

Wide-Coverage Probabilistic Sentence Processing

It is argued that incremental probabilistic parsing models are, in general, extremely well suited to explaining this dual nature (generally good and occasionally pathological) of human linguistic performance.

A Probabilistic Model of Lexical and Syntactic Access and Disambiguation

A single probabilistic algorithm is presented, based on a parallel parser that ranks constructions for access, and interpretations for disambiguation, by their conditional probability; the paper argues for a more uniform representation of linguistic knowledge and for the use of probabilistically enriched grammars and interpreters as models of human knowledge and processing of language.

Head-Driven Statistical Models for Natural Language Parsing

Three statistical models for natural language parsing are described, leading to approaches in which a parse tree is represented as the sequence of decisions in a head-centered, top-down derivation of the tree.

Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars

A learning algorithm is described that takes as input a training set of sentences labeled with expressions in the lambda calculus and induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence.

Rules vs. analogy in English past tenses: a computational/experimental study