Probabilistic models of language processing and acquisition

@article{Chater2006ProbabilisticMO,
  title={Probabilistic models of language processing and acquisition},
  author={Nick Chater and Christopher D. Manning},
  journal={Trends in Cognitive Sciences},
  year={2006},
  volume={10},
  pages={335--344}
}

Citations

Hierarchical probabilistic model of language acquisition

TLDR
This thesis proposes an unsupervised computational model of language acquisition through visual grounding that takes advantage of probabilistic Bayesian models, which, alongside neural networks, are one of the main tools used in computational cognitive modeling.
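
The grounded Bayesian flavor of such a model can be conveyed with a toy cross-situational word learner; this is a standard illustration of the general approach, not the thesis's actual model, and the scenes, scoring, and names below are invented:

```python
# Toy Bayesian cross-situational word learner (illustrative sketch only).
# A word's referent is inferred from which objects recur across the scenes
# in which the word is heard; credit in each scene is spread uniformly.
from collections import defaultdict

def posterior_over_referents(scenes, word):
    """Approximate P(referent | word) from word-scene co-occurrences."""
    score = defaultdict(float)
    for words, objects in scenes:
        if word in words:
            for obj in objects:
                score[obj] += 1.0 / len(objects)  # uniform likelihood per scene
    z = sum(score.values()) or 1.0
    return {obj: s / z for obj, s in score.items()}

scenes = [({"dog", "ball"}, {"DOG", "BALL"}),
          ({"dog", "runs"}, {"DOG", "PARK"}),
          ({"ball", "red"}, {"BALL", "TABLE"})]
print(posterior_over_referents(scenes, "dog"))
# DOG gets 0.5; BALL and PARK get 0.25 each (dict order may vary):
# the referent that recurs across scenes wins.
```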

Structures and distributions in morphology learning

TLDR
This thesis investigates the acquisition of morphology and develops a cognitively oriented computational framework for studying language acquisition that consists of four components: the linguistic representation, the statistical distribution of the input data, the observed behavior of the human learner, and the performance of the learning algorithm.

Connectionist perspectives on language learning, representation and processing.

TLDR
It is argued that connectionist models can capture many important characteristics of how language is learned, represented, and processed, while also providing new insights into the source of these behavioral patterns.

A computational model of the emergence of early constructions

TLDR
This thesis explores and formalizes the view that grammar learning is driven by meaningful language use in context, and presents a computational model in which all aspects of the language learning problem are reformulated in line with these assumptions.

Cognitive Biases, Linguistic Universals, and Constraint-Based Grammar Learning

TLDR
This study illustrates how ideas from the domains of linguistic theory, computational cognitive science, and machine learning can be synthesized into a model of language learning in which biases range in strength from hard to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals.
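
One common way to cash out this hard-to-soft continuum is a Maximum Entropy (log-linear) grammar, in which each constraint carries a weight and a very large weight behaves like an inviolable constraint. The sketch below uses invented constraint names, violation counts, and weights:

```python
import math

# Maximum Entropy grammar sketch: each candidate output incurs violations of
# weighted constraints; P(candidate) is proportional to exp(-harmony), where
# harmony is the weighted violation sum. Constraint names, violation counts,
# and weights are invented; a very large weight acts like a hard constraint.
weights = {"ONSET": 6.0, "NO-CODA": 1.5}
candidates = {
    "pa.ta": {"ONSET": 0, "NO-CODA": 0},   # no violations
    "pat.a": {"ONSET": 1, "NO-CODA": 1},   # onsetless syllable + closed syllable
}

def maxent_probs(candidates, weights):
    harmony = {c: sum(weights[k] * v for k, v in viol.items())
               for c, viol in candidates.items()}
    z = sum(math.exp(-h) for h in harmony.values())
    return {c: math.exp(-h) / z for c, h in harmony.items()}

print(maxent_probs(candidates, weights))
# 'pa.ta' ~0.999, 'pat.a' ~0.001: soft constraints, near-categorical outcome
```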

Statistical models of learning and using semantic representations

TLDR
This research suggests that general principles of computation over structured knowledge representations illuminate how people make sense of the world around them, and may lead to the development of machines that think more like people do.

Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

TLDR
It is demonstrated in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented, which suggests that people use information about the way in which linguistic input is sampled to guide their learning.
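
The two sampling assumptions are standardly formalized in the likelihood. Under strong sampling, examples are drawn from the hypothesized language itself, so each observation contributes a factor of 1/|h| (the size principle) and the absence of a form becomes indirect negative evidence; under weak sampling, examples are generated independently of the hypothesis, so the likelihood is flat. A minimal sketch, with invented hypothesis sizes:

```python
# Strong vs. weak sampling in Bayesian language learning (standard sketch;
# hypothesis sizes and observation counts are invented). Both hypotheses are
# assumed consistent with every observed example, and the prior is uniform.
def posterior(hypotheses, n_observations, strong=True):
    if strong:
        # Size principle: each example has likelihood 1/|h| under hypothesis h.
        scores = {h: (1.0 / size) ** n_observations
                  for h, size in hypotheses.items()}
    else:
        # Weak sampling: likelihood is independent of |h|.
        scores = {h: 1.0 for h in hypotheses}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

hyps = {"narrow rule (10 forms)": 10, "broad rule (100 forms)": 100}
print(posterior(hyps, 5, strong=True))   # narrow rule ~0.99999: unseen forms count
print(posterior(hyps, 5, strong=False))  # 0.5 / 0.5: no indirect negative evidence
```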

Introduction to the Special Issue: Parsimony and Redundancy in Models of Language

TLDR
The present special issue does not aim to present the arguments for and against the two epistemological stances, or to discuss the evidence that supports either of them; rather, it conceives of linguistic knowledge as being induced from experience.

Computational evaluation of the Traceback Method

TLDR
A rigorous computational evaluation of the Traceback Method is presented, explaining some of the phenomena associated with children's ability to generalize previously heard utterances and generate novel ones, and suggesting directions for improving the method.
...

References

SHOWING 1-10 OF 112 REFERENCES

A probabilistic constraints approach to language acquisition and processing

Language Acquisition and Use: Learning and Applying Probabilistic Constraints

TLDR
An alternative view is emerging from studies of statistical and probabilistic aspects of language, connectionist models, and the learning capacities of infants. It retains the idea that innate capacities constrain language learning, but calls into question whether those capacities include knowledge of grammatical structure.

Statistical Models of Language Learning and Use

TLDR
This paper summarizes recent work on developing statistical models of language that are compatible with the kinds of linguistic structures posited by current linguistic theories, and provides a high-level overview of both the general approach and the methods developed.

Wide-Coverage Probabilistic Sentence Processing

TLDR
It is argued that incremental probabilistic parsing models are, in general, extremely well suited to explaining this dual nature of human linguistic performance: generally good, but occasionally pathological.
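
A common way to make "incremental probabilistic parsing" concrete is surprisal: processing difficulty at word i is modeled as -log P(word_i | preceding words), so a garden-path continuation shows up as a spike. The conditional probabilities below are invented for illustration:

```python
import math

# Surprisal sketch: difficulty at word i is modeled as -log2 P(word_i | prefix).
# Conditional probabilities are invented to mimic the classic garden path
# "the horse raced past the barn fell".
cond_prob = [
    ("the",   0.20), ("horse", 0.05), ("raced", 0.10),
    ("past",  0.30), ("the",   0.40), ("barn",  0.10),
    ("fell",  0.001),  # disambiguating word: reduced-relative reanalysis
]
for word, p in cond_prob:
    print(f"{word:>6}: surprisal = {-math.log2(p):5.2f} bits")
# "fell" costs ~9.97 bits, far above the other words: the pathological case.
```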

A Probabilistic Model of Lexical and Syntactic Access and Disambiguation

TLDR
A single probabilistic algorithm is presented, based on a parallel parser that ranks constructions for access, and interpretations for disambiguation, by their conditional probability. This argues for a more uniform representation of linguistic knowledge, and for the use of probabilistically enriched grammars and interpreters as models of human knowledge and processing of language.
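
The core idea, ranking alternative analyses by their probability, can be sketched with a toy PCFG applied to a prepositional-phrase attachment ambiguity; the rules and probabilities below are invented, not the paper's actual grammar:

```python
# Ranking two parses of "saw the man with the telescope" under a toy PCFG.
# Rule probabilities are invented (other expansions of VP and NP are omitted,
# so the probabilities per left-hand side do not sum to 1 here).
rules = {
    "VP -> V NP PP": 0.2,   # instrument reading: seeing with a telescope
    "VP -> V NP":    0.5,
    "NP -> NP PP":   0.2,   # modifier reading: the man who has a telescope
    "NP -> Det N":   0.6,
    "PP -> P NP":    1.0,
}

def parse_prob(rule_sequence):
    """A parse's probability is the product of its rule probabilities."""
    p = 1.0
    for r in rule_sequence:
        p *= rules[r]
    return p

verb_attach = ["VP -> V NP PP", "NP -> Det N", "PP -> P NP", "NP -> Det N"]
noun_attach = ["VP -> V NP", "NP -> NP PP", "NP -> Det N", "PP -> P NP",
               "NP -> Det N"]
print("verb attachment:", parse_prob(verb_attach))  # 0.072: ranked first
print("noun attachment:", parse_prob(noun_attach))  # 0.036
```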

Head-Driven Statistical Models for Natural Language Parsing

TLDR
Three statistical models for natural language parsing are described, leading to approaches in which a parse tree is represented as the sequence of decisions corresponding to a head-centered, top-down derivation of the tree.
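
The "sequence of decisions" can be sketched as follows: each node first generates its head child, then generates modifiers outward on each side until a STOP symbol, and the tree probability is the product of these decisions. The probability tables below are invented and heavily simplified relative to the actual models (which also condition on head words, distance, and subcategorization):

```python
# Sketch of a head-driven parse-tree probability: score a tree for
# "[S [NP ...] [VP [V ...] [NP ...]]]" as a product of generation decisions.
P_head = {("S", "VP"): 0.9, ("VP", "V"): 0.8}        # P(head child | parent)
P_mod = {("S", "VP", "L", "NP"):   0.7,              # P(modifier | parent, head, side)
         ("S", "VP", "L", "STOP"): 0.3,
         ("VP", "V", "R", "NP"):   0.6,
         ("VP", "V", "R", "STOP"): 0.4}

def tree_prob():
    p = 1.0
    p *= P_head[("S", "VP")]             # S chooses VP as its head child
    p *= P_mod[("S", "VP", "L", "NP")]   # subject NP generated to the left
    p *= P_mod[("S", "VP", "L", "STOP")] # stop generating left modifiers
    p *= P_head[("VP", "V")]             # VP chooses V as its head child
    p *= P_mod[("VP", "V", "R", "NP")]   # object NP generated to the right
    p *= P_mod[("VP", "V", "R", "STOP")] # stop generating right modifiers
    return p

print(tree_prob())  # ~0.0363
```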

Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars

TLDR
A learning algorithm is described that takes as input a training set of sentences labeled with expressions in the lambda calculus and induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence.
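
The log-linear component can be sketched independently of the grammar induction: each candidate analysis y of sentence x is scored by exp of a weighted feature sum, and the scores are normalized into a conditional distribution. The features and weights below are invented stand-ins for the lexical-entry and rule-application features such models typically use:

```python
import math

# Log-linear model over candidate (syntactic, semantic) analyses of a
# sentence: P(y | x) is proportional to exp(w . f(x, y)).
def log_linear(candidates, w):
    scores = {y: math.exp(sum(w.get(k, 0.0) * v for k, v in f.items()))
              for y, f in candidates.items()}
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}

w = {"lex:flights->flight": 1.2, "rule:fwd-application": 0.3,
     "lex:to->lambda": 0.8}
candidates = {                      # feature counts f(x, y) per analysis y
    "λx.flight(x) ∧ to(x, boston)": {"lex:flights->flight": 1,
                                     "lex:to->lambda": 1,
                                     "rule:fwd-application": 2},
    "λx.flight(x)":                 {"lex:flights->flight": 1,
                                     "rule:fwd-application": 1},
}
print(log_linear(candidates, w))  # the fuller analysis gets ~0.75 of the mass
```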

Foundations of statistical natural language processing

TLDR
This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear, providing broad but rigorous coverage of mathematical and linguistic foundations as well as detailed discussion of statistical methods, and allowing students and researchers to construct their own implementations.
...