
Convergence Bounds for Language Evolution by Iterated Learning

Anna N. Rafferty (rafferty@cs.berkeley.edu), Computer Science Division, University of California, Berkeley, CA 94720 USA
Thomas L. Griffiths (tom_griffiths@berkeley.edu), Department of Psychology, University of California, Berkeley, CA 94720 USA
Dan Klein (klein@cs.berkeley.edu), Computer Science Division, University of California, Berkeley, CA 94720 USA

Abstract: Similarities between human languages are often taken as evidence of…
Cultural Transmission and Inductive Biases in Populations of Bayesian Learners
The aim of this work is to test whether the strong results obtained through analytical methods in this framework also extend to finite populations of Bayesian learners, and to investigate what other effects richer population dynamics have on the results.
Iterated Learning in Dynamic Social Networks
This work shows that by varying the lengths of the learning sessions over time or by keeping the networks dynamic, it is possible for iterated learning to endure forever with arbitrarily small loss.
Self-Sustaining Iterated Learning
This work characterize iterated learnability in geometric terms and shows how a slight, steady increase in the lengths of the training sessions ensures self-sustainability for any discrete language class.
Emergent Generalization in Bayesian Agents Using Iterated Learning
Calculating likelihood based on agent comprehension is shown to result in the emergence of significantly better generalization, and the beneficial effect of a description-length based prior probability is also demonstrated.
Connecting human and machine learning via probabilistic models of cognition
This work will talk about how probabilistic models can be used to identify the assumptions of learners, learn at different levels of abstraction, and link the inductive biases of individuals to cultural universals.
Learning bias in stress windows: Frequency and attestation
This paper shows that the relationship between window size and frequency is not surprising given learning considerations, and that this bias emerges from an iterated learning model using an online learner of Maximum Entropy grammar, incorporating current approaches to phonological grammar and learning to explicitly model frequency.
Evolution and impact of bias in human and machine learning algorithm interaction
It is argued that algorithmic bias interacts with humans in an iterative manner, with long-term effects on algorithms' performance, and that three different iterated bias modes, as well as initial training-data class imbalance and human action, affect the models learned by machine learning algorithms.
The Effects of Cultural Transmission Are Modulated by the Amount of Information Transmitted
Information changes as it is passed from person to person, with this process of cultural transmission allowing the minds of individuals to shape the information that they transmit. We present…
How many trials does it take to collect all different types of a population with probability p
Coupons are collected one at a time (independently and with replacement) from a population containing N distinct types. This process is repeated until all N different types (coupons) have been collected.
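As background (standard coupon-collector results, not taken from this entry's snippet): the expected number of draws T needed to collect all N types, and a common tail bound, are

```latex
\mathbb{E}[T] \;=\; N \sum_{i=1}^{N} \frac{1}{i} \;=\; N H_N \;\approx\; N \ln N + \gamma N,
\qquad
\Pr\!\left[\,T > N \ln N + cN\,\right] \;\le\; e^{-c}.
```

Here H_N is the N-th harmonic number and γ ≈ 0.5772 is the Euler–Mascheroni constant.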
Studying and handling iterated algorithmic biases in human and machine learning interaction.
This research presents a novel approach called “Smartphones,” which combines the power of the human brain and the computer to solve the challenge of human-machine interaction.


A Bayesian View of Language Evolution by Iterated Learning
This paper analyzes how languages change as the result of a particular form of interaction: agents learning from one another. It shows that, when the learners are rational Bayesian agents, this process of iterated learning converges to the prior distribution over languages assumed by those learners.
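The convergence claim can be illustrated with a minimal simulation (a sketch with made-up two-language numbers, not the paper's own setup): when each learner samples a hypothesis from its posterior after seeing one datum from its teacher, the resulting Markov chain over hypotheses has the prior as its stationary distribution.

```python
import random

def iterated_learning(prior, likelihoods, n_generations, rng):
    """One chain of iterated learning with posterior-sampling learners."""
    # The first teacher's hypothesis is drawn from the prior.
    h = rng.choices(list(prior), weights=list(prior.values()))[0]
    for _ in range(n_generations):
        # Teacher produces a single datum from its language.
        d = rng.choices(list(likelihoods[h]),
                        weights=list(likelihoods[h].values()))[0]
        # Learner samples a new hypothesis from the posterior given that datum.
        post = {hp: prior[hp] * likelihoods[hp][d] for hp in prior}
        total = sum(post.values())
        h = rng.choices(list(post),
                        weights=[w / total for w in post.values()])[0]
    return h

rng = random.Random(0)
prior = {"A": 0.7, "B": 0.3}                     # hypothetical prior
likelihoods = {"A": {"x": 0.9, "y": 0.1},        # hypothetical languages
               "B": {"x": 0.2, "y": 0.8}}
# Final hypotheses across many independent chains should be distributed
# (approximately) according to the prior, per the convergence result.
finals = [iterated_learning(prior, likelihoods, 50, rng) for _ in range(2000)]
frac_A = finals.count("A") / len(finals)
print(f"P(A) after 50 generations: {frac_A:.2f} (prior: 0.70)")
```

The hypothesis names, likelihood values, and chain lengths above are illustrative assumptions; the convergence-to-prior behavior is what the cited analysis establishes for posterior-sampling learners.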
Spontaneous evolution of linguistic structure-an iterated learning model of the emergence of regularity and irregularity
  • S. Kirby
  • IEEE Trans. Evol. Comput.
  • 2001
A computationally implemented model of the transmission of linguistic behavior over time, in which a realistic distribution of string lengths and patterns of stable irregularity emerges, suggesting that the ILM is a good model for the evolution of some of the fundamental features of human language.
Innateness and culture in the evolution of language
It is shown that cultural transmission can magnify weak biases into strong linguistic universals, undermining one of the arguments for strong innate constraints on language learning.
Iterated Learning: A Framework for the Emergence of Language
Two models are presented, based upon the iterated learning framework, which show that the poverty of the stimulus available to language learners leads to the emergence of linguistic structure.
Explaining Language Universals
Part 1 Introduction: explaining language universals, John A. Hawkins. Part 2 Innateness and Learnability: the innateness hypothesis, Teun Hoekstra and Jan G. Kooij; language acquisition schemas…
Random Walks on Finite Groups
This article gives a general overview of Markov chains on finite sets, and how the structure of a particular class of groups relates to the mixing time of natural random walks on those groups.
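For context (standard definitions, not taken from the article's snippet), the mixing time referenced here measures how quickly a chain's distribution approaches its stationary distribution π in total variation distance:

```latex
\|\mu - \nu\|_{TV} \;=\; \tfrac{1}{2} \sum_{x} |\mu(x) - \nu(x)|,
\qquad
t_{\mathrm{mix}}(\varepsilon) \;=\; \min\left\{\, t : \max_{x} \|P^t(x,\cdot) - \pi\|_{TV} \le \varepsilon \,\right\}.
```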
Optimality Theory: Constraint Interaction in Generative Grammar
Prefatory Note. Acknowledgments. 1. Preliminaries: Background and Overview. Optimality. Overall Structure of the Argument. Overview of Part I. 2. Optimality in Grammar: Core Syllabification in…
Aspects of the Theory of Syntax
Methodological preliminaries of generative grammars as theories of linguistic competence; theory of performance; organization of a generative grammar; justification of grammar; descriptive and explanatory theories; evaluation procedures; linguistic theory and language learning.
Parameters and universals
This is a collection of previously published essays on comparative syntax by the distinguished linguist Richard Kayne. The papers cover issues of comparative syntax as they are applied to French…
Universals of Language