Corpus ID: 231847003

Child-directed Listening: How Caregiver Inference Enables Children's Early Verbal Communication

Stephan C. Meylan, Ruthe Foushee, Elika Bergelson, Roger Philip Levy
How do adults understand children’s speech? Children’s productions over the course of language development often bear little resemblance to typical adult pronunciations, yet caregivers nonetheless reliably recover meaning from them. Here, we employ a suite of Bayesian models of spoken word recognition to understand how adults overcome the noisiness of child language, showing that communicative success between children and adults relies heavily on adult inferential processes. By evaluating… 
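The abstract's core claim (that caregivers recover intended words by combining prior expectations with a noisy-channel likelihood) can be sketched as a toy Bayesian word recognizer. The lexicon, its frequencies, and the edit-distance-based likelihood below are illustrative assumptions, not the authors' actual models:

```python
import math

# Toy lexicon with made-up relative frequencies, standing in for
# the listener's prior P(word) over intended words.
LEXICON = {"dog": 0.4, "duck": 0.3, "sock": 0.2, "doll": 0.1}

def edit_distance(a, b):
    # Classic Levenshtein distance via a rolling dynamic-programming row.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def posterior(heard, lexicon, noise=0.5):
    # P(word | heard) ∝ P(word) * P(heard | word), with a likelihood
    # that decays exponentially in edit distance (an assumed noise model).
    scores = {w: p * math.exp(-edit_distance(heard, w) / noise)
              for w, p in lexicon.items()}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

# A childlike mispronunciation "gog" is still most plausibly "dog":
probs = posterior("gog", LEXICON)
print(max(probs, key=probs.get))  # → dog
```

The key point the sketch illustrates: even when the acoustic evidence alone is ambiguous or wrong, a strong prior over plausible intended words lets the listener recover the child's meaning.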



Word-minimality, Epenthesis and Coda Licensing in the Early Acquisition of English
The results suggest that learners of English may exhibit an early awareness of moraic structure at the level of the syllable, but that language-specific constraints regarding word-minimality may be acquired later than originally thought.
Rational integration of noisy evidence and prior semantic expectations in sentence interpretation
Four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task are evaluated, strongly suggesting that human language relies on rational statistical inference over a noisy channel.
MacArthur‐Bates Communicative Development Inventories
There are multiple means for obtaining information about how children develop language. They can be observed in natural play situations, elicitation techniques may be employed, or parents can report
Patterns of English phoneme confusions by native and non-native listeners.
It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
A Noisy-Channel Model of Human Sentence Comprehension under Uncertain Input
It is argued that by explicitly accounting for input-level noise in sentence processing, the model provides solutions for these outstanding problems in the psycholinguistic literature and broadens the scope of theories of human sentence comprehension as rational probabilistic inference.
Shortlist B: a Bayesian model of continuous speech recognition.
Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making.
childes-db: A flexible and reproducible interface to the child language data exchange system
Childes-db is introduced, a database-formatted mirror of CHILDES that improves data accessibility and usability by offering novel interfaces, including browsable web applications and an R application programming interface (API).
Masked Language Model Scoring
RoBERTa reduces an end-to-end LibriSpeech model’s WER by 30% relative and adds up to +1.7 BLEU on state-of-the-art baselines for low-resource translation pairs, with further gains from domain adaptation.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
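The bidirectional, masked-prediction idea behind BERT and masked-LM scoring can be illustrated with a toy model. The tiny "model" below (a smoothed lookup of P(token | left neighbor, right neighbor) estimated from a handful of sentences) is an invented stand-in for a pretrained transformer, and the corpus and smoothing parameters are arbitrary:

```python
import math
from collections import Counter, defaultdict

# Toy corpus standing in for a pretrained model's training data.
CORPUS = [
    "the dog chased the ball",
    "the cat chased the dog",
    "the dog ate the ball",
]

# Count P(token | left neighbor, right neighbor): each position is
# predicted from context on BOTH sides, mimicking masked-LM training.
context_counts = defaultdict(Counter)
for sent in CORPUS:
    toks = ["<s>"] + sent.split() + ["</s>"]
    for i in range(1, len(toks) - 1):
        context_counts[(toks[i - 1], toks[i + 1])][toks[i]] += 1

def pseudo_log_likelihood(sentence, alpha=0.1, vocab_size=20):
    # Mask each token in turn and score it from its bidirectional
    # context; add-alpha smoothing avoids log(0) for unseen events.
    toks = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for i in range(1, len(toks) - 1):
        ctx = context_counts[(toks[i - 1], toks[i + 1])]
        p = (ctx[toks[i]] + alpha) / (sum(ctx.values()) + alpha * vocab_size)
        total += math.log(p)
    return total

# A grammatical sentence scores higher than a scrambled one:
print(pseudo_log_likelihood("the dog chased the ball") >
      pseudo_log_likelihood("ball the chased dog the"))  # → True
```

Summing the log-probability of each token given its masked-out position is the pseudo-log-likelihood used to rescore speech-recognition and translation hypotheses in the masked-LM scoring work cited above; real systems simply replace the lookup table with BERT or RoBERTa.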