Benjamin Börschinger

It is often assumed that 'grounded' learning tasks are beyond the scope of grammatical inference techniques. In this paper, we show that the grounded task of learning a semantic parser from ambiguous training data as discussed in Kim and Mooney (2010) can be reduced to a Probabilistic Context-Free Grammar learning task in a way that gives state of the art …
We summarize the accomplishments of a multidisciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for …
We present a novel extension to a recently proposed incremental learning algorithm for the word segmentation problem originally introduced in Goldwater (2006). By adding rejuvenation to a particle filter, we are able to considerably improve its performance, both in terms of finding higher-probability and higher-accuracy solutions.
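A minimal sketch of the general idea of rejuvenation in a particle filter, assuming an incremental learner that processes one utterance at a time; the Particle class, its weight update, and the gibbs_resegment move are hypothetical placeholders, not the actual model from the paper:

```python
import copy
import math
import random

class Particle:
    """Hypothetical stand-in for one hypothesis (e.g. a lexicon/segmentation state)."""
    def __init__(self):
        self.state = {}
        self.log_weight = 0.0

    def extend(self, utterance):
        """Incorporate one new utterance; return its predictive probability."""
        return 1e-3  # placeholder: a real model would sample a segmentation here

    def gibbs_resegment(self, utterance):
        """MCMC move that re-samples the analysis of an already-seen utterance."""
        pass  # placeholder

def particle_filter(utterances, n_particles=100, rejuvenation_window=50):
    particles = [Particle() for _ in range(n_particles)]
    history = []
    for utt in utterances:
        history.append(utt)
        # weight update: how well does each particle predict the new utterance?
        for p in particles:
            p.log_weight += math.log(p.extend(utt))
        # multinomial resampling proportional to the weights
        weights = [math.exp(p.log_weight) for p in particles]
        total = sum(weights)
        particles = [copy.deepcopy(p) for p in
                     random.choices(particles, weights=[w / total for w in weights],
                                    k=n_particles)]
        for p in particles:
            p.log_weight = 0.0
            # rejuvenation: revisit a sample of past utterances with MCMC moves
            for old in random.sample(history, min(rejuvenation_window, len(history))):
                p.gibbs_resegment(old)
    return particles
```

The rejuvenation step matters because multinomial resampling tends to collapse the particle set onto a few copies of the same hypothesis; letting each surviving particle revise its analyses of earlier utterances with MCMC moves restores that lost diversity.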
Stress has long been established as a major cue in word segmentation for English infants. We show that enabling a current state-of-the-art Bayesian word segmentation model to take advantage of stress cues noticeably improves its performance. We find that the improvements range from 10% to 4%, depending on both the use of phonotactic cues and, to a lesser …
Studies of computational models of language acquisition depend in large part on the input available for experiments. In this paper, we study the effect that input size has on the performance of word segmentation models embodying different kinds of linguistic assumptions. Because currently available corpora for word segmentation are not suited for …
Word-final /t/-deletion refers to a common phenomenon in spoken English where words such as /wEst/ "west" are pronounced as [wEs] "wes" in certain contexts. Phonological variation like this is common in naturally occurring speech. Current computational models of unsupervised word segmentation usually assume idealized input that is devoid of these kinds …
Using the example of Murrinh-Patha, Seiss (2011) illustrates how Australian Aboriginal languages can shed light on the morphology-syntax interface: one aspect of their polysynthetic nature is that information often encoded in phrases and clauses in other languages is instead found in a single morphological word. In this paper, we look at another instance …
This paper extends existing word segmentation models to take non-linguistic context into account. It improves the token F-score of a top-performing segmentation model by 2.5% on a 27k-utterance dataset. We posit that word segmentation is easier in context because the learner is not trying to access irrelevant lexical items. We use topics from a Latent Dirichlet Allocation …
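A hedged sketch of how per-utterance topic labels might be derived with LDA, assuming each utterance is paired with a bag-of-words description of the non-linguistic situation; the toy situations and the use of scikit-learn are illustrative assumptions, and how the resulting label conditions the segmentation model is left abstract:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy situation descriptions, one per utterance (not the actual corpus)
situations = [
    "ball dog throw garden",
    "bottle milk feed kitchen",
    "ball dog fetch garden",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(situations)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)   # one topic distribution per situation

# attach the most probable topic to each utterance as its context label
context_labels = doc_topics.argmax(axis=1)
print(context_labels)                    # e.g. [0 1 0]
```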
Cross-linguistic studies on unsupervised word segmentation have consistently shown that English is easier to segment than other languages. In this paper, we propose an explanation of this finding based on the notion of segmentation ambiguity. We show that English has a very low segmentation ambiguity compared to Japanese and that this difference correlates …
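One simple way to make segmentation ambiguity concrete, not necessarily the measure used in the paper, is to count how many distinct ways an unsegmented utterance can be exhaustively parsed into items from a reference lexicon; the lexicon and utterance below are toy assumptions:

```python
def count_segmentations(utterance, lexicon):
    """Dynamic program: parses[i] = number of ways to segment utterance[:i]."""
    n = len(utterance)
    parses = [0] * (n + 1)
    parses[0] = 1
    for end in range(1, n + 1):
        for start in range(end):
            if parses[start] and utterance[start:end] in lexicon:
                parses[end] += parses[start]
    return parses[n]

# toy example: a small lexicon that can carve the utterance up in two ways
lexicon = {"a", "an", "ice", "nice", "man"}
print(count_segmentations("anicemam".replace("m", "m"), lexicon))  # see note below
print(count_segmentations("aniceman", lexicon))  # -> 2 ("an ice man", "a nice man")
```

On this view, a language whose utterances admit many such parses under a plausible lexicon is harder to segment than one whose utterances admit few.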