In recent years, Bayesian models have become increasingly popular as a way of understanding human cognition. Ideal learner Bayesian models assume that cognition can be usefully understood as optimal behavior under uncertainty, a hypothesis that has been supported by a number of modeling studies across various domains. The models in these studies aim to …
1. Why look at language acquisition? Though it is not always directly stated, the debate at the center of this volume is in many ways driven by language acquisition considerations. Long-distance dependencies are themselves relatively complex, as they involve context-sensitive grammatical operations (e.g., wh-movement or slash-passing). The existence …
Purely statistical models have accounted for infants' early ability to segment words out of fluent speech, with Bayesian models performing best (Goldwater et al. 2009). Yet these models often incorporate unlikely assumptions, such as infants having unlimited processing and memory resources and knowing the full inventory of phonemes in their native language. …
I completely agree with Ambridge, Pine, and Lieven (AP&L) that anyone proposing a learning-strategy component needs to demonstrate precisely how that component helps solve the language acquisition task. To this end, I discuss how computational modeling is a tool well suited to doing exactly this, and that it has the added benefit of highlighting hidden …
Subtle social information is available in text, such as a speaker's emotional state, intentions, and attitude, but current information extraction systems are unable to extract this information at the level that humans can. We describe a methodology for creating databases of messages annotated with social information based on interactive games between humans …
The induction problems facing language learners have played a central role in debates about the types of learning biases that exist in the human brain. Many linguists have argued that some of the learning biases necessary to solve these language induction problems must be both innate and language-specific (i.e., the Universal Grammar (UG) …
Information extraction researchers have recently recognized that more subtle information beyond the basic semantic content of a message can be communicated via linguistic features in text, such as sentiments, emotions, perspectives, and intentions. One way to describe this information is that it represents something about the generator's mental state, which …
The frequent occurrence of divergences (structural differences between languages) presents a great challenge for statistical word-level alignment. In this paper, we introduce DUSTer, a method for systematically identifying common divergence types and transforming an English sentence structure to bear a closer resemblance to that of another language. Our …