
- Paul Boersma, Bruce Hayes
- 1999

The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm, which initiated the learnability research program for Optimality Theory. We argue that the…
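The comparison in this abstract turns on the GLA's update rule: when the learner's grammar produces a form that mismatches an observed datum, every constraint's ranking value is nudged by a small plasticity. A minimal sketch in Python (the function name, dictionary layout, and plasticity default are illustrative assumptions, not Boersma and Hayes' implementation):

```python
def gla_update(ranking, error_viols, datum_viols, plasticity=0.1):
    """One Gradual Learning Algorithm step (illustrative sketch).

    ranking: constraint name -> ranking value.
    error_viols: violations incurred by the learner's (wrong) output.
    datum_viols: violations incurred by the observed correct form.
    Constraints that penalize the error more than the datum are promoted;
    constraints that penalize the datum more are demoted.
    """
    new = dict(ranking)
    for c in ranking:
        e = error_viols.get(c, 0)
        d = datum_viols.get(c, 0)
        if e > d:
            new[c] += plasticity   # favors the datum: promote
        elif e < d:
            new[c] -= plasticity   # favors the error: demote
    return new
```

Unlike Constraint Demotion, which only demotes, this rule also promotes constraints that penalize the learner's error, which is central to the comparison the article draws.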

- Paul Boersma
- 1997

Variation is controlled by the grammar, though indirectly: it follows automatically from the robustness requirement of learning. If every constraint in an Optimality-Theoretic grammar has a ranking value along a continuous scale, and the disharmony of a constraint at evaluation time is randomly distributed about this value, the phenomenon of optionality in…
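The mechanism described here — a ranking value on a continuous scale plus randomly distributed disharmony at evaluation time — can be sketched in a few lines of Python (the names and the Gaussian noise default are illustrative assumptions):

```python
import random

def evaluation_order(ranking, noise=2.0, rng=random):
    """Sample one evaluation-time constraint order from a stochastic
    OT grammar (illustrative sketch).

    Each constraint's disharmony is its ranking value plus Gaussian
    noise; ranking by disharmony lets close-ranked constraints swap
    order from one evaluation to the next.
    """
    disharmony = {c: v + rng.gauss(0.0, noise) for c, v in ranking.items()}
    return sorted(disharmony, key=disharmony.get, reverse=True)
```

Constraints whose ranking values lie many noise units apart are effectively categorically ranked, while close values yield the optionality the abstract describes.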

- Paul Boersma
- 1993

We present a straightforward and robust algorithm for periodicity detection, working in the lag (autocorrelation) domain. When it is tested for periodic signals and for signals with additive noise or jitter, it proves to be several orders of magnitude more accurate than the methods commonly used for speech analysis. This makes our method capable of…
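A bare-bones illustration of lag-domain periodicity detection as the abstract describes it: score candidate lags by normalized autocorrelation and keep the best one. This is a simplified sketch; Boersma's published method additionally divides out the analysis window's own autocorrelation and interpolates between lags, both of which are omitted here:

```python
def best_lag(signal, min_lag, max_lag):
    """Return the lag with the highest normalized autocorrelation
    (bare-bones sketch of lag-domain periodicity detection).
    """
    n = len(signal)
    r0 = sum(x * x for x in signal)          # lag-0 energy, for normalization
    best, best_r = min_lag, float("-inf")
    for tau in range(min_lag, max_lag + 1):
        r = sum(signal[i] * signal[i + tau] for i in range(n - tau)) / r0
        if r > best_r:
            best, best_r = tau, r
    return best
```

For a signal with period T samples, the autocorrelation peaks at lag T, so the returned lag estimates the period directly.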

This paper investigates a gradual on-line learning algorithm for Harmonic Grammar. By adapting existing convergence proofs for perceptrons, we show that for any nonvarying target language, Harmonic-Grammar learners are guaranteed to converge to an appropriate grammar, if they receive complete information about the structure of the learning data. We also…
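The perceptron connection in this abstract comes down to a weight update over violation vectors. A minimal sketch (the list layout and learning rate are illustrative assumptions):

```python
def hg_update(weights, datum_viols, error_viols, rate=1.0):
    """One perceptron-style Harmonic Grammar update (illustrative sketch).

    Each weight moves by rate * (error violations - datum violations):
    constraints that penalize the learner's wrong output gain weight,
    constraints that penalize the observed datum lose weight.
    """
    return [w + rate * (e - d)
            for w, d, e in zip(weights, datum_viols, error_viols)]
```

Because this is the perceptron rule applied to violation-difference vectors, existing perceptron convergence proofs can be adapted, which is the route the abstract describes for a nonvarying target language with fully structured learning data.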

- Paul Boersma
- 2008

This paper shows that Error-Driven Constraint Demotion (EDCD), an error-driven learning algorithm proposed by Tesar (1995) for Prince and Smolensky's (1993) version of Optimality Theory, can fail to converge to a totally ranked hierarchy of constraints, unlike the earlier non-error-driven learning algorithms proposed by Tesar and Smolensky (1993). The cause…

A series of experiments shows that Spanish learners of English acquire the ship-sheep contrast in a way specific to their target dialect (Scottish or Southern British English) and that many learners exhibit a perceptual strategy found in neither Spanish nor English. To account for these facts as well as for the findings of earlier research on second…

- Paul Boersma, Clara Levelt
- 1999

We will show that the Gradual Constraint-Ranking Learning Algorithm is capable of modelling attested acquisition orders and learning curves in a realistic manner, thus bridging the gap that used to exist between formal computational learning algorithms and actual acquisition data. Levelt, Schiller, and Levelt (to appear) found that the acquisition order for…

- Arto Anttila (reviewed by Paul Boersma)

Variation, preferences, and subregularities can be derived from one and the same grammar if we assume that grammars are partial orderings of violable constraints. This is the claim defended in this dissertation. The argument is based on detailed analyses of the Finnish nominal declension. 1. Free variation The Finnish genitive plural has…

Second-language (L2) speech perception research has identified several patterns in the non-native perception of vowel contrasts. The models that account for non-native perception (Perceptual Assimilation Model, Best 1995; and Speech Learning Model, Flege 1995; for a summary of these and other non-native perception models see introduction in Best, McRoberts…

- Paul Boersma
- 1999

This tutorial gives a step-by-step introduction to stochastic OT grammars and shows how you can use the Gradual Learning Algorithm available in the Praat program to rank Optimality-Theoretic constraints in ordinal and stochastic grammars. It also describes how you can draw Optimality-Theoretic tableaus and simulate Optimality-Theoretic…