Stanley Peters

In designing and building tutorial dialogue systems it is important not only to understand the tactics employed by human tutors but also to understand how tutors decide when to use various tactics. We argue that these decisions are based not only on student problem-solving steps and the content of student utterances, but also on the meta-communicative …
We use directed graphical models (DGMs) to automatically detect decision discussions in multi-party dialogue. Our approach distinguishes between different dialogue act (DA) types based on their role in the formulation of a decision. DGMs enable us to model dependencies, including sequential ones. We summarize decisions by extracting suitable phrases from …
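
As a rough illustration of how sequential dependencies between dialogue acts can be modelled, here is a minimal Python sketch, not the DGMs from the paper; the class inventory and all probabilities below are invented for the example.

import numpy as np

# Illustrative decision dialogue-act classes; not the paper's actual inventory.
STATES = ["none", "issue", "resolution", "agreement"]

# Hand-set toy parameters standing in for learned model potentials.
START = np.log([0.70, 0.20, 0.05, 0.05])
TRANS = np.log([
    [0.80, 0.15, 0.04, 0.01],   # none       -> ...
    [0.30, 0.30, 0.30, 0.10],   # issue      -> ...
    [0.20, 0.10, 0.30, 0.40],   # resolution -> ...
    [0.60, 0.20, 0.10, 0.10],   # agreement  -> ...
])

def viterbi(obs_loglik):
    """Decode the best class sequence; obs_loglik is a (T, K) array of
    per-utterance log-likelihoods from any utterance-level classifier."""
    T, K = obs_loglik.shape
    delta = START + obs_loglik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + TRANS          # scores[i, j]: previous class i -> current class j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[i] for i in reversed(path)]

obs = np.log([[0.7, 0.2, 0.05, 0.05],
              [0.2, 0.6, 0.10, 0.10],
              [0.1, 0.2, 0.50, 0.20],
              [0.1, 0.1, 0.20, 0.60]])
print(viterbi(obs))   # e.g. ['none', 'issue', 'resolution', 'agreement']
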
We address the problem of identifying words and phrases that accurately capture, or contribute to, the semantic gist of decisions made in multi-party human-human meetings. We first describe our approach to modelling decision discussions in spoken meetings and then compare two approaches to extracting information from these discussions. The first one uses an …
We describe a process for automatically detecting decision-making sub-dialogues in transcripts of multi-party, human-human meetings. Extending our previous work on action item identification, we propose a structured approach that takes into account the different roles utterances play in the decision-making process. We show that this structured approach …
We present the first demonstration version of the WITAS dialogue system for multi-modal conversations with autonomous mobile robots, and motivate several innovations currently in development for version II. The human-robot interaction setting is argued to present new challenges for dialogue system engineers, in comparison to previous work in dialogue …
This paper addresses the problem of identifying action items discussed in open-domain conversational speech, and does so in two stages: firstly, detecting the subdialogues in which action items are proposed, discussed and committed to; and secondly, extracting the phrases that accurately capture or summarize the tasks they involve. While the detection …
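
A minimal Python sketch of the two-stage idea, using invented cue-word heuristics in place of the trained detectors and extractors described above:

import re
from typing import List, Tuple

CUE_WORDS = {"will", "should", "need", "send", "prepare", "by", "deadline"}

def detect_action_item_spans(utterances: List[str], window: int = 4,
                             min_hits: int = 2) -> List[Tuple[int, int]]:
    """Stage 1 (toy stand-in): flag windows of the transcript in which
    enough utterances contain commitment cue words."""
    spans = []
    for start in range(0, len(utterances), window):
        chunk = utterances[start:start + window]
        hits = sum(bool(CUE_WORDS & set(re.findall(r"\w+", u.lower()))) for u in chunk)
        if hits >= min_hits:
            spans.append((start, start + len(chunk)))
    return spans

def extract_task_phrases(utterances: List[str], span: Tuple[int, int]) -> List[str]:
    """Stage 2 (toy stand-in): keep the clause that follows a commitment cue
    as a crude summary of the task."""
    phrases = []
    for u in utterances[span[0]:span[1]]:
        m = re.search(r"\b(will|should|need to)\b(.+)", u, flags=re.I)
        if m:
            phrases.append(m.group(2).strip())
    return phrases

meeting = ["so who takes the slides",
           "I will prepare the slides by Friday",
           "great",
           "and John should send the budget to the client"]
for span in detect_action_item_spans(meeting):
    print(extract_task_phrases(meeting, span))
# ['prepare the slides by Friday', 'send the budget to the client']
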
We explain dialogue management techniques for collaborative activities with humans, involving multiple concurrent tasks. Conversational context for multiple concurrent activities is represented using a “Dialogue Move Tree” and an “Activity Tree”, which together support multiple interleaved threads of dialogue about different activities and their execution status. We …
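
A minimal Python sketch of how the two structures might be represented; the node fields and the attachment rule below are simplifying assumptions, not the system's actual implementation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ActivityNode:
    """A task on the Activity Tree, with its execution status."""
    name: str
    status: str = "planned"           # planned / current / done / failed
    children: List["ActivityNode"] = field(default_factory=list)

@dataclass
class MoveNode:
    """A dialogue move on the Dialogue Move Tree, optionally linked to an activity."""
    move: str                         # e.g. "command", "wh-question", "report"
    utterance: str
    activity: Optional[ActivityNode] = None
    children: List["MoveNode"] = field(default_factory=list)

class DialogueMoveTree:
    def __init__(self):
        self.root = MoveNode("root", "")
        self.open_nodes: List[MoveNode] = []

    def attach(self, move: MoveNode) -> None:
        """Attach an incoming move to the most recent open node about the
        same activity; otherwise start a new thread under the root."""
        for node in reversed(self.open_nodes):
            if node.activity is move.activity and node.activity is not None:
                node.children.append(move)
                break
        else:
            self.root.children.append(move)
        self.open_nodes.append(move)

tree = DialogueMoveTree()
search = ActivityNode("locate the red car", status="current")
tree.attach(MoveNode("command", "fly to the tower and look for a red car", activity=search))
tree.attach(MoveNode("wh-question", "what is your altitude?"))                        # new thread
tree.attach(MoveNode("report", "I can see a red car by the tower", activity=search))  # resumes the first thread
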
We explore the problem of resolving the second person English pronoun you in multi-party dialogue, using a combination of linguistic and visual features. First, we distinguish generic and referential uses, then we classify the referential uses as either plural or singular, and finally, for the latter cases, we identify the addressee. In our first set of …
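
A toy Python sketch of the three-step cascade; the features and decision rules here are invented placeholders for the linguistic and visual classifiers used in the experiments.

from dataclasses import dataclass

@dataclass
class YouFeatures:
    """Illustrative features only; the actual models combine richer cues."""
    in_conditional: bool       # generic 'you' often occurs in conditionals ("if you ...")
    plural_marked: bool        # e.g. "you all", "you guys"
    gaze_target: str           # participant the speaker looks at most while speaking
    prev_speaker: str          # previous speaker, a strong addressee cue

def resolve_you(f: YouFeatures) -> str:
    # Step 1: generic vs. referential use.
    if f.in_conditional:
        return "generic"
    # Step 2: referential plural vs. singular.
    if f.plural_marked:
        return "referential: whole group"
    # Step 3: singular reference -> identify the addressee, preferring gaze
    # and falling back to the previous speaker.
    return f"referential: {f.gaze_target or f.prev_speaker}"

print(resolve_you(YouFeatures(in_conditional=False, plural_marked=False,
                              gaze_target="B", prev_speaker="C")))
# referential: B
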