<?xml version="1.0" standalone="yes"?> <Paper uid="P06-1026"> <Title>Learning the Structure of Task-driven Human-Human Dialogs</Title> <Section position="5" start_page="201" end_page="201" type="relat"> <SectionTitle> 3 Related Work </SectionTitle> <Paragraph position="0"> In this paper, we discuss methods for automatically creating models of dialog structure using dialog act and task/subtask information. Relevant related work includes research on automatic dialog act tagging and stochastic dialog management, and on building hierarchical models of plans using task/subtask information.</Paragraph> <Paragraph position="1"> There has been considerable research on statistical dialog act tagging (Core, 1998; Jurafsky et al., 1998; Poesio and Mikheev, 1998; Samuel et al., 1998; Stolcke et al., 2000; Hastie et al., 2002). Several disambiguation methods (n-gram models, hidden Markov models, maximum entropy models) that include a variety of features (cue phrases, speaker ID, word n-grams, prosodic features, syntactic features, dialog history) have been used. In this paper, we show that the use of extended context gives improved results for this task.</Paragraph> <Paragraph position="2"> Approaches to dialog management include AI-style plan recognition-based approaches (e.g.</Paragraph> <Paragraph position="3"> (Sidner, 1985; Litman and Allen, 1987; Rich and Sidner, 1997; Carberry, 2001; Bohus and Rudnicky, 2003)) and information state-based approaches (e.g. (Larsson et al., 1999; Bos et al., 2003; Lemon and Gruenstein, 2004)). In recent years, there has been considerable research on how to automatically learn models of both types from data. 
Researchers who treat dialog as a sequence of information states have used reinforcement learning and/or Markov decision processes to build stochastic models for dialog management that are evaluated by means of dialog simulations (Levin and Pieraccini, 1997; Scheffler and Young, 2002; Singh et al., 2002; Williams et al., 2005; Henderson et al., 2005; Frampton and Lemon, 2005). Most recently, Henderson et al. showed that it is possible to automatically learn good dialog management strategies from automatically labeled data over a large potential space of dialog states (Henderson et al., 2005); and Frampton and Lemon showed that the use of context information (the user's last dialog act) can improve the performance of learned strategies (Frampton and Lemon, 2005). In this paper, we combine the use of automatically labeled data and extended context for automatic dialog modeling.</Paragraph> <Paragraph position="4"> Other researchers have looked at probabilistic models for plan recognition, such as extensions of Hidden Markov Models (Bui, 2003) and probabilistic context-free grammars (Alexandersson and Reithinger, 1997; Pynadath and Wellman, 2000).</Paragraph> <Paragraph position="5"> In this paper, we compare hierarchical grammar-style and flat chunking-style models of dialog.</Paragraph> <Paragraph position="6"> In recent research, Hardy (2004) used a large corpus of transcribed and annotated telephone conversations to develop the Amitiés dialog system. For their dialog manager, they trained separate task and dialog act classifiers on this corpus. For task identification they report an accuracy of 85% (true task is one of the top 2 results returned by the classifier); for dialog act tagging they report 86% accuracy.</Paragraph> </Section> </Paper>