<?xml version="1.0" standalone="yes"?> <Paper uid="P06-1024"> <Title>Learning More Effective Dialogue Strategies Using Limited Dialogue Move Features</Title> <Section position="2" start_page="0" end_page="0" type="abstr"> <SectionTitle> Abstract </SectionTitle> <Paragraph position="0"> We explore the use of restricted dialogue contexts in reinforcement learning (RL) of effective dialogue strategies for information-seeking spoken dialogue systems (e.g. COMMUNICATOR (Walker et al., 2001)). The contexts we use are richer than those in previous research in this area, e.g.</Paragraph> <Paragraph position="1"> (Levin and Pieraccini, 1997; Scheffler and Young, 2001; Singh et al., 2002; Pietquin, 2004), which use only slot-based information, but are much less complex than the full dialogue Information States explored in (Henderson et al., 2005), for which tractable learning is an issue. We explore how incrementally adding richer features allows learning of more effective dialogue strategies. We use two user simulations learned from COMMUNICATOR data (Walker et al., 2001; Georgila et al., 2005b) to explore the effects of different features on learned dialogue strategies. Our results show that adding the dialogue moves of the last system and user turns increases the average reward of the automatically learned strategies by 65.9% over the original (hand-coded) COMMUNICATOR systems, and by 7.8% over a baseline RL policy that uses only slot-status features. We show that the learned strategies exhibit an emergent focus-switching strategy and effective use of the 'give help' action.</Paragraph> </Section> </Paper>