<?xml version="1.0" standalone="yes"?>
<Paper uid="E06-1037">
  <Title>Using Reinforcement Learning to Build a Better Model of Dialogue State</Title>
  <Section position="1" start_page="0" end_page="0" type="abstr">
    <SectionTitle>
Abstract
</SectionTitle>
    <Paragraph position="0"> Given the growing complexity of tasks that spoken dialogue systems are trying to handle, Reinforcement Learning (RL) has been increasingly used as a way of automatically learning the best policy for a system to follow. While most work has focused on generating better policies for a dialogue manager, very little work has been done in using RL to construct a better dialogue state. This paper presents an RL approach for determining which dialogue features are important to a spoken dialogue tutoring system. Our experiments show that dialogue factors such as dialogue acts, emotion, repeated concepts, and performance play a significant role in tutoring and should be taken into account when designing dialogue systems.</Paragraph>
  </Section>
</Paper>