<?xml version="1.0" standalone="yes"?>
<Paper uid="W97-0602">
  <Title>Feature A B C D E F G</Title>
  <Section position="2" start_page="0" end_page="0" type="abstr">
    <SectionTitle>
1 Background
</SectionTitle>
    <Paragraph position="0"> Dialogue management systems, particularly those which replace a graphical user interface with a spoken language one, have become increasingly popular. Speech recognition is gradually becoming robust place, and because of this many companies are realising the value of a spoken interface to their products and services. The research community pro7&amp;quot; vides a number of methodologies to the representation of dialogue and its implementation on a computer. Correspondingly, there are a number of design methodologies for building such a system. Despite there many differences, every one contains a common process: an evaluative cycle. Evaluating a dialogue management system is a difficult and often subjective experience. Whilst it is possible to objectively measure recognition performance, evaluation of a dialogue is not as straightforward. Even those systems which exhibit appalling speech recognition performance can nevertheless lead to &amp;quot;successful&amp;quot; dialogues. null</Paragraph>
  </Section>
class="xml-element"></Paper>