
<?xml version="1.0" standalone="yes"?>
<Paper uid="H92-1006">
  <Title>SUBJECT-BASED EVALUATION MEASURES FOR INTERACTIVE SPOKEN LANGUAGE SYSTEMS</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
ABSTRACT
</SectionTitle>
    <Paragraph position="0"> The DARPA Spoken Language effort has profited greatly from its emphasis on tasks and common evaluation metrics. Common, standardized evaluation procedures have helped the community to focus research effort, to measure progress, and to encourage communication among participating sites. The task and the evaluation metrics, however, must be consistent with the goals of the Spoken Language program, namely interactive problem solving. Our evaluation methods have evolved with the technology, moving from evaluation of read speech from a fixed corpus through evaluation of isolated canned sentences to evaluation of spontaneous speech in context in a canned corpus. A key component missed in current evaluations is the role of subject interaction with the system.</Paragraph>
    <Paragraph position="1"> Because of the great variability across subjects, however, it is necessary to use either a large number of subjects or a within-subject design. This paper proposes a within-subject design comparing the results of a software-sharing exercise carried out jointly by M1T and SRI.</Paragraph>
  </Section>
class="xml-element"></Paper>