<?xml version="1.0" standalone="yes"?>
<Paper uid="H93-1033">
  <Title>Generic Plan Recognition for Dialogue Systems</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1. Introduction
</SectionTitle>
    <Paragraph position="0"> Plan recognition is an essential part of any dialogue system.</Paragraph>
    <Paragraph position="1"> Traditional approaches to plan recognition are inadequate in one of two ways. Those that are formally well-specified tend to be highly restricted in the phenomena they can accomodate and are therefore unsuitable for a general purpose dialogue system. On the other hand, the heuristically-motivated systems have been difficult to formalize and hence to understand. In both cases, the representation of plans is insufficient for a collaborative dialogue-based system.</Paragraph>
    <Paragraph position="2"> The research reported here is part of the TRAINS project \[1\].</Paragraph>
    <Paragraph position="3"> The goal of this project is an intelligent planning assistant that is conversationally proficient in natural language. In this paper we concentrate on the plan recognition procedures of the domain plan reasoner component of the system.</Paragraph>
    <Paragraph position="4"> As examples of the phenomena that arise in discourse and affect plan recognition, consider the following utterances gathered from TRAINS dialogues:  1. Utterances that suggest courses of action, e.g., (a) Send engine E3 to Dansville.</Paragraph>
    <Paragraph position="5"> (b) Move the oranges to Avon and unload them.</Paragraph>
    <Paragraph position="6"> This is the prototypical case studied in the literature, and most systems are limited to handling only this case.</Paragraph>
    <Paragraph position="7"> 2. Utterances that identify relevant objects to use, e.g., (a) Let's use engine E3.</Paragraph>
    <Paragraph position="8"> (b) There's an OJ factory at Dansville.</Paragraph>
    <Paragraph position="9">  The second sentence is an example of an indirect suggestion to use the OJ factory.</Paragraph>
    <Paragraph position="10">  3. Utterances that identify relevant constraints, e.g., (a) We must get the oranges there by 3 PM.</Paragraph>
    <Paragraph position="11"> (b) Engine E2 cannot pull more than 3 carloads at a time.</Paragraph>
    <Paragraph position="12"> 4. Utterances that identify relevant lines of inference, e.g., (a) The car will be there because is it attached to engine El.</Paragraph>
    <Paragraph position="13"> 5. Utterances that identify goals of the plan, e.g., (a) We have to make OJ.</Paragraph>
    <Paragraph position="14"> 6. Utterances that introduce complex relations, e.g., purpose clauses such as (a) Use E3 to pick up the car.</Paragraph>
    <Paragraph position="15"> (b) Send engine E3 to Dansville to pick up the oranges.  Our approach to plan reasoning is motivated by examples such as these. It is a generic approach because the details of the algorithms do not depend directly on properties of the underlying knowledge representation. Rather, the approach assumes that certain operations are exported by the underlying reasoner (such as entailment, ~), and it uses these to validate plan reasoning steps.</Paragraph>
    <Paragraph position="16"> We first describe our representation of plans and its connection to the underlying knowledge representation scheme. We then present plan recognition algorithms for the dialogue phenomena and we discuss how they interact with other modules of the system. Finally, we discuss related and future work.</Paragraph>
  </Section>
class="xml-element"></Paper>