<?xml version="1.0" standalone="yes"?>
<Paper uid="J88-3003">
  <Title>MODELING THE USER'S PLANS AND GOALS</Title>
  <Section position="3" start_page="0" end_page="0" type="relat">
    <SectionTitle>
2 RELATED WORK ON PLAN RECOGNITION
IN DIALOG
</SectionTitle>
    <Paragraph position="0"> Early work in dialog understanding concentrated on apprentice-expert dialogs, during which an expert guided an apprentice in performing a task. Grosz (1977) formulated heuristics for recognizing shifts in focus of attention in the task structure and presented a strategy that used the knowledge currently focused on by the dialog participants to identify the referents of definite noun phrases appearing in an utterance. Robinson (Robinson et al. 1980, Robinson 1981) extended Grosz's work and developed a process model for determining the referents of verb phrases, such as variants of do in the utterance &amp;quot;I've done it.&amp;quot; However, in apprentice-expert dialogs, the overall task that the apprentice is attempting to perform is known at the outset of the dialog, and the ordering of actions and subtasks in the plan being executed strongly influences the dialog between expert and apprentice. This differs from the kind of information-seeking dialogs that we are investigating, in which the information-seeker is attempting to construct a plan for a task that will be executed at a later time. In such dialogs, the information-seeker's utterances are not tightly constrained by the order in which actions in the task will eventually be executed. For example, in an information-seeking dialog between a client and a travel agent, the client may first plan hotel accommodations and theater attendance in New York before inquiring about ways to reach New York, even though travel to New York will occur before attend a New York theater in a temporal ordering of actions in the resultant plan.</Paragraph>
    <Paragraph position="1"> Allen (Allen et al. 1980, Perrault et al. 1980) inferred the goal underlying a speaker's utterance in the context of an information agent in a train setting. This inferred goal was used to account for extra helpful information included in the agent's response, and the inference path connecting the speaker's utterance and the inferred goal was used to interpret indirect speech acts. However, Allen's domain was very restricted; the only domain goals were meeting a train and boarding a train, each of which could be accomplished by a few primitive steps, and his system was primarily concerned with utterances that might occur at the outset of a dialog. In more complex domains, the information-seeker's complete plan will consist of a hierarchy of subplans and subgoals that accomplish his overall goal. Such a complete plan is not immediately evident from a single utterance, and  individual utterances must be related to one another to build the user's plan as the dialog progresses.</Paragraph>
    <Paragraph position="2"> Sidner (1983, 1985) and Litman (1986) developed enhanced models of plan inference. However, both were concerned with dialogs that were initiated in order to begin or continue execution of an underlying task (display of structures on a graphics terminal and meeting/boarding a train), and the dialogs were therefore constrained by the order in which individual actions in the task had to be executed. In addition, Sidner investigated how discourse markers aid in recognizing the speaker's intent, and Litman studied a meta-plan framework for task dialogs.</Paragraph>
    <Paragraph position="3"> Pollack (1986) has recently proposed that plans be viewed as mental phenomena. She contends that, in order to comprehend an utterance and relate it to the user's plan, the system must reason about the configuration of beliefs and intentions that it should ascribe to the speaker. This will be discussed further in Section 5.</Paragraph>
  </Section>
  <Section position="11" start_page="0" end_page="0" type="relat">
    <SectionTitle>
5.1 RELATED WORK
</SectionTitle>
    <Paragraph position="0"> Several research efforts have addressed problems related to plan disparity. Kaplan (1982) and McCoy (1986) investigated misconceptions about domain knowledge and proposed responses intended to remove the misconceptions. However, such misconceptions may not be exhibited when they first influence the information-seeker's plan construction; in such cases, disparate plans may result, and correction will entail both a response correcting the misconception and further processing to bring the system's context model and the plan under construction by the information-seeker back into alignment.</Paragraph>
    <Paragraph position="1"> Allen's plan inference system (Allen et al. 1980) could accommodate some user misconceptions. It did not expressly eliminate invalid plans but instead weighted them less favorably than valid ones. However, his model did not consider how potential user misconceptions might affect the partial plan inferred by the system.</Paragraph>
    <Paragraph position="2"> Pollack (1986) studied removal of the appropriate query assumption of previous planning systems. She proposed a richer model of planning that regarded plans as mental phenomena and explicitly reasoned about the information-seeker's possible beliefs and intentions.</Paragraph>
    <Paragraph position="3"> She addressed the problem of queries that indicated the information-seeker's plan was inappropriate to his over-all goal, and attempted to isolate the erroneous beliefs that led to the inappropriate query. However, queries deemed inappropriate by the system may signal phenomena other than that the query is inappropriate to what the user really wants to do. For example, the information-seeker may have shifted focus to another aspect of the overall task without successfully conveying this to the system, the system's context model may have been in error prior to the query, or, as noted by Pollack (1987), the information-seeker may be addressing aspects of the task outside the system's limited knowledge.</Paragraph>
    <Paragraph position="4"> Pollack was concerned with issues that arise when the information-seeker's plan is incorrect due to a misconception exhibited by the current query. She assumed that, immediately prior to the user making the problematic query, the system's partial model of the user's plan was correct. We argue that since the system's inference mechanisms are not infallible and communication itself is imperfect, the system must contend with the possibility that its inferred model does not accurately reflect the user's plan. Previous research has failed to address this problem.</Paragraph>
    <Paragraph position="5"> Schmidt, Sridharan, and Goodson (1978) proposed a hypothesize-and-revise paradigm for inferring a user's goal by observing his non-communicative actions. They formulated a set of revision critics for altering a plan upon observing actions that conflict with expectations, but failed to provide any principles or mechanism for selecting the appropriate revision. Although the model Computational Linguistics, Volume 14, Number 3, September 1988 35 Sandra Carberry Modeling the User's Plans and Goals of plan recognition for an office environment fornmlated by Carver, Lesser, and McCue (1984) attempted to repair the inferred plan when actions inconsistent with it were observed, it did not reason about where ttle plan might be wrong, but merely backtracked to select another interpretation.</Paragraph>
  </Section>
class="xml-element"></Paper>