File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/intro/88/j88-3003_intro.xml
Size: 6,757 bytes
Last Modified: 2025-10-06 14:04:43
<?xml version="1.0" standalone="yes"?> <Paper uid="J88-3003"> <Title>MODELING THE USER'S PLANS AND GOALS</Title> <Section position="2" start_page="0" end_page="0" type="intro"> <SectionTitle> 1 INTRODUCTION </SectionTitle> <Paragraph position="0"> Ideally, a natural language system's responses should contain exactly the information that will be most helpful to the user. But since not all users are alike, achieving such behavior requires that the system have a model of the particular user with whom it is currently interacting.</Paragraph> <Paragraph position="1"> One way of constructing this model is to query the user at the start of the interaction, as was done in the GRUNDY system (Rich 1979). But querying the user may fail to provide an accurate and adequate characterization. Or extensive questioning of the user may be inappropriate if one of the system's goals is that its dialog with the user resemble naturally occurring information-seeking dialogs. In such cases, the system may be able to use the information exchanged during the dialog and its knowledge of the domain to hypothesize a model of the user, and dynamically adjust and expand the model as the dialog progresses.</Paragraph> <Paragraph position="2"> One of the most important components of a user model is the representation of the system's beliefs about the user's plans and goals. As demonstrated by Cohen, Perrault, and Allen (1981), users of question answering systems &quot;expect more than just answers to isolated questions. They expect to engage in a conversation whose coherence is manifested in the interdependence of their often unstated plans and goals with those of the system.&quot; We are interested in a class of information-seeking dialogs in which the information-seeker is attempting to construct a plan for a task. The task is not being performed during the system's interaction with the user, as is the case in apprentice-expert dialogs, but instead is being constructed for future execution. In some cases, only a partial plan will be constructed, with further details filled in later. For example, a freshman accessing an advisement system may only construct part of his plan for earning a degree, leaving other aspects of the plan to be fleshed out in subsequent years. These dialogs are typical of a large percentage of interactions with database management systems, decision support systems, and expert systems. Typical tasks include expanding a company's product line, purchasing a home, or pursuing a degree at a university. One aspect of our research has been to develop a strategy for dynamically constructing a model of an information-seeker's underlying task-related plan from an ongoing dialog and for tracking his focus of attention in this plan. Since the system's beliefs about the user's plans and Copyright 1988 by the Association for Computational Linguistics. Permission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the CL reference and this copyright notice are included on the first page. To copy otherwise, or to republish, requires a fee and/or specific permission. 
<Paragraph position="3"> Our analysis of naturally occurring dialog indicates that humans understand many utterances that would appear imperfect or incomplete to current natural language systems. For example, a speaker may inadvertently use incorrect or ambiguous terminology in constructing the language representation of his intended query, or he may shortcut its complete specification.</Paragraph>
<Paragraph position="4"> But if a system's communication is to be regarded as natural, it must be able to handle the full spectrum of utterances that humans appear to understand with relative ease.</Paragraph>
<Paragraph position="5"> Part of this research has been concerned with how a system can reason on its context model to remedy many of a user's faulty utterances and allow the dialog to continue without interruption. We have developed a method, based on Grice's theory of meaning and maxim of relevance, for handling queries that do not conform to the system's model of the world and are therefore regarded as ill-formed. This method uses the task-related plan inferred for an information-seeker to suggest variants of an ill-formed utterance that might represent the information-seeker's intentions or at least satisfy his perceived needs. We have also developed a method for understanding intersentential elliptical utterances that occur during the course of an information-seeking dialog. Our strategy uses discourse expectations and our model of the speaker's plan to identify the discourse goal that he is pursuing via an elliptical fragment and to interpret his elliptical utterance relative to his task-related plan. Our work on understanding ellipsis is presented in Carberry (1985).</Paragraph>
<Paragraph position="6"> Section 2 of this paper briefly reviews related work in plan recognition, and Section 3 presents our strategy for dynamically inferring a model of the user's plan from an ongoing dialog. Section 4 describes our mechanism for reasoning on this context model to understand a class of utterances that is problematic for current natural language systems. These ideas have been implemented and tested in the IREPS system (Intelligent REsPonse System), an ongoing research effort whose objective is a robust natural language interface to information systems. The component that infers the user's underlying task-related plan is called TRACK. To move from one domain to another, only the corpus of domain-dependent plans and goals must be reconstructed; the heuristics and processing strategies are completely transportable.</Paragraph>
<Paragraph position="7"> Section 5 describes our current research on relaxing restrictive assumptions present in previous work on plan recognition and developing a more robust plan inference framework. We propose a four-phase approach to handling the problem of possible disparity between the system's context model and the actual plan under construction by the user, and argue for an enriched context model that differentiates among its components according to the support it accords each component as a correct and intended part of the user's plan.</Paragraph>
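The enriched context model is only motivated at this point; Section 5 develops it. As a loose illustration of the idea, each component of the inferred plan might carry an explicit level of support, so that components assumed only by default can be singled out first when the model and the user's utterances appear to conflict. The support levels and component names below are hypothetical, not the paper's.

```python
# A hypothetical sketch of an enriched context model in which each plan
# component carries a level of support, so the system can reason about
# where its model is most likely to diverge from the user's actual plan.
# The levels and components are illustrative assumptions.

from enum import Enum

class Support(Enum):
    STATED = 3      # explicitly communicated by the IS
    INFERRED = 2    # strongly suggested by plan-inference heuristics
    DEFAULT = 1     # assumed only because it is typical in the domain

plan_components = {
    "Earn-Degree": Support.STATED,
    "Satisfy-Major-Requirements": Support.INFERRED,
    "Take-Course(CS105)": Support.DEFAULT,
}

def weakest_components(components: dict[str, Support]) -> list[str]:
    """Components to reexamine first if the context model and the
    user's utterances appear to be in conflict."""
    return [goal for goal, s in components.items() if s is Support.DEFAULT]

print(weakest_components(plan_components))      # ['Take-Course(CS105)']
```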
<Paragraph position="8"> Throughout this paper, the information-seeker and information-provider will be referred to as IS and IP, respectively.</Paragraph> </Section></Paper>