<?xml version="1.0" standalone="yes"?>
<Paper uid="P87-1030">
<Title>A Model For Generating Better Explanations</Title>
<Section position="2" start_page="0" end_page="215" type="intro">
<SectionTitle> 1. Introduction </SectionTitle>
<Paragraph position="0"> What constitutes a good response? There is general agreement that a correct, direct response to a question may, under certain circumstances, be inadequate. Previous work has emphasized that a good response should be formulated in light of the user's immediate goals and plans as inferred from the utterance (or discourse segment). Thus, a good response may also have to (i) assure the user that his underlying goal was considered in arriving at the response (McKeown, Wish, and Matthews 1985); (ii) answer a query that results from an inappropriate plan indirectly, by responding to the underlying goal of the query (Pollack 1986); (iii) provide additional information aimed at preventing the user from drawing false conclusions because of violated expectations of how an expert would respond (Joshi, Webber, and Weischedel 1984a, 1984b).</Paragraph>
<Paragraph position="1"> The claim of this paper is that a cooperative response can (and should) also address a user's overall goals, plans, and preferences among those goals and plans.</Paragraph>
<Paragraph position="2"> We wish to show that an advice seeker may also expect the expert to respond in light not only of the immediate goals and plans of the user as expressed in a query, but also of (i) previously expressed goals or preferences, (ii) goals that may be inferred or known from the user's background, and (iii) domain goals the user may be expected to hold. If the expert's response does not consider these latter types of goals, the result may mislead or confuse the user and, at the least, will not be cooperative.</Paragraph>
<Paragraph position="3"> As one example, consider the following exchange between a student and a student-advisor system.</Paragraph>
<Paragraph position="4"> User: Can I enroll in CS 375 (Numerical Analysis)?
System: Yes, but CS 375 does involve a lot of FORTRAN programming. You may find Eng 353 (Technical Writing) and CS 327 (AI) to be useful courses.</Paragraph>
<Paragraph position="5"> The user hopes to enroll in a particular course to help fulfill his elective requirements. But imagine that in the past the student has told the advisor that he has strong feelings about not using FORTRAN as a programming language. If the student-advisor gives the simple response of "Yes" and the student subsequently enrolls in the course and finds out that it involves heavy doses of FORTRAN programming, the student will probably have justifiably bad feelings about the student-advisor. The better response shown takes into account what is known about the user's preferences. Thus the system must check whether the user's plan as expressed in his query is compatible with previously expressed goals of the user. The system can be additionally cooperative by offering alternatives that are compatible with the user's preferences and also help towards the user's intended goal of choosing an elective (see the response above).</Paragraph>
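<Paragraph> To make the check just described concrete, the following is a minimal sketch in Python of how an advisor system might consult previously expressed preferences before answering an enrollment query and, on a conflict, add the caveat and compatible alternatives. The course data, feature sets, and function names are illustrative assumptions, not the mechanism proposed in this paper.

# Hypothetical sketch of the compatibility check behind the advisor example.
# The course data, feature sets, and response wording are illustrative only.

def answer_enrollment_query(course, catalog, disliked_features, user_goal="elective"):
    """Give the direct answer, but add a caveat and compatible alternatives
    when the queried plan clashes with the user's stated preferences."""
    features = catalog[course]
    conflicts = features.intersection(disliked_features)
    if not conflicts:
        return "Yes."                      # a bare direct answer is not misleading here
    # A bare "Yes" would mislead: mention the conflict ...
    response = f"Yes, but {course} does involve {', '.join(sorted(conflicts))}."
    # ... and offer alternatives that still serve the user's intended goal.
    alternatives = [name for name, feats in catalog.items()
                    if user_goal in feats
                    and not feats.intersection(disliked_features)]
    if alternatives:
        response += f" You may find {' and '.join(alternatives)} to be useful courses."
    return response

catalog = {
    "CS 375 (Numerical Analysis)": {"elective", "FORTRAN programming"},
    "Eng 353 (Technical Writing)": {"elective"},
    "CS 327 (AI)": {"elective"},
}
print(answer_enrollment_query("CS 375 (Numerical Analysis)", catalog,
                              disliked_features={"FORTRAN programming"}))
</Paragraph>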
<Paragraph position="6"> Our work should be seen as an extension of the approach of Joshi, Webber, and Weischedel (1984a, 1984b; hereafter referred to as Joshi). Joshi's approach, however, involves only the stated and intended (or underlying) goal of the query, which, as the above example illustrates, can be inadequate for avoiding misleading responses. Further, a major claim of Joshi is that a system must recognize when a user's plan (as expressed in a query) is sub-optimal and provide a better alternative; however, Joshi leaves unspecified how this could be done. We present an algorithm that produces good responses by reasoning abstractly about the overall goals and plans hypothesized for a user. An explicit model of the user is maintained to track the user's goals, plans, and preferences and also to record some of the user's background pertinent to the domain. Together these provide a more general, extended method of computing non-misleading responses. Along with new cases where a response must be modified so as not to be misleading, we show how the cases enumerated by Joshi (1984a) can be effectively computed given the model of the user. We also show how the user model allows us to compare alternatives and select the better one, all with regard to a specific user, and how the algorithm allows responses to be computed in a domain-independent manner. In summary, computing a response requires, among other things, the ability to provide a correct, direct answer to a query; explain the failure of a query; compute better alternatives to a user's plan as expressed in a query; and recognize when a direct response should be modified and make the appropriate modification.</Paragraph>
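<Paragraph> As a rough illustration of such an explicit user model, the hypothetical Python sketch below records expressed goals and preferences, background facts, and expected domain goals, and exposes the kinds of queries a response algorithm would make against them: which goals to respond in light of, whether a proposed plan conflicts with known preferences, and which of two alternatives is better for this particular user. All field names, goal vocabulary, and the comparison rule are assumptions for illustration, not the paper's representation.

# Hypothetical sketch of an explicit user model of the sort described above.
# Field names, goal vocabulary, and the comparison rule are illustrative
# assumptions, not the representation used in the paper.

from dataclasses import dataclass, field

@dataclass
class ExplicitUserModel:
    expressed_goals: set = field(default_factory=set)     # stated in earlier discourse
    expressed_dislikes: set = field(default_factory=set)  # e.g. {"FORTRAN"}
    background: set = field(default_factory=set)          # e.g. {"CS major"}
    domain_goals: set = field(default_factory=set)        # expected of any such user

    def goals_to_consider(self) -> set:
        """All goals a cooperative response should be formulated in light of,
        not merely the goal expressed in the current query."""
        return self.expressed_goals.union(self.domain_goals)

    def plan_conflicts(self, plan_features: set) -> set:
        """Plan features that clash with expressed preferences; a non-empty
        result signals that a bare direct answer could mislead the user."""
        return plan_features.intersection(self.expressed_dislikes)

    def better_plan(self, plan_a: set, plan_b: set) -> set:
        """Crude comparison of two alternatives for this specific user:
        prefer the plan with fewer clashes against known preferences."""
        if len(self.plan_conflicts(plan_a)) <= len(self.plan_conflicts(plan_b)):
            return plan_a
        return plan_b

# Example mirroring the advising scenario:
student = ExplicitUserModel(expressed_goals={"fulfill elective requirement"},
                            expressed_dislikes={"FORTRAN"},
                            background={"CS major"},
                            domain_goals={"graduate"})
print(student.plan_conflicts({"enroll CS 375", "FORTRAN"}))   # {'FORTRAN'}
print(student.better_plan({"enroll CS 375", "FORTRAN"},
                          {"enroll CS 327"}))                 # {'enroll CS 327'}
</Paragraph>
</Section>
</Paper>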