<?xml version="1.0" standalone="yes"?>
<Paper uid="C86-1006">
  <Title>USER MODELS: THE PROBLEM OF DISPARITY</Title>
  <Section position="6" start_page="30" end_page="30" type="metho">
    <SectionTitle>
4. PROBLEM POSED BY DISPARATE MODELS
</SectionTitle>
    <Paragraph position="0"> Grosz\[1981\] claimed that communication can proceed smoothly only if both dialogue participants are focused on the same subset of knowledge. Extending this to inferred plans, we claim that communication is most successful when the informatlon-provider's and information-seeker's models mirror one another. But clearly it is unrealistic to expect that these models will never diverge, given the different knowledge bases of the two participants and the imperfections of communication via dialogue.</Paragraph>
    <Paragraph position="1"> Thus the information-provider (IP) and the information-seeker (IS) must be able to detect inconsistencies in the models whenever possible and repair them. Clearly a natural language system must do the same.</Paragraph>
    <Paragraph position="2"> This view is supported by the work of Pollack, Hirsehberg, and Webber\[1982\]. They conducted a study of naturally occurring expert-novice dialogues and suggested that such interaction could be viewed as a negotiation process, during which not only an acceptable solution is negotiated but also understanding of the terminology and the beliefs of the participants. The context model is one component of IP's beliefs, as is her belief that it accurately reflects the plan under construction by IS.</Paragraph>
  </Section>
  <Section position="7" start_page="30" end_page="30" type="metho">
    <SectionTitle>
5. AN APPROACH TO DISPARATE MODELS
</SectionTitle>
    <Paragraph position="0"> A study of transcripts of naturally occurring information-seeking dialogues indicates that humans often employ a four phase approach in detecting and recovering from disparate plan structures. Therefore a natural language interface that pursues the same strategy will be viewed as acting naturally by human users. The next sections discuss each of these phases.</Paragraph>
    <Paragraph position="1"> 5.1. DETECTION AND HYPOTHESIS FORMATION As claimed earlier, since IP is presumed to be a cooperative dialogue participant, IP must be on the lookout for plan disparity. We have identified three sources of clues to the existence of such disparity: \[1\] the discourse goals of IS, such as expressing surprise or confusion null \[2\] relevance of ISis current utterence to IP's inferred model \[31 focus of attention in the model IS can express surprise or confusion about IP's response, thereby cuing the possibility of plan disparity. Consider for example the dialogue presented in Figure 1. This dialogue was transcribed from a radio talk show on investments~and will be referred to as the &amp;quot;IRA example&amp;quot;; utterances are numbered for later reference. Plan disparity is suggested when IS, in utterance \[5\], expresses confusion at IP's previous response.</Paragraph>
    <Paragraph position="2"> On the other hand, IS's query may contradict or appear irrelevant to what IP believes is IS's overall task, leading IP to suspect that her context model may not reflect IS's plan. Or IS's ~-~:r~;;~\[,tfo-f-ih~;:-d\]~\[o-gu-es were provided by the Department of Computer Science of the University of Pennsylvania</Paragraph>
  </Section>
  <Section position="8" start_page="30" end_page="30" type="metho">
    <SectionTitle>
Figure 1. The IRA example
</SectionTitle>
    <Paragraph position="0"> &amp;quot;I'm ~ retired government employee but I'm still working. I'd like to start out an IRA for myself mid my wife --- she doesn't work.&amp;quot; &amp;quot;Did you work outside of the government last year?&amp;quot; &amp;quot;Yes I did.&amp;quot; &amp;quot;There's no reason why you shouldn't have an IRA for last year.&amp;quot; &amp;quot;I thought they just started this year.&amp;quot; &amp;quot;Oh no. IRA's were available as long as you are not a participant in an existing pension.&amp;quot; &amp;quot;Well, I do work for a company that has a pension.&amp;quot;  speakers to address aspects of the task closely related to the current focus of attention \[Sidner 1981, McKeown 1985, Carberry 1983\]. The dialogue presented in Figure 2, and henceforth referred to as the &amp;quot;Kennit example&amp;quot;, illustrates a toque in which plan disparity is suggested by an abrupt shift in focus of attention. Upon completion of utterance \[4\], IP's model of IS's plan might be represented as  pose that IP does not know how to purchase floppy disks. Then from IP's limited knowledge, IS's next query, &amp;quot;How late is the University Bookstore open?&amp;quot; will not appear to address an aspect of the plan inferred for IS, or any expansion of it. IP could just respond by \[1\] answering the direct question~ if possible, ignoring its ramifications \[2\] responding &amp;quot;I don~t know&amp;quot;, if the direct answer is not available null However cooperative human information-providers are expected to try to understand the import of a query and provide as cooperative a response ~ they can.</Paragraph>
    <Paragraph position="1"> Griee's maxim of relation \[Grice 1975\] suggests that IS believes the query to be relevant to the overall dialogue. Several possibilities exist. IS may be shifting focus to some aspect of a higher-level task that incindes transferring files as a subaction. One such higher-level task might be to compose a document using the SCRIBE text formatting system~ and the aspect queried by the new uttere~me might be the purchase of a SCRIBE manual from the univemity bookstore; in this ease, the subtask of the overall task represented by the existing context model might be .............................</Paragraph>
    <Paragraph position="2"> Minor alterations have been made to the dialogue to remove restarts and extraneous phrasing.</Paragraph>
    <Paragraph position="3"> \[!\] IS: &amp;quot;I wish I could transfer files between the Vax and my PC.&amp;quot;  I2\] IP: &amp;quot;Kermit lets you do that.&amp;quot; \[3\] IS: &amp;quot;How do I get Kemfit?&amp;quot; \[4\] IP: &amp;quot;The computing center will give you a copy if you bring them a floppy disk.&amp;quot; \[5\] IS: &amp;quot;How late is the University Bookstore open?&amp;quot;  the transfer of files containing the document so that they can be modified using a PC editor.</Paragraph>
    <Paragraph position="4"> On the other hand~ focusing heuristics and the absence of discourse rrmrkers \[Sidner 1985\] suggest that the new query is most likely to be relevant to the current focus of attention. So IP should begin trying to determine how IS's utterance mght relate to the currently focused subtask in tim context model, and consider the possibility that IS's domain knowledge might exceed IP's or irfight be erroneous.</Paragraph>
  </Section>
  <Section position="9" start_page="30" end_page="33" type="metho">
    <SectionTitle>
5.2. RESPONSE PHASE
</SectionTitle>
    <Paragraph position="0"> Webber\[1986\] distinguishes between answers and responses.</Paragraph>
    <Paragraph position="1"> She defines an answer as the production of the information or execution of the action requested by the speaker but a response ~s &amp;quot;tile rcspondent's complete informative and performstire reaction to the question which can include ... additional information provided or actions performed that are salient to this substitute for an answer.&amp;quot; Our analysis of naturally oecurring dialogue indicates that humans respond, rather than answer, once disparate models are detected. Ttmse responses often entail additional actions, including a negotiation dialogue to ascertain the cause of the discrepancy and enable the models to be modified so that they are once again in alignment. A robust natural language interface must do the same, since the system must have an accurate model of the information-seeker's plan in order for cooperative behavior to resume.</Paragraph>
    <Paragraph position="2"> The appropriate response depends on the cause of the discrepancies. In the case of a knowledge-limited model, IP should attempt to understand IS's uttermme in terms of IP'8 limited knowledge ~ld provide any pertinent helpful information, but inform IS of these limitations in order to avoid misleading IS by appearing to implicitly support his task-related plan.</Paragraph>
    <Paragraph position="3"> Consider again our exmnple of file transfer via Kermit, presented in Figure 2. We assume that, in addition to a domain-dependent set of plans, IP's knowledge base contains a generalization hierarchy of actions and entities.</Paragraph>
    <Paragraph position="4"> Suppose that IP's knowledge base contains the plans  as embodied in our TRACK system \[Carberry 1983\]. However IP cannot connect purchasing a book with her model of IS. So IP may begin trying to expand on her knowledge. Suppose that IP's taxonomy of objects is as shown in Figure 3 and that IP's domain knowledge includes the existence of many instances of &lt;u&gt;, &lt;v&gt;, &lt;w&gt;, and &lt;x&gt; such that</Paragraph>
    <Paragraph position="6"> Novels are a subclass of light-books and~ technical-books, nontechnical-books, and textbooks are subclasses of educationalbooks. But educational-books are a subclass of educatlonal-useitems, as are floppy disks. Thus IP can generalize textbooks to educational-use-ltems, note that this class also contains disks, and then hypothesize that perhaps IS thinks that the bookstore sells floppy disks~ since it sells other educational-use items. This reasoning might be represented by the rule  This rule can be applied in the absence of contradictory domain knowledge. Having thus hypothesized that perhaps</Paragraph>
    <Paragraph position="8"> the last of which is a component of IP's model of IS.</Paragraph>
    <Paragraph position="9"> Since IP has constructed a plan that may reasonably be ascribed to IS, is relevant to the current focus of attention~ and about which IP's knowledge is neutral, IP can hypothesize that the cause of the plan disparity may be that IS has more extensivc domain knowledge. IP can now respond to IS. This reply should of course contain a direct answer to IS's posited question. But this alone is insufficient. In a cooperative information-seeklng dialogue, IS expects IP to assimilate the dialogue and relate utterances to IS's inferred underlying task in order to provide the most helpful information. If IP limits herself to a direct response, IS may infer that IP has related IS's current utterance to this task and that IP~, knowledge supports it --- that is, that IP also believes IS can purchase a floppy disk at the bookstore. Joshi's revised maxim of quality \[Joehl 1983\] asserts that IP's response must block false inferences. In addition, as a helpful participant, IP should include whatever evidence IP has for or against the pla~x component proposed by IS. An appropriate response wouhl be: &amp;quot;The University Bookstore is open until 4:30 PM. But I don't know whether it sells floppy disks. However it does sell many other items of an educational nature, so it is perhaps a good place to try.&amp;quot; The above example concerned a knowledge-limited model caused by IP's limited domain knowledge. Other kinds of models suggest different reasoning and response strategies. If IP has failed to nm~e the inferences IS assumed would be made, then subsequent utter*races by IS may appear appropriate to a more specific model than IP's current modeh Earlier, we referred to this class as overly-generalized models. In these cases, IP amy enter a clarification dialogue to ~certaln what IS intends.</Paragraph>
    <Paragraph position="10"> In other cases, such as when overly-specialized or erroneous models are detected, a negotiation dialogue must be initiated to &amp;quot;square away&amp;quot; \[Joshi 1983\] the modeis; otherwise, IS will lack confidence in the responds provided by IP (and therefore should not continue the dialogue), and IP will lack confidence in her ability to provide useful replies (and therefore cannot continue as a cooperative participant). As with any negotiation, this is a two-way process: \[1\] IP may select portions of the context model that she feels are suspect and justify them~ in an attempt to convince IS that IS's plan needs adjustment, not IP's inferred model of that plan.</Paragraph>
    <Paragraph position="11"> \[2\] IP may formulate queries to IS in order to ascertain why the task models diverge and where IP's model might be in error.</Paragraph>
    <Paragraph position="12"> The IRA example illustrates a negotiation dialogue. In utterance \[6\], IP selects a suspect component of her context model and provides justification for it. IS's next utterance informs IP that the assumption on which this component was based is incorrect; IP then notifies IS that IP recognizes the error and that her context model has been repaired. The information-seeking dialogue then resumes.</Paragraph>
    <Paragraph position="13">  this depends o~ the cause of the disagreement. In the case of a knowledge-limited model, IP should hmorporate the components she believes to be part of IS's plan structure into her context model, noting however that her own knowledge oilers only liafited support for thr.m. In this way, IP's model reflects IS's, enables IP to understand (within her limited knowh!dge) how IS plazm to accomplish his objectives, and permits IP to use this knowledge to understand subsequent utterances and provide helpful information. null If IP's m(~lel is in error~ she must alter her context model, as determined through the negotiation dialogue. She may also communicate to IS the changes that she is making, so that IS can assure himself that the models now agree. On the other hand, if IS's model is in error, IP may inform IS of any information neee~ sary for 1S to construct an appropriate plan and achieve his goals. g.4. SUMMAItY= The argunmnts in the preceding sections are based on an analysis of transcripts of hunm~l information-seeking dialogues and indicate that au appropriate approach for hazldling the plan disparity problem entails four phases: \[1\] detection of disparate mc)dels \[2\] hypothesis for:marion as to the cause of the disparities \[3\] extended response, often including a negotiation dialogue to identify the cause of the disparities \[4\] model modification, to &amp;quot;square away&amp;quot; the plm~ structures. Since this appre~mh is representative of that employed by human dialogue partlcipants, a natural language interface that pursues the s~nne strugegy will be viewed as acting naturally by its human users.</Paragraph>
    <Paragraph position="14"> O. ENRICHED CONTEXT MODEL The knowledge acquired from the dialogue and how it was used to constrt~ct the context model are important factors in detecting, responding to, and recovering from disparate models. l\[tumazl dialogue participants typically employ various teclmiques such as focusing strategies and default rules for understanding a~xd relating dialogue, but they appear to have greater confidence in some parts of the resultant model than others. Natural language systems mnst employ similar mechanisms in order to do the kind of inferencing expected by humans and provide the most helpful responses. We claim that the representation of the inferred plan must differentiate among its components according to the support which the system accords each component as a correct and intended part of the inferred plan. This view parallels Doyle's Truth Maintenance System \[Doyle 1979\], in which attitudes are associated with reasons justifying them.</Paragraph>
    <Paragraph position="15"> We see font kinds of support for plan components: \[1\] whether the system has inferred the component directly from what IS said.</Paragraph>
    <Paragraph position="16"> \[2\] whether the system has inferred the component on the basis of its own domain knowledge, which the system eamlot be cerLain IS i~s aware of.</Paragraph>
    <Paragraph position="17"> \[3\] the kinds of k~mehanismu used to add each component to the model, (for example, default rules that select one component from among several possibilities, or heuristics that suggest a shift in f(~:us of attention), and the evidence for applying the mechar~ism.</Paragraph>
    <Paragraph position="18"> \[41 whether the system's domain knowledge supports, contradicts, or is :neutral regarding inclusion of the component as part of a correct overall plan.</Paragraph>
    <Paragraph position="19"> The first three are importmlt factors in formulating a hypothesis regarding the source of disparity between the system's model and IS's plmL If the system believes that IS intends the system to recognize from IS's utterance that G is a component of IS's plan, then the system can add G to its context model and have the greatest faith that it really is a component of IS's plan. Therefore such components are unlikely sources of disparity between the system's context model and IS's plan.</Paragraph>
    <Paragraph position="20"> Components that the system adds to the context model on the basis of its donmin knowledge will be strongly believed by the system to be part of IS's plan, bnt not as much as if IS had directly coatmunicated them. Ttmse components resemble &amp;quot;keyhole recognition&amp;quot; rather thml &amp;quot;intended recognition&amp;quot; \[Sidner 1985, 1983\]. Since IS amy not have intended to eonnnunieate them, they are more likely r~ources of error tha~l components which IS intended IP to recognize.</Paragraph>
    <Paragraph position="21"> Consider for example a student advisement system. If only BA degrees have a foreign lar~guage requirement, the query &amp;quot;What course must I take to satisfy the foreign language requirement in French?&amp;quot; may lead the system to infer that IS is pursuing a Bachelor of Arts degree. If only BS degrees require a senior project, then a subsequent query such as &amp;quot;Ilow many credits of senior project are required?&amp;quot; suggests plan disparity. Either the second query is inappropriate to IS's overall goal \[Pollack 1986\] or the system's context model is already in error. Since the component Obtain-Degree(IS, BACHELOR-OF-ARTS) was inferred on the basis of the system's domain knowledge rather titan directly from IS's utterance, it is suspect as the source of error.</Paragraph>
    <Paragraph position="22"> The mechanisms u2~ed to add a component to the context model affect IP's faith in that component as part of ISis overall plan. Consider again the IRA example in Figure 1. in utterance I4\], IP has applied the default assumption that IS was not covered by a pension progrmn during the year in question (at that tim% rules on IRAs were different). IS's next utterance expresses confusion at IP's response, thereby cuing the possibility of plan disparity. In utterance \[61, IP selects the component added to the context model via. the default assumption as a possible source of the disparity, tells IS that it is part of IP's context model, and attempts to justify its inclusion.</Paragraph>
    <Paragraph position="23"> Analysis of naturally occurring dialogues such as that in Figure 1 indicate that humans use mechanisms such as defanlt infercnee rules and focusing heuristics to expand the context model and provide a more detailed and tractable arena in which to understand and respond to subsequent utterances. Natural language systems must use similar mechanisms in order to cooperatively and naturally engage in dialogue with humans.</Paragraph>
    <Paragraph position="24"> IIowever these rules select from among multiple possibilities and therefore produce components that are more likely sources of error than components added as a result of IS's direct statements or inferences made on the basis of the system's domain knowledge.</Paragraph>
    <Paragraph position="25"> The fourth kind of differentiation among components --whether the system's domain knowledge supports, contradicts, or is neutral regarding inclusion of the component as part of a correct overall plan -- is important in recovering from disparate plans. Even an expert system has limited domain knowledge.</Paragraph>
    <Paragraph position="26"> Furthermore, in a rapidly eh~mging world, knowledgeable users may have more accurate information about some aspects of the domain than does the system. For example, a student advisement system may not be altered intmediately upon changing the teacher of a course. Thus we believe that the context model must allow for inclusion of components suggested by the informatiomseeker, including whether the system's domain knowledge contradicts, supports, or is neutral regarding the component.</Paragraph>
    <Paragraph position="27">  For example, upon determining that IS's domain knowledge may exceed the system's in the Kermit dialogue, the system should expand its existing model to incorporate the acquired knowledge about how IS believes floppy disks can be obtained.</Paragraph>
    <Paragraph position="28"> The plan components creatively constructed can be added to the system's model, but as components proposed by IS and not fully supported by the system's knowledge. In this manner, the system can assimilate new utterances that exceed or contradict its limited domain knowledge and develop an expanded context model which serves as &amp;quot;knowledge&amp;quot; that can be referred back to in the ensuing dialogue.</Paragraph>
  </Section>
class="xml-element"></Paper>