<?xml version="1.0" standalone="yes"?>
<Paper uid="C86-1006">
<Title>USER MODELS: THE PROBLEM OF DISPARITY</Title>
<Section position="4" start_page="0" end_page="29" type="intro">
<SectionTitle> 2. TYPES OF MODELS </SectionTitle>
<Paragraph position="0"> An information-seeking dialogue contains two participants, one seeking information and the other attempting to provide that information. Underlying such a dialogue is a task which the information-seeker wants to perform, generally at some time in the future. The information-seeker poses queries in order to obtain the information necessary to construct a plan for accomplishing this task. Examples of such tasks include pursuing a program of study in a university domain, treating a patient in a medical domain, and taking a vacation in a travel domain.</Paragraph>
<Paragraph position="1"> A cooperative natural language system must attempt to infer the underlying task-related plan motivating the information-seeker's queries and use this plan to provide cooperative, helpful responses [Carberry 1983, 1985]. We call the system's model of this plan a context model. A context model is one component of a user model.</Paragraph>
<Paragraph position="2"> We are concerned here with cases in which the system's context model fails to mirror the plan under construction by the information-seeker. Disparate plan models may be classified according to how the model inferred by the system differs from the information-seeker's model of his task:
[1] erroneous models, representing cases in which the model inferred by the system is inconsistent with the information-seeker's model. If the information-seeker were to examine the system's model in such cases, he would regard it as containing errors.</Paragraph>
<Paragraph position="3"> [2] overly-specialized models, representing cases in which the model inferred by the system is more restricted than that intended by the information-seeker.</Paragraph>
<Paragraph position="4"> [3] overly-generalized models, representing cases in which the model inferred by the system is less specific than that intended by the information-seeker.</Paragraph>
<Paragraph position="5"> [4] knowledge-limited models, representing cases in which the model inferred by the system fails to mirror the plan under construction by the information-seeker, due to the system's limited domain knowledge.</Paragraph>
<Paragraph position="6"> The use of default inferencing rules may produce erroneous or overly-specialized models. Erroneous models may also result if the information-seeker's statements are inaccurate or misleading or if the system uses focusing heuristics to relate new utterances to the existing plan context. Overly-generalized models may result if the information-seeker fails to adequately communicate his intentions (or the system fails to recognize these intentions). Knowledge-limited models may result if the information-seeker's domain knowledge exceeds that of the system.</Paragraph>
<Paragraph position="7"> A fifth category, partial models, represents cases in which the system has inferred only part of the information-seeker's plan; subsequent dialogue will enable the system to further expand and refine this context model as more of the information-seeker's intentions are communicated.
We do not regard partial models as disparate structures: were the information-seeker to examine the system's inferred partial plan, he would regard it as correctly modeling his intentions as communicated in the dialogue thus far.</Paragraph>
</Section>
</Paper>
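The taxonomy in Section 2 can be made concrete with a small sketch. The Python below is not from the paper: the `Disparity` enumeration, the `ContextModel` record, and `classify_disparity` are hypothetical names introduced here, under the simplifying assumption that a task-related plan can be approximated as a task name plus a set of constraints.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional, Set


class Disparity(Enum):
    """The four kinds of disparate context models from Section 2."""
    ERRONEOUS = auto()           # inconsistent with the information-seeker's model
    OVERLY_SPECIALIZED = auto()  # more restricted than the seeker intended
    OVERLY_GENERALIZED = auto()  # less specific than the seeker intended
    KNOWLEDGE_LIMITED = auto()   # fails to mirror the plan due to gaps in the
                                 # system's own domain knowledge
    # Partial models are deliberately not listed: they correctly model
    # everything the seeker has communicated so far.


@dataclass
class ContextModel:
    """Hypothetical, highly simplified stand-in for a task-related plan:
    a task name plus the set of constraints the plan is believed to satisfy."""
    task: str
    constraints: Set[str] = field(default_factory=set)


def classify_disparity(system: ContextModel,
                       seeker: ContextModel) -> Optional[Disparity]:
    """Compare the system's inferred model against the seeker's own model.

    Illustration only: in the dialogue setting of the paper the seeker's
    model is never directly observable, and knowledge-limited disparities
    cannot be detected by such a comparison, since they arise from the
    system's own missing domain knowledge.
    """
    if system.task != seeker.task:
        return Disparity.ERRONEOUS
    if system.constraints > seeker.constraints:   # extra restrictions assumed
        return Disparity.OVERLY_SPECIALIZED
    if system.constraints < seeker.constraints:   # intended restrictions missed
        return Disparity.OVERLY_GENERALIZED
    if system.constraints != seeker.constraints:  # neither subset nor superset
        return Disparity.ERRONEOUS
    return None                                   # models agree so far


if __name__ == "__main__":
    seeker = ContextModel("earn a degree", {"major: CS", "evening courses only"})
    system = ContextModel("earn a degree", {"major: CS"})
    print(classify_disparity(system, seeker))     # Disparity.OVERLY_GENERALIZED
```

The sketch also reflects why the paper keeps knowledge-limited models as a separate category: they cannot be recognized by comparing constraint sets, because the system's representation language itself lacks the distinctions the information-seeker has in mind.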