<?xml version="1.0" standalone="yes"?>
<Paper uid="J88-3004">
  <Title>RECOGNIZING AND RESPONDING TO PLAN-ORIENTED MISCONCEPTIONS</Title>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3.1 REPRESENTING PLANNING RELATIONSHIPS
</SectionTitle>
    <Paragraph position="0"> The planning relation can be one of the relations between actions and states shown below. Here A denotes an action, which is either a primitive operator whose execution results in a set of state changes, or a plan, which is a sequence of these operators. S, S1, and $2 denote states, which are descriptions of properties of objects.</Paragraph>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3 REPRESENTING USER AND ADVISOR BELIEFS
</SectionTitle>
    <Paragraph position="0"> The mistaken user beliefs that we consider involve plan applicability conditions, enablements, and effects. In this section we describe how these beliefs are represented. In essence, we make use of existing frameworks for representing planning knowledge, except that we are careful to distinguish between user and advisor beliefs.</Paragraph>
    <Paragraph position="1"> Traditional planning systems (Fikes and Nilsson 1974, Sacerdoti 1974) represent an agent's planning knowledge as a data base of operators associated with applicability conditions, preconditions, and effects.</Paragraph>
    <Paragraph position="2"> Since these systems have only one agent, the planner, the entries in the data base are implicitly assumed to represent that agent's beliefs. However, because user misconceptions occur when the user's planning knowledge differs from the advisor's, systems that deal with user misconceptions must explicitly distinguish between advisor beliefs about what the user knows and advisor beliefs about what the advisor knows.</Paragraph>
    <Paragraph position="3"> Our representation for beliefs (Abelson 1973, 1979) is similar to that used by existing systems that keep track of the possibly contradictory knowledge of multiple participants (Alvarado 1987; Alvarado, Dyer, and Flowers 1986; Flowers, McGuire, and Birnbaum 1982; Pollack 1986). A belief relation represents an advisor's belief that an actor maintains that a particular plan applicability condition, precondition, or effect holds.</Paragraph>
    <Paragraph position="4"> The actor is either the user or the advisor.</Paragraph>
    <Paragraph position="6"> These relationships A is a correct or normal plan for achieving goal state S A is not a plan for achieving S S1 and $2 cannot exist simultaneously S1 and $2 can exist simultaneously Actor A wants to achieve S are derived from existing representations. SPIRIT's (Pollack 1986) representation for planning knowledge uses gen to represent a state resulting in an action and cgen to represent a state resulting in an action only if some other state exists. Causes and enables are identical in semantics to gen and cgen. Applies, which has no analog in SPIRIT, is similar to the intends relation in BORIS (Dyer 1983). The difference between causes and applies is in whether the action is intended to cause the state that results from its execution to exist. &amp;quot;Causes&amp;quot; represents cause-effect relations which are nonintentional, while &amp;quot;applies&amp;quot; represents a cause-effect relation between an action (sequence) or plan which is intended to achieve a desired state (a goal). An action causes a state whenever the state results from its execution. An action applies to a state when an actor believes the action will cause the desired state to occur.</Paragraph>
    <Paragraph position="7"> 40 Computational Linguistics, Volume 14, Number 3, September 1988 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions To see why this distinction is necessary, consider two actions that can be used by a user who wants to remove one of his files: typing &amp;quot;rm&amp;quot; followed by the file's name, and typing &amp;quot;rm *&amp;quot;. Both have removing the file as one of their effects, but the latter also causes all other files to be removed as well, an effect that is not the user's goal. Only &amp;quot;rm file&amp;quot; applies to removing a file, although both actions have an effect that causes the file to be removed.</Paragraph>
    <Paragraph position="8"> To further illustrate the semantics of these relations, we show how they can be used to represent the first exchange in our example dialog.</Paragraph>
    <Paragraph position="9"> User: I tried to remove a file with the &amp;quot;rm&amp;quot; command. But the file was not removed and the error message was permission denied. I checked and I own the file. What's wrong? Advisor: To remove a file, you need to be able to write into the directory containing it. You do not need to own the file.</Paragraph>
    <Paragraph position="10"> Three of the user's beliefs in this exchange are: 1. the &amp;quot;rm&amp;quot; command is used when one wants to remove a file; 2. one has to own a file to remove it, and 3. an error message resulted when the plan was executed. In terms of these planning relations, the user's beliefs are: applies(using &amp;quot;rm file&amp;quot;, the file's removal) enables(owning the file, using &amp;quot;rm file&amp;quot;, the file's removal) causes(using &amp;quot;rm&amp;quot; on the user's file, an error message) The advisor holds several similar beliefs, except that he believes that to remove a file it is necessary to have write permission on the directory containing it. In terms of the planning relationships, the advisor's beliefs are: applies(using &amp;quot;rm file&amp;quot;, the file's removal) enables(directory write permission, using &amp;quot;rm&amp;quot;, the file's removal) causes(using &amp;quot;rm&amp;quot; on the user's file, an error message) (The paper is not concerned with representing notions such as &amp;quot;the file's removal&amp;quot; or &amp;quot;write permission on the directory containing the file&amp;quot;. The details of the representation for such things may be found in Quilici (1985).) The user and advisor in this exchange share one belief that we have not represented. This belief is that using &amp;quot;rm&amp;quot; did not cause the user's file to be removed. To represent beliefs that a state did not result from an action, that a plan is not applicable to a goal, or that a state is not an enablement condition of an action having another state as a result, we use !causes, !applies, and /enables, respectively. The belief above is represented with !causes, a belief that &amp;quot;mkdir&amp;quot; is not used to remove a file is represented with !applies, and a belief that &amp;quot;rm&amp;quot; does not require owning the directory containing the file is represented with !enables.</Paragraph>
    <Paragraph position="11"> !causes(using &amp;quot;rm&amp;quot; on the user's file, the file's removal) !applies(using &amp;quot;mkdir file&amp;quot;, the file's removal) !enables(owning the directory, using &amp;quot;rm&amp;quot;, the file's removal) It is also necessary to be able to represent the notion that a state's existence caused a planning failure. Consider the following exchange: User: I accidentally hit the up arrow key and it deleted 20 unanswered mail messages. How can I get them back? Advisor: Hitting the up arrow does not delete your messages, but does result in your being disconnected from the etherplexer. You could not access your mail messages because they were moved to &amp;quot;mbox&amp;quot;. The mail program requires that your mail messages be in &amp;quot;mailbox&amp;quot;.</Paragraph>
    <Paragraph position="12"> Here the advisor believes that the user's mail messages are inaccessible because they are not in the location the mail program expects them to be. The belief that the mail program requires the mail messages to be in the file &amp;quot;mailbox&amp;quot; can be represented using &amp;quot;enables&amp;quot;. The advisor's belief that the mail messages being in the file &amp;quot;mbox&amp;quot; prevents the mail program from accessing is represented with precludes, which captures the notion that two states are mutually exclusive.</Paragraph>
    <Paragraph position="13"> enables(messages in &amp;quot;mailbox&amp;quot;, use &amp;quot;mail&amp;quot;, display messages) precludes(messages in &amp;quot;mbox&amp;quot;, messages in &amp;quot;mailbox&amp;quot;) &amp;quot;precludes&amp;quot; and &amp;quot;!precludes&amp;quot; relations between states can be inferred using rules such as &amp;quot;an object cannot be in two places at once.&amp;quot; The one other relation we find useful is goal, which is used in representing a belief that an actor wants to achieve a particular state. In the example above, the advisor believes that one goal of the user is accessing his mail messages. The advisor's belief is: goal(user, access user's mail messages) Most user modeling systems use a similar relation to explicitly represent that a state is a user's goal.</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
3.2 SUMMARY OF THE REPRESENTATION
</SectionTitle>
      <Paragraph position="0"> The main focus of our work is in trying to detect and respond to user misconceptions. To do so, it is necessary to have some representation for user and advisor planning knowledge. Our representation is based on that used by traditional planning systems. The most important difference is that we take care to distinguish between things the advisor believes and things the advisor thinks the user believes. We also distinguish Computational Linguistics, Volume 14, Number 3, September 1988 41 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions between actions that are intended to achieve a state and actions that happen to have a particular state as one of their effects. And we find it necessary to represent beliefs that two states cannot exist at the same time and that achieving a particular state is a goal of the user.</Paragraph>
    </Section>
  </Section>
  <Section position="7" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4 EXPLANATION-BASED MISCONCEPTION RECOGNITION
AND RESPONSE
</SectionTitle>
    <Paragraph position="0"> Our approach to recognizing and responding to a potentially incorrect user belief revolves around the advisor trying to do several things. First, the advisor tries to verify that he does not share the user's belief. Next, the advisor tries to confirm that the user's belief is a misconception. The advisor does so by finding an explanation for why he does not share the user's belief.</Paragraph>
    <Paragraph position="1"> After the belief is confirmed as a misconception, the advisor tries to detect its source. He does this by finding a potential explanation for why the user holds that incorrect belief, based on a taxonomy of abstract explanation classes. Finally, the advisor presents these explanations to the user as a cooperative response.</Paragraph>
    <Paragraph position="2"> But what exactly is an explanation? And what knowledge does the advisor need to find one? And finally, how is an explanation found?</Paragraph>
  </Section>
  <Section position="8" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4.1 EXPLANATIONS AS SETS OF BELIEFS
</SectionTitle>
    <Paragraph position="0"> An explanation is a set of advisor beliefs that accounts for why a particular belief is or is not held. An advisor, presented with a potentially incorrect user belief, has to find two explanations.</Paragraph>
    <Paragraph position="1"> The first explanation confirms that the user's belief is a misconception. To find this explanation the advisor tries to find a set of advisor beliefs that justify his not holding the user's belief. For instance, the user in our earlier example had an incorrect belief that owning a directory is a precondition for removing a file.</Paragraph>
    <Paragraph position="2"> enables(own directory, use &amp;quot;rm file&amp;quot;, the file's removal) Two advisor beliefs constitute an explanation for why the advisor does not hold this belief. The first is the advisor's contradictory belief that owning a directory is not a precondition for removing a file. The other is his belief that the actual precondition is write permission on the directory containing the file.</Paragraph>
    <Paragraph position="3"> !enables(own directory, use &amp;quot;rm file&amp;quot;, the file's removal) enables(writeable directory, use &amp;quot;rm file&amp;quot;, the file's removal) These two beliefs confirmed that the user's belief was mistaken.</Paragraph>
    <Paragraph position="4"> The other explanation explains why the user holds this incorrect belief. To find this explanation the advisor tries to find a set of advisor beliefs that capture the source of the user's misconception. Two advisor beliefs provide a possible explanation for the incorrect user belief above. The first is that one has to own a directory to make it writeable. The other is that having a writeable directory is the precondition to removing a file.</Paragraph>
    <Paragraph position="5"> enables(own directory, use &amp;quot;chmod&amp;quot;, obtain writeable directory) enables(writeable directory, use &amp;quot;rm&amp;quot;, the file's removal) The user's not sharing these advisor beliefs explains the user's misconception, which is that the user does not realize that owning a directory is merely a precondition to obtaining write permission on the directory, which is the actual precondition to removing the file.</Paragraph>
  </Section>
  <Section position="9" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4.2 REQUIRED ADVISOR KNOWLEDGE
</SectionTitle>
    <Paragraph position="0"> To find an explanation the advisor must have three types of knowledge: 1. a set of domain-specific beliefs; 2. a set of rules for inferring additional beliefs, and 3. a set of abstract explanation patterns. All of these must come from past advisor experience or past advisor interaction with users. However, here we simply assume their existence and leave understanding how they are obtained for future research.</Paragraph>
    <Paragraph position="1"> The first type of required knowledge is a set of domain-specific beliefs about plan applicability conditions, preconditions, and effects. Examples of these include beliefs that &amp;quot;rm&amp;quot; is used to remove a file, and that it is necessary to have write permission on the directory containing the file. Without these types of beliefs it would be impossible for the advisor to correct user misconceptions about the preconditions for removing a file. This category of knowledge includes beliefs such as a belief that &amp;quot;rm&amp;quot; is not used to remove a directory. These negated beliefs--!applies, !enables, !causes, and so on--are especially useful in detecting misconceptions. An advisor, with the explicit belief that &amp;quot;rm&amp;quot; is not applicable to removing a directory, can trivially detect that a user belief that &amp;quot;rm&amp;quot; is applicable to removing a directory is incorrect.</Paragraph>
    <Paragraph position="2"> These domain-specific beliefs are assumed to derive from past advisor experiences. An advisor who successfully uses &amp;quot;rm&amp;quot; to remove a file will believe that using &amp;quot;rm&amp;quot; is applicable to the goal of removing a file. An advisor who uses &amp;quot;rm&amp;quot; to try to remove a directory and has it fail will believe that &amp;quot;rm&amp;quot; is not applicable to removing a directory. The negated beliefs correspond to the bug lists kept by many tutoring and planning systems (Anderson, Boyle, and Yost 1985; Brown and Burton 1978; Burton 1982; Stevens, Collins, and Goldin 1982).</Paragraph>
    <Paragraph position="3"> The second type of advisor knowledge is a set of rules that help infer negated domain-specific beliefs, such as a belief that a particular action does not result in a particular state, or that a given plan is not useful for a particular goal. These rules are needed because the advisor cannot be expected to have a complete set of these beliefs. One such rule, for example, suggests that 42 Computatiottal Linguistics, Volume 14, Number 3, September 1988 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions &amp;quot;if a state S is not among the known states that result from an action A's execution, assume that A is not applicable to achieving S.&amp;quot; There are similar rules for the other types of beliefs.</Paragraph>
    <Paragraph position="4"> The third and final type of knowledge is a taxonomy of potential explanations for why an actor might or might not hold a belief. Each type of planning relation-applies, enables, and effects--is associated with two sets of potential explanations. One set provides reasons why an actor might hold a particular belief involving that planning relation. The other set provides reasons why an actor might not.</Paragraph>
    <Paragraph position="5"> The inference rules and potential explanations differ for each type of planning relation. Associated with each type of planning relation is:  1. a set of rules for inferring its negation (which prove useful in finding explanations for why the belief is or is not held), 2. a potential explanation for why an actor does not hold a belief involving that planning relationship, and 3. a set of potential explanations for why an actor does  hold a belief involving that planning relationship.</Paragraph>
    <Paragraph position="6"> For example, applies is associated with a set of rules for inferring that an actor holds a particular !applies belief, a potential explanation for why an actor does not hold a given applies belief, and a set of potential explanations for why an actor does hold a given applies belief.</Paragraph>
  </Section>
  <Section position="10" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5 POTENTIAL EXPLANATIONS
</SectionTitle>
    <Paragraph position="0"> The advisor must be able to find a reason for why a particular belief is or is not held. One way to do so is 1. classify the belief, and 2. try to verify one of the potential explanations associated with that class of belief. A potential explanation is an abstract pattern of planning relationships. The idea is that to verify a potential explanation, the advisor tries to prove, either by memory search or by deductive reasoning, that each of these planning relationships hold.</Paragraph>
    <Paragraph position="1"> There are two types of potential explanations. The first explains why an actor does not hold a belief. The other explains why an actor does. In this section we describe the potential explanations associated with the planning relationships we have examined. The following section discusses in detail how they are used.</Paragraph>
  </Section>
  <Section position="11" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5.1 POTENTIAL EXPLANATIONS FOR NOT HOLDING
A BELIEF
</SectionTitle>
    <Paragraph position="0"> The potential explanations for why the advisor does not hold an instance of one the plan-oriented beliefs are shown below. Each of these potential explanations suggests that to confirm that a user's belief is a misconception, the advisor must try to verify that one of his beliefs contradicts the user's belief, and that one of his beliefs can replace it. The only difference between the potential explanations is in the type of belief being contradicted or replaced.</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
Unshared User Belief        Potential Explanation        English Description
</SectionTitle>
      <Paragraph position="0"> applies(Ap, Sg) !applies(Ap,Sg) Plan is not used to achieve Goal applies(A,Sg) Other plan is used to achieve Goal enables(Sp,Ap,Sg) !enables(Sp,Ap,Sg) State is not precondition of Action enables(S,Ap,Sg) Other state is precondition of Action causes(Ap,Sp) !causes(Ap,Sp) Action does not cause state causes(A,Sp) Other action does cause  state Consider our earlier example in which the user's belief is that a precondition of removing a file is owning the directory containing it. The potential explanation suggests trying to prove that the advisor holds two beliefs: that owning a directory is not a precondition of removing a file, and that some other state is. Here, the advisor finds that he believes that owning a directory is not a precondition of removing a file (either by finding that relationship in his knowledge base or by deducing it). The advisor also finds that directory write permission is a precondition of removing a file. These beliefs explain why the advisor does not hold the user's belief, confirming it as a misconception.</Paragraph>
      <Paragraph position="1"> A similar process is used to confirm that the advisor does not hold a user's applies or causes belief. Consider the following exchange: User: I tried to display my file with the &amp;quot;Is&amp;quot; command but it just printed the file's name.</Paragraph>
      <Paragraph position="2"> Advisor: The &amp;quot;Is&amp;quot; command is not used to display the contents of files, the &amp;quot;more&amp;quot; command is. &amp;quot;Is&amp;quot; is used to list the names of your files. The user's potentially incorrect belief is that &amp;quot;Is&amp;quot; is applicable to achieving the goal of displaying a file's contents. The potential explanation for why an advisor does not hold this belief is that the advisor does not believe that using &amp;quot;Is&amp;quot; is applicable to this user's goal, and that using &amp;quot;Is&amp;quot; is applicable to some other goal. So the advisor tries to verify (again, by either search or deduction) that &amp;quot;Is&amp;quot; is not applicable to displaying the file's contents, and he tries to verify that some other plan does. Here the advisor finds that &amp;quot;more&amp;quot; is used instead.</Paragraph>
      <Paragraph position="3"> Finally, consider the following exchange: User: I deleted a file by typing &amp;quot;remove&amp;quot;. Advisor: No, typing &amp;quot;remove&amp;quot; did not delete your file. Typing &amp;quot;rm&amp;quot; deleted it. Typing &amp;quot;remove&amp;quot; cleans up your old mail messages.</Paragraph>
      <Paragraph position="4"> The user's potentially mistaken belief is that typing remove results in a file being deleted. The potential explanation for why the advisor does not share this belief is that the advisor instead believes that typing &amp;quot;remove&amp;quot; does not result in a file being deleted and that some other action does. The advisor verifies that typing &amp;quot;remove&amp;quot; does not cause a file to be deleted and that &amp;quot;rm&amp;quot; is an action that does.</Paragraph>
      <Paragraph position="5"> Computational Linguistics, Volume 14, Number 3, September 1988 43 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions</Paragraph>
    </Section>
  </Section>
  <Section position="12" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5.2 EXPLANATIONS FOR HOLDING A BELIEF
</SectionTitle>
    <Paragraph position="0"> The potential explanations we have examined so far explain why an actor does not hold a particular belief.</Paragraph>
    <Paragraph position="1"> There are also potential explanations for why an actor does hold an incorrect belief. We now present a taxonomy of these explanations for each of the three types of beliefs.</Paragraph>
  </Section>
  <Section position="13" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5.2.1 EXPLANATIONS FOR INCORRECT APPLIES
</SectionTitle>
    <Paragraph position="0"> There are four potential explanations for why a user holds an incorrect applies belief of the form applies(Ap, Sp). Recall that to recognize that this type of user belief is incorrect the advisor found two beliefs of the form !applies(Ap, Sp) and applies(A, Sp). Here are the potential explanations along with English descriptions for each.</Paragraph>
    <Paragraph position="1">  one of our earlier examples. The explanation is that the user is unaware that his plan does not have an effect that achieves his goal, and that his plan is, in fact, used to achieve some other goal.</Paragraph>
    <Paragraph position="2"> User: I tried to display my file with the &amp;quot;Is&amp;quot; command but it just printed the file's name.</Paragraph>
    <Paragraph position="3"> Advisor: The &amp;quot;Is&amp;quot; command is not used to display the contents of files, the &amp;quot;more&amp;quot; command is. &amp;quot;Is&amp;quot; is used to list the names of your files. The user's incorrect belief that using &amp;quot;Is&amp;quot; displays a file arises because the user is unaware of two things. The first is that using &amp;quot;Is&amp;quot; does not display the contents of files; the other is that &amp;quot;Is&amp;quot; is applicable to listing the names of files.</Paragraph>
    <Paragraph position="4"> The second, &amp;quot;Plan Missing Effect&amp;quot;, suggests that the user is unaware that his plan P1 does not have one of the effects that the plan P2 (that achieves his goal) has.</Paragraph>
    <Paragraph position="5"> User: I tried to remove my directory and I got an error message &amp;quot;directory not empty&amp;quot;. But &amp;quot;Is&amp;quot; didn't list any files.</Paragraph>
    <Paragraph position="6"> Advisor: Use &amp;quot;Is -a&amp;quot; to list all of your files. &amp;quot;Is&amp;quot; cannot be used to list all of your files because &amp;quot;Is&amp;quot; does not list those files whose names begin with a period.</Paragraph>
    <Paragraph position="7"> The user's mistaken belief is that &amp;quot;Is&amp;quot; should be used to list all file names. This belief arises because the user is unaware that &amp;quot;Is&amp;quot; does not have an effect that causes it to list files whose names begin with a period, an effect that the correct plan (Is -a) has.</Paragraph>
    <Paragraph position="8"> The third, &amp;quot;Unachievable Plan Enablement&amp;quot;, suggests that the user is unaware his plan will not work because there is no plan to achieve one of its enablements. null User: So to read Margot's mail, all I have to do is &amp;quot;more flowers/mail&amp;quot;.</Paragraph>
    <Paragraph position="9"> Advisor: No, only &amp;quot;flowers&amp;quot; can read her mail. The user mistakenly believes that his plan of using &amp;quot;more&amp;quot; to examine Margot's mail file will allow him to read her mail. The advisor believes that &amp;quot;more&amp;quot; has an effect of displaying a user's mail, that one of its enablements is that you have to be that particular user, and that no plan has an effect that achieves this enablement. The last, &amp;quot;Plan Thwarts User Goal&amp;quot;, suggests that the user is unaware that another plan achieves the user's goal without an additional effect that the user's plan has.</Paragraph>
    <Paragraph position="10"> User: To list files whose names begin with a number, I pipe &amp;quot;Is&amp;quot; to &amp;quot;grep \[0-9\]&amp;quot;.</Paragraph>
    <Paragraph position="11"> Advisor: Use &amp;quot;Is \[0-9\]*&amp;quot; instead. It is more efficient. The user's mistaken belief is that piping &amp;quot;Is&amp;quot; to &amp;quot;grep&amp;quot; is the most appropriate plan for listing files whose names begin with a digit. The user's misconception arises because he is unaware that the plan of using &amp;quot;Is\[0-9\]*&amp;quot; not only achieves his goal, but also does not thwart his other goal of using his time efficiently.</Paragraph>
  </Section>
  <Section position="14" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5.2.2 EXPLANATIONS FOR AN INCORRECT ENABLES
</SectionTitle>
    <Paragraph position="0"> Just as there are several different sources of user misconceptions about a plan's applicability to a goal, there are also several different sources of user misconceptions about whether a state is a precondition to a plan achieving a goal: that is, a user belief of the form enables(Se, Ap, Sp). Recall that to recognize that this type of belief is incorrect the advisor found two beliefs of the form !enables(Se, Ap, Sp) and enables(S, Ap, Sp). Here are the potential English descriptions for each.</Paragraph>
    <Paragraph position="1"> explanations along with  Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions planation is that the user is unaware that his precondition is not a precondition of the goal itself, but of one of its preconditions.</Paragraph>
    <Paragraph position="2"> User: So to remove a file, I have to own the directory that contains it.</Paragraph>
    <Paragraph position="3"> Advisor: No, to remove a file, you need to have write permission on the directory that contains it. You do not need to own the directory that contains it.</Paragraph>
    <Paragraph position="4"> You need to own that directory when you do not already have write permission on it.</Paragraph>
    <Paragraph position="5"> This user is unaware that owning a directory is a precondition for achieving write permission on it, and that having write permission is a precondition for removing a file.</Paragraph>
    <Paragraph position="6"> The second, &amp;quot;Enablement for One Plan&amp;quot;, suggests that the user is unaware that a plan without his claimed precondition achieves his goal.</Paragraph>
    <Paragraph position="7"> User: So I can only edit files when I'm on a smart terminal? Advisor: Only if you edit with &amp;quot;re&amp;quot;. &amp;quot;vi&amp;quot; works fine on a dumb terminal.</Paragraph>
    <Paragraph position="8"> The user's incorrect belief is that it is necessary to have a smart terminal to edit a file. This belief arises because the user is unaware that only one plan, using &amp;quot;vi&amp;quot;, requires a smart terminal, and that there are other plans that do not.</Paragraph>
    <Paragraph position="9"> The last, &amp;quot;Enablement Too Specific&amp;quot;, suggests that the user is unaware that his precondition is less general than the actual precondition for achieving his goal. User: So I have to remove a file to create a file? Advisor: You do not have to remove a file to create a file. You must have enough free space. Removing a file is only one way to obtain it. You could also ask the system administrator for more space.</Paragraph>
    <Paragraph position="10"> The user mistakenly believes that it is necessary to remove an existing file before a new file can be created. The advisor believes that the precondition is sufficient space for the new file, which can be achieved either by executing a plan for removing a file or by executing the plan of requesting more space.</Paragraph>
  </Section>
  <Section position="15" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5.2.3 EXPLANATIONS FOR INCORRECT CAUSES
</SectionTitle>
    <Paragraph position="0"> One final class of user misconception is an incorrect belief that a particular state results from a plan's execution; that is, a user belief of the form causes(Ap, Sp).</Paragraph>
    <Paragraph position="1"> Recall that to recognize that this type of belief is incorrect the advisor found beliefs of the form !causes(Ap, Sp) and causes(A, Sp). There are three potential explanations for this type of mistaken belief.</Paragraph>
    <Paragraph position="2">  earlier example. The explanation is that the user is unaware that the user's action actually has a different effect.</Paragraph>
    <Paragraph position="3"> User: I deleted a file by typing &amp;quot;remove&amp;quot;. Advisor: No, typing &amp;quot;remove&amp;quot; did not delete your file. Typing &amp;quot;rm&amp;quot; deleted it. Typing &amp;quot;remove&amp;quot; deletes a mail message from the mail program.</Paragraph>
    <Paragraph position="4"> The user's mistaken belief is typing &amp;quot;remove&amp;quot; deletes a file. The user is unaware that typing &amp;quot;remove&amp;quot; actually throws away old mail messages.</Paragraph>
    <Paragraph position="5"> The second, &amp;quot;Effect Requires Unfulfilled Enablement&amp;quot;, suggests that the user is unaware that a particular state is required for the plan to have the claimed effect.</Paragraph>
    <Paragraph position="6"> User: I was cleaning out my account when I accidentally deleted all the command files by typing &amp;quot;rm&amp;quot;.</Paragraph>
    <Paragraph position="7"> Advisor: You can't delete the command files with &amp;quot;rm&amp;quot; unless you are the system administrator. The user incorrectly believes that typing &amp;quot;rm&amp;quot; resulted in the removal of various system files. The advisor believes that it is necessary for the user to be the system administrator for this effect to occur.</Paragraph>
    <Paragraph position="8"> The last, &amp;quot;Effect Inferred From Other Effect&amp;quot;, accounts for another one of earlier examples. It suggests that the user is unaware that one effect of his plan has incorrectly led him to believe what was another effect of the plan.</Paragraph>
    <Paragraph position="9"> User: I accidentally hit the up arrow key and it deleted 20 unanswered mail messages. How can I get them back? Advisor: Hitting the up arrow does not delete your messages, but does result in your being disconnected from the etherplexer. You could not access your mail messages because they were moved to &amp;quot;mbox&amp;quot;. The mail program requires that your mail messages be in &amp;quot;mailbox&amp;quot;.</Paragraph>
    <Paragraph position="10"> The user incorrectly believes that one effect of hitting uparrow was that his mail messages were deleted. This belief occurs because the user is unaware that one effect of hitting uparrow is that files are moved to a different location, which makes them seem inaccessible.</Paragraph>
  </Section>
  <Section position="16" start_page="0" end_page="0" type="metho">
    <SectionTitle>
6 A DETAILED PROCESS MODEL
</SectionTitle>
    <Paragraph position="0"> We have presented three sets of potential explanations and briefly sketched how they are used. In this section we provide a more detailed view of the process by which an explanation is found.</Paragraph>
    <Paragraph position="1"> An advisor presented with a user belief has three goals. First, he wants to know whether he shares the user's belief. Second, he wants to confirm that the Computational Linguistics, Volume 14, Number 3, September 1988 45 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions user's belief is indeed a misconception. Third, he wants to infer the reasons behind the user's mistake.</Paragraph>
    <Paragraph position="2"> The advisor accomplishes the first by trying to verify that he holds the user's belief. He accomplishes the second by trying to find an explanation for why he does not hold the user's belief. He accomplishes the third by trying to find an explanation for why the user does hold that belief.</Paragraph>
    <Paragraph position="3"> Two questions need to be answered. How does the advisor verify that he holds a particular belief? And how does the advisor explain why he does not hold a belief, or why the user does?</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
6.1 VERIFYING AN ADVISOR BELIEF
</SectionTitle>
      <Paragraph position="0"> Verifying whether or not the advisor believes that a particular planning relationship holds takes two steps.</Paragraph>
      <Paragraph position="1"> First, the advisor searches his memory for the desired piece of planning knowledge. Then, if it is not found, the advisor applies the set of rules associated with that planning relationship to try and prove that it holds.</Paragraph>
      <Paragraph position="2"> Once the advisor has proved that the planning relationship holds, either by search or by reasoning, that piece of knowledge is noted to be an advisor belief.</Paragraph>
      <Paragraph position="3"> Consider, for example, the process of verifying that the advisor holds a belief that owning a directory is not a precondition of removing a file. If this fact is already known from past experience, the advisor will recognize it during memory search. If not, the advisor can try to deduce it. One rule that applies here says that &amp;quot;it&amp;quot; a state S is not one of the known states that are preconditions to an action A for achieving a goal state, then assume that S is not a precondition.&amp;quot; Here, this means that if owning a directory is not among the known preconditions for removing a file, assume it is not a precondition for removing a file.</Paragraph>
    </Section>
    <Section position="2" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
6.2 FINDING AN EXPLANATION
</SectionTitle>
      <Paragraph position="0"> The advisor must be able to explain why an actor does or does not hold a particular belief. Finding an explanation is accomplished by hypothesizing one associated with the given class of belief and then trying to confirm it. The advisor:  1. Classifies the belief according to its type: applies, enables, or effects.</Paragraph>
      <Paragraph position="1"> 2. Selects one of the potential explanations associated with that class of belief. The potential explanation is an abstract configuration of planning relationships. 3. Instantiates this potential explanation with information from the user's belief.</Paragraph>
      <Paragraph position="2"> 4. Tries to verify each of the planning relationships within the potential explanation. If all can be verified, this potential explanation is the desired explanation. null 5. Repeats the process until one of the potential expla- null nations associated with this belief's type is verified or all potential explanations have been tried and have failed.</Paragraph>
      <Paragraph position="3"> The result of the process of finding an explanation is that thte advisor has verified that he holds a particular set of beliefs. These beliefs constitute the desired explanatkm. null</Paragraph>
    </Section>
    <Section position="3" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
6.3 AN EXAMPLE
</SectionTitle>
      <Paragraph position="0"> This section is a detailed look at the advisor's processing of the user belief that owning a directory is a precondition of removing a file.</Paragraph>
      <Paragraph position="1"> enables(user own directory, use &amp;quot;rm&amp;quot;, the file's removal) First, the advisor tries to verify that he holds the user's belief. He cannot.</Paragraph>
      <Paragraph position="2"> Next, the advisor tries to confirm that the user's belief is, in fact, a misconception. He does this by trying to explain why he does not hold this user belief. He notes that it can be classified as a belief that some state Sp (owning the directory) is a precondition to achieving some other state Sg (removing a file). The potential explanation for why the advisor does not hold this type of belief is that he believes that Sp is not a precondition of achieving Sg, and that some other state S is a precondition of Sg. By instantiating this potential explanation, the advisor determines that he must check whether he holds beliefs that: !enables(owning a directory, use &amp;quot;rm file&amp;quot;, the file's removal) enables(S, use &amp;quot;rm file&amp;quot;, removing a file) The advisor finds that he believes that owning a directory is not a precondition of removing a file (either by finding that relationship in memory or by deducing it). The advisor also finds that write permission on a directory is a precondition of removing a file (that is, that S can be instantiated with write permission on a directory). These matching beliefs confirm that the user's belief is a misconception.</Paragraph>
      <Paragraph position="3"> Now, the advisor has to try to find an explanation for why the user holds this mistaken belief. One potential explanation is that the user is unaware that Sp is actually a precondition of achieving a state S, which is a precondition to achieving Sg. In this case, instantiating Sp and Sg leads to the advisor to try and verify that he holds two beliefs: enables(S, use &amp;quot;rm file&amp;quot;, the file's removal) enables(owning a directory, A, S) These beliefs are verified when the advisor finds that having written permission on a directory is a precondition to removing a file, and that owning a directory is a precondition to obtaining written permission on the directory. The potential explanation suggests that the user's misconception resulted from his being unaware of these two advisor beliefs.</Paragraph>
      <Paragraph position="4"> FinaUy, the advisor presents the resulting beliefs to the user. The user is informed of the beliefs used to</Paragraph>
    </Section>
  </Section>
  <Section position="17" start_page="0" end_page="0" type="metho">
    <SectionTitle>
46 Computati~mal Linguistics, Volume 14, Number 3, September 1988
</SectionTitle>
    <Paragraph position="0"> Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions confirm the user's misconception and the beliefs used to explain its source.</Paragraph>
  </Section>
  <Section position="18" start_page="0" end_page="0" type="metho">
    <SectionTitle>
6.4 THE POINT OF POTENTIAL EXPLANATIONS
</SectionTitle>
    <Paragraph position="0"> Having a taxonomy of potential explanations lessens the amount of reasoning the advisor must do to detect and respond to the user's misconceptions.</Paragraph>
    <Paragraph position="1"> To see why, consider an advisor trying to understand how the user arrived at the mistaken belief that a precondition of removing a file is owning the directory containing it. The advisor is trying to find some connection between the user's enablement and removing a file. The potential explanations suggest how to find specific, likely-to-be-useful connections. For example, the potential explanation &amp;quot;Enablement for Subgoal&amp;quot; suggests examining whether achieving any of the preconditions of removing a file requires owning a directory.</Paragraph>
    <Paragraph position="2"> Without a set of potential explanations, it becomes necessary to reason from a set of rules that describe likely differences between user and advisor beliefs. One rule might be that a user may incorrectly attribute an enablement of one action to another action. Another rule might be that a user may incorrectly attribute the result of one action to another action. From a set of such rules the advisor must somehow deduce the cause of the user's mistake. By using potential explanations the problem becomes instead one of guided memory search rather than reasoning from first principles.</Paragraph>
  </Section>
  <Section position="20" start_page="0" end_page="0" type="metho">
    <SectionTitle>
7.1 OBJECT-ORIENTED MISCONCEPTIONS
</SectionTitle>
    <Paragraph position="0"> ROMPER (McCoy 1985, and this issue) corrects user misconceptions dealing with whether an object is an instance of a particular class of objects or possesses a particular property.</Paragraph>
    <Paragraph position="1"> User: I thought whales were fish.</Paragraph>
    <Paragraph position="2"> ROMPER: No, they are mammals. You may have thought they were fish because they are finbearing and live in the water. However, they are mammals since, while fish have gills, whales breathe through lungs and feed their young with milk.</Paragraph>
    <Paragraph position="3"> ROMPER classifies a user's misconception as either a misclassification or misattribution and then selects one of several strategies associated with each class of misconception to generate a response. Each strategy addresses a different type of reasoning error, and is selected based on ROMPER's own beliefs about objects and its model of the user's relevant beliefs. One such strategy is useful when the advisor believes that X isa Z, the user mistakenly believes that X isa Y, and the advisor believes that X and Y share certain attributes. The strategy suggests presenting these shared attributes as a possible reason for the misclassification, and pointing out the unshared attributes that lead the advisor to believe that X isa Z.</Paragraph>
    <Paragraph position="4"> Despite dealing with a very different class of misconceptions, ROMPER's approach is similar to ours. The major difference is that our explanation-based approach separates the beliefs needed to confirm the user's belief as a misconception from those needed to understand why the user holds it. The strategy above divides into two explanations. The first confirms that a user belief that X isa Y is incorrect if the advisor believes that X isa Z because X and Z share certain attributes. The other suggests that the user may hold this belief because X and Y share certain attributes. The advantage to our approach is that the information regarding the beliefs that confirm that the user has a misconception can be separated from the explanations for why the user holds the belief, and unnecessary duplication of tests is avoided.</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
7.2 PLAN-ORIENTED MISCONCEPTIONS
</SectionTitle>
      <Paragraph position="0"> Two efforts have examined detecting and responding to plan-oriented misconceptions.</Paragraph>
      <Paragraph position="1"> Joshi, Webber, and Weishedel (1984) suggest using a strategy-based approach to provide cooperative responses to problematic planning requests. They consider &amp;quot;How do I do X?&amp;quot; questions in which X can be inferred to be a subgoal of a more important goal Y.</Paragraph>
      <Paragraph position="2"> User: How can I drop cs577? System: It is too late in the quarter to drop it. But you can avoid failing by taking an incomplete and finishing your work next quarter.</Paragraph>
      <Paragraph position="3"> They provide several strategies, listed below, for selecting the contents of a reasonable response, with strategy selection based on the advisor's beliefs about which Computational Linguistics, Volume 14, Number 3, September 1988 47 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions plans achieve a particular goal and the achievability of their preconditions.</Paragraph>
      <Paragraph position="4">  Situation Response 1. Unachievable Precondition E of X Provide E (a) Plan P achieves Y Provide P (b) No plan to achieve Y Point this out 2. X doesn't help achieve Y Point this out (a) Plan P achieves Y Provide P (b) No plan to achieve Y Point this out 3. Plan P better way to achieve Y Provide P 4. X only way to achieve Y Point this out 5. Plan P involving uncontrollable Provide P  event E achieves Y One such strategy, useful when the advisor believes that X cannot be achieved because of an impossible-toachieve precondition, is to point out the troublesome precondition and suggest an alternate plan that achieves Y.</Paragraph>
      <Paragraph position="5"> Our work differs from theirs in several respects. The main difference is that they focus on correcting the user's misconception instead of trying to explain why it occurred. Only one strategy above is concerned with providing an explanation that addresses the source of a user misconception (in this case, an inappropriate plan). The other strategies describe situations in which achieving X is inappropriate and an alternate plan for Y exists and should be presented to the user as a correction. In addition, they did not consider responding to incorrect beliefs about plan preconditions or effects.</Paragraph>
      <Paragraph position="6"> The other effort, SPIRIT (Pollack 1986), tries to detect the inappropriate plans underlying queries made by users of a computer mail program and the mistaken user beliefs underlying those plans.</Paragraph>
      <Paragraph position="7"> User: I want to prevent Tom from reading my file.</Paragraph>
      <Paragraph position="8"> How can I set the permissions on it to faculty-read only? System: You can make the file readable by faculty only using &amp;quot;set permission&amp;quot;. However, Tom can still read it because he's the system administrator. User misconceptions about the applicability and executability of plans are detected by reasoning about the likely differences between the advisor's beliefs and the user's, with various rules used to infer these differences. One such rule, used to detect the source of the misconception above, states that an advisor who believes that an act has a particular result under certain conditions can infer that the user has a similar belief missing one of the required conditions.</Paragraph>
      <Paragraph position="9"> SPIRIT has a task similar to ours but takes a very different approach, trying to determine the cause of the user's error through reasoning from first principles rather than memory search. In addition, SPIRIT cannot detect or respond to mistakes involving plan applicability conditions or preconditions. Finally, SPIRIT does not specify how knowledge of the cause of the user's mistaken belief affects the information to be included in a cooperative response, something that falls naturally out of our model.</Paragraph>
    </Section>
    <Section position="2" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
7.3 UNIX ADVISORS
</SectionTitle>
      <Paragraph position="0"> Finally, there are two other related research efforts, UC (Wilensky et al. 1986, Wilensky, Arens, and Chin 1984) and SC. (Kemke 1986), that address providing advice to novice UNIX users. Neither system, however, detects or responds to misconceptions. Instead, both are concerned with tailoring a response to a question to reflect the user's level of expertise. UC's user modeling component, KNOME (Chin 1986), analyzes a user's questions to determine which stereotypical class the user belongs to and then uses this information to provide more details and possibly more examples to less experienced users.</Paragraph>
      <Paragraph position="1"> Novice: What does the &amp;quot;rwho&amp;quot; command do? UC: Rwho lists all users on the network, their tty, their login time, and their idle time.</Paragraph>
      <Paragraph position="2"> Expert: What does the &amp;quot;rwho&amp;quot; command do? UC: Rwho is like who, except rwho lists all users on the network.</Paragraph>
      <Paragraph position="3"> SC's user modeling component, SCUM (Nessen 1987), takes an approach similar to UC's, also using stereotypical information. These approaches are complementary to ours.</Paragraph>
    </Section>
  </Section>
  <Section position="21" start_page="0" end_page="0" type="metho">
    <SectionTitle>
8 IMPLEMENTATION DETAILS
</SectionTitle>
    <Paragraph position="0"> The theory discussed in this paper is embodied in AQUA, a computer program currently under development at UCLA. The current version of AQUA is implemented in T (Rees, Adams, and Meehan 1984), using RHAPSODY (Turner and Reeves 1987), a graphical AI tools environment with Prolog-like unification and backtracking capabilities, and runs on an Apollo DN460 workstation. Given a set of user beliefs involving plan applicability conditions, preconditions, or effects, AQUA determines which of these user beliefs are incorrect and what missing or mistaken user beliefs are likely to have led to them, and then produces a set of advisor beliefs that capture the content of the advisor's response. AQUA's domain of expertise is in the basic plans used to manipulate and access files, directories, and electronic mail. It has been used to detect and respond to at least two different incorrect user beliefs in each class of misconception that we have identified.</Paragraph>
    <Paragraph position="1"> More detailed descriptions of the program's implementation can be found in Quilici, Flowers, and Dyer (1986), and in Quilici (1985).</Paragraph>
  </Section>
  <Section position="22" start_page="0" end_page="0" type="metho">
    <SectionTitle>
9 LIMITATIONS AND FUTURE WORK
</SectionTitle>
    <Paragraph position="0"> Our approach to determining why an actor does or does not hold a particular belief has been to let potential explanations direct the search for the advisor beliefs that serve as an appropriate explanation. Our focus has been on discovering and representing these explanations. The limitations of our approach arise in areas we have ignored, each of which is an interesting area of research.</Paragraph>
    <Paragraph position="1"> 48 Computational Linguistics, Volume 14, Number 3, September 1988 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions</Paragraph>
  </Section>
  <Section position="23" start_page="0" end_page="0" type="metho">
    <SectionTitle>
9.1 INFERRING THE SET OF USER BELIEFS
</SectionTitle>
    <Paragraph position="0"> Our model assumes that the user's problem description has somehow been parsed into a set of beliefs. However, users rarely explicitly state their beliefs, leaving the advisor to the difficult task of inferring them.</Paragraph>
    <Paragraph position="1"> Consider our introductory exchange.</Paragraph>
    <Paragraph position="2"> User: I tried to remove a file with the &amp;quot;rm&amp;quot; command. But the file was not removed and the error message was permission denied. I checked and I own the file. What's wrong? Advisor: To remove a file, you need to be able to write into the directory containing it. You do not need to own the file.</Paragraph>
    <Paragraph position="3"> Here the advisor must infer the user's beliefs that 1. using &amp;quot;rm&amp;quot; is applicable to removing a file; that 2. using &amp;quot;rm&amp;quot; did not cause the file's removal; that 3. using &amp;quot;rm&amp;quot; resulted in an error message, and that 4. owning a file is a precondition to removing it.</Paragraph>
    <Paragraph position="4"> Inferring the first belief requires a rule such as &amp;quot;if the user tries to achieve a state with a particular action, assume the user believes that action achieves that state.&amp;quot; The second belief can be inferred from the rule that &amp;quot;if an utterance describes the nonexistence of a state that is a believed result of an action, assume that the user believes that the action did not cause the state.&amp;quot; A similar rule can be used to infer the third belief.</Paragraph>
    <Paragraph position="5"> Inferring the final belief, that owning a file is a precondition to its removal, is a difficult task. Because there are a potentially-infinite number of incorrect user beliefs about the preconditions of removing a file, the advisor cannot simply match owning a file against a list of incorrect preconditions. Because the user may have been discussing other plans and other goals the advisor cannot simply assume that any utterance after a plan's failure refers to its preconditions. Instead, the advisor needs to infer this user belief from the knowledge that the user did some sort of verify-action, the knowledge that one plan for dealing with a plan failure is to try to verify that the enablements of the plan have been achieved, and the knowledge that both owning the file and having write permission are different instantiations of having sufficient permission.</Paragraph>
    <Paragraph position="6"> Inferring beliefs like these, that involve the user's plans and goals and the relationships between, even when they differ from the advisor's, is currently an active area of research (Carberry, this issue; Kautz and Allen 1986, Goodman 1986, Wilensky et al. 1986, Quilici 1985).</Paragraph>
  </Section>
  <Section position="24" start_page="0" end_page="0" type="metho">
    <SectionTitle>
9.2 RETRIEVING ADVISOR BELIEFS
</SectionTitle>
    <Paragraph position="0"> Our potential explanations suggest patterns of beliefs that the advisor should search for. However, we have not specified how this search of the advisor's memory is actually carried out, how a belief in memory can be retrieved efficiently, or how the beliefs are actually acquired through experience. AQUA's organization of plan-oriented beliefs is discussed in Quilici (1988, 1985). It is based on earlier work (Kolodner 1985, Schank 1982) in taking experiences and indexing them appropriately for efficient search and retrieval, especially that involving indexing memory around various planning failures (Kolodner and Cullingford 1986, Quilici 1985, Hammond 1984, Dyer 1983).</Paragraph>
    <Paragraph position="1"> Because the advisor may need to verify a belief that is not stored directly in memory, memory search may not be sufficient. Suppose the advisor is trying to verify that owning a directory is not required to remove a file.</Paragraph>
    <Paragraph position="2"> The advisor may be able to deduce this belief from a past experience in which he removed a file from/trap, a directory owned by the system administrator. Similarly, the advisor may be able to deduce that write permission is needed to remove a file from his beliefs that write permission is needed to make changes on objects and that removing a file involves making a change to a directory. This requires more powerful reasoning capabilities than AQUA's simple rules for inferring negated beliefs.</Paragraph>
    <Paragraph position="3"> Finally, AQUA assumes the existence of a taxonomy of planning failures. We have left the automatic creation of this taxonomy from advisor experiences to future research. Initial work in recognizing and indexing abstract configurations of planning relations is discussed in Dolan and Dyer (1985, 1986).</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
9.3 OTHER CLASSES OF MISCONCEPTIONS
</SectionTitle>
      <Paragraph position="0"> We are currently studying how well the classes of misconceptions described here account for responses to misconceptions in domains other than the problems of novice computer users, such as the domain of simple day-to-day planning. In addition, we are examining other classes of planning misconceptions. For example, to respond to an incorrect user belief such as &amp;quot;rm&amp;quot; cannot be used to remove a file, the advisor needs potential explanations for why an action does not apply to a particular goal state.</Paragraph>
      <Paragraph position="1"> We do not yet know whether our approach is suitable for generating responses to misconceptions that are not directly related to plan-goal interactions, such as mistakes in referring to an object. Consider the following exchange: User: Diana is up, but I cannot access my file.</Paragraph>
      <Paragraph position="2"> Advisor: Your files are on Rhea, not Diana. They moved your files yesterday because your file system was full.</Paragraph>
      <Paragraph position="3"> Here the user's problem is that he is incorrectly using &amp;quot;Diana&amp;quot; to refer to the machine his files are on. We are examining whether our approach is extendable to respond to these types of user misconceptions.</Paragraph>
    </Section>
  </Section>
  <Section position="25" start_page="0" end_page="0" type="metho">
    <SectionTitle>
9.4 RESPONSE GENERATION
</SectionTitle>
    <Paragraph position="0"> The response we provide is a set of advisor beliefs that is as complete as possible. We make no attempt to use knowledge about other user beliefs to modify our response to provide only the most relevant beliefs. However, if the advisor can infer that a user knows that his Computational Linguistics, Volume 14, Number 3, September 1988 49 Quilici, Dyer, and Flowers Recognizing and Responding to Plan-Oriented Misconceptions plan has failed (perhaps because of the error message a command produces), he need not inform the user that his plan is incorrect. One straightforward way to ex'tend our model is to have the advisor filter out those beliefs he can infer the user has.</Paragraph>
    <Paragraph position="1"> The advisor should use information about the user to tailor their response based on the user's level of expertise. Recall the following exchange: User: I tried to remove my directory and I got an error message &amp;quot;directory not empty&amp;quot;. But &amp;quot;Is&amp;quot; didn't list any files.</Paragraph>
    <Paragraph position="2"> Advisor: Use &amp;quot;Is -a&amp;quot; to list all of your files. &amp;quot;Is&amp;quot; cannot be used to list all of your because &amp;quot;Is&amp;quot; does not list those files whose names begin with a period.</Paragraph>
    <Paragraph position="3"> An advisor who knows the user is a novice might want to augment his response with an explanation that &amp;quot;-a&amp;quot; is a command option and that command options cause changes in the normal behavior of commands. Several researchers are working on tailoring the response to the user based on knowledge about the user's expertise (Paris, this issue; Chin 1986).</Paragraph>
  </Section>
class="xml-element"></Paper>