<?xml version="1.0" standalone="yes"?>
<Paper uid="H93-1033">
  <Title>Generic Plan Recognition for Dialogue Systems</Title>
  <Section position="3" start_page="0" end_page="171" type="metho">
    <SectionTitle>
2. Plan Graphs
</SectionTitle>
    <Paragraph position="0"> We assume that the underlying knowledge representation formalism can be effectively partitioned into two types of formulas: null * Event formulas state that something happened that (possibly) resulted in a change in the world.</Paragraph>
    <Paragraph position="1"> * Fact formulas are everything else, but typically describe properties of the world (possibly temporally qualified).  In our temporal logic, 1 the former are of the form Occurs(e) and the latter are, for example, At(eng3, dansville, now). For formalisms where there are no explicit events (e.g., the situation calculus), we can extend the language--an example of this is given below.</Paragraph>
    <Paragraph position="2"> We then define a graphical notion of plans, based on viewing them as arguments that a certain course of events under certain explicit conditions will achieve certain explicit goals. A plan graph is a graph over two types of nodes: event nodes are labelled with event formulas,fact nodes are labeled with fact formulas. These can be connected by four types of arcs: event-fact: Achievement fact-event: Enablement event-event: Generation fact-fact: Inferential The link types correspond roughly to an intuitive classification of the possible relations between events and facts (cf., \[5\]). The goal nodes of a plan graph are its sinks, the premise nodes are its sources.</Paragraph>
    <Paragraph position="3"> For example, using the temporal logic, we might have a plan graph like that shown in Figure l(a). The functions blkl and blk2 are role functions that denote objects participating in the event; the functions prel and effl are temporal role functions denoting intervals related to the time of the event. In a forrealism such as the situation calculus, actions are terms and there is no equivalent of the Occurs predicate. However, we can introduce one as a placeholder, and then we might get a plan graph like that shown in Figure l(b).</Paragraph>
    <Paragraph position="4"> A plan graph makes no claim to either correctness or completeness. It represents an argument from its premises to its goals, and as such can be &amp;quot;correct,&amp;quot; &amp;quot;incorrect,&amp;quot; or neither. The previous examples are intuitively correct, for example, but are incomplete since they don't specify that the block being stacked must also be clear for the stacking to be successful. null A translation of plan graphs into a first-order logic with quotation is straightforward. With this, one can declaratively define properties of plans represented by plan graphs (such as &amp;quot;correct&amp;quot;) relative to the underlying representation's entailment relation. For example, a node n in a plan graph P might be supported if its preconditions (nodes with arcs incident on n) are sufficient to ensure the truth of n, formally: supported(n, P) =-A {rc 13n'.(n',n) ~ P ^ rc = Label(n')} ~ Label(n) ZSpace precludes a detailed description of this representation, see \[2, 3, 4\]. In what follows, we will rely on intuitive descriptions of the relevant aspects of the logic.</Paragraph>
    <Paragraph position="5"> The antecent of the entailment must, of course, also be consistent. null Unfortunately, such an analysis is not particularly illuminating in the case of plans arising from dialogue since such plans are often too poorly specified to meet such criteria. In par-Ocular, they are often based on assumptions that the system makes in the course of its interpretation of the manager's statements. We feel that making such assumptions explicit is crucial since they often drive the discourse. To illustrate this, we will present the algorithms used by the TRAINS plan reasoner to reason with plan graphs. We will return to the issue of axiomatizing them in the final section.</Paragraph>
  </Section>
  <Section position="4" start_page="171" end_page="173" type="metho">
    <SectionTitle>
3. Plan Graph Algorithms
</SectionTitle>
    <Paragraph position="0"> We characterize plan reasoning for dialogue systems as search through a space of plan graphs. The termination criterion for the search depends on the type of recognition being done, as will be described presently. Since the plan graph formalism sanctions arbitrarily complex graphs labelled with arbitrarily complex formulas, searching all possible plan graphs is impossible. We therefore rely on additional properties of the underlying representation to restrict the search.</Paragraph>
    <Paragraph position="1"> First, we assume the ability to test whether two objects (including events and facts) unify and, optionally, to determine assumptions under which they would unify. Simple objects use simple equality. In the temporal logic, two events are equal if their roles are equal. Two facts unify if there are assumptions that make them logically equivalent. This use of equality and inequality corresponds to the posting of codesignation constraints in traditional planners.</Paragraph>
    <Paragraph position="2"> Second, we assume that events be defined using relations corresponding to enablers, effects, and generators. This should not be controversial. In the temporal logic, these descriptions can be obtained from the event definition axioms. For a STRIPS system, they correspond to the add- and deletelists. Existing plan recognition systems use an event taxonomy, which corresponds to the generators slot. There can be multiple definitions of an event type, thereby allowing alternative decompositions or conditional effects.</Paragraph>
    <Paragraph position="3"> The search then only considers plan graphs that reflect the structure of the event definitions, we call such plan graphs acceptable. In this respect, the search will only find plan graphs that agree with the assumed-shared &amp;quot;event library.&amp;quot; However, information returned from failed searches can be used to guide the repair of apparent incompatibilities at the discourse level.</Paragraph>
    <Section position="1" start_page="171" end_page="172" type="sub_section">
      <SectionTitle>
3.1. Incorporation
</SectionTitle>
      <Paragraph position="0"> Plan recognition using plan graphs operates by searching the space of acceptable plan graphs breadth-first. The search  frontier is expanded by the function expand-graph, shown in Figure 2. The use of breadth-first search implements a &amp;quot;shortest-path&amp;quot; heuristic--we prefer the simplest connection to the existing plan. The plan reasoner exports several interfaces to the basic search routine, each motivated by the discourse phenomena noted at the outset. The discourse module of the system invokes these procedures to perform domain reasoning.</Paragraph>
      <Paragraph position="1"> The procedure incorp-event takes as parameters a plan graph and an event (a term or a lambda expression representing an event type). For example, sentence (la) results in the following call:  where ?e'Move-Engine is an event variable of type Move-Engine.</Paragraph>
      <Paragraph position="2"> The plan reasoner first checks if the given event unifies with an event already in the plan. If so, the plan reasoner signals that nothing needed to be added to the plan (except possibly unifying assumptions, which are also indicated). Otherwise, it attempts to add an event node to the plan graph labelled with (an instance of) the event. The search continues until one or more unifying event nodes are found. 2 An example of the search in progress for the previous call is given in Figure 3, assuming that the plan already includes moving some oranges to Dansville (event el). At this point (two levels of search), the given Move-Engine event unties uniquely with a leaf node, so the search terminates successfully. The connecting path (double arrows) indicates that moving the engine is done to move a car that will contain the oranges, thus moving them. Note that we do not know yet which car this will be. Zln fact. we also use a depth bound, based on the intuition that if the connection is not relatively short, the user's utterance has probably been misinterpreted.</Paragraph>
      <Paragraph position="3"> If more than one match is found at the same depth, the plan reasoner signals the ambiguity to the discourse module for resolution. Otherwise the connecting path is returned as a list of things that need to be added to the plan to incorporate the given event. These are usually interpreted by the discourse module as being implicatures of the user's utterance. They are added to a plan context and are used both for subsequent planning and plan recognition steps and to generate utterances when the system gets the turn.</Paragraph>
      <Paragraph position="4"> The procedure ineorp-role-filler is used for statements that mention objects to be used in the plan (example (2) previously). In this case, the termination criterion for the search is an event node labelled by an event that has a role that unities with the given object (a term or lambda expression). For example, the sample sentences result in the following calls:</Paragraph>
      <Paragraph position="6"> Finally, there is the procedure incorp-fact that searches for a fact node that would unify with the given one. This is used for utterances like the examples (3) and (4), since the plan graph representation supports inferential (fact-fact) links. Again however, the search space of potential unifying formulas is infinite. We therefore only consider certain candidates, based on syntactic considerations. These include facts that the underlying reasoning system is particularly good at, such as temporal constraints or location reasoning. Continued use of the system will identify which inferences need to be made at this level, and which are best left to management by higher-level discourse manager routines.</Paragraph>
    </Section>
    <Section position="2" start_page="172" end_page="173" type="sub_section">
      <SectionTitle>
3.2. Goals
</SectionTitle>
      <Paragraph position="0"> These incorp- routines all take an existing plan graph as argument and expand it. This could come from an initial speecification, but utterances like example (5) require that the plan reasoner be able to incorporate goals, There is therefore an incorp-goal procedure that takes a sentence and a (possibly  empty) plan graph as arguments. If the sentence is Occurs(e), then the plan graph is searched for a matching event node.</Paragraph>
      <Paragraph position="1"> If one is found, then the plan reasoner returns relevant assumptions and marks the node as a goal. Otherwise, a new event node is added to the plan and marked as a goal. Similar processing is done for fact goals. In our dialogues, the user often begins by communicating a goal that the rest of the dialogue is concerned with achieving. There is no point in doing much work for goals (beyond checking consistency) since it is likely to be immediately elaborated upon in subsequent utterances. Proper treatment of subgoals expressed as goals is part of our current work on subplans.</Paragraph>
    </Section>
    <Section position="3" start_page="173" end_page="173" type="sub_section">
      <SectionTitle>
3.3. Purpose clauses
</SectionTitle>
      <Paragraph position="0"> One construction that uses subgoals and subplans and that arises repeatedly in collaborative dialogue is the use of purpose clauses, such as example sentences (6). To accomodate these, the incorp- functions all accept an optional &amp;quot;purpose&amp;quot; argument (an event). For example, the sample sentences result in the following calls:  If the purpose argument is present, it is first incorporated using incorp-event. If this fails, then the discourse module is notified--presumably this is some kind of presupposition failure requiring discourse-level action. If it succeeds, then the original item is incorporated but with the search restricted to the (sub-)plan graph rooted at the purpose event.</Paragraph>
      <Paragraph position="1"> This simple modification of the basic plan recognition algorithms is effective at reducing the ambiguity that would otherwise be detected if the entire plan graph were searched. It is likely not adequate for all types of purpose or rationale clause, in particular those that involve the mental state of the agent rather than domain events. However, the generality of the plan graph formalism does allow it to handle many of the cases arising in our dialogues.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="173" end_page="175" type="metho">
    <SectionTitle>
4. Example
</SectionTitle>
    <Paragraph position="0"> To further illustrate our approach to plan reasoning, we present a sample TRAINS dialogue and describe how it is processed by the system. This dialogue was gathered from simulations where a person played the role of the system. A previous version of the TRAINS system processed the dialogue correctly--the current implementation will also once it is completed.</Paragraph>
    <Paragraph position="1"> The manager starts by communicating her goals, making several statements, and asking a question. The system replies and makes a proposal, which is then accepted by the man- null ager. The complete transcript is as follows: 1. M: We have to make OL 2. M: There are oranges at Avon and an OJ factory at Bath. 3. M: Engine E3 is scheduled to arrive at Avon at 3pm. 4. M: Shall we ship the oranges? 5. S:Ok.</Paragraph>
    <Paragraph position="2"> 6. S: Shall I start loading the oranges into the empty car at Avon? 7. S:Ok.</Paragraph>
    <Paragraph position="3">  The manager's first utterance results in the following call to the plan reasoner:  As described above, this results in an event node begin added to the (formerly empty) plan.</Paragraph>
    <Paragraph position="4"> Utterance (2) could be intelpreted simply as statements about the world. However, since the system already knows these  facts (and assumes the manager knows it knows, etc.), the utterance is interpreted as suggesting use of the objects, resuiting in the following calls: (incorp-role-filler ol THE-PLAN) (incorp-role-filler fl THE-PLAN) The constants ol and fl are determined by the scope and reference module.</Paragraph>
    <Paragraph position="5"> For the first call, the Make-0J event has a role for some oranges, but there is a constraint that they must be at the location of the factory. While the system does not yet know which factory this will be, it can deduce that Avon cannot be that city since there is no factory there. Since the system knows only that the oranges are at Avon now (by assumption), they cannot be used directly for the Make-0J. The plan reasoner therefore searches the space of acceptable plan graphs breadth-first, as described above. A connection is found by assuming that the oranges will be moved from Avon to the factory (wherever it turns out to be) via a Move-0ranges event. A description of this path (with assumptions) is returned to the discourse module. For the second call, the factory is acceptable as a role of the Make-0J event, so only the required equality assumption is returned. This has the additional effect of determining to where the oranges are shipped (Bath).</Paragraph>
    <Paragraph position="6"> Utterance (3) is also non-trivial to connect to the plan. We presume that the system already knows of E3's imminent arrival in the form of a sentence like (Occurs e0*Arrive). Again therefore, the statement is therefore taken to suggest the use of E3 in the plan. The system can reason about the effects of the Arrive event, in this case that E3 will be at Avon at 3pro. Even so, there is no event with a role for an engine in the plan yet, so the space of acceptable plans is again searched breadth-first. In this case, a connection is possible by postulating a MoveCar event that generates the previouslyadded Move-0ranges event, and a Move-Engine event that generates the Move-Car.</Paragraph>
    <Paragraph position="7"> The manager then makes the query (4), thereby relinquishing the turn. The dialogue module evaluates the query by calling the plan reasoner with: (incorp-event (lambda ?e'Move-Oranges (Eq? (oranges ?e) oi)) THE-PLAN) The plan reasoner finds the Move-0ranges event added a result of utterance (2), and indicates this to the discourse module. The system therefore replies with utterance (5), implicitly accepting the rest of the plan as well.</Paragraph>
    <Paragraph position="8"> The plan reasoner is then called to elaborate the plan, during which it performs fairly traditional means-ends planning to attempt to flesh out the plan. In so doing, it attempts to satisfy or assume preconditions and bind roles to objects in order to generate a supported plan. It freely makes consistent persistence assumptions by assuming inclusion of one unconstrained temporal interval within another known one. It can ignore some details of the plan, for example the exact route an engine should take. These can be reasoned about if necessary (i.e., if the human mentions them) but can be left up to the agents otherwise.</Paragraph>
    <Paragraph position="9"> In the example scenario, many things can be determined unambiguously. For example, the oranges should be unloaded at Bath at the appropriate time, leading to an event of type Unload. The choice of car for transporting the oranges, however, is ambiguous: in the scenario, there is an empty car at Avon as well as one attached to E3. The plan reasoner sig- null nals the ambiguity to the discourse module, which chooses one alternative and proposes it, leading to utterance (6). At this point the manager regains the turn and the dialogue continues until the system believes it has a mutually agreed upon plan. In this example, the manager accepts the system's suggestion, and the plan reasoner determines that the plan is ready for execution by the agents in the simulated TRAINS world.</Paragraph>
  </Section>
class="xml-element"></Paper>
Download Original XML