<?xml version="1.0" standalone="yes"?>
<Paper uid="E89-1005">
  <Title>A METAPLAN MODEL FOR PROBLEM-SOLVING DISCOURSE*</Title>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3. QUERY METAPLANS
</SectionTitle>
    <Paragraph position="0"> Although the plan-building metaplans that model the exploration of possible plans and the gradual refinement of an intended plan represent the agent's underlying intent, such moves are seldom observed directly in the expert advising setting. The agent's main observable actions are queries of various sorts, requests for information to guide the plan-building choices. While these queries do not directly add structure to the domain plan being considered, they do provide the expert with indirect evidence as to the plan-building choices the agent is considering. A key advantage of the metaplan approach is the precision with which it models the space of possible queries motivated by a given plan-building context, which in turn makes it easier to predict underlying plan-building structure based on the observed queries. The query metaplans include both plan feasibility queries about plan preconditions and slot data queries that ask about the possible fillers for free variables.</Paragraph>
    <Paragraph position="1">  The simplest feasibility query metaplan is ask-pred-value, which models at any build-plan node a query for a relevant value from one of the preconditions of that domain plan. For example, recalling the original IncreaseGroupReadiness context in which the Knox had been damaged, if the agent's first query in that context is &amp;quot;Where is Knox?&amp;quot;, the expert',~ task becomes to extend the context model in a way that explains the occurrence of that query. While that search would need to explore various paths, one match can be found by applying the sequence of metaplans shown in Figure 5.</Paragraph>
    <Paragraph position="3"> The build-subplan (2) and build-plan (3) nodes, as before, model the agent's choice to consider replacing the damaged ship. Because the ReplaceShip domain plan includes among its preconditions (not shown here) a predicate for the location of the damaged ship as the destination for the replacement, the ask-pred-value metaplan (4) can then match this query, explaining the agent's question as occasioned by exploration of the ReplaceShip plan. Clearly, there may in general be many metaplan derivations that can justify a given query. In this example, the RepairShip plan might also refer to the loca-tion of the damaged ship as the destination for transporting spare parts, so that this query might also arise from consideration of that plan. Use of such a model thus requires heuristic methods for maintaining and ranking alternative paths, but those are not described here.</Paragraph>
    <Paragraph position="4"> The other type of plan feasibility query is check-pred-value, where the agent asks a yes/no query about the value of a precondition. As an example of that in a context that also happens to require a deeper search than the previous example, suppose the agent followed the previous query with &amp;quot;Is Roark in the Suez?&amp;quot;. Figure 6 shows one branch the search would follow, building down from the build-plan for Replace- null Here the search has to go through instantiate-var and build-subaction steps. The ReplaceShip plan has a subaction (Sail ?ship ?old-loc ?newloc) with a precondition (location-of ?ship ?oldloc) that can match the condition tested in the query. However, if the existing build-plan node (1) were directly expanded by build-subaction to a build-plan for Sail, the ?new-ship variable would not be bound, so that that path would not fully explain the given query. The expert instead must deduce that the agent is considering the Roark as an instantiation for ReplaceShip's ?new-ship, with an instantiate-var plan (2) modeling that tentative instantiation and producing a build-plan for ReplaceShip (3) where the ?new-ship variable is properly instantiated so that its Sail sub-action (5) predicts the actual query correctly.</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
Slot Data Queries
</SectionTitle>
      <Paragraph position="0"> While the feasibility queries ask about the values of plan preconditions, the slot data queries gather data about the possible values of a free plan variable. The most frequent of the slot data query metaplans is ask-fillers, which asks for a list of the items that are of the correct type and that satisfy some subset of the precondition requirements that apply to the filler of the free variable. For example, an ask-fillers node attached beneath the build-plan for ReplaceShip in Figure 6 (1) could model queries like &amp;quot;List the frigates.&amp;quot; or &amp;quot;List the C1 frigates.&amp;quot;, since the ?new-ship variable is required by the preconditions of ReplaceShip to be a frigate in the top readiness condition.</Paragraph>
      <Paragraph position="1"> An ask-fillers query can also be applied to a context already restricted by an add-constraint metaplan to match a query that imposes a restriction not found in the plan preconditions.</Paragraph>
      <Paragraph position="2"> Thus the ask-fillers node in line (4) of Figure 7 would match the query &amp;quot;List the C1 frigates that are less than 500 miles from the Knox.&amp;quot; since it is applied to a build.plan node that already inherits that added distance constraint.</Paragraph>
      <Paragraph position="4"> Note that it is the query that indicates to the expert that the agent has decided to restrict consideration of possible fillers for the ?new-ship slot to those that are closest and thus can most quickly and cheaply replace the Knox, while the restriction in turn serves to make the query more efficient, since it reduces the number of items that must be included, leaving only those most likely to be useful.</Paragraph>
      <Paragraph position="5"> There are three other slot data metaplans - 38 that are closely related to ask.fillers in that they request information about the set of possible fillers but that do not request that the set be listed in full. The ask-cardinality metaplan requests only the size of such a set, as in the query &amp;quot;How many frigates are CI?&amp;quot;. Such queries can be easier and quicker to answer than the parallel ask-fillers query while still supplying enough information to indicate which planning path is worth pursuing. The check-cardinality metaplan covers yes/no queries about the set size, and askexistence covers the bare question whether the given set is empty or not, as in the query &amp;quot;Are there any C1 frigates within 500 miles of Knox?&amp;quot;.</Paragraph>
      <Paragraph position="6"> In addition to the slot data metaplans that directly represent requests for information, modeling slot data queries requires metaplans that modify the information to be returned from such a query in form or amount. There are three such query modifying metaplans, limitcardinality, sort.set-by-scalar, and ask-attributevalue. The limit-cardinality modifier models a restriction by the agent on the number of values to be returned by an ask-fillers query, as in the queries &amp;quot;List 3 of the frigates.&amp;quot; or &amp;quot;Name a C1 frigate within 500 miles of Knox.&amp;quot;. The sort.set.by-scalar metaplan covers cases where the agent requests that the results be sorted based on some scalar function, either one known to be relevant from the plan preconditions or one the agent otherwise believes to be so. The function of ask-attribute-value is to request the display of additional information along with the values returned, for example, &amp;quot;List the frigates and how far they are from the Knox.&amp;quot;.</Paragraph>
      <Paragraph position="7"> These modification metaplans can be combined to model more complex queries. For example, sort-set-by-scalar and ask-attribute-value are combined in the query &amp;quot;List the C1 frigates in order of decreasing speed showing speed and distance from the Knox.&amp;quot;. In the metaplan tree, branches with multiple modifying metaplans show their combined effects in the queries they will match. For example, Figure 8 shows the branch that matches the query &amp;quot;What are the 3 fastest frigates?&amp;quot;. The sort-set-by.scalar metaplan in line (2) requests the sorting of the possible fillers of the ?new-ship slot on the basis of descending speed, and the limit-cardinality metaplan in that context then restricts the answer to the first 3 values on that sorted list.</Paragraph>
      <Paragraph position="8"> As shown in these examples, the slot data query metaplans provide a model for some of the rich space of possible queries that the agent can use to get suggestions of possible fillers.</Paragraph>
      <Paragraph position="9"> Along with the plan feasibility metaplans, they model the structure of possible queries in their relationship to the agent's plan-refining and variable-instantiating moves. This tight modeling of that connection makes it possible to predict what queries might follow from a particular plan-building path and therefore also to track more accurately, given the queries, which plan-building p~ths the agent is actually considering. null</Paragraph>
      <Paragraph position="11"/>
    </Section>
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4. COMPARISON WITH OTHER
PLAN-BASED DISCOURSE MODELS
</SectionTitle>
    <Paragraph position="0"> The use of plans to model the domain task level organization of discourse goes back to Grosz's (1977) use of a hierarchy of focus spaces derived from a task model to understand anaphora. Robinson (1980a, 1980b) subsequently used task model trees of goals and actions to interpret vague verb phrases. Some of the basic heuristics for plan recognition and plan tracking were formalized by Allen and Perrault (1980), who used their plan model of the agent's goals to provide information beyond the direct answer to the agent's query. Carberry (1983, 1984, 1985a, 1985b) has extended that into a plan-tracking model for use in interpreting pragmatic ill-formedness and intersentential ellipsis. The approach presented here builds on those uses of plans for task modeling, but adds a layer modeling problem-solving structure. One result is that the connection between queries and plans that is implemented in those approaches either directly in the system code or in sets of inference rules is implemented here by the query metaplans. Recently, Kautz (1985) has outlined a logical theory for plan tracking that makes use of a classification of plans based on their included actions. His work suggested the structure of plan classes based on effects and preconditions that is used here to represent the agent's partially specified plan during the problem-solving dialogue.</Paragraph>
    <Paragraph position="1"> ~ - 39 Domain plan models have also been used as elements within more complete discourse models. Carberry's model includes, along with the plan tree, a stack that records the d~_scourse context and that she uses for predicting the discourse goals like accept-question or expresssurprise that are appropriate in a given discourse state. Sidner (1983, 1985) has developed a theory of &amp;quot;plan parsing&amp;quot; for distinguishing which of the plans that the speaker has in mind are plans that the speaker also intends the hearer to recognize in order to produce the intended response. Grosz and Sidner (1985) together have recently outlined a three-part model for discourse context; in their terms, plan models capture part of the intentional structure of the discourse. The metaplan model presented here tries to capture more of that intentional structure than strictly domain plan models, rather than to be a complete model of discourse context.</Paragraph>
    <Paragraph position="2"> The addition of metaplans to plan-based models owes much to the work of Wilensky (1983), who proposed a model in which metaplans, with other plans as arguments, were used to capture higher levels of organization in behavior like combining two different plans where some steps overlap. Wilensky's metaplans could be nested arbitrarily deeply, providing both a rich and extensive modeling tool. Litman (1985) applied metaplanning to model discourse structures like interruptions and clarification subdialogues using a stack of metaplan contexts. The approach taken here is similar to Litman's in using a metaplan component to enhance a plan-hased discourse model, but the metaplans here are used for a different purpose, to model the particular strategies that shape problem-solving discourse. Instead of a small number of metaplans used to represent changes in focus among domain plans, we have a larger set modeling the problem-solving and query strategies by which the agent builds a domain plan.</Paragraph>
    <Paragraph position="3"> Because this model uses its metaplans to capture different aspects of discourse structure than those modeled by Litman's, it also predicts other aspects of agent problem-solving behavior.</Paragraph>
    <Paragraph position="4"> Because it predicts which queries can be generated by considering particular plans, it can deduce the most closely related domain plan that could motivate a particular query. For instance, when the agent asked about frigates within 500 miles of Knox, the constraint on distance from Knox suggested that the agent was considering the ReplaceShip plan; a similar constraint on distance from port would suggest a RepairShip plan, looking for a ship to transport replacement parts to the damaged one. Another advantage of modeling this level of structure is that the metaplan nodes capture the stack of contexts on which follow-on queries might be based. In this example, follow-on queries might add a new constraint like &amp;quot;with fuel at 80% of capacity&amp;quot; as a child of the existing add-constraint node, add an alternative constraint like &amp;quot;within 1000 miles of Knox&amp;quot; as a sibling, query some other predicate within ReplaceShip, or attach even further up the tree. As pointed out below in Section 6, the metaplan structures presented here can also be extended to model alternate problem-solving strategies like compare-plan vs. build-plan, thus improving their predictive power through sensitivity to different typical patterns of agent movement within the metaplan tree. The clear representation of the problem-solving structure offered in this model also provides the right hooks for attaching heuristic weights to guide the plan tracking system to the most likely plan context match for each new input. Within problem-solving settings, a model that captures this level of discourse structure therefore strengthens an NL system's abilities to track the agent's plans and predict likely queries.</Paragraph>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5. APPLICATIONS AND
IMPLEMENTATION
</SectionTitle>
    <Paragraph position="0"> This improved ability of the metaplan model to track the agent's problem-solving process and predict likely next moves could be applied in many of the same contexts in which domain plan models have been employed, including anaphora and ellipsis processing and generating cooperative responses. For example, consider the following dialogue where the cruiser Biddle has had an equipment failure: Agent: Which other cruisers are in the Indian Ocean? (1) Expert: &lt;Lists 6 cruisers&gt; (2) Agent: Any within 200 miles of Biddle? (3) Expert: Home and Belknap. (4) Agent: Any of them at Diego Garcia? (5) Expert: Yes, Dale, and there is a supply flight going out to Biddle tonight. (6) The agent first asks about other cruisers that may have the relevant spare parts. The expert can deduce from the query in line (3) that the agent is considering SupplySparePartByShip.</Paragraph>
    <Paragraph position="1"> The &amp;quot;them&amp;quot; in the next query in line (5) could refer either to all six cruisers or to just the two listed in (4). Because the model does not predict the Diego Garcia query as relevant to the current plan context, it is recognized after search in the  -40metaplan tree as due instead to a SupplyPartBy-Plane plan, with the change in plan context implying the correct resolution of the anaphora and also suggesting the addition of the helpful information in (6). The metaplan model of the pragmatic context thus enables the NL processing to be more robust and cooperative.</Paragraph>
    <Paragraph position="2"> The Pragma system in which this metaplan model is being developed and tested makes use of the pragmatic model's predictions for suggesting corrections to ill-formed input. Given a suitable library of domain plans and an initial context, Pragma can expand its metaplan tree under heuristic control identifying nodes that match each new query in a coherent problem-solving dialogue and thereby building up a model of the agent's problem-solving behavior.</Paragraph>
    <Paragraph position="3"> A domain plan library for a subset of naval fleet operations plans and sets of examples in that domain have been built and tested. The resulting model has been used experimentally for dealing with input that is ill-formed due to a single localized error. Such queries can be represented as underspecified logical forms containing &amp;quot;wildcard&amp;quot; terms whose meaning is unknown due to the ill-formedness. By searching the metaplan tree for queries coherently related to the previous context, suggested fillers can be found for the unknown wildcards. For the roughly 20 examples worked with so far, Pragma returns between 1 and 3 suggested corrections for the ill-formed element in each sentence, found by searching for matching queries in its metaplan context model.</Paragraph>
  </Section>
  <Section position="7" start_page="0" end_page="0" type="metho">
    <SectionTitle>
6. EXTENSIONS TO THE MODEL AND
AREAS FOR FURTHER WORK
</SectionTitle>
    <Paragraph position="0"> This effort to capture further levels of structure in order to better model and predict the agent's behavior needs to be extended both to achieve further coverage of the expert advising domain and to develop models on the same level for other discourse settings. The current model also includes simplifying assumptions about agent knowledge and cooperativity that should be relaxed.</Paragraph>
    <Paragraph position="1"> Within the expert advising domain, further classes of metaplans are required to cover informing and evaluative behavior. While the expert can usually deduce the agent's plan-building progress from the queries, there are cases where that is not true. For example, an agent who was told that the nearest C1 frigate was the Wilson might respond &amp;quot;I don't want to use it.&amp;quot;, a problem-solving move whose goal is to help the expert track the agent's planning correctly, predicting queries about other ships rather than further exploration of that branch. Informing metaplans would model such actions whose purpose is to inform the expert about the agent's goals or constraints in order to facilitate the expert's plan tracking. Evaluative metaplans would capture queries whose purpose was not just establishing plan feasibility but comparing the cost of different feasible plans. Such queries can involve factors like fuel consumption rates that are not strictly plan preconditions. The typical patterns of movement in the metaplan tree are also different for evaluation, where the agent may compare two differently-instantiated build-plan nodes point for point, moving back and forth repeatedly, rather than following the typical feasibility pattern of depth-first exploration. Such a comparison pattern is highly structured, even though it would appear to the current model as patternless alternation between ask-pred-value queries on two different plan branches. Metaplans that capture that layer of problem-solving strategy would thus significantly extend the power of the model.</Paragraph>
    <Paragraph position="2"> Another important extension would be to work out the metaplan structure of other discourse settings. For an example closely related to expert advising, consider two people trying to work out a plan for a common goal; each one makes points in their discussion based on features of the possible plan classes, and the relationship between their statements and the plans and the strategy of their movements in the plan tree could be formalized in a similar system of metaplans.</Paragraph>
    <Paragraph position="3"> The current model also depends on a number of simplifying assumptions about the cooperativeness and knowledge of the agent and expert that should be relaxed to increase its generality. For example, the model assumes that both the expert and the agent have complete and accurate knowledge of the plans and their preconditions. As Pollack (1986) has shown, the agent's plan knowledge should instead be formulated in terms of the individual beliefs that define what it means to have a plan, so the model can handle cases where the agent's plans are incomplete or incorrect. Such a model of the agent's beliefs could also be a major factor in the heuristics of plan tracking, identifying, for example, predicates whose value the agent does not already know which therefore are more likely to be queried. The current model should also be extended to handle multiple goals on the agent's part, examples where the expert does not know in advance the agent's top-level goal, and cases of interactions between plans.</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
-41 -
</SectionTitle>
      <Paragraph position="0"> However, no matter how powerful the pragmatic modeling approach becomes, there is a practical limitation in the problem-solving setting on the amount of data available to the expert in the agent's queries. More powerful, higher level models require that the expert have appropriately more data about the agent's goals and problem-solving state. That tradeoff explains why an advisor who is also a friend can often be much more helpful than an anonymous expert whose domain knowledge may be similar but whose knowledge of the agent's goals and state is weaker. The goal for cooperative interfaces must be a flexible level of pragmatic modeling that can take full advantage of all the available knowledge about the agent and the recognizable elements of discourse structure while still avoiding having to create high-level structures for which the data is not available.</Paragraph>
  </Section>
class="xml-element"></Paper>