<?xml version="1.0" standalone="yes"?> <Paper uid="E87-1031"> <Title>PLANNING FOR PROBLEM FORMULATION IN ADVICE-GIVING DIALOGUE</Title> <Section position="6" start_page="187" end_page="189" type="concl"> <SectionTitle> * Revision </SectionTitle>
<Paragraph position="0"> Our plan generation is simplified because the execution of one subgoal cannot invalidate another, so constant monitoring of preconditions is obviated; but this is more than made up for by the difficulty of accommodating possible changes to the plan necessitated by the user's input. The choice of a planning process which either expands or repairs an existing plan reflects our third strategy. Indeed, the natural expansion of a plan can be seen as corresponding to the expected behavior of the user, and revisions happen only when the user takes the initiative. In this approach, the reasoning which takes place when the user follows the expected course is reduced to its minimum, and only digressions require extra effort.</Paragraph>
<Paragraph position="1"> Interactions with the user are handled through communicative games, and a special metaplan reacts when a communicative game appears on top of the Agenda. This metaplan triggers the execution of the game and analyses the outcome of the execution in order to decide on the updates to the Agenda. If the game has completely succeeded, i.e. all responses of the user fit the expectations, the communicative game is simply removed from the Agenda and replaced by ok-react actions for each new position expressed by the user. Otherwise there exist unexpected responses, and different actions are pushed onto the Agenda in such a way that the expected positions will be analysed first by means of ok-react actions, then unexpected positions concerning the current focus, and finally unexpected positions outside the current focus, by means of not-ok-react actions. For all these not-ok-react actions, there are metaplans to consider the precise situation and to decide on an appropriate reaction, with rearrangements and other modifications made as necessary to the Agenda of pending actions. Delaying the expansion of plans until it becomes necessary to execute them facilitates taking into account the effect of the user's responses on goals not yet addressed, as in, for example, the verification of constraints which the various parts of the problem definition impose on one another, or in noticing that the value of a missing variable can be computed from the combination of other values the user has already given.</Paragraph>
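To make this update step concrete, here is a minimal Prolog sketch (Prolog being the language of our prototype); the predicate names, the list representation of the Agenda, and the outcome term are conventions assumed for this sketch, not those of the actual module:

    % update_agenda(+Outcome, +Agenda0, -Agenda)
    % Outcome classifies the positions expressed by the user as expected,
    % unexpected but within the current focus, or outside the current focus.
    % Agenda0 has the just-executed communicative game on top.
    update_agenda(outcome(Expected, UnexpIn, UnexpOut), [game(_)|Rest], Agenda) :-
        maplist(wrap(ok_react), Expected, OkActs),
        maplist(wrap(not_ok_react), UnexpIn, InActs),
        maplist(wrap(not_ok_react), UnexpOut, OutActs),
        % expected positions are analysed first, then unexpected positions
        % on the current focus, then those outside it
        append([OkActs, InActs, OutActs, Rest], Agenda).

    wrap(Tag, Position, Action) :- Action =.. [Tag, Position].

For instance, update_agenda(outcome([p1], [p2], []), [game(g1), a, b], A) binds A to [ok_react(p1), not_ok_react(p2), a, b], so the expected position p1 is treated first.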
<Paragraph position="2"> What sorts of snags can occur in a dialogue that might force the system to revise its plans? Our problem model provides certain relations which must hold between the values provided by the user. The user might, however, give a value which is in conflict either with one of these constraints or with values previously given. We must point out the sticking-point and help the user resolve the conflict. The verify-constraint metaplan pushes a meet-constraint-game onto the Agenda. This game will present the local constraint which led to refusing the new position expressed by the user, and the justifications which relate this local constraint to the global constraints of the problem model. Consider, for instance, a simple equality constraint between the total amount and the sum of the amounts of the parts. With a $20,000 total-amount and a $5,000 amount for the emergency-fund, a $16,000 assign-value position for the amount of the cash-need would bring:
system - "The amount of your cash-need should be less than or equal to $15,000 for consistency with the total amount."
We also have preferences (and sometimes obligations) in the ordering of the various points to be addressed during the conversation, but the user might not respect them. For instance, the user might at any moment decide to change the subject, in which case we must consider the effects of the switch: if, for example, she/he asks to back up in the conversation to change something which was of necessity addressed before the current subject, this could force revision of all the values given from that point up to the present. We identify three classes of change-subject metaplans, which trigger when the new position expressed by the user bears on a context which is not the current focus and which modify the Agenda accordingly, based on the following situations (a sketch of the dispatch follows the list):
- the current focus must be treated before the new subject introduced by the user (according to sequencing policies in the problem model);
- the subject the user would like to examine has already been treated, and a modification would have consequences for what has been discussed since;
- there is no sequencing difficulty.</Paragraph>
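A hedged Prolog sketch of the classification behind these three metaplan classes; must_precede/2 (the sequencing policies of the problem model), already_treated/1 and values_given_since/2 (the dialogue history) are assumed interface predicates, not those of the actual prototype:

    % classify_change(+NewSubject, +CurrentFocus, -MetaplanClass)
    classify_change(New, Focus, treat_focus_first) :-
        must_precede(Focus, New), !.        % sequencing policy forbids the switch yet
    classify_change(New, _Focus, revise_since(Values)) :-
        already_treated(New),
        values_given_since(New, Values), !. % values the modification may invalidate
    classify_change(_New, _Focus, no_difficulty).

The metaplan selected by the returned class then rearranges the Agenda of pending actions as described above.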
<Paragraph position="3"> If the user asks for explanation of some point which she/he doesn't understand, the system enters a digression in the dialogue, after which the original topic is resumed.</Paragraph>
Low-Level Planning and the EXECUTOR
As discussed above, the decomposition of a plan often engenders the need for interaction with the user. This is done through the communicative games. Basically, a communicative game aims at representing a pair of turns between the user and the system, e.g., question/answer.
<Paragraph position="4"> (In fact, we also need to model one-turn games for the transitions between phases, e.g., the introduction/resumption of a new/old subject.) Although we can never be sure the second turn will take place as desired, the interest of representing games is to provide local expectations for the interpretation of the user's response. It should be noted that our intention in using these communicative games is not to impose a structure on the dialogue between the user and the system: these games correspond to an ideal dialogue in which the user would always respond as expected. The actual dialogue is a succession of communicative games which may fail, thereby reactivating the high-level planning process described in the previous section.</Paragraph>
<Paragraph position="5"> With each communicative game is associated an out-meaning which indicates the semantic content to be conveyed to the user when the game is executed. This out-meaning is expressed in the internal language of the dialogue module, in which mostly objects of the problem model appear. Adequate references in logical form to these objects are provided by the GENERATOR of the dialogue module. The referring process utilizes:
- the semantic representation of the World;
- the Focus-Stack, especially the current focus, which may be referred to elliptically;
- the conceptual state of the user.</Paragraph>
<Paragraph position="6"> This conceptual state is based on initial assumptions, e.g., whether a concept is a priori familiar to the user, and on what has already transpired during the dialogue, e.g., whether a concept has already been explained, or how the user has previously referred to an object of the problem model. The GENERATOR takes this information into account to adapt its descriptions and to link unknown concepts to familiar ones. Thus the user progressively learns what the problem model consists of and how it relates to her/his familiar concepts: a simple but efficient approach to the evolving interaction between the user and the system, held above as our fourth desirable strategy for person-machine advice-giving dialogues.</Paragraph>
<Paragraph position="7"> Symmetrically, a communicative game is also characterized by an in-expected meaning which stands for the expected response of the user, usually in terms of positions on the current focus or on parts of the current focus. The user's sentence is put into logical form by the natural-language front-end, and possible meanings are proposed by the INTERPRETER. The latter has to determine which object of the problem model the user's description could refer to. Each interpretation attempt is made within a context, that is, a particular object which is the root of the search process. Interpretation is based on two search strategies: the first uses specification links, while the second uses discriminant properties and requirement links. Two types of reference can be recognized. Direct reference uses only the first strategy, following the specification links starting from the context object, and allows for elliptical answers to questions. Indirect reference uses both strategies in succession: a search based on the discriminant properties determines candidate objects with a requirement link to the context object; these candidates then constitute the starting points for searching along specification links. The user does not have the same structured view of the financial world as the system does, and hence will not necessarily refer to things as we would like: the user will talk about "the car I want to buy in five years", which requires a cash-need. Interpretation attempts are ordered according to the stack of foci: the most salient focus (or layer of foci) is selected as context (or set of contexts), then the deeper foci are tried in succession. The INTERPRETER only tries a deeper focus if no interpretation has been found at a higher layer. Moreover, for each layer, the INTERPRETER tries to solve the direct reference before the indirect one, and returns all possible interpretations within the first layer and type of reference which allowed the reference to be solved.</Paragraph>
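This layered search can be pictured with a minimal Prolog sketch; specification_path/2, requirement_link/2, discriminates/2 and matches/2 are hypothetical stand-ins for the knowledge-base interface, not the prototype's actual predicates:

    % resolve_reference(+Description, +FocusStack, -Interpretations)
    % The FocusStack is a list of layers, each a list of context objects.
    % Direct reference is tried before indirect reference within a layer,
    % and a deeper layer is tried only if the shallower ones yield nothing.
    resolve_reference(Desc, [Layer|_], Interps) :-
        findall(Obj, (member(Ctx, Layer), direct_ref(Desc, Ctx, Obj)), Interps),
        Interps \= [], !.
    resolve_reference(Desc, [Layer|_], Interps) :-
        findall(Obj, (member(Ctx, Layer), indirect_ref(Desc, Ctx, Obj)), Interps),
        Interps \= [], !.
    resolve_reference(Desc, [_|Deeper], Interps) :-
        resolve_reference(Desc, Deeper, Interps).

    % direct reference: follow specification links from the context object
    direct_ref(Desc, Ctx, Obj) :-
        specification_path(Ctx, Obj),
        matches(Desc, Obj).

    % indirect reference: discriminant properties select candidates with a
    % requirement link to the context; specification links are then searched
    indirect_ref(Desc, Ctx, Obj) :-
        requirement_link(Ctx, Cand),
        discriminates(Desc, Cand),
        specification_path(Cand, Obj),
        matches(Desc, Obj).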
<Paragraph position="8"> The structure of past foci partly reflects the evolution of our task structure [Grosz 1985] and allows the user to refer back to past segments of the dialogue.</Paragraph>
<Paragraph position="9"> This structure is more supple than a mechanism which relies solely on unachieved goals: not only is the focus of a completed task not lost, but its location within this structure is influenced by the problem model in order to optimize subsequent recovery.</Paragraph>
<Paragraph position="10"> Additional knowledge is contained in the game descriptions: a feature in-react complements in-expected by providing a set of game-specific rules for interpreting the literal meaning of the user's response, as returned by the INTERPRETER, into its intended meaning within the particular game considered. A simple example consists of transformation rules for yes/ok/no answers depending on the game.</Paragraph>
<Paragraph position="11"> Conclusion
This work incorporates planning by the system at a high level of dialogue, and nevertheless leaves a great deal of initiative to the user. This flexibility is enhanced by the wide range of input styles allowed by the interpretation of input according to focus and by indirect reference. At the moment we have a prototype of a dialogue module, written in Prolog, which implements general strategies for person-machine advice-giving dialogue. The natural-language front-end, written in C, has been interfaced with the prototype, but the generation side requires further investigation. Generalizing the planning component and integrating more sophisticated plan-recognition techniques are among the issues to be addressed in the next prototype. Work is also under way to extend the concept base in our knowledge world in order to enrich the conversation with the user.</Paragraph> </Section> </Paper>