<?xml version="1.0" standalone="yes"?> <Paper uid="P93-1039"> <Title>RESPONDING TO USER QUERIES IN A COLLABORATIVE ENVIRONMENT*</Title> <Section position="4" start_page="0" end_page="0" type="metho"> <SectionTitle> 2 The Tripartite Model </SectionTitle> <Paragraph position="0"> Lambert and Carberry proposed a plan-based tripartite model of expert/novice consultation dialogue that includes a domain level, a problem-solving level, and a discourse level [6]. The domain level represents the system's beliefs about the user's plan for achieving some goal in the application domain. The problem-solving level encodes the system's beliefs about how both agents are going about constructing the domain plan. The discourse level represents the system's beliefs about both agents' communicative actions. Lambert developed a plan recognition algorithm that uses contextual knowledge, world knowledge, linguistic clues, and a library of generic recipes for actions to analyze utterances and construct a dialogue model [6].</Paragraph> <Paragraph position="1"> Lambert's system automatically adds to the dialogue model all actions inferred from an utterance. However, we argue that in a collaborative environment, the system should accept the proposed additions only if it believes that they are appropriate. Hence, we separate the dialogue model into an existing dialogue model and a proposed model, where the former constitutes the shared plan agreed upon by both agents, and the latter the newly proposed actions that have not yet been confirmed.</Paragraph> <Paragraph position="2"> Suppose earlier dialogue suggests that the user has the goal of getting a Master's degree in CS (Get-Masters(U, CS)). Figure 1 illustrates the dialogue model that would be built after the following utterances by Lambert's plan recognition algorithm, modified to accommodate the separation of the existing and proposed dialogue models and augmented with a relaxation algorithm to recognize ill-formed plans [2].</Paragraph> <Paragraph position="3"> U: I want to satisfy my seminar course requirement.</Paragraph> <Paragraph position="4"> Who's teaching CS689?</Paragraph> </Section> <Section position="5" start_page="0" end_page="280" type="metho"> <SectionTitle> 3 The Evaluator </SectionTitle> <Paragraph position="0"> A collaborative system should only incorporate proposed actions into an existing plan if they are considered appropriate. This decision is made by the evaluator, which will be discussed in this section. This paper only considers cases in which the user's proposal contains an infeasible action (one that cannot be performed) or would result in an ill-formed plan (one whose actions do not contribute to one another as intended) [9].</Paragraph> <Paragraph position="1"> We argue that the evaluator, in order to check for erroneous plans/goals, only needs to examine actions in the proposed model, since actions in the existing model were checked when they were proposed.</Paragraph> <Paragraph position="2"> When a chain of actions is proposed, the evaluator starts examining from the top-most action so that the most general action that is inappropriate will be addressed.</Paragraph> <Paragraph position="3"> [Figure 1 (original graphic garbled in extraction): the dialogue model built for the utterances above. The recoverable labels show a domain level with Satisfy-Seminar-Course(U,CS) above Take-Course(U,CS689), a problem-solving level with Build-Plan(U,S,Take-Course(U,CS689)) and Instantiate-Single-Var(U,S,_fac,Teaches(_fac,CS689)), and dashed regions marking the proposed actions; the remainder of the figure is not recoverable.] </Paragraph>
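<Paragraph> To make the separation between the existing and proposed models concrete, the following is a minimal sketch in Python of the structures Figure 1 depicts. It is our own illustration, not Lambert's implementation: the class names, fields, and typing are all assumptions.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Action:
        name: str       # e.g., "Take-Course"
        args: tuple     # e.g., ("U", "CS689")
        children: list = field(default_factory=list)  # actions intended to contribute to this one

    @dataclass
    class DialogueModel:
        # One action tree per level of the tripartite model.
        domain: Optional[Action] = None
        problem_solving: Optional[Action] = None
        discourse: Optional[Action] = None

    # Existing model: the shared plan already agreed upon by both agents.
    existing = DialogueModel(domain=Action("Get-Masters", ("U", "CS")))

    # Proposed model: actions inferred from the new utterances, not yet confirmed.
    proposed = DialogueModel(
        domain=Action("Satisfy-Seminar-Course", ("U", "CS"),
                      children=[Action("Take-Course", ("U", "CS689"))]),
        problem_solving=Action("Build-Plan", ("U", "S", "Take-Course(U,CS689)")))
The evaluator described in this section operates on the proposed trees; only after evaluation (and any ensuing negotiation) succeeds would they be merged into the existing model. </Paragraph>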
<Paragraph position="6"> The evaluator checks whether the existing and proposed actions together constitute a well-formed plan, one in which the children of each action contribute to their parent action. Therefore, for each pair of actions, the evaluator checks against its recipe library to determine whether their parent-child relationship holds. The evaluator also checks whether each additional action is feasible by examining whether its applicability conditions are satisfied and its preconditions can be satisfied. (Both applicability conditions and preconditions are prerequisites for executing a recipe; however, it is unreasonable to attempt to satisfy an applicability condition, whereas preconditions can be planned for.)</Paragraph> <Paragraph position="7"> We contend that well-formedness should be checked before feasibility, since the feasibility of an action that does not contribute to its parent action is irrelevant. Similarly, the well-formedness of a plan that attempts to achieve an infeasible goal is also irrelevant. Therefore, we argue that the processes of checking well-formedness and feasibility should be interleaved in order to address the most general action that is inappropriate. We show how this interleaved process works by referring back to Figure 1.</Paragraph> <Paragraph position="8"> Suppose the system believes that CS689 is not a seminar course. The evaluation process starts from Satisfy-Seminar-Course(U,CS), the top-most action in the proposed domain model. The system's knowledge indicates that Satisfy-Seminar-Course(U,CS) contributes to Get-Masters(U,CS). The system also believes that the applicability conditions and the preconditions for the Satisfy-Seminar-Course domain plan are satisfied, indicating that the action is feasible. However, the system's recipe library gives no reason to believe that Take-Course(U,CS689) contributes to Satisfy-Seminar-Course(U,CS), since CS689 is not a seminar course. The evaluator therefore decides that this pair of proposed actions would make the domain plan ill-formed.</Paragraph>
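<Paragraph> The interleaved evaluation just illustrated can be summarized procedurally. The sketch below is our own reconstruction, not the authors' code: the recipe-library interface (contributes, applicability_conditions_hold, preconditions_satisfiable) is a stub standing in for lookups against the recipe library.
    def evaluate(parent, proposed_chain, recipes):
        """Walk a proposed chain of actions top-down, interleaving
        well-formedness and feasibility checks so that the most
        general inappropriate action is the one reported."""
        for action in proposed_chain:
            # Well-formedness first: the feasibility of an action that
            # does not contribute to its parent is irrelevant.
            if parent is not None and not recipes.contributes(action, parent):
                return ("ill-formed", action, parent)
            # Feasibility next: applicability conditions must already hold,
            # while preconditions need only be satisfiable (they can be planned for).
            if not (recipes.applicability_conditions_hold(action)
                    and recipes.preconditions_satisfiable(action)):
                return ("infeasible", action, None)
            parent = action
        return ("ok", None, None)
On the Figure 1 proposal, the first iteration (Satisfy-Seminar-Course against Get-Masters) passes both checks, while the second fails the contributes test for Take-Course(U,CS689), yielding the ill-formed verdict that triggers the negotiation of Section 4. </Paragraph>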
</Section> <Section position="6" start_page="280" end_page="281" type="metho"> <SectionTitle> 4 When the Proposal is Erroneous </SectionTitle> <Paragraph position="0"> The goal selector's task is to determine, based on the current dialogue model, an intentional goal [8] that is most appropriate for the system to pursue. An intentional goal could be to directly respond to the user's utterance, to correct a user's misconception, to provide a better alternative, etc. In this paper we only discuss the goal selector's task when the user has an erroneous plan/goal. In a collaborative environment, if the system decides that the proposed model is infeasible/ill-formed, it should refuse to accept the additions and suggest modifications to the proposal by entering a negotiation subdialogue. For this purpose, we developed recipes for two problem-solving actions, Correct-Goal and Correct-Inference, each a specialization of a Modify-Proposal action. We illustrate the Correct-Inference action in more detail.</Paragraph> <Paragraph position="2"> We show two problem-solving recipes, Correct-Inference and Modify-Acts, in Figure 2. The Correct-Inference recipe is applicable when _s2 believes that _act1 contributes to achieving _act2, while _s1 believes that such a relationship does not hold. The goal is to make the resultant plan well-formed; therefore, its body consists of an action Modify-Acts, which deletes the problematic components of the plan, and an action Insert-Correction, which inserts new actions/variables into the plan. One precondition of Modify-Acts is believe(_s2, ¬contributes(_act1,_act2)) (note that in Correct-Inference, _s2 believes contributes(_act1,_act2)), and the change in _s2's belief can be accomplished by invoking the discourse-level action Inform so that _s1 can convey the ill-formedness to _s2. This Inform act may lead to further negotiation about whether _act1 contributes to _act2. Only when _s1 receives positive feedback from _s2, indicating that _s2 accepts _s1's belief, can _s1 assume that the proposed actions can be modified.</Paragraph> <Paragraph position="3"> Earlier discussion showed that the proposed actions in Figure 1 would make the domain plan ill-formed. Therefore, the goal selector posts a goal to modify the proposal, which causes the Correct-Inference recipe in Figure 2 to be selected. The variables _act1 and _act2 are bound to Take-Course(U,CS689) and Satisfy-Seminar-Course(U,CS), respectively, since the system believes that the former does not contribute to the latter.</Paragraph> <Paragraph position="4"> Figure 3 shows how we envision the planner expanding the Correct-Inference recipe, which results in the generation of the following two utterances: (1) S: Taking CS689 does not contribute to satisfying the seminar course requirement. (2) S: CS689 is not a seminar course.</Paragraph> <Paragraph position="5"> The action Inform(_s1,_s2,_prop) has the goal believe(_s2,_prop); therefore, utterance (1) is generated by executing the Inform action as an attempt to satisfy the preconditions of the Modify-Acts recipe. Utterance (2) results from the Address-Believability action, a subaction of Inform, which supports the claim in (1). The problem-solving and discourse levels in Figure 3 operate on the entire dialogue model shown in Figure 1, since the evaluation process acts upon this model. The process can thus be viewed as a meta-planning process; when its goal is achieved, attention returns to the modified dialogue model.</Paragraph> <Paragraph position="6"> Now consider the case in which the user continues by accepting utterances (1) and (2), which satisfies the precondition of Modify-Acts. Modify-Acts has two specializations: Remove-Act, which removes the incorrect action (and all of its children), and Alter-Act, which generalizes the proposed action so that the plan will be well-formed.</Paragraph> <Paragraph position="7"> Since Take-Course contributes to Satisfy-Seminar-Course as long as the course is a seminar course, the system generalizes the user's proposed action by replacing CS689 with a variable. This variable may be instantiated by the Insert-Correction subaction of Correct-Inference when the dialogue continues.</Paragraph>
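<Paragraph> The two specializations of Modify-Acts can likewise be sketched, reusing the Action structure from the earlier sketch; the helper names and the _crs variable are our own illustrative assumptions, not the authors' recipes.
    def remove_act(parent, bad_action):
        """Remove-Act: delete the incorrect action and, with it,
        all of its children."""
        parent.children.remove(bad_action)

    def alter_act(action, bad_value, variable="_crs"):
        """Alter-Act: generalize the proposed action by replacing the
        parameter that broke well-formedness with an unbound variable,
        which Insert-Correction may instantiate as the dialogue continues."""
        action.args = tuple(variable if a == bad_value else a for a in action.args)
        return action

    # Take-Course(U,CS689) becomes Take-Course(U,_crs), which does contribute
    # to Satisfy-Seminar-Course(U,CS) for any seminar course _crs.
    alter_act(Action("Take-Course", ("U", "CS689")), "CS689")
</Paragraph>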
<Paragraph position="8"> Note that our model accounts for why the user's original question about the instructor of CS689 is never answered: a conflict was detected that made the question superfluous.</Paragraph> </Section> <Section position="7" start_page="281" end_page="281" type="metho"> <SectionTitle> 5 Related Work </SectionTitle> <Paragraph position="0"> Several researchers have studied collaboration [1, 3, 10], and Allen proposed different plan modalities depending on whether a plan fragment is shared, proposed and acknowledged, or merely private [1]. However, they have emphasized discourse analysis; none has provided a plan-based framework for proposal negotiation, specified appropriate system responses during collaboration, or accounted for why a question might never be answered.</Paragraph> <Paragraph position="1"> Litman and Allen used discourse meta-plans to handle a class of correction subdialogues [7]. However, their Correct-Plan only addressed cases in which an agent adds a repair step to a pre-existing plan that does not execute as expected. Thus their meta-plans do not handle correction of proposed additions to the dialogue model (since this generally does not involve adding a step to the proposal).</Paragraph> <Paragraph position="2"> Furthermore, they were only concerned with understanding utterances, not with generating appropriate responses. The work in [5, 11, 9] addressed generating cooperative responses and responding to plan-based misconceptions, but did not capture these within an overall collaborative system that must negotiate proposals with the user. Heeman [4] used meta-plans to account for collaboration on referring expressions. We have addressed collaboration in constructing the user's task-related plan, captured cooperative responses and negotiation of how the plan should be constructed, and provided an account of why a user's question may never be answered.</Paragraph> </Section> </Paper>