<?xml version="1.0" standalone="yes"?> <Paper uid="J98-3002"> <Title>Collaborative Response Generation in Planning Dialogues</Title> <Section position="3" start_page="356" end_page="359" type="relat"> <SectionTitle> 2. Related Work </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="356" end_page="358" type="sub_section"> <SectionTitle> 2.1 Modeling Collaboration </SectionTitle> <Paragraph position="0"> Allen (1991) proposed a discourse model that differentiates among the shared and individual beliefs that agents might hold during collaboration. His model consists of six plan modalities, organized hierarchically with inheritance in order to accommodate the different states of belief that arise during collaboration. The plan modalities include plan fragments that are private to an agent, those proposed by an agent but not yet acknowledged by the other, those proposed by an agent and acknowledged but not yet accepted by the other agent, and a shared plan between the two agents. Plan fragments move from the lower-level modalities (private plans) to the top-level shared plans if appropriate acknowledgment/acceptance is given. Although Allen's framework provides a good basis for representing the state of collaborative planning, it does not specify how the collaborative planning process should be carried out or how responses should be generated when disagreements arise in such planning dialogues. Grosz and Sidner (1990) developed a formal model that specifies the beliefs and intentions that must be held by collaborative agents in order for them to construct a shared plan. Their model, dubbed the SharedPlan model, eliminates the &quot;master-slave assumption&quot; typically made by plan recognition work prior to their effort. 
Thus, instead of treating collaborative planning as having one controlling agent and one reactive agent, where the former has absolute control over the formation of the plan and the latter is involved only in its execution, they view collaborative planning as &quot;two agents develop[ing] a plan together rather than merely execut[ing] the existing plan of one of them&quot; (page 427). Lochbaum (1994) developed an algorithm for modeling discourse using this SharedPlan model and showed how information-seeking dialogues could be modeled in terms of attempts to satisfy knowledge preconditions (Lochbaum 1995). (Footnote 1: Although the examples that illustrate CORE's response generation process in this paper are all taken from the university course advisement domain, the strategies that we identified can easily be applied to other collaborative planning domains. For examples of how the system can be applied to the financial advisement and library information retrieval domains, see Section 8.1, and to the air traffic control domain, see Chu-Carroll and Carberry (1996).)</Paragraph> <Paragraph position="1"> Grosz and Kraus (1996) extended the SharedPlan model to handle actions involving groups of agents and complex actions that decompose into multiagent actions. They proposed a formalism for representing collaborative agents' SharedPlans using three sources of information: 1) the agents' intentions to do some actions, 2) their intentions that other agents will carry out some actions, and 3) their intention that the joint activity will be successful. However, in their model agents avoid adopting conflicting intentions rather than attempting to resolve them.</Paragraph> <Paragraph position="2"> Sidner analyzed multiagent collaborative planning discourse and formulated an artificial language for modeling such discourse using proposal/acceptance and proposal/rejection sequences (Sidner 1992, 1994). 
In other words, a multiagent collaborative planning process is represented in her language as one agent making a proposal (of an action or belief) to the other agents, who either accept or reject the proposal. Each action (such as Propose or Accept) is represented by a message sent from one agent to another, corresponding to the natural language utterances in collaborative planning discourse. Associated with each message is a set of actions that modifies the stacks of open beliefs, rejected beliefs, individual beliefs, and mutual beliefs, thereby facilitating belief revision. However, it was not Sidner's intention to specify conflict detection and resolution strategies for agents involved in collaborative interactions. Our Propose-Evaluate-Modify framework, to be discussed in Section 3.2, builds on this notion of proposal/acceptance and proposal/rejection sequences during collaborative planning.</Paragraph> <Paragraph position="3"> Walker (1996b) also developed a model of collaborative planning in which agents propose options, deliberate on proposals that have been made, and either accept or reject proposals. Walker argues against what she terms the redundancy constraint in discourse (the constraint that redundant information should be omitted). She notes that this constraint erroneously assumes that a hearer will automatically accept claims that are presented to him, and would cause the speaker to believe that it is unnecessary to present evidence that the hearer already knows or should be able to infer (even though this evidence may not currently be part of his attentional focus). 
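The proposal/acceptance and proposal/rejection sequences just described can be sketched in a few lines. This is only an illustrative caricature under invented names (Proposal, BeliefStore, propose, respond are ours), not the actual representation of Sidner's language or Walker's model:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Proposal:
    """A proposed action or belief, sent as a message between agents."""
    content: str
    proposer: str

@dataclass
class BeliefStore:
    """Rough analogue of the belief stacks: open (proposed, undecided),
    rejected, and mutual beliefs (illustrative names, not Sidner's)."""
    open: list = field(default_factory=list)
    rejected: list = field(default_factory=list)
    mutual: list = field(default_factory=list)

def propose(store: BeliefStore, p: Proposal) -> None:
    """A Propose message places the proposal on the stack of open beliefs."""
    store.open.append(p)

def respond(store: BeliefStore, p: Proposal, accept: bool) -> None:
    """Acceptance promotes an open proposal to mutual belief; rejection
    records it as rejected, at which point a Propose-Evaluate-Modify style
    agent would offer a modification rather than drop the proposal."""
    store.open.remove(p)
    (store.mutual if accept else store.rejected).append(p)

store = BeliefStore()
p = Proposal("take CS360 next semester", proposer="advisee")
propose(store, p)
respond(store, p, accept=True)
assert p in store.mutual and not store.open
```

On rejection, a Propose-Evaluate-Modify style system would not stop at recording the rejection but would generate a modified proposal, restarting the cycle.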
Walker investigated the efficiency of different communicative strategies, particularly the use of informationally redundant utterances (IRUs), under different assumptions about resource limits and processing costs, and her work suggests that effective use of IRUs can reduce effort during collaborative planning and negotiation.</Paragraph> <Paragraph position="4"> Heeman and Hirst (1995) investigated collaboration on referring expressions for objects copresent with the dialogue participants. They viewed the processes of building referring expressions and identifying their referents as a collaborative activity, and modeled them in a plan-based paradigm. Their model allows for negotiation in selecting among multiple candidate referents; however, such negotiation is restricted to the disambiguation process, rather than being a negotiation in which agents try to resolve conflicting beliefs.</Paragraph> <Paragraph position="5"> Edmonds (1994) studied an aspect of collaboration similar to that studied by Heeman and Hirst. However, he was concerned with collaborating on references to objects that are not mutually known to the dialogue participants (such as references to landmarks in direction-giving dialogues). Again, Edmonds captures referent identification as a collaborative process and models it within the planning/plan recognition paradigm. 
However, he focuses on situations in which an agent's first attempt at describing a referent is considered insufficient by the recipient and the agents collaborate on expanding the description to provide further information, but he does not consider cases in which conflicts arise between the agents during this process.</Paragraph> <Paragraph position="6"> Traum (1994) analyzed collaborative task-oriented dialogues and developed a theory of conversational acts that models conversation using actions at four different levels: turn-taking acts, grounding acts, core speech acts, and argumentation acts.</Paragraph> <Paragraph position="7"> However, his work focuses on the recognition of such actions, in particular grounding acts, and utilizes a simple dialogue management model to determine appropriate acknowledgments from the system.</Paragraph> </Section> <Section position="2" start_page="358" end_page="359" type="sub_section"> <SectionTitle> 2.2 Cooperative Response Generation </SectionTitle> <Paragraph position="0"> Many researchers (McKeown, Wish, and Matthews 1985; Paris 1988; McCoy 1988; Sarner and Carberry 1990; Zukerman and McConachy 1993; Logan et al. 1994) have argued that information from the user model should affect a generation system's decisions about what to say and how to say it. One user model attribute with such an effect is the user's domain knowledge, which Paris (1988) argues influences not only the amount of information given (based on Grice's Maxim of Quantity [Grice 1975]), but also the kind of information provided. McCoy (1988) uses the system's model of the user's domain knowledge to determine possible reasons for a detected misconception and to provide appropriate explanations to correct it. 
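The common thread in these systems, conditioning what is said on a model of the user's domain knowledge, can be caricatured in a few lines; the function and data structures below are hypothetical sketches of that general idea, not the cited systems' actual implementations:

```python
def tailor_response(concept, user_model, glossary):
    """Give a bare mention if the user model records the concept as known;
    otherwise attach a definition (roughly, Grice's Maxim of Quantity).
    Illustrative sketch only -- not any cited system's algorithm."""
    if concept in user_model["knows"]:
        return concept                      # no further explanation needed
    definition = glossary.get(concept, "a concept not in the glossary")
    user_model["knows"].add(concept)        # assume the definition is absorbed
    return f"{concept} ({definition})"

user_model = {"knows": {"prerequisite"}}
glossary = {"seminar": "a small discussion-based course"}
print(tailor_response("prerequisite", user_model, glossary))  # prints "prerequisite"
print(tailor_response("seminar", user_model, glossary))       # prints "seminar (a small discussion-based course)"
```

The update to the user model after explaining a concept is what lets a system such as a tutorial dialogue avoid re-defining concepts it has already introduced.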
Cawsey (1990) also uses a model of user domain knowledge, to determine whether a user of her tutorial system knows a concept and thereby whether further explanation is required. Sarner and Carberry (1990) take into account the user's possible plans and goals to help the system determine the user's perspective and provide definitions suited to the user's needs. McKeown, Wish, and Matthews (1985) infer the user's goal from her utterances and tailor the system's response to that particular viewpoint. In addition, Zukerman and McConachy (1993) take into account a user's possible inferences in generating concise discourse.</Paragraph> <Paragraph position="1"> Logan et al., in developing their automated librarian (Cawsey et al. 1993; Logan et al. 1994), introduced the idea of utilizing a belief revision mechanism (Galliers 1992) to predict whether a given set of evidence is sufficient to change a user's existing belief. They argued that in the information retrieval dialogues they analyzed, &quot;in no cases does negotiation extend beyond the initial belief conflict and its immediate resolution&quot; (Logan et al. 1994, 141); thus they do not provide a mechanism for extended collaborative negotiation. On the other hand, our analysis of naturally occurring collaborative negotiation dialogues shows that conflict resolution does extend beyond a single exchange of conflicting beliefs; we therefore employ a recursive Propose-Evaluate-Modify framework that allows for extended negotiation. Furthermore, their system deals with one conflict at a time, while our model is capable of selecting a focus in its pursuit of conflict resolution when multiple conflicts arise.</Paragraph> <Paragraph position="2"> Moore and Paris (1993) developed a text planner that captures both intentional and rhetorical information. 
Because their system includes a Persuade operator for convincing a user to perform an action, it does not assume that the hearer would perform a recommended action without additional motivation. However, although they provide a mechanism for responding to requests for further information, they do not identify strategies for negotiating with the user if the user expresses conflict with the system's recommendation.</Paragraph> <Paragraph position="3"> Raskutti and Zukerman (1994) developed a system that generates disambiguating and information-seeking queries during collaborative planning activities. In situations where their system infers more than one plausible goal from the user's utterances, it generates disambiguating queries to identify the user's intended goal. In cases where a single goal is recognized but contains insufficient detail for the system to construct a plan to achieve it, their system generates information-seeking queries to elicit additional information from the user in order to further constrain the user's goal.</Paragraph> <Paragraph position="4"> Thus, their system focuses on cooperative response generation in scenarios where the user does not provide sufficient information in his proposal to allow the agents to immediately adopt his proposed actions. On the other hand, our system focuses on collaborative response generation in situations where insufficient information is available to determine the acceptance of an unambiguously recognized proposal, and in those where a conflict is detected between the agents with respect to the proposal.</Paragraph> </Section> </Section> </Paper>