<?xml version="1.0" standalone="yes"?>
<Paper uid="C90-3061">
  <Title>Modelling Variations in Goal-Directed Dialogue</Title>
  <Section position="2" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2 The Dialogue Domain
</SectionTitle>
    <Paragraph position="0"> The task domain \['or the dialogues involves navigating around a simple map containing approximately fifteen landmarks. Two participants are given maps which differ slightly; each map may contain some features omitted on the other, and features in the same location on the two maps may have different labels. The first participant also has a route from the labelled start point to one map feature. The second participant must draw the * Supported by a scholarship from the Marshall Aid Conunemoration Commission. Thanks to Chris Mellish for supervising me and Alison Cawsey for helpful comments, route on his or her map. In this task, the participants must cooperate because neither of them knows enough about the other's map to be able to construct accurate descriptions. At the same time, small changes in the map test how participants hrmdle referential ambiguities, how information is carried from one part of the dialogue to the next, and when agents decide to replan rather than repair an existing plan. Despite the possibilities for referential difficulties, this task minimizes the use of real world knowledge as long as all participants understand how to navigate. The task is simple enough to be completed in computer-simulated dialogue, but admits the dialogue variations to be tested in the research. null</Paragraph>
  </Section>
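To make the task setup concrete, the sketch below shows one hypothetical way the two slightly different maps and the single-sided route could be represented. The feature names, coordinates, and the `Feature` class are invented for illustration and are not taken from the paper's software.

```python
# Minimal illustrative sketch (not the paper's implementation) of the map task
# described above: two maps that differ slightly, and a route known only
# to the first participant. Feature names and coordinates are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    label: str       # name printed on this participant's map
    position: tuple  # (x, y) grid location shared by both maps

# Participant A's map includes a swamp; participant B's map omits it and
# labels one shared location differently.
map_a = {
    Feature("palm beach", (2, 1)),
    Feature("swamp", (3, 2)),
    Feature("waterfall", (4, 3)),
    Feature("bridge", (5, 5)),
}
map_b = {
    Feature("palm beach", (2, 1)),
    Feature("waterfall", (4, 3)),
    Feature("lion's den", (5, 5)),
    Feature("wood", (6, 5)),
}

# Only participant A has the route: an ordered list of waypoints from the
# start point to the goal feature.
route_a = [(0, 0), (2, 1), (3, 2), (4, 3)]

# Neither participant can see the other's map, so the differences below are
# exactly what the dialogue has to uncover.
only_on_a = {f.label for f in map_a} - {f.label for f in map_b}
only_on_b = {f.label for f in map_b} - {f.label for f in map_a}
print("features B must be told about:", only_on_a)
print("features A does not know about:", only_on_b)
```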
  <Section position="3" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3 The Central Idea
</SectionTitle>
    <Paragraph position="0"> The central idea behind the research is that agents need multiple strategies for engaging in goM-directed dialogue because they do not necessarily know the best way to communicate with a given partner. Self \[5\] shows that dialogue is crucial where neither agent has all of the relevant domain knowledge. Dialogue is also necessary for any explanations where agents don't have accurate models of their partners \[3\]. Even if agents have all of the relevant domain knowledge, they may not know how best to present that knowledge, especially since explanations are about exactly that part of the task which is not mutually known to the dialogue partners \[1\]. Shadbolt \[6\] presents evidence that humans handle uncertainties about what information to give and how to present it by having a set of strategies for each aspect of the dialogue. Then the agent can tailor explanations to a particular partner by using the strategy that best fits the situation. For instance, human agents who believe that much domain information will have to be communicated structure their presentation carefully and often elicit feedback from the partner, like participant A of Sha.dbolt's \[6\] example 6.16: A: have you got wee palm trees aye? B: uhu A: right go just + a wee bit along to them have you got a swamp? n~ er A: right well just go + have you got a watt&gt; fall7 Al,~ents who believe that most domain in\[brmation is ki~own to their partner are more likely to rely on interru ptions fl'om the partner and replanning, a.s in example</Paragraph>
    <Paragraph position="2"> and &amp;quot;around&amp;quot;. The agents converse in an artificial language resembling their shared planning language, but substituting referring expressions for internal feature identifiers. Under these constraints, the agents use dialogue strategies to decide on the content and form of the dialogue. The existing system is a prototype de-.</Paragraph>
    <Paragraph position="3"> signed to show that incorporating such strategies can explain some variations in human dialogue and make agents more flexible. An improved set of strategies is being extracted from the corpus of human dialogues.</Paragraph>
    <Paragraph position="4"> The end result of the project will be a theory of how communicative strategies control variations in dialogue, and software in which computer-simulated agents use these strategies to complete the navigation task.</Paragraph>
    <Paragraph position="5"> A: and then q- go up about and over the bridge B: I've not got a bridge I've got a lion's den and a wood A: have you got a river'? Ol~e way to make computer generated explanations look m(,re natural is to plan them using strategies modelled on ~.he human ones. Although strategies like these could be built into the way a system plans an explanation, making strategy choices explicit allows the strategies themselves to be investigated, providiug a way to test oul. how variatio,~s affect the ensuing dialogue. The go~d of the present research is both to show how using dialogue strategies can improve tile &amp;quot;naturalness&amp;quot; of computer-generated task explanations and to provide insight into the dialogue strategies which humans use and how they interact.</Paragraph>
  </Section>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4 The Project
</SectionTitle>
    <Paragraph position="0"> The project involves creating a theory of human dialogist strategies a.m:l modelling it. usiug two cotnputer processes that converse. Communication for the comi)llter agents, bg~sed or~ the model in Power \[4\] and I\[oughtou \[2\], is simplified in a number of ways. A convener wakes the agents in turn and interactions are made by placing messages in mailboxes, leaving out the complications of turn-taking and interruptiom Rather than reason from &amp;quot;visual&amp;quot; images of the maps, agents begin with sets of beliefs about the positional relationship.~ among objects and share knowledge about both dialogue conventions, expressed a.s interactio'n flames \[2\], ~md navigational concel)tS like &amp;quot;toward&amp;quot;, &amp;quot;between&amp;quot;,</Paragraph>
  </Section>
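The convener-and-mailbox scheme described above can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the paper's software: the class names, the strict two-agent rotation, and the placeholder `take_turn` body are all invented.

```python
# Minimal sketch of the simplified communication scheme described above:
# a convener wakes the agents in turn and messages are exchanged through
# mailboxes, so turn-taking and interruption never have to be modelled.
# Class and method names are invented for illustration.

from collections import deque

class Agent:
    def __init__(self, name):
        self.name = name
        self.mailbox = deque()  # incoming messages placed here by the convener

    def take_turn(self):
        """Read any waiting messages and return an outgoing message."""
        while self.mailbox:
            msg = self.mailbox.popleft()
            print(f"{self.name} received: {msg}")
        # A real agent would plan its contribution from its beliefs here.
        return f"utterance from {self.name}"

def convene(agents, turns):
    """Wake each agent in strict rotation, routing its message to the other agent."""
    for i in range(turns):
        speaker = agents[i % len(agents)]
        hearer = agents[(i + 1) % len(agents)]
        message = speaker.take_turn()
        if message is not None:
            hearer.mailbox.append(message)

convene([Agent("A"), Agent("B")], turns=4)
```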
  <Section position="5" start_page="0" end_page="325" type="metho">
    <SectionTitle>
5 The Program
</SectionTitle>
    <Paragraph position="0"> The current version of the software uses dialogue strategies adapted from Shadbolt \[6\]. tte lists seven dif~ fercnt aspects of dialogue along which strategies may be developed. Agents may vary strategies tbr feedback (how they handle the partner's utterances), speciJicao lion (how they construct and resolve referring expressions), o~lology (how they decide from what features are available how to construct route descriptions), foe~zs (the amount of explMt focus intbrmation given), differonce (the effort spent determining what the partner's utterances mean), decenlering (whether intbrmation is presented using the agent's or the partner's names tbr fe.atures), and hypolhesi~ formalion (the effort spent making hypotheses about the partner's knowledge).</Paragraph>
    <Paragraph position="1"> Agents choose strategies tbr each of these aspects depending on how explicit they want to be, which in turn depends on how likely the partner is to misunderstand each aspect of the dialogue. Some of Shadbolt's aspects are interrelated; for instance, agents that provide explicit information about the current focus do not need to construct referring expressions as carefully as agents who provide no focusing information at all. Our own work divides the strategies slightly differently so that they ea.n be divided into sets depending on whether they atfect planning the dialogue interaction, planning the content, planning the presentation, or realizing references; the goal is to make the strategies ms modular as possible so that they can be modelled simply. Each simulated agent takes on a set of strategies for tile duration of' a dialogue. Currently, the prototype varies how much intbrmation about tile structttre of the dialogue is explicitly given, which features are included in a route description depending on a model of the partner's be- null liefs, how often an agent allows interactions from the partner, and how much repair an agent is willing to do rather than replan a description. The agents also use heuristics to prefer plans where the partner already understands the plan's prerequisites. The output of the program is a simulated dialogue where each agent keeps the same strategies for the course of the dialogue; an obvious future step is to allow agents to adapt to a particular partner or part of the task by varying their strategies within a dialogue.</Paragraph>
  </Section>
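One concrete choice described above is the heuristic preference for plans whose prerequisites the partner is already believed to understand. The sketch below shows one hypothetical way such a preference could be scored; the plan format, belief model, and scoring function are invented and do not reproduce the prototype's planner.

```python
# Hypothetical sketch of the heuristic mentioned above: prefer plans whose
# prerequisites the partner is already believed to understand. The plan
# format and belief model are invented for illustration.

def prerequisite_score(plan, partner_beliefs):
    """Fraction of a plan's prerequisites the partner is believed to know."""
    prereqs = plan["prerequisites"]
    if not prereqs:
        return 1.0
    known = sum(1 for p in prereqs if p in partner_beliefs)
    return known / len(prereqs)

def choose_plan(candidate_plans, partner_beliefs):
    """Pick the candidate whose prerequisites need the least extra explanation."""
    return max(candidate_plans, key=lambda plan: prerequisite_score(plan, partner_beliefs))

partner_beliefs = {"has(palm beach)", "has(waterfall)"}
plans = [
    {"describe": "route via swamp", "prerequisites": {"has(swamp)"}},
    {"describe": "route via waterfall", "prerequisites": {"has(palm beach)", "has(waterfall)"}},
]
print(choose_plan(plans, partner_beliefs)["describe"])  # -> "route via waterfall"
```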
  <Section position="6" start_page="325" end_page="325" type="metho">
    <SectionTitle>
6 Examples
</SectionTitle>
    <Paragraph position="0"> The system currently has several strategies which affect how much structuring information is given in a dialogue and how often feedback is elicited from the partner. The following dialogue, an English gloss of two simulated agents conversing, shows how agent A might act if it believed that the maps had many differences: A: I'nl going to tell you how to get to the buried treasure. I'm going to tell you how to navigate the first part of the route. Do you have a pahn beach? B: Yes.</Paragraph>
    <Paragraph position="1"> A: Do you have a swamp? B: No.</Paragraph>
    <Paragraph position="2"> A: Do you have a waterfall? B: Yes.</Paragraph>
    <Paragraph position="3"> A: The swamp is between the pMm beach and the waterfall. OK? B: Yes.</Paragraph>
    <Paragraph position="4"> A: The route goes to the left of the pMm beach and around the swamp. OK? B: Yes.</Paragraph>
    <Paragraph position="5"> If agent A believes that there will be few misunderstandings, or that B will understand enough to say what it misunderstood, it might choose to give information first and repair afterwards:</Paragraph>
  </Section>
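The two behaviours glossed in this section differ only in the agent's prior belief about how much the maps diverge. The sketch below shows one hypothetical way that belief could select between the two presentation styles; the threshold, belief estimate, and function names are invented for illustration and do not reproduce the prototype's planner.

```python
# Hypothetical sketch of how an agent's belief about map differences could
# select between the two presentation styles glossed above: check every
# feature before describing the route, or describe first and repair later.
# The threshold and function names are invented; this is not the prototype.

def plan_presentation(route_features, believed_difference):
    """Return a list of dialogue moves for describing a route segment."""
    moves = []
    if believed_difference > 0.5:
        # Many expected differences: announce structure and check each feature.
        moves.append("tell: I'm going to tell you how to navigate the first part of the route.")
        for feature in route_features:
            moves.append(f"ask: Do you have a {feature}?")
        moves.append("tell: describe the route, then ask 'OK?'")
    else:
        # Few expected differences: give the description and rely on the
        # partner to interrupt, repairing afterwards if needed.
        moves.append("tell: describe the whole route segment")
        moves.append("repair: answer any objections the partner raises")
    return moves

for move in plan_presentation(["palm beach", "swamp", "waterfall"], believed_difference=0.8):
    print(move)
```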
class="xml-element"></Paper>