<?xml version="1.0" standalone="yes"?> <Paper uid="P98-1103"> <Title>Context Management with Topics for Spoken Dialogue Systems</Title> <Section position="4" start_page="0" end_page="631" type="metho"> <SectionTitle> 2 Previous research </SectionTitle> <Paragraph position="0"> Previous research on using contextual information in spoken language systems has mainly dealt with speech acts (Nagata and Morimoto, 1994; Reithinger and Maier, 1995; Möller, 1996). In dialogue systems, speech acts seem to provide a reasonable first approximation of the utterance meaning: they abstract over possible linguistic realisations and, dealing with the illocutionary force of utterances, can also be regarded as a domain-independent aspect of communication.2 2 Of course, most dialogue systems include domain-dependent acts to cope with the particular requirements of the domain, cf. Alexandersson (1996). Speech acts are also related to the task: information providing, appointment negotiation, argumentation etc. have different communicative purposes which are reflected in the set of necessary speech acts. However, speech acts concern a rather abstract level of utterance modelling: they represent the speakers' intentions, but ignore the semantic content of the utterance. Consequently, context models which use only speech act information tend to be less specific and hence less accurate. Nagata and Morimoto (1994) report prediction accuracies of 61.7%, 77.5% and 85.1% for the first, second and third best dialogue act (in their terminology: Illocutionary Force Type) prediction, respectively, while Reithinger and Maier (1995) report the corresponding accuracy rates as 40.28%, 59.62% and 71.93%, respectively.
The latter used structurally varied dialogues in their tests and noted that deviations from the defined dialogue structures made the recognition accuracy drop drastically.</Paragraph> <Paragraph position="1"> To overcome prediction inaccuracies, speech-act-based context models are supplemented with information about the task or the actual words used.</Paragraph> <Paragraph position="2"> Reithinger and Maier (1995) describe plan-based repairs, while Möller (1996) argues in favour of domain knowledge. Qu et al. (1996) show that to minimize cumulative contextual errors, the best method, with 71.3% accuracy, is the Jumping Context approach, which relies on syntactic and semantic information of the input utterance rather than strict prediction of dialogue act sequences. Recently, keyword-based topic identification has also been applied to dialogue move (dialogue act) recognition (Garner, 1997).</Paragraph> <Paragraph position="3"> Our goal is to build a context model for a spoken dialogue system, and we especially emphasise the system's robustness, i.e. its capability to produce reliable and meaningful responses in the presence of various errors, disfluencies, unexpected input, out-of-domain utterances, etc. (which are especially notorious when dealing with spontaneous speech).</Paragraph> <Paragraph position="4"> The model is used to improve word recognition accuracy, and it should also provide a useful basis for other system modules.</Paragraph> <Paragraph position="5"> However, we do not aim at robustness on a merely mechanical level of matching correct words, but rather on the level of maintaining the information content of the utterances.
Despite the vagueness of such a term, we believe that speech-act-based context models are less robust because they ignore the information content of the utterances.</Paragraph> <Paragraph position="6"> Consistency of the information exchanged in (task-oriented) conversations is one of the main sources of dialogue coherence, and is thus pertinent to context management alongside speech acts. Deviations from a predefined dialogue structure, multifunctionality of utterances, various side-sequences, disfluencies, etc.</Paragraph> <Paragraph position="7"> cannot be dealt with on a purely abstract level of illocution, but require knowledge of the domain, expressed in the semantic content of the utterances.</Paragraph> <Paragraph position="8"> Moreover, in multilingual applications, like speech-to-speech translation systems, the semantic content of utterances plays an important role, and an integrated system must also produce a semantic analysis of the input utterance. Although the goal may be a shallow understanding only, it is not enough that the system knows that the speaker uttered a &quot;request&quot;: the type of the request is also crucial.</Paragraph> <Paragraph position="9"> We thus reckon that appropriate context management should provide descriptions of what is said, and that the recognition of the utterance topic is an important task of spoken dialogue systems.</Paragraph> </Section> <Section position="5" start_page="631" end_page="634" type="metho"> <SectionTitle> 3 The Topic Model </SectionTitle> <Paragraph position="0"> In AI-based dialogue modelling, topics are associated with a particular discourse entity, focus, which is currently in the centre of attention and on which the participants want to focus their actions, e.g.</Paragraph> <Paragraph position="1"> Grosz and Sidner (1986).
The topic (focus) is a means to describe thematically coherent discourse structure, and its use has been mainly supported by arguments regarding anaphora resolution and processing effort (search space limits). Our goal is to use topic information in predicting the likely content of the next utterance, and thus we are more interested in the topic types that describe the information conveyed by utterances than in the actual topic entity.</Paragraph> <Paragraph position="2"> Consequently, instead of tracing salient entities in the dialogue and providing heuristics for different shifts of attention, we seek a formalisation of the information structure of utterances in terms of the new information that is exchanged in the course of the dialogue.</Paragraph> <Paragraph position="3"> The purpose of our topic model is to assist speech processing, and so extensive and elaborated reasoning about plans and world knowledge is not available. Instead, a model that relies on observed facts (= word tokens) and uses statistical information is preferred. We also expect the topic model to be general and extendable, so that if it is to be applied to a different domain, or more factors in the recognition of the information structure of the utterances3 are to be taken into account, the model could easily adapt to these changes.</Paragraph> <Paragraph position="4"> The topic model consists of the following parts: 1. domain knowledge structured into a topic tree 2. prior probabilities of different topic shifts 3. topic vectors describing the mutual information between words and topic types 4.
Predict-Support algorithm to measure similarity between the predicted topics and the topics supported by the input utterance.</Paragraph> <Paragraph position="5"> Below we describe each item in detail.</Paragraph> <Section position="1" start_page="632" end_page="633" type="sub_section"> <SectionTitle> 3.1 Topic trees </SectionTitle> <Paragraph position="0"> Originally, &quot;focus trees&quot; were proposed by McCoy and Cheng (1991) to trace foci in NL generation systems. The branches of the tree describe what sort of shifts are cognitively easy to process and can be expected to occur in dialogues: random jumps from one branch to another are not very likely to occur, and if they do, they should be appropriately marked.</Paragraph> <Paragraph position="1"> The focus tree is a subgraph of the world knowledge, built in the course of the discourse on the basis of the utterances that have occurred. The tree both constrains and enables prediction of what is likely to be talked about next, and provides a top-down approach to dialogue coherence.</Paragraph> <Paragraph position="2"> Our topic tree is an organisation of the domain knowledge in terms of topic types, bearing resemblance to the topic tree of Carcagno and Iordanskaja (1993). The nodes of the tree4 correspond to topic types which represent clusters of the words expected to occur at a particular point of the dialogue. Figure 1 shows a partial topic tree in a hotel reservation domain.</Paragraph> <Paragraph position="3"> For our experiments, topic trees were hand-coded from our dialogue corpus.
Since this is time-consuming and subjective, an automatic clustering program, using the notion of a topic-binder, is currently under development.</Paragraph> <Paragraph position="4"> Our corpus contains 80 dialogues from the bilingual ATR Spoken Language Dialogue Database.</Paragraph> <Paragraph position="5"> 4 We will continue talking about a topic tree, although in statistical modelling, the tree becomes a topic network where the shift probability between nodes which are not daughters or sisters of each other is close to zero.</Paragraph> <Paragraph position="6"> The dialogues deal with hotel reservation and tourist information, and the total number of utterances is 4228. (Segmentation is based on the information structure so that one utterance contains only one piece of new information.) The number of word tokens is 27058, giving an average utterance length of 6.4 words.</Paragraph> <Paragraph position="7"> The corpus is tagged with speech acts, using the surface-pattern-oriented speech act classification of Seligman et al. (1994), and with topic types. The topics are assigned to utterances on the basis of the new information carried by the utterance. New information (Clark and Haviland, 1977; Vallduví and Engdahl, 1996) is the locus of information related to the sentential nuclear stress, and is identified in regard to the previous context as the piece of information with which the context is updated after uttering the utterance. Often the new information includes the verb and the following noun phrase.</Paragraph> <Paragraph position="8"> More than one third of the utterances (1747) contain short fixed phrases (Let me confirm; thank you; good-bye; ok; yes) and temporizers (well, ah, uhm).</Paragraph> <Paragraph position="9"> These utterances do not request or provide information about the domain, but control the dialogue in terms of time-management requests or conventionalised dialogue acts (feedback-acknowledgements, thanks, greetings, closings, etc.).
The special topic type IAM is assigned to these utterances to signify their role in InterAction Management. The topic type MIX is reserved for utterances which contain information not directly related to the domain (safety of the downtown area, business taking longer than expected, a friend coming for a visit, etc.), thus marking out-of-domain utterances. Typically these utterances give the reason for the request.</Paragraph> <Paragraph position="10"> The number of topic types in the corpus is 62.</Paragraph> <Paragraph position="11"> Given the small size of the corpus, this was considered too large to be used successfully in statistical calculations, and the topic types were pruned on the basis of the topic tree: only the topmost nodes were taken into account and the subtopics were merged into appropriate mother topics. Figure 2 lists the pruned topic types and their frequencies in the corpus.</Paragraph> </Section> <Section position="2" start_page="633" end_page="633" type="sub_section"> <SectionTitle> 3.2 Topic shifts </SectionTitle> <Paragraph position="0"> On the basis of the tagged dialogue corpus, probabilities of different topic shifts were estimated. We used the Carnegie Mellon Statistical Language Modeling (CMU SLM) Toolkit (Clarkson and Rosenfeld, 1997) to calculate the probabilities. This builds a trigram backoff model where the conditional probabilities are calculated as follows:</Paragraph> <Paragraph position="2"/> </Section> <Section position="3" start_page="633" end_page="633" type="sub_section"> <SectionTitle> 3.3 Topic vectors </SectionTitle> <Paragraph position="0"> Each word type may support several topics. For instance, the occurrence of the word room in the utterance I'd like to make a room reservation supports the topic MAKERESERVATION, but in the utterance We have only twin rooms available on the 15th it supports the topic ROOM.
To estimate how well the words support the different topic types, we measured mutual information between each word and the topic types. Mutual information describes how much information a word w gives about a topic type t, and is calculated as follows (log is the base-two logarithm, p(t|w) the conditional probability of t given w, and p(t) the probability of t):</Paragraph> <Paragraph position="2"> mi(w, t) = log( p(w, t) / (p(w) p(t)) ) = log( p(t|w) / p(t) ). If a word and a topic are negatively correlated, mutual information is negative: the word signals the absence of the topic rather than supports its presence. Compared with simply counting whether the word occurs with a topic or not, mutual information thus gives a sophisticated and intuitively appealing method for describing the interdependence between words and the different topic types.</Paragraph> <Paragraph position="3"> Each word is associated with a topic vector, which describes how much information the word w carries about each possible topic type ti: topvector(mi(w, t1), mi(w, t2), ..., mi(w, tn)).</Paragraph> <Paragraph position="5"> For instance, the topic vector of the word room is: topvector(room, [mi(0.21409750769169117, contact), mi(-5.5258041314543815, iam), mi(-3.831955835588453, meals), mi(-2.720924523199709, paym), mi(0.9687353561881407, res), mi(1.9035899442740105, room), mi(-4.130179669884547, stay)]).</Paragraph> <Paragraph position="6"> The word supports the topics ROOM and MAKERESERVATION (res), but gives no information about MIX (out-of-domain) topics, and its presence is highly indicative that the utterance is at least not IAM or STAY. It also supports CONTACT because the corpus contains utterances like I'm in room 213 which give information about how to contact the customer who is staying at a hotel.</Paragraph> <Paragraph position="7"> The topic vectors are formed from the corpus.
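The topic vector estimation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function and variable names (topic_vectors, tagged_utterances) are our own, and word-topic pairs that never co-occur, whose mutual information would be negative infinity, are clamped to zero here as a simplifying assumption.

```python
from collections import Counter
from math import log2

def topic_vectors(tagged_utterances, topics):
    """Compute mi(w, t) = log2(p(t|w) / p(t)) for every word in the corpus.

    tagged_utterances: list of (words, topic_type) pairs, one per utterance,
    mirroring the topic-tagged corpus of Section 3.1.
    """
    topic_count, word_count, pair_count = Counter(), Counter(), Counter()
    n = 0
    for words, topic in tagged_utterances:
        for w in words:
            topic_count[topic] += 1        # topic occurrences, per word token
            word_count[w] += 1             # word token occurrences
            pair_count[(w, topic)] += 1    # word-topic co-occurrences
            n += 1
    vectors = {}
    for w in word_count:
        vectors[w] = {}
        for t in topics:
            p_t = topic_count[t] / n                     # p(t)
            p_t_w = pair_count[(w, t)] / word_count[w]   # p(t|w)
            # mi(w, t) = log2(p(t|w) / p(t)); unseen pairs clamped to 0.0
            vectors[w][t] = log2(p_t_w / p_t) if p_t_w > 0 else 0.0
    return vectors
```

A word that co-occurs with a topic more often than chance gets a positive score for it, matching the sign behaviour of the room vector above.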
We assume that the words are independently related to the topic types, although in the case of natural language utterances this may be too strong a constraint.</Paragraph> </Section> <Section position="4" start_page="633" end_page="634" type="sub_section"> <SectionTitle> 3.4 The Predict-Support Algorithm </SectionTitle> <Paragraph position="0"> Topics are assigned to utterances given the previous topic sequence (what has been talked about) and the words that carry new information (what is actually said). The Predict-Support Algorithm goes as follows: 1. Prediction: get the set of likely next topics in regard to the previous topic sequences using the topic shift model.</Paragraph> <Paragraph position="1"> 2. Support: link each NewInfo word wj of the input to the possible topic types by retrieving its topic vector. For each topic type ti, add up the amounts of mutual information mi(wj, ti) by which it is supported by the words wj, and rank the topic types in descending order of mutual information.</Paragraph> <Paragraph position="2"> 3. Selection: (a) Default: From the set of predicted topics, select the most supported topic as the current topic.</Paragraph> <Paragraph position="3"> (b) What-is-said heuristics: If the predicted topics do not include the supported topic, rely on what is said, and select the most supported topic as the current topic (cf.</Paragraph> <Paragraph position="4"> the Jumping Context approach in Qu et al. (1996)).</Paragraph> <Paragraph position="5"> (c) What-is-talked-about heuristics: If the words do not support any topic (e.g. all the words are unknown or out-of-domain), rely on what is predicted and select the most likely topic as the current topic.</Paragraph> <Paragraph position="6"> Using the probabilities obtained by the trigram backoff model, the set of likely topics is actually a set of all topic types ordered according to their likelihood.
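One step of the Predict-Support Algorithm can be sketched as below. This is a minimal sketch, not the authors' code: it assumes the topic shift model is available as a function returning trigram probabilities over candidate topics, and the names and the pruning threshold value p_limit are illustrative only.

```python
def predict_support(prev_topics, newinfo_words, shift_model, topic_vecs, p_limit=0.05):
    """Assign a topic to one utterance.

    prev_topics: the two preceding topic types (trigram context)
    shift_model(t2, t1): dict of candidate topic -> P(topic | t2, t1)
    topic_vecs: word -> {topic: mutual information}, as in Section 3.3
    """
    # 1. Prediction: keep only topics whose shift probability exceeds the limit p.
    probs = shift_model(*prev_topics)
    predicted = {t for t, p in probs.items() if p > p_limit}

    # 2. Support: sum mutual information mi(w, t) over the NewInfo words.
    support = {}
    for w in newinfo_words:
        for t, mi in topic_vecs.get(w, {}).items():
            support[t] = support.get(t, 0.0) + mi

    # 3. Selection.
    ranked = sorted(support, key=support.get, reverse=True)
    for t in ranked:                  # (a) default: most supported predicted topic
        if t in predicted:
            return t
    if ranked:                        # (b) what-is-said: no predicted topic supported
        return ranked[0]
    return max(probs, key=probs.get)  # (c) what-is-talked-about: no support at all
```

The three branches of the selection step correspond directly to heuristics (a), (b) and (c) above.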
However, the original idea of the topic trees is to constrain topic shifts (transitions from a node to its daughters or sisters are favoured, while shifts to nodes in separate branches are less likely to occur unless the information under the current node is exhaustively discussed), and to maintain this restrictive property, we take into consideration only topics which have a probability greater than an arbitrary limit p.</Paragraph> <Paragraph position="7"> Instead of having only one utterance analysed at a time and predicting its topic, a speech recognizer produces a word lattice, and the topic is to be selected among candidates for several word strings. We envisage that the Predict-Support algorithm will work in the described way in these cases as well. However, an extra step must be added in the selection process: once the topics are decided for the n-best word strings in the lattice, the current topic is selected among the topic candidates as the most highly supported topic. Consequently, the word string associated with the selected topic is then picked as the current utterance.</Paragraph> <Paragraph position="8"> We must make two caveats about the performance of the algorithm, both related to the sparse data problem in calculating mutual information. First, there is no difference between out-of-domain words and unknown but in-domain words: both are treated as providing no information about the topic types. If such words are rare, the algorithm works fine, since the other words in the utterance usually support the correct topic. However, if such words occur frequently, it matters whether the unknown words belong to the domain or not.</Paragraph> <Paragraph position="9"> Repeated out-of-domain words may signal a shift to a new topic: the speaker has simply jumped into a different domain. Since the out-of-domain words do not contribute to any expected topic type, the topic shift is not detected.
On the other hand, if unknown but in-domain words are repeated, the mutual information by which the topic types are supported is too coarse and fails to make the necessary distinctions; hence, incorrect topics can be assigned. For instance, if lunch is an unknown word, the utterance Is lunch included? may get the incorrect topic type ROOMPRICE, since this is supported by the other words of the utterance, whose topic vectors were built on the basis of training corpus examples like Is tax included? The other caveat is the opposite of the unknown-word problem.</Paragraph> <Paragraph position="10"> If a word occurs in the corpus but only with a particular topic type, mutual information between the word and the topic becomes high, while it is zero with the other topics. This co-occurrence may just be an accidental fact due to the small training corpus, and the word can indeed occur with other topic types too. In these cases it is possible that the algorithm goes wrong: if none of the predicted topics of the utterance is supported by the words, we rely on the What-is-said heuristics and assign the highly supported but incorrect topic to the utterance. For instance, if included has occurred only with ROOMPRICE, the utterance Is lunch included? may still get an incorrect topic, even though lunch is a known word: the mutual information mi(included, RoomPrice) may be greater than mi(lunch, Meals).</Paragraph> </Section> </Section> </Paper>