<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-2325">
  <Title>Causes and Strategies for Requesting Clarification in Dialogue</Title>
  <Section position="6" start_page="0" end_page="0" type="concl">
    <SectionTitle>
5 Conclusions and Further Work
</SectionTitle>
    <Paragraph position="0"> We have presented a model of causes for requesting clarifications in dialogues. We classified these causes--understanding problems in the widest sense--according to the level of processing on which they arise, and according to the severity of the problem. To make this precise, we related the multi-level models of communication of (Clark, 1996) and (Allwood, 1995) to the discourse semantics theory SDRT (Asher and Lascarides, 2003), and arrived at a fine-grained model of different understanding tasks which was motivated by analysing examples of CRs. We then proposed to extend the notion of confidence score from speech recognition to other kinds of processing (semantic and pragmatic), and sketched an implementation of this idea. We think that the resulting, relatively natural clarification behaviour shows that this idea of using 'pragmatic confidences' is promising.</Paragraph>
    <Paragraph position="1"> [Table residue: preference rules BR1 (if two or more hypotheses are bridged via 'next' to the same antecedent, the closer one is preferred), BR3 ('tomorrow' should be referred to as 'tomorrow'), and RR1 (if Plan-Corr has been inferred non-monotonically for some hypotheses and other relations have been inferred for others, prefer the latter).]</Paragraph>
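The idea of extending confidence scores beyond speech recognition can be illustrated with a minimal sketch. All names, levels, and threshold values below are hypothetical and not taken from the paper's implementation; the sketch only shows the general mechanism of raising a clarification request (CR) at the lowest processing level whose confidence falls below a threshold.

```python
# Hypothetical sketch of 'pragmatic confidences': each processing level
# (ordered from lower to higher) carries its own confidence score, and a
# CR is raised at the first level whose score drops below its threshold.
# Level names and threshold values are illustrative, not from the paper.

LEVELS = ["acoustic", "semantic", "pragmatic"]

# Manually set thresholds -- the paper notes that hand-tuning these to
# produce the desired behaviour was rather hard, motivating corpus-based
# learning of the settings instead.
THRESHOLDS = {"acoustic": 0.6, "semantic": 0.5, "pragmatic": 0.4}

def clarification_level(confidences, thresholds=THRESHOLDS):
    """Return the lowest processing level whose confidence falls below
    its threshold (i.e. the level at which a CR should be raised), or
    None if the utterance is sufficiently grounded at every level."""
    for level in LEVELS:
        if confidences[level] < thresholds[level]:
            return level
    return None

# High acoustic confidence, but low confidence in resolving the referent
# at the pragmatic level triggers a clarification request there.
print(clarification_level(
    {"acoustic": 0.9, "semantic": 0.7, "pragmatic": 0.2}))  # -> pragmatic
```

Checking levels bottom-up reflects that a failure at a lower level (e.g. recognition) makes higher-level interpretations moot, so the CR targets the earliest point of breakdown.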
    <Paragraph position="3"> However, the initial results also suggest that there is a lot of further work to be done. Firstly, it turned out during development of the system that manually setting the thresholds so that the desired behaviour was produced was rather hard (besides being ad hoc). We are currently exploring techniques for automatically learning the best settings from a corpus (perhaps along the lines of (Walker et al., 2000)). Secondly, the system we extended, RUDI, makes rather high demands on the quality of the data, as it relies on 'deep processing' at all stages. We are currently exploring ways of implementing the idea of using confidence values throughout in 'simpler', more realistic dialogue systems. This is a precondition for a thorough evaluation of the proposed clarification strategy, using 'real-world' criteria such as user satisfaction and dialogue duration until task completion.</Paragraph>
    <Paragraph position="4"> With regard to the theoretical analysis of CRs, we are currently testing the coverage and accuracy of the model in a corpus study, and we are also working on a proper formalisation of the different classes of CR we proposed, in the framework of SDRT.</Paragraph>
  </Section>
</Paper>