<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-1404">
  <Title>Overgeneration and ranking for spoken dialogue systems</Title>
  <Section position="6" start_page="21" end_page="21" type="concl">
    <SectionTitle>
4 Discussion &amp; Conclusions
</SectionTitle>
    <Paragraph position="0"> Where NLG affects the dialogue system: discourse entities introduced by NLG are added to the system's salience list on an equal footing with those introduced by NLU.</Paragraph>
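A minimal sketch of this idea, with hypothetical names: a single salience list that both NLU (user utterances) and NLG (system output) push entities onto, most recent first.

```python
class SalienceList:
    """Most-recent-first list of discourse entities shared by NLU and NLG."""

    def __init__(self):
        self.entities = []  # list of (entity_id, source) pairs

    def push(self, entity, source):
        # NLU and NLG add entities on equal footing; re-mentioning an
        # entity moves it back to the front of the list.
        self.entities = [e for e in self.entities if e[0] != entity]
        self.entities.insert(0, (entity, source))

    def most_salient(self):
        return self.entities[0][0] if self.entities else None


sl = SalienceList()
sl.push("restaurant_12", source="NLU")  # user mentions a restaurant
sl.push("song_3", source="NLG")         # system's answer introduces a song
print(sl.most_salient())                # -> song_3
```

Because system output updates the same list, later referring expressions can be resolved against entities the system itself introduced.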
    <Paragraph position="1"> Robustness: because ASR and NLU are imperfect, we relax completeness requirements when overgenerating, and we reason about the generation input by adding defaults for missing constraints, checking the ranges of attribute values, etc. Moreover, if NLG fails, we fall back on a template generator so that the user at least receives some feedback (s6 in table 1).</Paragraph>
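The repair-then-fall-back strategy can be sketched as follows; the defaults, ranges, and generator interfaces are illustrative assumptions, not the paper's actual module API.

```python
DEFAULTS = {"politeness": "neutral", "verbosity": "short"}  # hypothetical
VALID_RANGES = {"n_items": range(0, 101)}                   # hypothetical


def repair_input(constraints):
    """Fill in defaults for missing constraints, drop out-of-range values."""
    fixed = dict(DEFAULTS, **constraints)
    for key, rng in VALID_RANGES.items():
        if key in fixed and fixed[key] not in rng:
            del fixed[key]  # discard implausible values from noisy ASR/NLU
    return fixed


def generate(constraints, chart_generator, template_generator):
    """Try full NLG first; fall back to templates so the user always
    receives some feedback."""
    try:
        return chart_generator(repair_input(constraints))
    except Exception:
        return template_generator(constraints)


# usage: a failing chart generator triggers the template fall-back
def failing_chart(constraints):
    raise RuntimeError("no complete derivation found")


print(generate({"n_items": 500}, failing_chart,
               lambda c: "I found some items."))
# -> I found some items.
```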
    <Paragraph position="2"> What-to-say vs. how-to-say-it: the classic decomposition of NLG into distinct modules also holds in our dialogue system, albeit with some modifications: 'content determination' is ultimately performed by the user and the constraint optimizer; the presentation dialogue moves perform microplanning, for example by deciding to present retrieved database items either as examples (s4 in table 1) or as part of a larger answer list; and the chart generator performs realization.</Paragraph>
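The microplanning and realization stages described above might look roughly like this; the choice threshold and surface strings are invented for illustration, and `realize` stands in for the actual chart generator.

```python
def microplan(items, max_examples=3):
    """Presentation dialogue move: show a few examples or the full list."""
    if len(items) > max_examples:
        return {"move": "examples", "items": items[:max_examples],
                "total": len(items)}
    return {"move": "full_list", "items": items, "total": len(items)}


def realize(plan):
    """Stand-in for the chart generator's surface realization."""
    names = ", ".join(plan["items"])
    if plan["move"] == "examples":
        return f"I found {plan['total']} items, for example {names}."
    return f"I found {names}."


print(realize(microplan(["A", "B", "C", "D", "E"])))
# -> I found 5 items, for example A, B, C.
```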
    <Paragraph position="3"> In sum, flexible and expressive NLG is crucial to the robustness of the entire speech-based dialogue system because it verbalizes what the system understood and what actions it performed as a consequence of that understanding. We find that overgeneration and ranking techniques let us model alignment and variation even when no corpus data is available, by using the discourse history as a 'corpus'.</Paragraph>
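One way to sketch ranking against the discourse history: score each overgenerated candidate by bigram overlap with recent utterances, preferring overlap to model alignment or penalizing it to force variation. The scoring scheme here is a simplified assumption, not the paper's actual ranker.

```python
def bigrams(sentence):
    """Set of adjacent word pairs in a lowercased sentence."""
    toks = sentence.lower().split()
    return set(zip(toks, toks[1:]))


def rank(candidates, history, align=True):
    """Pick the candidate whose bigram overlap with the discourse history
    is highest (alignment) or lowest (variation)."""
    hist = set()
    for utterance in history:
        hist |= bigrams(utterance)

    def score(candidate):
        overlap = len(bigrams(candidate) & hist)
        return overlap if align else -overlap

    return max(candidates, key=score)


history = ["please play a song by the beatles"]
cands = ["playing a song by the beatles",
         "now starting the requested track"]
print(rank(cands, history, align=True))
# -> playing a song by the beatles
```

Flipping `align=False` on the same inputs selects the candidate that echoes the history least, which is how the same machinery yields variation instead of alignment.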
    <Paragraph position="4"> Acknowledgments This work is supported by the US government's NIST Advanced Technology Program.</Paragraph>
    <Paragraph position="5"> Collaborating partners are CSLI, Robert Bosch Corporation, VW America, and SRI International. We thank the many people involved in this project, in particular Fuliang Weng and Heather Pon-Barry for developing the content optimization module; Annie Lien, Badri Raghunathan, Brian Lathrop, Fuliang Weng, Heather Pon-Barry, Jeff Russell, and Tobias Scheideck for performing the evaluations and compiling the results; Matthew Purver and Florin Ratiu for work on the CSLI dialogue manager. The content optimizer, knowledge manager, and the NLU module have been developed by the Bosch Research and Technology Center.</Paragraph>
  </Section>
</Paper>