<?xml version="1.0" standalone="yes"?> <Paper uid="N01-1003"> <Title>SPoT: A Trainable Sentence Planner</Title> <Section position="9" start_page="21" end_page="21" type="relat"> <SectionTitle> 6 Related Work </SectionTitle> <Paragraph position="0"> Previous work on sentence planning in the natural language generation (NLG) community uses hand-written rules to approximate the distribution of linguistic phenomena in a corpus (see (Shaw, 1998) for a recent example with further references). This approach is difficult to scale because of the non-robustness of the rules and their unexpected interactions (Hovy and Wanner, 1996), and it makes new applications slow to develop. Presumably, this is why dialog systems to date have not used this kind of sentence planning.</Paragraph> <Paragraph position="1"> Most dialog systems today use template-based generation, typically concatenating the template outputs to produce a turn that realizes all the communicative goals. It is hard to achieve high-quality output by concatenating template-based output for individual communicative goals, and templates are difficult to develop and maintain for a mixed-initiative dialog system. For these reasons, Oh and Rudnicky (2000) use n-gram models, and Ratnaparkhi (2000) uses maximum entropy, to choose templates, with hand-written rules to score the different candidates.</Paragraph> <Paragraph position="2"> Such syntactically simplistic approaches may have quality problems, however, and, more importantly, they handle only inform speech acts. Crucially, these approaches also require training data. In general there may be no corpus available for a new application area, or, if there is, it is a transcript of human-human dialogs. 
Human-human dialogs, however, may not provide a good model of sentence planning strategies for a computational system, because the sentence planner must plan communicative goals such as implicit confirmation, which are needed to prevent and correct automatic speech recognition errors but are rare in human-human dialog.</Paragraph> <Paragraph position="3"> Other related work addresses discourse-related aspects of sentence planning, such as cue word placement (Moser and Moore, 1995), clearly a crucial task whose integration into our approach we leave to future work.</Paragraph> <Paragraph position="4"> Mellish et al. (1998) investigate the problem of determining a discourse tree for a set of elementary speech acts that are partially constrained by rhetorical relations. Using hand-crafted evaluation metrics, they show that a genetic algorithm achieves good results in finding discourse trees. However, they do not address clause combining, and we do not use hand-crafted metrics.</Paragraph> </Section> </Paper>