<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-2165">
  <Title>formance Computing and Communications in Healthcare (funded by the New York State Science and Technology Foundation under Grant</Title>
  <Section position="4" start_page="1003" end_page="1003" type="relat">
    <SectionTitle>3 Related Work</SectionTitle>
    <Paragraph position="0">Previous work on intonation modeling has primarily focused on TTS applications. For example, in (Bachenko and Fitzpatrick, 1990), a set of hand-crafted rules is used to determine discourse-neutral prosodic phrasing, achieving an accuracy of approximately 85%. More recently, researchers have improved on manual rule development by acquiring prosodic phrasing rules with machine learning tools. In (Wang and Hirschberg, 1992), Classification And Regression Trees (CART) (Breiman et al., 1984) were used to produce a decision tree that predicts the location of prosodic phrase boundaries, yielding high accuracy, around 90%. Similar methods were also employed to predict pitch accent for TTS in (Hirschberg, 1993). Hirschberg exploited various features derived from text analysis, such as part-of-speech tags, information status (e.g., given/new, contrast), and cue phrases; both hand-crafted and automatically learned rules achieved 80-98% success depending on the type of speech corpus. Until recently, there has been only limited effort on modeling intonation for CTS (Davis and Hirschberg, 1988; Young and Fallside, 1979; Prevost, 1995). Many CTS systems were simplified to text generation followed by TTS. Others that do integrate generation make use of the structural information provided by the NLG component (Prevost, 1995).</Paragraph>
    <Paragraph position="1">However, most previous CTS systems are not based on large-scale, general-purpose NLG systems.</Paragraph>
  </Section>
</Paper>