<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-2171">
  <Title>From Information Structure to Intonation: A Phonological Interface for Concept-to-Speech</Title>
  <Section position="3" start_page="0" end_page="1041" type="metho">
    <SectionTitle>
2 A Concept-to-Speech Generation
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="0" end_page="1041" type="sub_section">
      <SectionTitle>
System
</SectionTitle>
      <Paragraph position="0"> Our concept-to-speech generation system consists of a pipeline of modules (Fig. 1). A text  planning component produces sentence plans, which are fed into the tactical generator. The implementation basis for the tactical generator is the FUF (Elhadad 91) system.</Paragraph>
      <Paragraph position="1"> FUF is based on the theory of functional unification grammar and employs both phrase structure rules and unification of feature descriptions. Input is a partially specified feature description which constrains the utterance to be generated. Output is a fully specified feature description (in the sense of the particular grammar) subsumed by the input structure, which is then linearized to yield a sentence.</Paragraph>
      <Paragraph position="2"> The tactical generator has two layers. One is dealing with sentence level generation, producing a tree-like description of a sentence, the leaves of which are lemmata annotated with morphosyntactic and prosodic features. The second performs generation at the word level producing annotated phonological representations of the inflected word forms which are fed into the extended 2 two-level phonology component applying morphological and phonological rules to arrive at the representation used as input for speech synthesis.</Paragraph>
      <Paragraph position="3"> A distinguishing feature of the grammar used in the generator is the integration of sentence-level and word-level processing within the same formalism.</Paragraph>
      <Paragraph position="4">  This architecture forms an ideal platform for the implementation of the phonological interface. Necessary adaptions are limited to the data used: An existing grammar was extended with features describing the information structure. The lexicon consists of entries in phonemic form (using SAMPA notation) enriched with in- null same ratification machinery as the grammar.</Paragraph>
      <Paragraph position="5"> formation like (potential) accent and syllable boundary positions.</Paragraph>
      <Paragraph position="6"> Input to the synthesizer is a SAMPA string enriched with qualitative encodings of prosodic information (e.g., pitch accent, pauses, ...) produced by the two-level rules. Phonological specifications of intonation are processed by a phonetic interpreter (Pirker et al. 97) that transforms these qualitative labels into quantitative acoustic parameters. Although some interpretative work is done within the synthesizer, no linguistically motivated transformations are supposed to take place there. These all are performed within the two-level component.</Paragraph>
    </Section>
  </Section>
  <Section position="4" start_page="1041" end_page="1043" type="metho">
    <SectionTitle>
3 The Phonological Interface
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="1041" end_page="1042" type="sub_section">
      <SectionTitle>
3.1 Phenomena handled
</SectionTitle>
      <Paragraph position="0"> The phonological description in extended two-level morphology - in our case rather two-level phonology -serves ms the central interface where the modules for grammar processing and for speech synthesis meet and communicate.</Paragraph>
      <Paragraph position="1"> A fairly complex model of phonology is required in the system, also because the over-all objective of the project was to investigate whether and how conditions in the concept-to-speech task favour a more elaborate treatment of prosodic parameters in speech generation.</Paragraph>
      <Paragraph position="2"> The phonological description is implemented in the extended two-level framework described in section 2 and works over a lexicon of phonemic (rather than graphemic) representations of word stems and inflectional affixes. Morphotactic processing is thus restricted to inflection, whereas compounding and derivational affixation are encoded in the lexicon, which is typically small in domain-tailored concept-to-speech systems.</Paragraph>
      <Paragraph position="3"> Nevertheless, in segmental phonology, the component must compute morphonological rules in inflection as well ms post-lexical rules which interact with syllabification and cliticization. null To determine German syllabification and cliticization correctly, it is necessary to operate on structures larger than single words. Therefore phonological processing applies to chunks whose size depends on the one rule in the system that requires the largest phonological context to operate correctly. Because of the intonation rules discussed in section 4, phonological  processing applies to the whole utterance.</Paragraph>
      <Paragraph position="4"> The three phonological aspects segmental representation, syllabification, and word stress are mutually dependent in German phonology in all logically possible directions (Niklfeld et al. 95). The phonology component treats them in a unified description, which also covers the rare cases of word-internal and phrase-level stress shift in German. 3 While some segmental and supra-segmental rules in the phonological description depend on phonological context only, some others (like the rule for stress shifts as described above) depend on grammatical information on levels as high up as textual representation. For example, the German word for &amp;quot;weather&amp;quot; loses word stress in compounds when they appear in weatherreports (where the concept weather is &amp;quot;textually exophoric&amp;quot; (Benware 87)). Such phenomena are encoded in our extended two-level system by phonological rules which access the grammatical representation via feature-filters.</Paragraph>
      <Paragraph position="5"> There are few theoretical frameworks in computational linguistics for tackling such a breadth of phonological issues. Linguistically ambitious approaches are often designed with little regard to ease of use in large descriptions, whereas leaner formalisms do not scale well to complex data stretching across a number of phonological dimensions. The chosen framework of extended two-level phonology stands between these poles.</Paragraph>
    </Section>
    <Section position="2" start_page="1042" end_page="1042" type="sub_section">
      <SectionTitle>
3.2 Linearization of multi-tier phonological structures
</SectionTitle>
      <Paragraph position="0"> phonological structures As the two-level framework assumes one lexical and one surface string only, we use a linear representation of our multidimensional phonological data, as follows: Each linear phonological string in the component stands for a multi-tier structure which combines a given number of separate dimensions of phonological structure. The tier of phonological segments (members of the German SAMPA &amp;quot;,~') &amp;quot;s used to provide the backbone of skeletal points on which all units of the representation are linked together. Each unit on any phonological tier has scope over/has ms its domain a continuous section of skeleton points. For each 3Otherwise, German has lexically specified word stress.</Paragraph>
      <Paragraph position="1"> tier, a convention is provided which designates that part of each domain that is used for the linking. For some supra-segmental tiers (syllables, phonological words) the leftmost unit of the scope domain ms computed by the respective rule is used for this purpose. For other tiers the domain edges are unspecified in the lexicon (stresses and accents, which have scope over stretches of syllables), and therefore other well-defined parts of the scope domain are used for the linking (such as the vocalic nucleus of a syllable). Where it appears natural to do so, units on certain phonological tiers are also linked to right domain edges (ms is the case with phrase and boundary tone markers, which have scope over any phonological material between a nuclear tone and the right boundary of an intonation phrase.) While these representations clearly encode some fragment of atltosegmental phonology in an implicit way, they do not allow for the attachment of more than one suprasegmental unit from the same tier to a single segmental unit.</Paragraph>
      <Paragraph position="2"> Such power was not needed in our application.</Paragraph>
      <Paragraph position="3"> The representation allowed for easy incremental extensions to our descriptions, as additional tiers of representation were added ms the coverage of higher-level prosodic issues such as sentence intonation was extended.</Paragraph>
    </Section>
    <Section position="3" start_page="1042" end_page="1043" type="sub_section">
      <SectionTitle>
3.3 Implementational notes
</SectionTitle>
      <Paragraph position="0"> Using the linearized representation, the well-known processing schemes for two-level morphology can be applied directly. Contemporary compilers for two-level morphology allow to specify sets of symbols that are ignored in individual rules. Extensive application of such syntactic sugar enables us to keel) the rule formulations over the collapsed representation economical and relatively transparent. We note in passing that although collapsing multilinear data-structures onto a single tier increases the likeliness of combinatorial explosion in processing when using the two-level automata as transducers, it turns out that in our already quite complex description this does not become a real problem.</Paragraph>
      <Paragraph position="1"> In earlier publications, we described how we implement phonological generalizations that stretch across phonological dimensions (Niklfeld et al. 95), and we proposed implementations of suprasegmental issues such ms stress shift and  the projection of pitch accents depending on focus information (Niklfeld &amp; Alter 96). We have also discussed time structure (Alter et al. 96).</Paragraph>
      <Paragraph position="2"> In section 4 we go beyond this to show that intonation in German ha~s properties that are best implemented by combining our two-level phonological description, which is well-suited to express constraints on linear contexts, with the power of a unification-based feature grammar.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="1043" end_page="1044" type="metho">
    <SectionTitle>
4 Dealing with Intonation
</SectionTitle>
    <Paragraph position="0"> This section describes the novel approach of using the extended two-level component for specifying &amp;quot;appropriate&amp;quot; intonation and phrasing.</Paragraph>
    <Section position="1" start_page="1043" end_page="1043" type="sub_section">
      <SectionTitle>
4.1 Different perspectives
</SectionTitle>
      <Paragraph position="0"> The diversity of factors that influences intonation is mirrored in the variety of research that deals with intonation: Phonologists and phoneticians are concerned with the inspection of the form of intonation contours, while on the other hand there is a strong tradition in the field of syntax (keyword: focus projection) and semantics/pragmatics (keyword: given vs. new information) that merely deal with the problem of accent location, neglecting its form.</Paragraph>
      <Paragraph position="1"> Another strand of research deals with the coupling of information structure and phonology, i.e., the tight association of meanings and tunes such as in (Prevost &amp; Steedman 94) where the classification of the utterance's elements along the dimensions theme/rheme and focus/ground unambiguously triggers the selection of tones.</Paragraph>
      <Paragraph position="2"> In the field of text-to-speech synthesis, at last, intonation most often is handled by using algorithms and heuristics that intermingle information on syntax, punctuation, word-class information etc. in a rather unstructured way.</Paragraph>
    </Section>
    <Section position="2" start_page="1043" end_page="1044" type="sub_section">
      <SectionTitle>
4.2 Our design
</SectionTitle>
      <Paragraph position="0"> In our system a strict separation of levels is employed: only the two-level coml)onent deals with tonal specifications. Within the tactical generator only candidate positions for both pitch accents and phrasal boundaries are selected.</Paragraph>
      <Paragraph position="1"> This reflects the fact that though prosody heavily depends on grammaticM and pragmatic factors, its realization is also strongly influenced by phonological and phonetic constraints which are much more &amp;quot;naturally&amp;quot; handled by the two-level component. In the terminology of two-level morphology the grammar provides a underspecified lexical representation from which the concrete surface form is derived. In the lexicon every (accentable) word contains an abstract pitch tone (T) within its phonemic representation. The &amp;quot;lexical boundaries&amp;quot; (B), i.e., candidates for boundaries between intonational phra~ses (IP), are inserted by the generator in between words and these T and B are then mapped to GToBI labels (German Tones and Break Indices- (Grice et al. 96)) or discarded i.e., mapped it to surface 0.</Paragraph>
      <Paragraph position="2"> The following example (in pseudo-code) defines a basic condition on the IP: it contains at least one, at most three pitch accents, and has an obligatory boundary tone.</Paragraph>
      <Paragraph position="4"> In order to determine the realization of a T the grammatical information the generator provided for the word in question is inspected via the filter mechanism: E.g. if a words was marked a~s unaccented (acc -) the tone will be discarded or the selection of boundary tones is triggered by the sentence type (L-L7, in the case of a~ssertions): T:O &lt;= _ filter:(head (phon (acc -))); B:L-LY. &lt;=&gt; _ filter: (head (s-type assert)); While the rules discussed so far have been pure filter applications the last rule encodes a constraint on phonological context:  designate syllable boundaries) The rationale behind this rule is, that we want to avoid the contours shown in figure 2 when realizing IP boundaries. The L-HT, boundary basically designates a fall-rise contour which shoukl  be a felicitous if the last pitch accent before the boundary was a falling one. The second term states, that after a rising pitch accent the same boundary contour is to be produced only if the pitch peak is followed by two or more unaccented syllables thus ensuring that there is &amp;quot;enough time&amp;quot; to produce the fall-rise. At the same time the production of the concurring H-LT, is blocked, which would produce a long monotonous stretch on a high level, that might be perceived as unnatural.</Paragraph>
      <Paragraph position="5"> The rules thus also implement some of the variability in prosody that is due to the interaction of phrasing and pitch accents much in the spirit of tone-linking (Gussenhoven 84).</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>