<?xml version="1.0" standalone="yes"?>
<Paper uid="P90-1030">
  <Title>Computational structure of generative phonology and its relation to language comprehension.</Title>
  <Section position="3" start_page="0" end_page="235" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Generative linguistic theory and human language comprehension may both be thought of as computations. The goal of language comprehension is to construct structural descriptions of linguistic sensations, while the goal of generative theory is to enumerate all and only the possible (grammatical) structural descriptions. These computations are only indirectly related. For one, the input to the two computations is not the same. As we shall see below, the most we might say is that generative theory provides an extensional characterization of language comprehension, which is a function from surface forms to complete representations, including underlying forms. The goal of this article is to reveal exactly what generative linguistic theory says about language comprehension in the domain of phonology.</Paragraph>
    <Paragraph position="1"> The article is organized as follows. In the next section, we provide a brief overview of the computational structure of generative phonology. In section 3, we introduce the segmental model of phonology, discuss its computational complexity, and prove that even restricted segmental models are extremely powerful (undecidable). Subsequently, we consider various proposed and plausible restrictions on the model, and conclude that even the maximally restricted segmental model is likely to be intractable.</Paragraph>
    <Paragraph position="2"> The fourth section introduces the modern autosegmental (nonlinear) model and discusses its computational complexity.</Paragraph>
    <Paragraph position="3"> &amp;quot;The author is supported by a IBM graduate fellowship and eternally indebted to Morris Halle and Michael Kenstowicz for teaching him phonology. Thanks to Noam Chomsky, Sandiway Fong, and Michael Kashket for their comments and assistance.</Paragraph>
    <Paragraph position="4">  We prove that the natural problem of constructing an autosegmental representation of an under-specified surface form is NP-hard. The article concludes by arguing that the complexity proofs are unnatural despite being true of the phonological models, because the formalism of generative phonology is itself unnatural.</Paragraph>
    <Paragraph position="5"> The central contributions of this article are: (i) to explicate the relation between generative theory and language processing, and to argue that generative theories are not models of language users primarily because they do not consider the inputs naturally available to language users; and (ii) to analyze the computational complexity of generative phonological theory, as it has developed over the past twenty years, including segmental and autosegmental models.</Paragraph>
  </Section>
</Paper>