Computational Complexity and Lexical-Functional Grammar

1. Introduction

An important goal of modern linguistic theory is to characterize as narrowly as possible the class of natural languages. One classical approach to this characterization has been to investigate the generative capacity of grammatical systems specifiable within particular linguistic theories. Formal results along these lines have already been obtained for certain kinds of Transformational Generative Grammars: for example, Peters and Ritchie 1973a showed that the theory of Transformational Grammar presented in Chomsky's Aspects of the Theory of Syntax (1965) is powerful enough to allow the specification of grammars for generating any recursively enumerable language, while Rounds 1973, 1975 extended this work by demonstrating that moderately restricted Transformational Grammars (TGs) can generate languages whose recognition time is provably exponential.[1]

These moderately restricted theories of Transformational Grammar generate languages whose recognition is widely considered to be computationally intractable. Whether this "worst case" complexity analysis has any real import for actual linguistic study has been the subject of some debate (for discussion, see Chomsky 1980; Berwick and Weinberg 1982). Results on generative capacity provide only a worst-case bound on the computational resources required to recognize the sentences specified by a linguistic theory.[2] But a sentence processor need not explicitly reconstruct deep structures in an exact (but inverse) mimicry of a transformational derivation, or even recognize every sentence generable by a particular transformational theory. For example, as suggested by Fodor, Bever, and Garrett 1974, the human sentence processor could simply obey a set of heuristic principles and recover the right representations specified by a linguistic theory, but not according to the rules of that theory. To say this much is simply to restate a long-standing view that a theory of linguistic performance could well differ from a theory of linguistic competence, and that the relation between the two could vary from one of near isomorphism to the much weaker input/output equivalence implied by the Fodor, Bever, and Garrett position.[3]

In short, the study of generative capacity furnishes a mathematical characterization of the computational complexity of a linguistic system. Whether this mathematical characterization is cognitively relevant is a related, but distinct, question.

Notes

[1] In Rounds's proof, transformations are subject to a "terminal length non-decreasing" condition, as suggested by Peters and Myhill (cited in Rounds 1975). A similar "terminal length increasing" constraint (to the author's knowledge first proposed by Petrick 1965), when coupled with a condition on recoverability of deletions, yields languages that are recursive but not necessarily recognizable in exponential time.

[2] Usually, the recognition procedures presented recover the structural descriptions of sentences in the process of recognition, so that they in fact parse sentences rather than simply recognize them.
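The distinction note 2 draws between recognizing a sentence and parsing it can be made concrete with a small sketch. The following is a minimal, illustrative CYK-style recognizer and parser for a toy context-free grammar in Chomsky normal form; the grammar, lexicon, function names, and example sentence are hypothetical and not material from this paper, which concerns transformational and lexical-functional systems whose recognition problems are far harder than the cubic-time context-free case.

```python
# Illustrative sketch only: a toy CNF grammar with a CYK recognizer
# (yes/no membership) and a CYK parser (membership plus structure).
from itertools import product

# Binary rules map a pair of categories to parent categories;
# lexical rules map a word to its categories.
BINARY = {
    ("NP", "VP"): {"S"},
    ("V", "NP"): {"VP"},
}
LEXICON = {
    "they": {"NP"},
    "fish": {"V", "NP"},
}

def recognize(words):
    """Return True iff the grammar generates the sentence.
    Only category sets are stored; no structure is recovered."""
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b, c in product(table[i][k], table[k][j]):
                    table[i][j] |= BINARY.get((b, c), set())
    return "S" in table[0][n]

def parse(words):
    """Additionally recover one structural description
    (a bracketed tree) for the sentence, or None."""
    n = len(words)
    trees = [[dict() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        for cat in LEXICON.get(w, ()):
            trees[i][i + 1][cat] = (cat, w)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (b, c), parents in BINARY.items():
                    if b in trees[i][k] and c in trees[k][j]:
                        for a in parents:
                            trees[i][j].setdefault(
                                a, (a, trees[i][k][b], trees[k][j][c]))
    return trees[0][n].get("S")

if __name__ == "__main__":
    sentence = "they fish fish".split()
    print(recognize(sentence))  # True: the sentence is in the language
    print(parse(sentence))      # ('S', ('NP', 'they'), ('VP', ('V', 'fish'), ('NP', 'fish')))
```

The recognizer returns only a boolean, while the parser also recovers a structural description, which is exactly the distinction at issue. For context-free grammars both tasks take cubic time; by contrast, the Aspects-style grammars cited above generate every recursively enumerable language, so their recognition problem is undecidable in general, and even the moderately restricted transformational systems force provably exponential recognition time.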
Still, the determination of the computational complexity of a linguistic system is an important undertaking. For one thing, it gives a precise description of the class of languages that the