<?xml version="1.0" standalone="yes"?> <Paper uid="W04-0305"> <Title>Lookahead in Deterministic Left-Corner Parsing</Title> <Section position="2" start_page="0" end_page="0" type="intro"> <SectionTitle> 1 Introduction </SectionTitle> <Paragraph position="0"> Incremental interpretation is a fundamental property of the human parsing mechanism. To support incremental interpretation, any model of sentence processing must not only process the sentence incrementally, it must to some degree restrict the number of analyses which it produces for any sentence prefix. Otherwise the ambiguity of natural language would make the number of possible interpretations at any point in the parse completely overwhelming. Deterministic parsing takes the extreme position that there can only be one analysis for any sentence prefix. We investigate methods which make such a strong constraint feasible, in particular the use of lookahead.</Paragraph> <Paragraph position="1"> In this paper we do not try to construct a single deterministic parser, but instead consider a family of deterministic parsers and empirically measure the optimal performance of a deterministic parser in this family. As has been previously proposed by Brants and Crocker (2000), we take a corpus-based approach to this empirical investigation, using a previously defined statistical parser (Henderson, 2003). The statistical parser uses an incremental history-based probability model based on left-corner parsing, and the parameters of this model are estimated using a neural network. Performance of this basic model is state-of-the-art, making these results likely to generalize beyond this specific system. We specify the family of deterministic parsers in terms of pruning the search for the most probable parse. Both deterministic parsing and the use of k-word lookahead are characterized as constraints on pruning this search. 
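The pruning constraints described above can be illustrated with a minimal toy sketch, not the paper's actual left-corner model or probability estimates: candidate analyses are scored incrementally, and after reading word i the parser commits deterministically to a single analysis of all but the last k words. The `choices` function, its action names, and its probabilities are invented for illustration.

```python
import math

def parse_with_lookahead(words, choices, k):
    """Deterministic parsing as a pruning constraint with k-word
    lookahead (a toy illustration, not the paper's model).
    `choices(actions, word)` lists the (action, log_prob) options
    for attaching `word` given the actions taken so far.  After
    reading word i, the parser prunes to the single best analysis
    of the first i+1-k words, so ambiguity survives only inside
    the lookahead window."""
    beam = [((), 0.0)]                      # (actions, cumulative log-prob)
    for i, word in enumerate(words):
        beam = [(acts + (a,), score + lp)
                for acts, score in beam
                for a, lp in choices(acts, word)]
        beam.sort(key=lambda cand: -cand[1])
        committed = i + 1 - k               # prefix length decided so far
        if committed > 0:
            best_prefix = beam[0][0][:committed]
            beam = [c for c in beam if c[0][:committed] == best_prefix]
    return max(beam, key=lambda cand: cand[1])[0]

# Hypothetical two-word ambiguity: word "a" allows actions X and Y,
# and only word "b" reveals that Y leads to the better analysis.
def choices(actions, word):
    if word == "a":
        return [("X", math.log(0.6)), ("Y", math.log(0.4))]
    return [("x2", math.log(0.1))] if actions == ("X",) else [("y2", math.log(0.9))]

# With no lookahead the parser commits greedily to X; one word of
# lookahead lets it recover the globally more probable Y analysis.
print(parse_with_lookahead(["a", "b"], choices, 0))   # ('X', 'x2')
print(parse_with_lookahead(["a", "b"], choices, 1))   # ('Y', 'y2')
```

With k = 0 the parser is fully deterministic and garden-paths on the locally best action; increasing k delays the commitment, which is exactly the trade-off the paper measures empirically.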
We then derive the optimal pruning strategy given these constraints and the probabilities provided by the statistical parser's left-corner probability model. Empirical experiments on the accuracy of a parser which uses this pruning method indicate the best accuracy we could expect from a deterministic parser of this kind. This allows us to compare different deterministic parsing methods, in particular the use of different amounts of lookahead.</Paragraph> <Paragraph position="2"> In the remainder of this paper, we first discuss how the principles of deterministic parsing can be expressed in terms of constraints on the search strategy used by a statistical parser. We then present the probability model used by the statistical parser, the way a neural network is used to estimate the parameters of this probability model, and the methods used to search for the most probable parse according to these parameters. Finally, we present the empirical experiments on deterministic parsing with lookahead, and discuss the implications of these results.</Paragraph> </Section> </Paper>