<?xml version="1.0" standalone="yes"?> <Paper uid="P96-1025"> <Title>A New Statistical Parser Based on Bigram Lexical Dependencies</Title> <Section position="3" start_page="0" end_page="0" type="intro"> <SectionTitle> 1 Introduction </SectionTitle> <Paragraph position="0"> Lexical information has been shown to be crucial for many parsing decisions, such as prepositional-phrase attachment (for example (Hindle and Rooth 93)).</Paragraph> <Paragraph position="1"> However, early approaches to probabilistic parsing (Pereira and Schabes 92; Magerman and Marcus 91; Briscoe and Carroll 93) conditioned probabilities on non-terminal labels and part-of-speech tags alone.</Paragraph> <Paragraph position="2"> The SPATTER parser (Magerman 95; Jelinek et al. 94) does use lexical information, and recovers labeled constituents in Wall Street Journal text with above 84% accuracy - as far as we know the best published results on this task.</Paragraph> <Paragraph position="3"> This paper describes a new parser which is much simpler than SPATTER, yet performs at least as well when trained and tested on the same Wall Street Journal data. The method uses lexical information directly by modeling head-modifier relations between pairs of words. In this way it is similar to Link grammars (Lafferty et al. 92), and dependency grammars in general.</Paragraph> </Section> </Paper>