<?xml version="1.0" standalone="yes"?>
<Paper uid="P99-1054">
  <Title>Efficient probabilistic top-down and left-corner parsing</Title>
  <Section position="3" start_page="421" end_page="422" type="metho">
    <SectionTitle>
2 Parser architecture
</SectionTitle>
    <Paragraph position="0"> The parser proceeds incrementally from left to right, with one item of look-ahead. Nodes are expanded in a standard top-down, left-to-right fashion. The parser utilizes: (i) a probabilistic context-free grammar (PCFG), induced via standard relative frequency estimation from a corpus of parse trees; and (ii) look-ahead probabilities as described below. Multiple competing partial parses (or analyses) are held on a priority queue, which we will call the pending heap. They are ranked by a figure of merit (FOM), which will be discussed below. Each analysis has its own stack of nodes to be expanded, as well as a history, probability, and FOM. The highest ranked analysis is popped from the pending heap, and the category at the top of its stack is expanded. A category is expanded using every rule which could eventually reach the look-ahead terminal. For every such rule expansion, a new analysis is created 1 and pushed back onto the pending heap.</Paragraph>
    <Paragraph position="1"> The FOM for an analysis is the product of the probabilities of all PCFG rules used in its derivation and what we call its look-ahead probability (LAP). The LAP approximates the product of the probabilities of the rules that will be required to link the analysis in its current state with the look-ahead terminal 2. That is, for a grammar G, a stack state \[C1 ... C,\] and a look-ahead terminal item w: (1) LAP --- PG(\[C1. . . Cn\] -~ wa) We recursively estimate this with two empirically observed conditional probabilities for every non-terminal Ci on the stack: /~(Ci 2+ w) and/~(Ci -~ e). The LAP approximation for a given stack state and look-ahead terminal is: (2) PG(\[Ci . .. Ca\] wot) P(Ci w) + When the topmost stack category of an analysis matches the look-ahead terminal, the terminal is popped from the stack and the analysis 1We count each of these as a parser state (or rule expansion) considered, which can be used as a measure of efficiency.</Paragraph>
    <Paragraph position="2"> 2Since this is a non-lexicalized grammar, we are taking pre-terminal POS markers as our terminal items. is pushed onto a second priority queue, which we will call the success heap. Once there are &amp;quot;enough&amp;quot; analyses on the success heap, all those remaining on the pending heap are discarded.</Paragraph>
    <Paragraph position="3"> The success heap then becomes the pending heap, and the look-ahead is moved forward to the next item in the input string. When the end of the input string is reached, the analysis with the highest probability and an empty stack is returned as the parse. If no such parse is found, an error is returned.</Paragraph>
    <Paragraph position="4"> The specifics of the beam-search dictate how many analyses on the success heap constitute &amp;quot;enough&amp;quot;. One approach is to set a constant beam width, e.g. 10,000 analyses on the success heap, at which point the parser moves to the next item in the input. A problem with this approach is that parses towards the bottom of the success heap may be so unlikely relative to those at the top that they have little or no chance of becoming the most likely parse at the end of the day, causing wasted effort. An alternative approach is to dynamically vary the beam width by stipulating a factor, say 10 -5, and proceed until the best analysis on the pending heap has an FOM less than 10 -5 times the probability of the best analysis on the success heap. Sometimes, however, the number of analyses that fall within such a range can be enormous, creating nearly as large of a processing burden as the first approach. As a compromise between these two approaches, we stipulated a base beam factor a (usually 10-4), and the actual beam factor used was a */~, where/3 is the number of analyses on the success heap. Thus, when f~ is small, the beam stays relatively wide, to include as many analyses as possible; but as /3 grows, the beam narrows. We found this to be a simple and successful compromise.</Paragraph>
    <Paragraph position="5"> Of course, with a left recursive grammar, such a top-down parser may never terminate. If no analysis ever makes it to the success heap, then, however one defines the beam-search, a top-down depth-first search with a left-recursive grammar will never terminate. To avoid this, one must place an upper bound on the number of analyses allowed to be pushed onto the pending heap. If that bound is exceeded, the parse fails. With a left-corner strategy, which is not prey to left recursion, no such upper bound is necessary.</Paragraph>
  </Section>
  <Section position="4" start_page="422" end_page="425" type="metho">
    <SectionTitle>
3 Grammar transforms
</SectionTitle>
    <Paragraph position="0"> Nijholt (1980) characterized parsing strategies in terms of announce points: the point at which a parent category is announced (identified) relative to its children, and the point at which the rule expanding the parent is identified. In pure top-down parsing, a parent category and the rule expanding it are announced before any of its children. In pure bottom-up parsing, they are identified after all of the children. Grammar transforms are one method for changing the announce points. In top-down parsing with an appropriately binaxized grammar, the paxent is identified before, but the rule expanding the parent after, all of the children. Left-corner parsers announce a parent category and its expanding rule after its leftmost child has been completed, but before any of the other children.</Paragraph>
    <Section position="1" start_page="422" end_page="423" type="sub_section">
      <SectionTitle>
3.1 Delaying rule identification through binarization
</SectionTitle>
      <Paragraph position="0"> binarization Suppose that the category on the top of the stack is an NP and there is a determiner (DT) in the look-ahead. In such a situation, there is no information to distinguish between the rules NP ~ DT JJ NN andNP--+DT JJ NNS.</Paragraph>
      <Paragraph position="1"> If the decision can be delayed, however, until such a time as the relevant pre-terminal is in the look-ahead, the parser can make a more informed decision. Grammar binaxization is one way to do this, by allowing the parser to use a rule like NP --+ DT NP-DT, where the new non-terminal NP-DT can expand into anything that follows a DT in an NP. The expansion of NP-DT occurs only after the next pre-terminal is in the look-ahead. Such a delay is essential for an efficient implementation of the kind of incremental parser that we are proposing.</Paragraph>
      <Paragraph position="2"> There axe actually several ways to make a grammar binary, some of which are better than others for our parser. The first distinction that can be drawn is between what we will call left binaxization (LB) versus right binaxization (RB, see figure 1). In the former, the leftmost items on the righthand-side of each rule are grouped together; in the latter, the rightmost items on the righthand-side of the rule are grouped together. Notice that, for a top-down, left-to-right parser, RB is the appropriate transform, because it underspecifies the right siblings. With LB, a top-down parser must identify all of the siblings before reaching the leftmost item, which does not aid our purposes.</Paragraph>
      <Paragraph position="3"> Within RB transforms, however, there is some variation, with respect to how long rule under-specification is maintained. One method is to have the final underspecified category rewrite as a binary rule (hereafter RB2, see figure lb). Another is to have the final underspecified category rewrite as a unary rule (RB1, figure lc). The last is to have the final underspecified category rewrite as a nullaxy rule (RB0, figure ld). Notice that the original motivation for RB, to delay specification until the relevant items are present in the look-ahead, is not served by RB2, because the second child must be specified without being present in the look-ahead. RB0 pushes the look-ahead out to the first item in the string after the constituent being expanded, which can be useful in deciding between rules of unequal length, e.g. NP---+ DT NN and NP ~ DT NN NN.</Paragraph>
      <Paragraph position="4"> Table 1 summarizes some trials demonstrat- null ing the effect of different binarization approaches on parser performance. The grammars were induced from sections 2-21 of the Penn Wall St. Journal Treebank (Marcus et al., 1993), and tested on section 23. For each transform tested, every tree in the training corpus was transformed before grammar induction, resulting in a transformed PCFG and look-ahead probabilities estimated in the standard way. Each parse returned by the parser was detransformed for evaluation 3. The parser used in each trial was identical, with a base beam factor c~ = 10 -4. The performance is evaluated using these measures: (i) the percentage of candidate sentences for which a parse was found (coverage); (ii) the average number of states (i.e. rule expansions) considered per candidate sentence (efficiency); and (iii) the average labelled precision and recall of those sentences for which a parse was found (accuracy). We also used the same grammars with an exhaustive, bottom-up CKY parser, to ascertain both the accuracy and probability of the maximum likelihood parse (MLP). We can then additionally compare the parser's performance to the MLP's on those same sentences.</Paragraph>
      <Paragraph position="5"> As expected, left binarization conferred no benefit to our parser. Right binarization, in contrast, improved performance across the board.</Paragraph>
      <Paragraph position="6"> RB0 provided a substantial improvement in coverage and accuracy over RB1, with something of a decrease in efficiency. This efficiency hit is partly attributable to the fact that the same tree has more nodes with RB0. Indeed, the efficiency improvement with right binarization over the standard grammar is even more interesting in light of the great increase in the size of the grammars.</Paragraph>
      <Paragraph position="7"> 3See Johnson (1998) for details of the transform/detransform paradigm.</Paragraph>
      <Paragraph position="8"> It is worth noting at this point that, with the RB0 grammar, this parser is now a viable broad-coverage statistical parser, with good coverage, accuracy, and efficiency 4. Next we considered the left-corner parsing strategy.</Paragraph>
    </Section>
    <Section position="2" start_page="423" end_page="424" type="sub_section">
      <SectionTitle>
3.2 Left-corner parsing
</SectionTitle>
      <Paragraph position="0"> Lewis II, 1970) is a well-known strategy that uses both bottom-up evidence (from the left corner of a rule) and top-down prediction (of the rest of the rule). Rosenkrantz and Lewis showed how to transform a context-free grammar into a grammar that, when used by a top-down parser, follows the same search path as an LC parser. These LC grammars allow us to use exactly the same predictive parser to evaluate top-down versus LC parsing. Naturally, an LC grammar performs best with our parser when right binarized, for the same reasons outlined above. We use transform composition to apply first one transform, then another to the output of the first. We denote this A o B where (A o B) (t) = B (A (t)). After applying the left-corner transform, we then binarize the resulting grammar 5, i.e. LC o RB.</Paragraph>
      <Paragraph position="1"> Another probabilistic LC parser investigated (Manning and Carpenter, 1997), which utilized an LC parsing architecture (not a transformed grammar), also got a performance boost  tailed in Charniak et al. (1998) measured efficiency in terms of total edges popped. An edge (or, in our case, a parser state) is considered when a probability is calculated for it, and we felt that this was a better efficiency measure than simply those popped. As a baseline, their parser considered an average of 2216 edges per sentence in section 22 of the WSJ corpus (p.c.).</Paragraph>
      <Paragraph position="2"> 5Given that the LC transform involves nullary productions, the use of RB0 is not needed, i.e. nullary productions need only be introduced from one source. Thus binarization with left corner is always to unary (RB1).  through right binarization. This, however, is equivalent to RB o LC, which is a very different grammar from LC o RB. Given our two binarization orientations (LB and RB), there are four possible compositions of binarization and LC transforms: (a) LB o LC (b) RB o LC (c) LC o LB (d) LC o RB Table 2 shows left-corner results over various conditions 6. Interestingly, options (a) and (d) encode the same information, leading to nearly identical performance 7. As stated before, right binarization moves the rule announce point from before to after all of the children. The LC transform is such that LC o RB also delays parent identification until after all of the children. The transform LC o RB o ANN moves the parent announce point back to the left corner by introducing unary rules at the left corner that simply identify the parent of the binarized rule.</Paragraph>
      <Paragraph position="3"> This allows us to test the effect of the position of the parent announce point on the performance of the parser. As we can see, however, the effect is slight, with similar performance on all measures.</Paragraph>
      <Paragraph position="4"> RB o LC performs with higher accuracy than the others when used with an exhaustive parser, but seems to require a massive beam in order to even approach performance at the MLP level.</Paragraph>
      <Paragraph position="5"> Manning and Carpenter (1997) used a beam width of 40,000 parses on the success heap at each input item, which must have resulted in an order of magnitude more rule expansions than  unary rules with RB.</Paragraph>
      <Paragraph position="6"> yet their average labelled precision and recall (.7875) still fell well below what we found to be the MLP accuracy (.7987) for the grammar. We are still investigating why this grammar functions so poorly when used by an incremental parser.</Paragraph>
    </Section>
    <Section position="3" start_page="424" end_page="425" type="sub_section">
      <SectionTitle>
3.3 Non-local annotation
</SectionTitle>
      <Paragraph position="0"> Johnson (1998) discusses the improvement of PCFG models via the annotation of non-local information onto non-terminal nodes in the trees of the training corpus. One simple example is to copy the parent node onto every nonterminal, e.g. the rule S ~ NP VP becomes S ~ NP~S VP~S. The idea here is that the distribution of rules of expansion of a particular non-terminal may differ depending on the nonterminal's parent. Indeed, it was shown that this additional information improves the MLP accuracy dramatically.</Paragraph>
      <Paragraph position="1"> We looked at two kinds of non-local information annotation: parent (PA) and left-corner (LCA). Left-corner parsing gives improved accuracy over top-down or bottom-up parsing with the same grammar. Why? One reason may be that the ancestor category exerts the same kind of non-local influence upon the parser that the parent category does in parent annotation. To test this, we annotated the left-corner ancestor category onto every leftmost non-terminal category. The results of our annotation trials are shown in table 3.</Paragraph>
      <Paragraph position="2"> There are two important points to notice from these results. First, with PA we get not only the previously reported improvement in accuracy, but additionally a fairly dramatic decrease in the number of parser states that must be visited to find a parse. That is, the non-local information not only improves the final product of the parse, but it guides the parser more quickly  to the final product. The annotated grammar has 1.5 times as many rules, and would slow a bottom-up CKY parser proportionally. Yet our parser actually considers far fewer states en route to the more accurate parse.</Paragraph>
      <Paragraph position="3"> Second, LC-annotation gives nearly all of the accuracy gain of left-corner parsing s, in support of the hypothesis that the ancestor information was responsible for the observed accuracy improvement. This result suggests that if we can determine the information that is being annotated by the troublesome RB o LC transform, we may be able to get the accuracy improvement with a relatively narrow beam. Parentannotation before the LC transform gave us the best performance of all, with very few states considered on average, and excellent accuracy for a non-lexicalized grammar.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="425" end_page="425" type="metho">
    <SectionTitle>
4 Accuracy/Efficiency tradeoff
</SectionTitle>
    <Paragraph position="0"> One point that deserves to be made is that there is something of an accuracy/efficiency tradeoff with regards to the base beam factor. The results given so far were at 10 -4 , which functions pretty well for the transforms we have investigated. Figures 2 and 3 show four performance measures for four of our transforms at base beam factors of 10 -3 , 10 -4 , 10 -5 , and 10 -6. There is a dramatically increasing efficiency burden as the beam widens, with varying degrees of payoff. With the top-down transforms (RB0 and PA o RB0), the ratio of the average probability to the MLP probability does improve substantially as the beam grows, yet with only marginal improvements in coverage and accuracy. Increasing the beam seems to do less with the left-corner transforms.</Paragraph>
    <Paragraph position="1"> SThe rest could very well be within noise.</Paragraph>
  </Section>
class="xml-element"></Paper>