<?xml version="1.0" standalone="yes"?>
<Paper uid="E91-1031">
  <Title>Prediction in Chart Parsing Algorithms for Categorial Unification Grammar</Title>
  <Section position="3" start_page="0" end_page="0" type="metho">
    <SectionTitle>
CHART PARSING OF UNIFICATION GRAMMAR (UG)
</SectionTitle>
    <Paragraph position="0"> Parsing methods for context-free grammar can be extended to unification-based grammar formalisms (see Shieber, 1985, or Haas, 1989), and therefore they can in principle be used to parse CUG. A chart parser scans a sentence from left to right, while entering items, representing (partial) derivations, in a chart.</Paragraph>
    <Paragraph position="1"> Assume that items are represented as Prolog terms of the form item(Begin, End, LHS, Parsed, ToParse), where LHS is a feature-structure and Parsed and ToParse contain lists of feature-structures.</Paragraph>
    <Paragraph position="2"> An item(0, 1, S, [NP], [V, NP]) represents a partial derivation, ranging from position 0 to 1, of a constituent with feature-structure S, of which a daughter NP has been found and of which daughters V and NP are still to be parsed. A word with lexical entry Word : Cat at position Begin leads to the addition of an item(Begin, Begin + 1, Cat, [Word], []). Next, completion and prediction steps are called until no further items can be added to the chart.</Paragraph>
    <Paragraph position="3"> Completion step: (1) For each item(B, E, LHS, Parsed, [Next|ToParse]) and item(E, End, Next, Parsed2, []), add an item(B, End, LHS, Parsed+[Next], ToParse).</Paragraph>
    <Paragraph position="4"> Bottom-up Prediction step: For each item(B, E, Next, Parsed, []) and each rule (LHS → [Next|RHS]), add item(B, E, LHS, [Next], RHS).</Paragraph>
    <Paragraph position="5"> The prediction step causes the algorithm to work bottom-up.</Paragraph>
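The scanning, completion, and bottom-up prediction steps above can be sketched as follows. This is an illustrative reconstruction, not the paper's Prolog code: categories are atomic strings, so the feature-unification the paper assumes is replaced by plain equality, and the names `parse`, `lexicon`, and `rules` are chosen here.

```python
from collections import deque

def parse(words, lexicon, rules):
    """Bottom-up chart parsing with items (Begin, End, LHS, Parsed, ToParse).
    lexicon maps a word to its categories; rules are (LHS, RHS) pairs."""
    chart, agenda = set(), deque()

    def add(item):
        if item not in chart:
            chart.add(item)
            agenda.append(item)

    # Scanning: one passive item per lexical entry.
    for i, w in enumerate(words):
        for cat in lexicon[w]:
            add((i, i + 1, cat, (w,), ()))

    while agenda:
        b, e, lhs, parsed, toparse = agenda.popleft()
        if not toparse:  # passive item
            # Bottom-up prediction: rules whose leftmost daughter matches.
            for rlhs, rhs in rules:
                if rhs[0] == lhs:
                    add((b, e, rlhs, (lhs,), tuple(rhs[1:])))
            # Completion: extend active items ending where this one begins.
            for b2, e2, lhs2, parsed2, toparse2 in list(chart):
                if e2 == b and toparse2 and toparse2[0] == lhs:
                    add((b2, e, lhs2, parsed2 + (lhs,), toparse2[1:]))
        else:  # active item: combine with passive items starting at its end.
            for b2, e2, lhs2, parsed2, toparse2 in list(chart):
                if b2 == e and not toparse2 and toparse[0] == lhs2:
                    add((b, e2, lhs, parsed + (lhs2,), toparse[1:]))

    return [it for it in chart
            if it[0] == 0 and it[1] == len(words) and not it[4]]
```

With, say, a toy lexicon {'the': ['det'], 'dog': ['n'], 'barks': ['iv']} and rules np → det n, s → np iv, the parser returns a passive item for s spanning the whole input.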
  </Section>
  <Section position="4" start_page="0" end_page="2" type="metho">
    <SectionTitle>
2 The Problem
</SectionTitle>
    <Paragraph position="0"> In a bottom-up chart parser, applicable rules are predicted bottom-up, and thus lexical information is used to constrain the addition of active items (i.e. items representing partial derivations). At first sight, this method appears to be ideal for CUG, as in CUG the lexical items contain syntactic information which is language- and grammar-specific, whereas the rules are generic in nature. Note, however, that although bottom-up parsing is certainly attractive for CUG, there are also a number of potential inefficiencies: in many cases useless items will be predicted. (Footnote 1: In these and the following definitions we assume, unless otherwise indicated, that feature-structures denoted by identical Prolog variables are unified by means of feature-unification.)</Paragraph>
    <Paragraph position="1"> Consider, for instance, a grammar with a lexicon containing only the categories NP/N, N, and NP\S, and with application as the only combinatory rule. When encountering a determiner, prediction of an item(i, i, X, [np/n], [(np/n)\X]) is superfluous, since there is simply no way that the grammar could ever produce a category (np/n)\X. If the lexicon is highly ambiguous, many useless (partial) derivations may take place. Consider, for instance, the syntax of NPs in German, where determiners and adjectives are ambiguous with respect to case, declension pattern, gender and number (see Zwicky, 1986, for an analysis in terms of GPSG). The sentence die junge Frau schläft has only one derivation, but a bottom-up parser has to consider 11 possible analyses for the word junge, 6 for the phrase junge Frau, 4 for die and 2 for die junge Frau. This example shows that even in a pure categorial system, there may be situations where top-down prediction has its merits.</Paragraph>
    <Paragraph position="2"> If the grammar contains language- or construction-specific rules, bottom-up prediction may be less efficient.</Paragraph>
    <Paragraph position="3"> Relevant examples are the rule for forming bare plurals mentioned in the previous section and the rules which implement a categorial version of gap-threading (see Pereira and Shieber, 1986: 114 ff.). The rule schemata below allow for the derivation of sentences with a preposed element and for the extraction of arguments. Gap-elimination: S → X S[gap : X]. Gap-introduction: X[gap : Y] → X/Y; X[gap : Y] → Y\X. Gap-introduction will be used every time a functor category is encountered. Again, some form of top-down prediction could improve this situation.</Paragraph>
    <Paragraph position="4"> In the following sections, we will consider top-down parsing as an alternative to the bottom-up approach, and we will consider the possibility of improving the predictive capabilities of a bottom-up parser.</Paragraph>
    <Paragraph position="5"> (Footnote 2: The example may suggest that prediction should be eliminated altogether. This option is feasible only if the rule set is restricted to application.)</Paragraph>
  </Section>
  <Section position="5" start_page="2" end_page="2" type="metho">
    <SectionTitle>
3 Top-down Parsing
</SectionTitle>
    <Paragraph position="0"> Top-down chart parsing differs from the algorithm described above only in the prediction step, which predicts applicable rules top-down. Contrary to bottom-up parsing, however, the adaptation of a top-down algorithm for UG requires some special care. For UGs which lack a so-called context-free backbone, such as CUG, the top-down prediction step can only be guaranteed to terminate if we make use of restriction, as defined in Shieber (1985).</Paragraph>
    <Paragraph position="1"> Top-down prediction with a restrictor R (where R is a finite set of paths through a feature-structure) amounts to the following. Restriction: The restriction of a feature-structure F relative to a restrictor R is the most specific feature-structure F' ⊑ F such that every path in F' either has an atomic value or is an element of R.</Paragraph>
    <Paragraph position="2"> Predictor step: For each item(_, End, LHS, Parsed, [Next|ToParse]) such that R_Next is the restriction of Next relative to R, and each rule R_Next → RHS, add item(End, End, R_Next, [], RHS).</Paragraph>
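As a sketch of what restriction computes, the definition above can be read as follows; this is an illustrative rendering (not from the paper), with feature-structures as nested Python dicts, atomic values as strings, a restrictor given as a set of path tuples treated as prefix-closed, and `restrict` a name chosen here.

```python
def restrict(fs, restrictor, path=()):
    """Return the restriction of feature-structure fs relative to restrictor:
    atomic values survive anywhere, but a complex value is kept only at paths
    licensed by the restrictor; everything else is generalized away by
    dropping the feature (i.e. leaving it unconstrained)."""
    if not isinstance(fs, dict):
        return fs  # atomic value: always kept
    out = {}
    for feat, val in fs.items():
        p = path + (feat,)
        if not isinstance(val, dict):
            out[feat] = val
        elif p in restrictor or any(r[:len(p)] == p for r in restrictor):
            out[feat] = restrict(val, restrictor, p)
        # complex value at an unlicensed path: dropped
    return out
```

For example, restricting {'cat': 's', 'val': {'cat': 'np', 'agr': {'num': 'sg'}}} with the restrictor {('val',)} keeps the atomic 'cat' values but generalizes the embedded 'agr' structure away.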
    <Paragraph position="3"> Restriction can be used to develop a top-down chart parser for CUG in which the (top-down) prediction step terminates. The result is unsatisfactory, however, for the following two reasons. First, as a consequence of the generic and language-independent nature of categorial rules, the role of top-down prediction as a constraint on possible derivation steps is lost completely. Second, many useless items will be predicted, due to the fact that the LHS of both rightward and leftward application always matches R_Next in the prediction step (note that a bottom-up parser has a similar inefficiency for leftward application only). Therefore, the overhead which is introduced by top-down prediction does not pay off. We conclude that, even though the introduction of restriction makes it possible to parse CUG top-down, in practice such a method has no advantages over a bottom-up approach.</Paragraph>
  </Section>
  <Section position="6" start_page="2" end_page="2" type="metho">
    <SectionTitle>
4 Lexicalist Prediction
</SectionTitle>
    <Paragraph position="0"> Instead of customizing existing top-down parsing algorithms for CUG, we can also try to take the opposite track. That is, we will try to represent a CUG in such a way that non-trivial forms of top-down prediction are possible.</Paragraph>
    <Paragraph position="1"> Top-down prediction, as described in the previous section, relies wholly on the syntactic information encoded in the syntactic rules. For CUG, this is an awkward situation, as most syntactic information which could be relevant for top-down prediction is located in the lexicon. In order to make this information accessible to the parser, we precompile the grammatical rules into a set of instantiated rules. The instantiated rules are more restrictive than the generic categorial rules, as they take lexical information into account.</Paragraph>
    <Paragraph position="2"> The following algorithm computes a set of instantiated syntactic rules, given a set of generic rules and a lexicon.</Paragraph>
    <Paragraph position="3"> Compilation: For every category C, where C is either a lexical category or the LHS of an instantiated rule, and every (generic) rule GR, if C is unifiable with the head-daughter of GR, add GR' (the result of the unification) to the set of instantiated rules. We assume that there is some way of distinguishing head-daughters from non-head-daughters (for instance, by means of a feature). The head-daughter should be the daughter which has the most influence on the instantiation of the rule. For the application rules, for instance, the functor is the most natural choice, as the functor determines the instantiation of both the resultant category and the argument category.</Paragraph>
    <Paragraph position="4"> The compilation step is correct and complete for arbitrary UGs; that is, a string is derivable using the instantiated rules if and only if it is derivable using the generic rules. Note, however, that the compilation procedure does not necessarily terminate. Consider, for instance, a categorial grammar with category raising (X/(Y\X) → Y). In such a grammar, arbitrarily complex instantiations of this rule can be compiled. To avoid the creation of an infinite set of rules, we may again employ restriction: Compilation with restriction. Let R be a restrictor.</Paragraph>
    <Paragraph position="5"> For every category C, where C is either a lexical category or the LHS of an instantiated rule, and every (generic) rule GR, if the restriction of C relative to R is unifiable with the head-daughter of GR, add GR' (the result of the unification) to the set of instantiated rules.</Paragraph>
    <Paragraph position="6"> The compilation step is guaranteed to terminate as long as R is finite (cf. Shieber, 1985). The compilation procedure is not specific to a certain grammar formalism or rule set, and thus can be used to compile arbitrary UGs. Such a compilation step will give rise to a substantially more instantiated rule set in all cases where schematic grammar rules are used in combination with highly structured lexical items.</Paragraph>
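The compilation-to-fixpoint idea can be sketched as follows. This is an illustration under simplifying assumptions, not the paper's implementation: categories are first-order terms such as ('\\', 'np', 's') for np\s, strings prefixed with '?' are variables, rule variables are assumed renamed apart, and no restriction is applied (so a raising rule would indeed loop, as the text notes). All function names are chosen here.

```python
def walk(t, s):
    """Dereference a variable through substitution s."""
    while isinstance(t, str) and t.startswith('?') and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """First-order term unification; returns an extended substitution or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith('?'):
        return {**s, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def subst(t, s):
    """Apply substitution s throughout term t."""
    t = walk(t, s)
    return tuple(subst(x, s) for x in t) if isinstance(t, tuple) else t

def compile_rules(lexical_cats, generic_rules):
    """Close the lexical categories under head-instantiation of the generic
    rules: a rule is (LHS, RHS, head_index); each category unifiable with a
    head-daughter yields an instantiated rule whose LHS is fed back in."""
    instantiated, agenda, seen = [], list(lexical_cats), set(lexical_cats)
    while agenda:
        cat = agenda.pop()
        for lhs, rhs, h in generic_rules:
            s = unify(cat, rhs[h], {})
            if s is not None:
                rule = (subst(lhs, s), tuple(subst(d, s) for d in rhs), h)
                if rule not in instantiated:
                    instantiated.append(rule)
                    if rule[0] not in seen:
                        seen.add(rule[0])
                        agenda.append(rule[0])
    return instantiated
```

With rightward and leftward application (head = the functor daughter) and the three-category lexicon from section 2, the fixpoint contains exactly two instantiated rules, mirroring the paper's German example where only a handful of rules survive.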
    <Paragraph position="7"> For the compiled grammar, a standard top-down algorithm (such as the one in section 3) can be used. Prediction for CUG is now significant, as only rules which have a functor category that is actually derivable by the grammar will be predicted. So, starting from a category S, we will not predict leftmost categories such as S/NP or (S/NP)/NP if no such categories can be derived from the lexical categories. Also, a leftmost argument category A will only be predicted if the grammar contains a matching functor category A\S. Finally, since we are working with the instantiated rules, morphosyntactic information can effectively be predicted top-down.</Paragraph>
    <Paragraph position="8"> Restriction is not only useful to guarantee termination of the compilation procedure. The precompilation procedure can in principle lead to an instantiated grammar that is considerably larger than the input grammar. For instance, given a grammar which distinguishes between plural and singular and between first, second and third person NPs, six versions of the rule S → NP NP\S might be derivable. Such a multiplication is unnecessary, however, as it does not provide any information which is useful for the top-down prediction step. Choosing a restrictor which filters out all distinctions that are irrelevant to top-down prediction can prevent an explosion of the rule set.</Paragraph>
  </Section>
  <Section position="7" start_page="2" end_page="2" type="metho">
    <SectionTitle>
5 Bottom-Up Parsing with Prediction
</SectionTitle>
    <Paragraph position="0"> The compilation procedure described in section 4 was developed to improve the performance of top-down parsing algorithms for lexicalist grammars of the CUG variety. In this section, we argue that replacing a generic CUG with its instantiated equivalent also has advantages for bottom-up parsing. There are two reasons to believe that this is so: first, predictions based on leftward application will be less frequent, and second, non-trivial forms of top-down prediction can be added to an instantiated grammar.</Paragraph>
    <Paragraph position="1"> In section 2 we pointed out that a bottom-up parser will predict many useless instances of leftward application. This is due to the fact that the leftmost daughter of leftward application is completely general, and thus, given an item(B, E, Cat, Parsed, []), an item(B, E, X, [Cat], [Cat\X]) will always be predicted. The compilation procedure presented in the previous section replaces leftward application with instantiated versions of this rule, in which the leftmost argument of the rule is instantiated. Although the instantiated rule set of a grammar is bound to be larger than the original rule set, which is a potential disadvantage, the chart will grow less fast if we use the instantiated grammar. It is therefore worthwhile to investigate the performance of a bottom-up parser which uses a compiled grammar as opposed to a bottom-up parser working with a generic rule set.</Paragraph>
    <Paragraph position="2"> There is a second reason for considering instantiated grammars. It is possible in bottom-up parsing to speed up the parsing process by adding top-down prediction. Top-down prediction is implemented with the help of a table containing items of the form left_corner(Ancestor, LeftCorner), which lists the left-corner relation for the grammar at hand. The left-corner relation is defined as follows. Left-corner: Category C1 is a left-corner of an ancestor category A if there is a rule A → C1 ... Cn. The relation is transitive: if A is a left-corner of B and B a left-corner of C, A is a left-corner of C.</Paragraph>
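The transitive closure described above can be computed mechanically. The sketch below (names chosen here, with atomic category symbols rather than feature-structures, so the unification-based matching the text later warns about is not modeled) builds the left_corner table from a rule set:

```python
def left_corners(rules):
    """rules: (LHS, RHS) pairs with atomic category symbols.
    Returns the transitively closed left-corner relation as a set of
    (ancestor, left_corner) pairs."""
    # Base case: the leftmost daughter of each rule.
    lc = {(lhs, rhs[0]) for lhs, rhs in rules if rhs}
    # Close under transitivity until no new pairs appear.
    changed = True
    while changed:
        changed = False
        for a, b in list(lc):
            for b2, c in list(lc):
                if b == b2 and (a, c) not in lc:
                    lc.add((a, c))
                    changed = True
    return lc
```

For the rules s → np vp and np → det n, the table contains (s, np), (np, det), and, by transitivity, (s, det), but not (s, vp).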
    <Paragraph position="3"> Top-down filtering is now achieved by modifying the prediction step as follows:</Paragraph>
    <Section position="1" start_page="2" end_page="2" type="sub_section">
      <SectionTitle>
Bottom-up Prediction with Top-down Filtering:
</SectionTitle>
      <Paragraph position="0"> For each item(B, E, Cat, Parsed, []) and each rule (X0 → [Cat|RHS]), such that there is an item(_, B, _, _, [Next|ToParse]) with X0 a left-corner of Next, add item(B, E, X0, [Cat], RHS). (4)</Paragraph>
      <Paragraph position="1"> For CUG it makes little sense to compute a left-corner relation according to this definition, since any category X is a left-corner of any category Y (according to leftward application), and thus the left-corner relation can never have any predictive power.</Paragraph>
      <Paragraph position="2"> For an instantiated grammar, the situation is more promising. For instance, given the fact that only nominative NPs occur as the left-corner of S, and that every determiner which is the left-corner of an NP has a case feature which is compatible (unifiable) with that NP, it can be concluded that only nominative determiners can be left-corners of S.</Paragraph>
      <Paragraph position="3"> Computing the left-corner relation mechanically for a UG will not always lead to the most economical representation of the left-corner table. For example, in German the left-corner of an NP with case and number features X will be a determiner with identical features. (Footnote 4: Bottom-up parsing with top-down prediction is closely related to the BUP-parser of Matsumoto et al. (1983). The BUP-parser is based on definite clause grammar and thus may backtrack. Minimal use is made of a chart (in which successful and failed parse attempts are stored). Our algorithm assigns a more important role to the chart and thus avoids backtracking.)</Paragraph>
      <Paragraph position="4"> In the instantiated grammar, we get 8 versions (i.e. 4 cases times 2 possible values for number) of this relation.</Paragraph>
      <Paragraph position="5"> Similar observations can be made for adjectives that are left-corners of N (where things are even worse, as we would like to take declension classes into account as well). This multiplication may lead to a needlessly large left-corner table, which, if used in the prediction step, may in fact lead to sharp decreases in parsing performance (see also Haas, 1989, who encountered similar problems). Note that checking a left-corner table containing feature-structures is in general expensive, as unification, rather than identity tests, has to be carried out.</Paragraph>
      <Paragraph position="6"> To avoid this problem we have found it necessary to construct the left-corner table by hand, using linguistic meta-knowledge about what is relevant to top-down prediction, given a particular left-corner relation, in order to compress the table to an absolute minimum. It turns out that only in this way does the effect of top-down filtering pay off against the increased overhead of having to check the left-corner table.</Paragraph>
    </Section>
  </Section>
  <Section position="8" start_page="2" end_page="2" type="metho">
    <SectionTitle>
6 Some Results
</SectionTitle>
    <Paragraph position="0"> The performance of the parsing algorithms discussed in the preceding sections (a bottom-up parser for UG (BU), a top-down parser for UG (TD; cf. Shieber, 1985), a top-down parser operating on an instantiated grammar (TD/I), and a bottom-up parser with top-down filtering operating on an instantiated grammar (BU/LC)) was tested on two experimental CUGs, one implementing the morphosyntactic features of German NPs, and one implementing the syntax of WH-questions in Dutch by means of a gap-threading mechanism.</Paragraph>
    <Paragraph position="1"> Some illustrative results are listed in Tables 1 and 2.</Paragraph>
    <Paragraph position="2"> For German, an ideal restrictor R was {&lt;l&gt; | l = cat, val, arg, or dir}. This restrictor effectively filters out all morphosyntactic information, insofar as it is not repeated in the categorial rules. The resulting precompiled grammar is much smaller than in the case where no restriction was used or where morphosyntactic information was not completely filtered out. A categorial lexicon for German, for instance, containing only determiners, adjectives, nouns, and transitive and intransitive verbs, will give rise to more than 60 instantiated rules if precompiled without restriction, whereas only four rules are computed if R is used (i.e. only two more than in the uncompiled (categorial) grammar). The improvement in efficiency of TD/I over TD is due to the fact that no useless instances of leftward application are predicted and to the fact that no restriction is needed during parsing with an instantiated grammar.</Paragraph>
    <Paragraph position="3"> Thus, prediction based on already processed material can be maximal. As soon as we have parsed a category NP/N[+sg, +wk, +dat, +fem], for instance, top-down prediction will add only those items that have N[+sg, +wk, +dat, +fem] as LHS.</Paragraph>
    <Paragraph position="4"> BU is almost as efficient as TD/I, even though it works with a generic grammar and thus produces (significantly) more chart items. Once we replace the generic grammar by an instantiated grammar and add left-corner relationships (BU/LC), the predictive capacities of the parser are maximal, and a sharp decrease in the number of chart items and parse times occurs.</Paragraph>
    <Paragraph position="5"> For the grammar with gap-threading (table 2), we used a restrictor R = {&lt;l&gt; | l = cat, val, arg, dir, gap, in, or out}. The TD parser encounters serious difficulties in this case, whereas TD/I performs significantly better but is still rather inefficient. There is a distinct difference between BU and BU/LC if we look at the number of chart items, although the difference is less marked than in the case of German. In terms of parse times, the two algorithms are almost equivalent.</Paragraph>
    <Paragraph position="6"> Comparing our results with those of Shieber (1985) and Haas (1989), we see that in all cases top-down filtering may reduce the size of the chart significantly. Whereas Haas (1989) found that top-down filtering never helps to actually decrease parse times in a bottom-up parser, we have found at least one example (German) where top-down filtering is useful.</Paragraph>
  </Section>
</Paper>