<?xml version="1.0" standalone="yes"?>
<Paper uid="P02-1017">
  <Title>A Generative Constituent-Context Model for Improved Grammar Induction</Title>
  <Section position="4" start_page="0" end_page="543210" type="metho">
    <SectionTitle>
3 A Generative Constituent-Context Model
</SectionTitle>
    <Paragraph position="0"> To exploit the benefits of parameter search, we used a novel model which is designed specifically to enable a more felicitous search space. The fundamental assumption is a much weakened version of classic linguistic constituency tests (Radford, 1988): constituents appear in constituent contexts. A particular linguistic phenomenon that the system exploits is that long constituents often have short, common equivalents, or proforms, which appear in similar contexts and whose constituency is easily discovered (or guaranteed). Our model is designed to transfer the constituency of a sequence directly to its containing context, which is intended to then pressure new sequences that occur in that context into being parsed as constituents in the next round.</Paragraph>
    <Paragraph position="1"> The model is also designed to exploit the successes of distributional clustering, and can equally well be viewed as doing distributional clustering in the presence of no-overlap constraints.</Paragraph>
    <Section position="1" start_page="0" end_page="543210" type="sub_section">
      <SectionTitle>
3.1 Constituents and Contexts
</SectionTitle>
      <Paragraph position="0"> Unlike a PCFG, our model describes all contiguous subsequences of a sentence (spans), including empty spans, whether they are constituents or non-constituents (distituents). A span encloses a sequence of terminals, or yield, , such as DT JJ NN.</Paragraph>
      <Paragraph position="1"> A span occurs in a context x, such as -VBZ, where x is the ordered pair of preceding and following terminals ( denotes a sentence boundary). A bracketing of a sentence is a boolean matrix B, which indicates which spans are constituents and which are not. Figure 1 shows a parse of a short sentence, the bracketing corresponding to that parse, and the labels, yields, and contexts of its constituent spans.</Paragraph>
      <Paragraph position="2"> Figure 2 shows several bracketings of the sentence in figure 1. A bracketing B of a sentence is non-crossing if, whenever two spans cross, at most one is a constituent in B. A non-crossing bracketing is tree-equivalent if the size-one terminal spans and the full-sentence span are constituents, and all size-zero spans are distituents. Figure 2(a) and (b) are tree-equivalent. Tree-equivalent bracketings B correspond to (unlabeled) trees in the obvious way.</Paragraph>
      <Paragraph position="3"> A bracketing is binary if it corresponds to a binary tree. Figure 2(b) is binary. We will induce trees by inducing tree-equivalent bracketings.</Paragraph>
      <Paragraph position="4"> Our generative model over sentences S has two phases. First, we choose a bracketing B according to some distribution P.B/ and then generate the sentence given that bracketing: P.S; B/ D P.B/P.SjB/ Given B, we fill in each span independently. The context and yield of each span are independent of each other, and generated conditionally on the constituency Bij of that span.</Paragraph>
      <Paragraph position="6"> hi; ji P. ij jBij/P.xij jBij/ The distribution P. ij jBij/ is a pair of multinomial distributions over the set of all possible yields: one for constituents (Bij D c) and one for distituents (Bij D d). Similarly for P.xij jBij/ and contexts.</Paragraph>
      <Paragraph position="7"> The marginal probability assigned to the sentence S is given by summing over all possible bracketings of S: P.S/ D PB P.B/P.SjB/.2 To induce structure, we run EM over this model, treating the sentences S as observed and the bracketings B as unobserved. The parameters 2 of 2Viewed as a model generating sentences, this model is deficient, placing mass on yield and context choices which will not tile into a valid sentence, either because specifications for positions conflict or because yields of incorrect lengths are chosen. However, we can renormalize by dividing by the mass placed on proper sentences and zeroing the probability of improper bracketings. The rest of the paper, and results, would be unchanged except for notation to track the renormalization constant.</Paragraph>
      <Paragraph position="8">  contains a h0,3i bracket crossing that VP bracket.</Paragraph>
      <Paragraph position="9"> the model are the constituency-conditional yield and context distributions P. jb/ and P.xjb/. If P.B/ is uniform over all (possibly crossing) bracketings, then this procedure will be equivalent to softclustering with two equal-prior classes.</Paragraph>
      <Paragraph position="10"> There is reason to believe that such soft clusterings alone will not produce valuable distinctions, even with a significantly larger number of classes.</Paragraph>
      <Paragraph position="11"> The distituents must necessarily outnumber the constituents, and so such distributional clustering will result in mostly distituent classes. Clark (2001) finds exactly this effect, and must resort to a filtering heuristic to separate constituent and distituent clusters. To underscore the difference between the bracketing and labeling tasks, consider figure 3. In both plots, each point is a frequent tag sequence, assigned to the (normalized) vector of its context frequencies.</Paragraph>
      <Paragraph position="12"> Each plot has been projected onto the first two principal components of its respective data set. The left plot shows the most frequent sequences of three constituent types. Even in just two dimensions, the clusters seem coherent, and it is easy to believe that they would be found by a clustering algorithm in the full space. On the right, sequences have been labeled according to whether their occurrences are constituents more or less of the time than a cutoff (of 0.2). The distinction between constituent and distituent seems much less easily discernible.</Paragraph>
      <Paragraph position="13"> We can turn what at first seems to be distributional clustering into tree induction by confining P.B/ to put mass only on tree-equivalent bracketings. In particular, consider Pbin.B/ which is uniform over binary bracketings and zero elsewhere. If we take this bracketing distribution, then when we sum over data completions, we will only involve bracketings which correspond to valid binary trees. This restriction is the basis for our algorithm.</Paragraph>
    </Section>
    <Section position="2" start_page="543210" end_page="543210" type="sub_section">
      <SectionTitle>
3.2 The Induction Algorithm
</SectionTitle>
      <Paragraph position="0"> We now essentially have our induction algorithm.</Paragraph>
      <Paragraph position="1"> We take P.B/ to be Pbin.B/, so that all binary trees are equally likely. We then apply the EM algorithm: E-Step: Find the conditional completion likelihoods P.BjS;2/ according to the current 2.</Paragraph>
      <Paragraph position="2"> M-Step: Fix P.BjS;2/ and find the 20 which maximizes PB P.BjS;2/ log P.S; Bj20/.</Paragraph>
      <Paragraph position="3"> The completions (bracketings) cannot be efficiently enumerated, and so a cubic dynamic program similar to the inside-outside algorithm is used to calculate the expected counts of each yield and context, both as constituents and distituents. Relative frequency estimates (which are the ML estimates for this model) are used to set 20.</Paragraph>
      <Paragraph position="4"> To begin the process, we did not begin at the E-step with an initial guess at 2. Rather, we began at the M-step, using an initial distribution over completions. The initial distribution was not the uniform distribution over binary trees Pbin.B/. That was undesirable as an initial point because, combinatorily, almost all trees are relatively balanced. On the other hand, in language, we want to allow unbalanced structures to have a reasonable chance to be discovered. Therefore, consider the following uniformsplitting process of generating binary trees over k terminals: choose a split point at random, then recursively build trees by this process on each side of the split. This process gives a distribution Psplit which puts relatively more weight on unbalanced trees, but only in a very general, non language-specific way.</Paragraph>
      <Paragraph position="5"> This distribution was not used in the model itself, however. It seemed to bias too strongly against balanced structures, and led to entirely linear-branching structures.</Paragraph>
      <Paragraph position="6"> The smoothing used was straightforward. For each yield or context x, we added 10 counts of that item as a constituent and 50 as a distituent. This reflected the relative skew of random spans being more likely to be distituents. This contrasts with our previous work, which was sensitive to smoothing method, and required a massive amount of it.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="543210" end_page="543210" type="metho">
    <SectionTitle>
4 Experiments
</SectionTitle>
    <Paragraph position="0"> We performed most experiments on the 7422 sentences in the Penn treebank Wall Street Journal section which contained no more than 10 words after the removal of punctuation and null elements (WSJ-10). Evaluation was done by measuring unlabeled precision, recall, and their harmonic mean F1 against the treebank parses. Constituents which could not be gotten wrong (single words and entire sentences) were discarded.3 The basic experiments, as described above, do not label constituents. An advantage to having only a single constituent class is that it encourages constituents of one type to be found even when they occur in a context which canonically holds another type. For example, NPs and PPs both occur between a verb and the end of the sentence, and they can transfer constituency to each other through that context.</Paragraph>
    <Paragraph position="1"> Figure 4 shows the F1 score for various methods of parsing. RANDOM chooses a tree uniformly 3Since reproducible evaluation is important, a few more notes: this is different from the original (unlabeled) bracketing measures proposed in the PARSEVAL standard, which did not count single words as constituents, but did give points for putting a bracket over the entire sentence. Secondly, bracket labels and multiplicity are just ignored. Below, we also present results using the EVALB program for comparability, but we note that while one can get results from it that ignore bracket labels, it never ignores bracket multiplicity. Both these alternatives seem less satisfactory to us as measures for evaluating unsupervised constituency decisions.</Paragraph>
    <Paragraph position="3"> size. The drop in precision for span length 2 is largely due to analysis inside NPs which is omitted by the treebank. Also shown is F1 for the induced PCFG. The PCFG shows higher accuracy on small spans, while the CCM is more even.</Paragraph>
    <Paragraph position="4"> at random from the set of binary trees.4 This is the unsupervised baseline. DEP-PCFG is the result of duplicating the experiments of Carroll and Charniak (1992), using EM to train a dependencystructured PCFG. LBRANCH and RBRANCH choose the left- and right-branching structures, respectively.</Paragraph>
    <Paragraph position="5"> RBRANCH is a frequently used baseline for supervised parsing, but it should be stressed that it encodes a significant fact about English structure, and an induction system need not beat it to claim a degree of success. CCM is our system, as described above. SUP-PCFG is a supervised PCFG parser trained on a 90-10 split of this data, using the treebank grammar, with the Viterbi parse rightbinarized.5 UBOUND is the upper bound of how well a binary system can do against the treebank sentences, which are generally flatter than binary, limiting the maximum precision.</Paragraph>
    <Paragraph position="6"> CCM is doing quite well at 71.1%, substantially better than right-branching structure. One common issue with grammar induction systems is a tendency to chunk in a bottom-up fashion. Especially since  accuracy is due to a high accuracy on short-span constituents. Figure 5 shows that this is not true.</Paragraph>
    <Paragraph position="7"> Recall drops slightly for mid-size constituents, but longer constituents are as reliably proposed as short ones. Another effect illustrated in this graph is that, for span 2, constituents have low precision for their recall. This contrast is primarily due to the single largest difference between the system's induced structures and those in the treebank: the treebank does not parse into NPs such as DT JJ NN, while our system does, and generally does so correctly, identifying N units like JJ NN. This overproposal drops span-2 precision. In contrast, figure 5 also shows the F1 for DEP-PCFG, which does exhibit a drop in F1 over larger spans.</Paragraph>
    <Paragraph position="8"> The top row of figure 8 shows the recall of non-trivial brackets, split according the brackets' labels in the treebank. Unsurprisingly, NP recall is highest, but other categories are also high. Because we ignore trivial constituents, the comparatively low S represents only embedded sentences, which are somewhat harder even for supervised systems.</Paragraph>
    <Paragraph position="9"> To facilitate comparison to other recent work, figure 6 shows the accuracy of our system when trained on the same WSJ data, but tested on the ATIS corpus, and evaluated according to the EVALB program.6 The F1 numbers are lower for this corpus and evaluation method.7 Still, CCM beats not only RBRANCH (by 8.3%), but also the previous conditional COND-CCM and the next closest unsupervised  is replete with span-one NPs; adding an extra bracket around all single words raises our EVALB recall to 71.9; removing all unaries from the ATIS gold standard gives an F1 of 63.3%.</Paragraph>
  </Section>
  <Section position="6" start_page="543210" end_page="543210" type="metho">
    <SectionTitle>
Rank  Overproposed  Underproposed
 1    JJ NN         NNP POS
 2    MD VB         TO CD CD
 3    DT NN         NN NNS
 4    NNP NNP       NN NN
 5    RB VB         TO VB
 6    JJ NNS        IN CD
 7    NNP NN        NNP NNP POS
 8    RB VBN        DT NN POS
 9    IN NN         RB CD
10    POS NN        IN DT
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="543210" end_page="543210" type="sub_section">
      <SectionTitle>
4.1 Error Analysis
</SectionTitle>
      <Paragraph position="0"> Parsing figures can only be a component of evaluating an unsupervised induction system. Low scores may indicate systematic alternate analyses rather than true confusion, and the Penn treebank is a sometimes arbitrary or even inconsistent gold standard. To give a better sense of the kinds of errors the system is or is not making, we can look at which sequences are most often over-proposed, or most often under-proposed, compared to the treebank parses.</Paragraph>
      <Paragraph position="1">  forms MD VB verb groups systematically, and it attaches the possessive particle to the right, like a determiner, rather than to the left.8 It provides binary-branching analyses within NPs, normally resulting in correct extra N constituents, like JJ NN, which are not bracketed in the treebank. More seriously, it tends to attach post-verbal prepositions to the verb and gets confused by long sequences of nouns. A significant improvement over earlier systems is the absence of subject-verb groups, which disappeared when we switched to Psplit.B/ for initial completions; the more balanced subject-verb analysis had a substantial combinatorial advantage with Pbin.B/.</Paragraph>
    </Section>
    <Section position="2" start_page="543210" end_page="543210" type="sub_section">
      <SectionTitle>
4.2 Multiple Constituent Classes
</SectionTitle>
      <Paragraph position="0"> We also ran the system with multiple constituent classes, using a slightly more complex generative model in which the bracketing generates a labeling which then generates the constituents and contexts.</Paragraph>
      <Paragraph position="1"> The set of labels for constituent spans and distituent spans are forced to be disjoint.</Paragraph>
      <Paragraph position="2"> Intuitively, it seems that more classes should help, 8Linguists have at times argued for both analyses: Halliday (1994) and Abney (1987), respectively.</Paragraph>
      <Paragraph position="3"> by allowing the system to distinguish different types of constituents and constituent contexts. However, it seemed to slightly hurt parsing accuracy overall.</Paragraph>
      <Paragraph position="4"> Figure 8 compares the performance for 2 versus 12 classes; in both cases, only one of the classes was allocated for distituents. Overall F1 dropped very slightly with 12 classes, but the category recall numbers indicate that the errors shifted around substantially. PP accuracy is lower, which is not surprising considering that PPs tend to appear rather optionally and in contexts in which other, easier categories also frequently appear. On the other hand, embedded sentence recall is substantially higher, possibly because of more effective use of the top-level sentences which occur in the signature context - .</Paragraph>
      <Paragraph position="5"> The classes found, as might be expected, range from clearly identifiable to nonsense. Note that simply directly clustering all sequences into 12 categories produced almost entirely the latter, with clusters representing various distituent types. Figure 9 shows several of the 12 classes. Class 0 is the model's distituent class. Its most frequent members are a mix of obvious distituents (IN DT, DT JJ, IN DT, NN VBZ) and seemingly good sequences like NNP NNP. However, there are many sequences of 3 or more NNP tags in a row, and not all adjacent pairs can possibly be constituents at the same time.</Paragraph>
      <Paragraph position="6"> Class 1 is mainly common NP sequences, class 2 is proper NPs, class 3 is NPs which involve numbers, and class 6 is N sequences, which tend to be linguistically right but unmarked in the treebank. Class 4 is a mix of seemingly good NPs, often from positions like VBZ-NN where they were not constituents, and other sequences that share such contexts with otherwise good NP sequences. This is a danger of not jointly modeling yield and context, and of not modeling any kind of recursive structure. Class 5 is mainly composed of verb phrases and verb groups.</Paragraph>
      <Paragraph position="7"> No class corresponded neatly to PPs: perhaps because they have no signature contexts. The 2-class model is effective at identifying them only because they share contexts with a range of other constituent types (such as NPs and VPs).</Paragraph>
    </Section>
    <Section position="3" start_page="543210" end_page="543210" type="sub_section">
      <SectionTitle>
4.3 Induced Parts-of-Speech
</SectionTitle>
      <Paragraph position="0"> A reasonable criticism of the experiments presented so far, and some other earlier work, is that we assume treebank part-of-speech tags as input. This  supervised PCFGs do not perform nearly so well with their input delexicalized. We may be reducing data sparsity and making it easier to see a broad picture of the grammar, but we are also limiting how well we can possibly do. It is certainly worth exploring methods which supplement or replace tagged input with lexical input. However, we address here the more serious criticism: that our results stem from clues latent in the treebank tagging information which are conceptually posterior to knowledge of structure. For instance, some treebank tag distinctions, such as particle (RP) vs. preposition (IN) or predeterminer (PDT) vs. determiner (DT) or adjective (JJ), could be said to import into the tagset distinctions that can only be made syntactically.</Paragraph>
      <Paragraph position="1"> To show results from a complete grammar induction system, we also did experiments starting with a clustering of the words in the treebank. We used basically the baseline method of word type clustering in (Sch&amp;quot;utze, 1995) (which is close to the methods of (Finch, 1993)). For (all-lowercased) word types in the Penn treebank, a 1000 element vector was made by counting how often each co-occurred with each of the 500 most common words immediately to the left or right in Treebank text and additional 1994-96 WSJ newswire. These vectors were length-normalized, and then rank-reduced by an SVD, keeping the 50 largest singular vectors.</Paragraph>
      <Paragraph position="2"> The resulting vectors were clustered into 200 word classes by a weighted k-means algorithm, and then grammar induction operated over these classes. We do not believe that the quality of our tags matches that of the better methods of Sch&amp;quot;utze (1995), much less the recent results of Clark (2000). Nevertheless, using these tags as input still gave induced structure substantially above right-branching. Figure 8 shows  the performance with induced tags compared to correct tags. Overall F1 has dropped, but, interestingly, VP and S recall are higher. This seems to be due to a marked difference between the induced tags and the treebank tags: nouns are scattered among a disproportionally large number of induced tags, increasing the number of common NP sequences, but decreasing the frequency of each.</Paragraph>
    </Section>
    <Section position="4" start_page="543210" end_page="543210" type="sub_section">
      <SectionTitle>
4.4 Convergence and Stability
</SectionTitle>
      <Paragraph position="0"> Another issue with previous systems is their sensitivity to initial choices. The conditional model of Klein and Manning (2001b) had the drawback that the variance of final F1, and qualitative grammars found, was fairly high, depending on small differences in first-round random parses. The model presented here does not suffer from this: while it is clearly sensitive to the quality of the input tagging, it is robust with respect to smoothing parameters and data splits. Varying the smoothing counts a factor of ten in either direction did not change the overall F1 by more than 1%. Training on random subsets of the training data brought lower performance, but constantly lower over equal-size splits. Moreover, there are no first-round random decisions to be sensitive to; the soft EM procedure is deterministic.</Paragraph>
      <Paragraph position="1">  Figure 10 shows the overall F1 score and the data likelihood according to our model during convergence.9 Surprisingly, both are non-decreasing as the system iterates, indicating that data likelihood in this model corresponds well with parse accuracy.10 Figure 11 shows recall for various categories by iteration. NP recall exhibits the more typical pattern of a sharp rise followed by a slow fall, but the other categories, after some initial drops, all increase until convergence. These graphs stop at 40 iterations. The system actually converged in both likelihood and F1 by iteration 38, to within a tolerance of 10 10. The time to convergence varied according to smoothing amount, number of classes, and tags used, but the system almost always converged within 80 iterations, usually within 40.</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>