<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-1072">
  <Title>Annealing Structural Bias in Multilingual Weighted Grammar Induction*</Title>
  <Section position="5" start_page="569" end_page="570" type="metho">
    <SectionTitle>
3 Locality Bias among Trees
</SectionTitle>
    <Paragraph position="0"> Hidden-variable estimation algorithms-including EM--typically work by iteratively manipulating the model parameters Th to improve an objective function F(Th). EM explicitly alternates between the computation of a posterior distribution over hypotheses, pTh(y  |x) (where y is any tree with yield x), and computing a new parameter estimate Th.3 2A projective parser could achieve perfect accuracy on our English and Mandarin datasets, &gt; 99% on Bulgarian, Turkish, and Portuguese, and &gt; 98% on German.</Paragraph>
    <Paragraph position="1"> 3For weighted grammar-based models, the posterior does not need to be explicitly represented; instead expectations under pTh are used to compute updates to Th.</Paragraph>
    <Paragraph position="2">  with a locality bias at varying d. Each curve corresponds to a different language and shows performance of supervised model selection within a given d, across l and Th(0) values. (See Table 3 for performance of models selected across ds.) We decode with d = 0, though we found that keeping the training-time value of d would have had almost no effect. The EM baseline corresponds to d = 0.</Paragraph>
    <Paragraph position="3"> One way to bias a learner toward local explanations is to penalize longer attachments. This was done for supervised parsing in different ways by Collins (1997), Klein and Manning (2003), and McDonald et al. (2005), all of whom considered intervening material or coarse distance classes when predicting children in a tree. Eisner and Smith (2005) achieved speed and accuracy improvements by modeling distance directly in a ML-estimated (deficient) generative model.</Paragraph>
    <Paragraph position="4"> Here we use string distance to measure the length of a dependency link and consider the inclusion of a sum-of-lengths feature in the probabilistic model, for learning only. Keeping our original model, we will simply multiply into the probability of each tree another factor that penalizes long dependencies, giving:</Paragraph>
    <Paragraph position="6"> d = 0, we have the original model. As d - [?][?], the new model pprimeTh will favor parses with shorter dependencies. The dynamic programming algorithms remain the same as before, with the appropriate ed|i[?]j |factor multiplied in at each attachment between xi and xj. Note that when d = 0, pprimeTh [?] pTh.</Paragraph>
    <Paragraph position="7"> Experiment. We applied a locality bias to the same dependency model by setting d to different  with structural annealing on the distance weight d. Here we show performance with add-10 smoothing, the all-zero initializer, for three languages with three different initial values d0. Time progresses from left to right. Note that it is generally best to start at d0 lessmuch0; note also the importance of picking the right point on the curve to stop. See Table 3 for performance of models selected across smoothing, initialization, starting, and stopping choices, in all six languages. values in [[?]1,0.2] (see Eq. 2). The same initializers Th(0) and smoothing conditions were tested.</Paragraph>
    <Paragraph position="8"> Performance of supervised model selection among models trained at different d values is plotted in Fig. 1. When a model is selected across all conditions (3 initializers x 6 smoothing values x 7 ds) using annotated development data, performance is notably better than the EM baseline using the same selection procedure (see Table 3, second column).</Paragraph>
  </Section>
  <Section position="6" start_page="570" end_page="570" type="metho">
    <SectionTitle>
4 Structural Annealing
</SectionTitle>
    <Paragraph position="0"> The central idea of this paper is to gradually change (anneal) the bias d. Early in learning, local dependencies are emphasized by setting d lessmuch 0.</Paragraph>
    <Paragraph position="1"> Then d is iteratively increased and training repeated, using the last learned model to initialize.</Paragraph>
    <Paragraph position="2"> This idea bears a strong similarity to deterministic annealing (DA), a technique used in clustering and classification to smooth out objective functions that are piecewise constant (hence discontinuous) or bumpy (non-concave) (Rose, 1998; Ueda and Nakano, 1998). In unsupervised learning, DA iteratively re-estimates parameters like EM, but begins by requiring that the entropy of the posterior pTh(y  |x) be maximal, then gradually relaxes this entropy constraint. Since entropy is concave in Th, the initial task is easy (maximize a concave, continuous function). At each step the optimization task becomes more difficult, but the initializer is given by the previous step and, in practice, tends to be close to a good local maximum of the more difficult objective. By the last iteration the objective is the same as in EM, but the annealed search process has acted like a good initializer. This method was applied with some success to grammar induction models by Smith and Eisner (2004).</Paragraph>
    <Paragraph position="3"> In this work, instead of imposing constraints on the entropy of the model, we manipulate bias toward local hypotheses. As d increases, we penalize long dependencies less. We call this structural annealing, since we are varying the strength of a soft constraint (bias) on structural hypotheses. In structural annealing, the final objective would be the same as EM if our final d, df = 0, but we found that annealing farther (df &gt; 0) works much better.4 Experiment: Annealing d. We experimented with annealing schedules for d. We initialized at d0 [?] {[?]1,[?]0.4,[?]0.2}, and increased d by 0.1 (in the first case) or 0.05 (in the others) up to df = 3.</Paragraph>
    <Paragraph position="4"> Models were trained to convergence at each depoch. Model selection was applied over the same initialization and regularization conditions as before, d0, and also over the choice of df, with stopping allowed at any stage along the d trajectory.</Paragraph>
    <Paragraph position="5"> Trajectories for three languages with three different d0 values are plotted in Fig. 2. Generally speaking, d0 lessmuch 0 performs better. There is consistently an early increase in performance as d increases, but the stopping df matters tremendously.</Paragraph>
    <Paragraph position="6"> Selected annealed-d models surpass EM in all six languages; see the third column of Table 3. Note that structural annealing does not always outperform fixed-d training (English and Portuguese).</Paragraph>
    <Paragraph position="7"> This is because we only tested a few values of d0, since annealing requires longer runtime.</Paragraph>
  </Section>
  <Section position="7" start_page="570" end_page="572" type="metho">
    <SectionTitle>
5 Structural Bias via Segmentation
</SectionTitle>
    <Paragraph position="0"> A related way to focus on local structure early in learning is to broaden the set of hypotheses to include partial parse structures. If x = &lt;x1,x2,...,xn&gt; , the standard approach assumes that x corresponds to the vertices of a single dependency tree. Instead, we entertain every hypothesis in which x is a sequence of yields from separate, independently-generated trees. For example,  a bias toward longer attachments. A more apt description in the context of annealing is to say that during early stages the learner starts liking local attachments too much, and we need to exaggerate d to &amp;quot;coax&amp;quot; it to new hypotheses. See Fig. 2.  with structural annealing on the breakage weight b. Here we show performance with add-10 smoothing, the all-zero initializer, for three languages with three different initial values b0. Time progresses from left (large b) to right. See Table 3 for performance of models selected across smoothing, initialization, and stopping choices, in all six languages. yield of a second, and &lt;x6,...,xn&gt; is the yield of a third. One extreme hypothesis is that x is n singlenode trees. At the other end of the spectrum is the original set of hypotheses--full trees on x. Each has a nonzero probability.</Paragraph>
    <Paragraph position="1"> Segmented analyses are intermediate representations that may be helpful for a learner to use to formulate notions of probable local structure, without committing to full trees.5 We only allow unobserved breaks, never positing a hard segmentation of the training sentences. Over time, we increase the bias against broken structures, forcing the learner to commit most of its probability mass to full trees.</Paragraph>
    <Section position="1" start_page="571" end_page="571" type="sub_section">
      <SectionTitle>
5.1 Vine Parsing
</SectionTitle>
      <Paragraph position="0"> At first glance broadening the hypothesis space to entertain all 2n[?]1 possible segmentations may seem expensive. In fact the dynamic programming computation is almost the same as summing or maximizing over connected dependency trees. For the latter, we use an inside-outside algorithm that computes a score for every parse tree by computing the scores of items, or partial structures, through a bottom-up process. Smaller items are built first, then assembled using a set of rules defining how larger items can be built.6 Now note that any sequence of partial trees over x can be constructed by combining the same items into trees. The only difference is that we  used in the experiments.</Paragraph>
      <Paragraph position="1"> are willing to consider unassembled sequences of these partial trees as hypotheses, in addition to the fully connected trees. One way to accomplish this in terms of yright(0) is to say that the root, $, is allowed to have multiple children, instead of just one. Here, these children are independent of each other (e.g., generated by a uni-gram Markov model). In supervised dependency parsing, Eisner and Smith (2005) showed that imposing a hard constraint on the whole structure-specifically that each non-$ dependency arc cross fewer than k words--can give guaranteed O(nk2) runtime with little to no loss in accuracy (for simple models). This constraint could lead to highly contrived parse trees, or none at all, for some sentences--both are avoided by the allowance of segmentation into a sequence of trees (each attached to $). The construction of the &amp;quot;vine&amp;quot; (sequence of $'s children) takes only O(n) time once the chart has been assembled.</Paragraph>
      <Paragraph position="2"> Our broadened hypothesis model is a probabilistic vine grammar with a unigram model over $'s children. We allow (but do not require) segmentation of sentences, where each independent child of $ is the root of one of the segments. We do not impose any constraints on dependency length.</Paragraph>
    </Section>
    <Section position="2" start_page="571" end_page="572" type="sub_section">
      <SectionTitle>
5.2 Modeling Segmentation
</SectionTitle>
      <Paragraph position="0"> Now the total probability of an n-length sentence x, marginalizing over its hidden structures, sums up not only over trees, but over segmentations of x. For completeness, we must include a probability model over the number of trees generated, which could be anywhere from 1 to n. The model over the number T of trees given a sentence of length n will take the following log-linear form:</Paragraph>
      <Paragraph position="2"> where b [?] R is the sole parameter. When b = 0, every value of T is equally likely. For b lessmuch 0, the model prefers larger structures with few breaks.</Paragraph>
      <Paragraph position="3"> At the limit (b - [?][?]), we achieve the standard learning setting, where the model must explain x using a single tree. We start however at b greatermuch 0, where the model prefers smaller trees with more breaks, in the limit preferring each word in x to be its own tree. We could describe &amp;quot;brokenness&amp;quot; as a feature in the model whose weight, b, is chosen extrinsically (and time-dependently), rather than empirically--just as was done with d.</Paragraph>
      <Paragraph position="4">  model selection among values of s2 and Th(0) worst unsup. sup. oracle  borhoods and with different levels of regularization. Bold-face marks scores better than EM-trained models selected the same way (Table 1). The score is the F1 measure on non-$ attachments.</Paragraph>
      <Paragraph position="5"> Annealing b resembles the popular bootstrapping technique (Yarowsky, 1995), which starts out aiming for high precision, and gradually improves coverage over time. With strong bias (b greatermuch 0), we seek a model that maintains high dependency precision on (non-$) attachments by attaching most tags to $. Over time, as this is iteratively weakened (b - [?][?]), we hope to improve coverage (dependency recall). Bootstrapping was applied to syntax learning by Steedman et al. (2003). Our approach differs in being able to remain partly agnostic about each tag's true parent (e.g., by giving 50% probability to attaching to $), whereas Steedman et al. make a hard decision to retrain on a whole sentence fully or leave it out fully. In earlier work, Brill and Marcus (1992) adopted a &amp;quot;local first&amp;quot; iterative merge strategy for discovering phrase structure.</Paragraph>
      <Paragraph position="6"> Experiment: Annealing b. We experimented with different annealing schedules for b. The initial value of b, b0, was one of {[?]12,0, 12}. After EM training, b was diminished by 110; this was repeated down to a value of bf = [?]3. Performance after training at each b value is shown in Fig. 3.7 We see that, typically, there is a sharp increase in performance somewhere during training, which typically lessens as b - [?][?]. Starting b too high can also damage performance. This method, then, 7Performance measures are given using a full parser that finds the single best parse of the sentence with the learned parsing parameters. Had we decoded with a vine parser, we would see a precisionarrowsoutheast, recallarrownortheast curve as b decreased. is not robust to the choice of l,b0, or bf, nor does it always do as well as annealing d, although considerable gains are possible; see the fifth column of Table 3.</Paragraph>
      <Paragraph position="7"> By testing models trained with a fixed value of b (for values in [[?]1,1]), we ascertained that the performance improvement is due largely to annealing, not just the injection of segmentation bias (fourth vs. fifth column of Table 3).8</Paragraph>
    </Section>
  </Section>
  <Section position="8" start_page="572" end_page="574" type="metho">
    <SectionTitle>
6 Comparison and Combination with
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="572" end_page="574" type="sub_section">
      <SectionTitle>
Contrastive Estimation
</SectionTitle>
      <Paragraph position="0"> Contrastive estimation (CE) was recently introduced (Smith and Eisner, 2005a) as a class of alternatives to the likelihood objective function locally maximized by EM. CE was found to outperform EM on the task of focus in this paper, when applied to English data (Smith and Eisner, 2005b).</Paragraph>
      <Paragraph position="1"> Here we review the method briefly, show how it performs across languages, and demonstrate that it can be combined effectively with structural bias.</Paragraph>
      <Paragraph position="2"> Contrastive training defines for each example xi a class of presumably poor, but similar, instances called the &amp;quot;neighborhood,&amp;quot; N(xi), and seeks to</Paragraph>
      <Paragraph position="4"> At this point we switch to a log-linear (rather than stochastic) parameterization of the same weighted grammar, for ease of numerical optimization. All this means is that Th (specifically, pstop and pchild in Eq. 1) is now a set of nonnegative weights rather than probabilities.</Paragraph>
      <Paragraph position="5"> Neighborhoods that can be expressed as finite-state lattices built from xi were shown to give significant improvements in dependency parser quality over EM. Performance of CE using two of those neighborhoods on the current model and datasets is shown in Table 2.9 0-mean diagonal Gaussian smoothing was applied, with different variances, and model selection was applied over smoothing conditions and the same initializers as  show only the latter two, which tend to perform best.  selection was applied in all cases, including EM (see the appendix). Boldface marks the best performance overall and trials that this performance did not significantly surpass under a sign test (i.e., p negationslash&lt; 0.05). The score is the F1 measure on non-$ attachments. The fixed d + CE condition was tested only for languages where CE improved over EM. before. Four of the languages have at least one effective CE condition, supporting our previous English results (Smith and Eisner, 2005b), but CE was harmful for Bulgarian and Mandarin. Perhaps better neighborhoods exist for these languages, or there is some ideal neighborhood that would perform well for all languages.</Paragraph>
      <Paragraph position="6"> Our approach of allowing broken trees (SS5) is a natural extension of the CE framework. Contrastive estimation views learning as a process of moving posterior probability mass from (implicit) negative examples to (explicit) positive examples.</Paragraph>
      <Paragraph position="7"> The positive evidence, as in MLE, is taken to be the observed data. As originally proposed, CE allowed a redefinition of the implicit negative evidence from &amp;quot;all other sentences&amp;quot; (as in MLE) to &amp;quot;sentences like xi, but perturbed.&amp;quot; Allowing segmentation of the training sentences redefines the positive and negative evidence. Rather than moving probability mass only to full analyses of the training example xi, we also allow probability mass to go to partial analyses of xi.</Paragraph>
      <Paragraph position="8"> By injecting a bias (d negationslash= 0 or b &gt; [?][?]) among tree hypotheses, however, we have gone beyond the CE framework. We have added features to the tree model (dependency length-sum, number of breaks), whose weights we extrinsically manipulate over time to impose locality bias CN and improve search on CN. Another idea, not explored here, is to change the contents of the neighborhood N over time.</Paragraph>
      <Paragraph position="9"> Experiment: Locality Bias within CE. We combined CE with a fixed-d locality bias for neighborhoods that were successful in the earlier CE experiment, namely DELETEORTRANSPOSE1 for German, English, Turkish, and Portuguese.</Paragraph>
      <Paragraph position="10"> Our results, shown in the seventh column of Table 3, show that, in all cases except Turkish, the combination improves over either technique on its own. We leave exploration of structural annealing with CE to future work.</Paragraph>
      <Paragraph position="11"> Experiment: Segmentation Bias within CE.</Paragraph>
      <Paragraph position="12"> For (language, N) pairs where CE was effective, we trained models using CE with a fixedb segmentation model. Across conditions (b [?] [[?]1,1]), these models performed very badly, hypothesizing extremely local parse trees: typically over 90% of dependencies were length 1 and pointed in the same direction, compared with the 60-70% length-1 rate seen in gold standards. To understand why, consider that the CE goal is to maximize the score of a sentence and all its segmentations while minimizing the scores of neighborhood sentences and their segmentations. An n-gram model can accomplish this, since the same n-grams are present in all segmentations of x, and (some) different n-grams appear in N(x) (for LENGTH and DELETEORTRANSPOSE1). A bigram-like model that favors monotone branching, then, is not a bad choice for a CE learner that must account for segmentations of x and N(x).</Paragraph>
      <Paragraph position="13"> Why doesn't CE without segmentation resort to n-gram-like models? Inspection of models trained using the standard CE method (no segmentation) with transposition-based neighborhoods TRANS-POSE1 and DELETEORTRANSPOSE1 did have high rates of length-1 dependencies, while the poorly-performing DELETE1 models found low length-1 rates. This suggests that a bias toward locality (&amp;quot;n-gram-ness&amp;quot;) is built into the former neighborhoods, and may partly explain why CE works when it does. We achieved a similar locality bias in the likelihood framework when we broadened the hypothesis space, but doing so under CE over-focuses the model on local structures.</Paragraph>
    </Section>
  </Section>
  <Section position="9" start_page="574" end_page="574" type="metho">
    <SectionTitle>
7 Error Analysis
</SectionTitle>
    <Paragraph position="0"> We compared errors made by the selected EM condition with the best overall condition, for each language. We found that the number of corrected attachments always outnumbered the number of new errors by a factor of two or more.</Paragraph>
    <Paragraph position="1"> Further, the new models are not getting better by merely reversing the direction of links made by EM; undirected accuracy also improved significantly under a sign test (p &lt; 10[?]6), across all six languages. While the most common corrections were to nouns, these account for only 25-41% of corrections, indicating that corrections are not &amp;quot;all of the same kind.&amp;quot; Finally, since more than half of corrections in every language involved reattachment to a noun or a verb (content word), we believe the improved models to be getting closer than EM to the deeper semantic relations between words that, ideally, syntactic models should uncover.</Paragraph>
  </Section>
  <Section position="10" start_page="574" end_page="574" type="metho">
    <SectionTitle>
8 Future Work
</SectionTitle>
    <Paragraph position="0"> One weakness of all recent weighted grammar induction work--including Klein and Manning (2004), Smith and Eisner (2005b), and the present paper--is a sensitivity to hyperparameters, including smoothing values, choice of N (for CE), and annealing schedules--not to mention initialization. This is quite observable in the results we have presented. An obstacle for unsupervised learning in general is the need for automatic, efficient methods for model selection. For annealing, inspiration may be drawn from continuation methods; see, e.g., Elidan and Friedman (2005). Ideally one would like to select values simultaneously for many hyperparameters, perhaps using a small annotated corpus (as done here), extrinsic figures of merit on successful learning trajectories, or plausibility criteria (Eisner and Karakos, 2005).</Paragraph>
    <Paragraph position="1"> Grammar induction serves as a tidy example for structural annealing. In future work, we envision that other kinds of structural bias and annealing will be useful in other difficult learning problems where hidden structure is required, including machine translation, where the structure can consist of word correspondences or phrasal or recursive syntax with correspondences. The technique bears some similarity to the estimation methods described by Brown et al. (1993), which started by estimating simple models, using each model to seed the next.</Paragraph>
  </Section>
class="xml-element"></Paper>