File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/99/p99-1065_metho.xml
Size: 20,316 bytes
Last Modified: 2025-10-06 14:15:25
<?xml version="1.0" standalone="yes"?> <Paper uid="P99-1065"> <Title>A Statistical Parser for Czech*</Title> <Section position="4" start_page="505" end_page="506" type="metho"> <SectionTitle> 3 A Sketch of the Parsing Model </SectionTitle> <Paragraph position="0"> The parsing model builds on Model 1 of (Collins 97); this section briefly describes the model. The parser uses a lexicalized grammar -- each non-terminal has an associated head-word and part-of-speech (POS). We write non-terminals as X(x): X is the non-terminal label, and x is a (w, t) pair, where w is the associated head-word and t is the POS tag.</Paragraph> <Paragraph position="1"> See figure 1 for an example lexicalized tree, and a list of the lexicalized rules that it contains.</Paragraph> <Paragraph position="2"> Each rule has the form P(h) → L_n(l_n) ... L_1(l_1) H(h) R_1(r_1) ... R_m(r_m). H is the head-child of the phrase, which inherits the head-word h from its parent P. L_1...L_n and R_1...R_m are left and right modifiers of H. Either n or m may be zero, and n = m = 0 for unary rules.</Paragraph> <Paragraph position="6"> The model can be considered to be a variant of Probabilistic Context-Free Grammar (PCFG). In PCFGs each rule α → β in the CFG underlying the PCFG has an associated probability P(β | α).</Paragraph> <Paragraph position="7"> In (Collins 97), P(β | α) is defined as a product of terms, by assuming that the right-hand side of the rule is generated in three steps: 1. Generate the head constituent label of the phrase, with probability P_H(H | P, h).</Paragraph> <Paragraph position="8"> 2. Generate modifiers to the left of the head with probability ∏_{i=1..n+1} P_L(L_i(l_i) | P, h, H), where L_{n+1}(l_{n+1}) = STOP. The STOP symbol is added to the vocabulary of non-terminals, and the model stops generating left modifiers when it is generated.</Paragraph> <Paragraph position="9"> 3. Generate modifiers to the right of the head with probability ∏_{i=1..m+1} P_R(R_i(r_i) | P, h, H), where R_{m+1}(r_{m+1}) is defined as STOP.</Paragraph> <Paragraph position="10"> For example, the probability of the S(bought, VBD) rule in figure 1 is the product of one P_H term for its head child, a P_L term for each left modifier plus the left STOP symbol, and a P_R term for each right modifier plus the right STOP symbol. Other rules in the tree contribute similar sets of probabilities. The probability for the entire tree is calculated as the product of all these terms.</Paragraph> <Paragraph position="13"> (Collins 97) describes a series of refinements to this basic model: the addition of &quot;distance&quot; (a conditioning feature indicating whether or not a modifier is adjacent to the head); the addition of subcategorization parameters (Model 2), and parameters that model wh-movement (Model 3); and estimation techniques that smooth various levels of back-off (in particular using POS tags as word-classes, allowing the model to learn generalizations about POS classes of words). Search for the highest-probability tree for a sentence is carried out using a CKY-style parsing algorithm.</Paragraph>
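To make the three-step generative process above concrete, the following is a minimal sketch of how the probability of a single rule's right-hand side would be assembled under Model 1. It is an illustration rather than the authors' implementation: the probability tables p_h, p_l, and p_r are assumed to be pre-estimated lookup tables, and all names are hypothetical.

    # Minimal sketch of the Model 1 rule probability (illustrative only).
    # p_h, p_l, p_r are assumed to be pre-estimated conditional probability
    # tables, e.g. dicts mapping conditioning tuples to floats.

    STOP = ("STOP", None)  # the STOP symbol carries no head-word/tag

    def rule_probability(parent, head_word, head_label,
                         left_mods, right_mods, p_h, p_l, p_r):
        """Probability of P(h) -> L_n..L_1 H(h) R_1..R_m under Model 1.

        left_mods and right_mods are lists of (label, (word, tag)) pairs,
        ordered outward from the head.  Modifiers are generated independently
        of one another, conditioned only on P, h and H.
        """
        # Step 1: generate the head constituent label.
        prob = p_h[(head_label, parent, head_word)]
        # Step 2: generate left modifiers, terminated by STOP.
        for mod in left_mods + [STOP]:
            prob *= p_l[(mod, parent, head_word, head_label)]
        # Step 3: generate right modifiers, terminated by STOP.
        for mod in right_mods + [STOP]:
            prob *= p_r[(mod, parent, head_word, head_label)]
        return prob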
</Section> <Section position="5" start_page="506" end_page="511" type="metho"> <SectionTitle> 4 Parsing the Czech PDT </SectionTitle> <Paragraph position="0"> Many statistical parsing methods developed for English use lexicalized trees as a representation (e.g., (Jelinek et al. 94; Magerman 95; Ratnaparkhi 97; Collins 96, 97)), and several of them emphasize the use of parameters associated with dependencies between pairs of words. The Czech PDT contains dependency annotations, but no tree structures. For parsing Czech we therefore considered a strategy of converting the dependency structures in the training data to lexicalized trees, then running the parsing algorithms originally developed for English. A key point is that the mapping from lexicalized trees to dependency structures is many-to-one. As an example, figure 2 shows an input dependency structure, and three different lexicalized trees with this dependency structure.</Paragraph> <Paragraph position="3"> The choice of tree structure is crucial in determining the independence assumptions that the parsing model makes. There are at least 3 degrees of freedom when deciding on the tree structures: 1. How &quot;flat&quot; should the trees be? The trees could be as flat as possible (as in figure 2(a)), or binary branching (as in trees (b) or (c)), or somewhere between these two extremes. 2. What non-terminal labels should the internal nodes have? 3. What set of POS tags should be used?</Paragraph> <Section position="1" start_page="506" end_page="507" type="sub_section"> <SectionTitle> 4.1 A Baseline Approach </SectionTitle> <Paragraph position="0"> To provide a baseline result we implemented what is probably the simplest possible conversion scheme:</Paragraph> <Paragraph position="2"> The trees were as flat as possible, as in figure 2(a).</Paragraph> <Paragraph position="3"> The non-terminal labels were &quot;XP&quot;, where X is the first letter of the POS tag of the head-word for the constituent. See figure 3 for an example.</Paragraph> <Paragraph position="4"> The part-of-speech tags were the major category for each word (the first letter of the Czech POS set, which corresponds to broad category distinctions such as verb, noun, etc.).</Paragraph> <Paragraph position="5"> The baseline approach gave a result of 71.9% accuracy on the development test set.</Paragraph> <Paragraph position="6"> [Figure 2: an example of the conversion. Input: a sentence with part-of-speech tags, I/N saw/V the/D man/N (N = noun, V = verb, D = determiner), and dependencies (word ⇒ parent): (I ⇒ saw), (saw ⇒ START), (the ⇒ man), (man ⇒ saw). Output: a lexicalized tree.]</Paragraph> <Paragraph position="10"> [Figure 3: the baseline approach for non-terminal labels. Each label is XP, where X is the POS tag for the head-word of the constituent.]</Paragraph>
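A minimal sketch of this baseline conversion, under the assumption that the dependency annotation is given as a parent index for each word; the function, the data layout, and the tag handling are illustrative rather than taken from the original implementation.

    # Sketch of the baseline dependency-to-tree conversion (section 4.1).
    # Builds the flattest possible lexicalized tree: each head word gets one
    # constituent spanning all of its dependents, labelled XP where X is the
    # first letter of the head's POS tag.  All names are illustrative.

    def flat_tree(words, tags, parents, head):
        """words/tags are lists indexed by position, parents[i] is the index
        of word i's parent (None for the root), head is the word to expand."""
        children = [i for i, p in enumerate(parents) if p == head]
        if not children:                      # a leaf: just the tagged word
            return (tags[head], words[head])
        label = tags[head][0] + "P"           # e.g. a head tagged V gives "VP"
        subtrees = []
        for i in sorted(children + [head]):   # keep surface word order
            if i == head:
                subtrees.append((tags[head], words[head]))
            else:
                subtrees.append(flat_tree(words, tags, parents, i))
        return (label, (words[head], tags[head]), subtrees)

    # The example from figure 2: "I saw the man"
    words = ["I", "saw", "the", "man"]
    tags = ["N", "V", "D", "N"]
    parents = [1, None, 3, 1]   # I => saw, saw => START, the => man, man => saw
    print(flat_tree(words, tags, parents, parents.index(None)))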
<Paragraph position="11"> 4.2 Modifications to the Baseline Trees</Paragraph> <Paragraph position="12"> While the baseline approach is reasonably successful, there are some linguistic phenomena that lead to clear problems. This section describes some tree transformations that are linguistically motivated and lead to improvements in parsing accuracy.</Paragraph> <Paragraph position="13"> In the PDT the verb is taken to be the head of both sentences and relative clauses. Figure 4 illustrates how the baseline transformation method can lead to parsing errors in relative clause cases. Figure 4(c) shows the solution to the problem: the label of the relative clause is changed to SBAR, and an additional VP level is added to the right of the relative pronoun. Similar transformations were applied for relative clauses involving Wh-PPs (e.g., &quot;the man to whom I gave a book&quot;), Wh-NPs (e.g., &quot;the man whose book I read&quot;) and Wh-Adverbials (e.g., &quot;the place where I live&quot;).</Paragraph> <Paragraph position="14"> The PDT takes the conjunction to be the head of coordination structures (for example, and would be the head of the NP dogs and cats). In these cases the baseline approach gives tree structures such as that in figure 5(a). The non-terminal label for the phrase is JP (because the head of the phrase, the conjunction and, is tagged as J). This choice of non-terminal is problematic for two reasons: (1) the JP label is assigned to all coordinated phrases, for example hiding the fact that the constituent in figure 5(a) is an NP; (2) the model assumes that left and right modifiers are generated independently of each other, and as it stands will give unreasonably high probability to two unlike phrases being coordinated. To fix these problems, the non-terminal label in coordination cases was altered to be the same as that of the second conjunct (the phrase directly to the right of the head of the phrase). See figure 5. A similar transformation was made for cases where a comma was the head of a phrase.</Paragraph> <Paragraph position="15"> Figure 6 shows an additional change concerning commas. This change increases the sensitivity of the model to punctuation.</Paragraph> </Section> <Section position="2" start_page="507" end_page="509" type="sub_section"> <SectionTitle> 4.3 Model Alterations </SectionTitle> <Paragraph position="0"> This section describes some modifications to the parameterization of the model.</Paragraph> <Paragraph position="2"> [Figure 5: coordination structures. The baseline approach (a) labels the phrase as a JP; the refinement (b) takes the second conjunct's label as the non-terminal for the whole phrase.]</Paragraph> <Paragraph position="4"> [Figure 4: (a) The baseline approach does not distinguish main clauses from relative clauses: both have a verb as the head, so both are labeled VP. (b) A typical parsing error due to relative and main clauses not being distinguished (note that two main clauses can be coordinated by a comma, as in John likes Mary, Mary likes Tim). (c) The solution to the problem: a modification to relative clause structures in training data.]</Paragraph> <Paragraph position="5"> The model of (Collins 97) had conditioning variables that allowed the model to learn a preference for dependencies which do not cross verbs. From the results in table 3, adding this condition improved accuracy by about 0.9% on the development set.</Paragraph> <Paragraph position="6"> The parser of (Collins 96) used punctuation as an indication of phrasal boundaries. It was found that if a constituent Z → ⟨.. X Y ..⟩ has two children X and Y separated by a punctuation mark, then Y is generally followed by a punctuation mark or the end-of-sentence marker. The parsers of (Collins 96, 97) encoded this as a hard constraint. In the Czech parser we instead added a cost of -2.5 (log probability) to structures that violated this constraint.</Paragraph> <Paragraph position="7"> The model of section 3 made the assumption that modifiers are generated independently of each other. This section describes a bigram model, where the context is increased to include the previously generated modifier ((Eisner 96) also describes the use of bigram statistics). The right-hand side of a rule is now assumed to be generated in the following three-step process: 1. Generate the head label, with probability P_H(H | P, h). 2. Generate left modifiers with probability ∏_{i=1..n+1} P_L(L_i(l_i) | L_{i-1}, P, h, H), where L_0 is defined as a special NULL symbol. Thus the previous modifier, L_{i-1}, is added to the conditioning context (in the previous model the left modifiers had probability ∏_{i=1..n+1} P_L(L_i(l_i) | P, h, H)). 3. Generate right modifiers with probability ∏_{i=1..m+1} P_R(R_i(r_i) | R_{i-1}, P, h, H), where R_0 is likewise defined as NULL and R_{m+1}(r_{m+1}) = STOP.</Paragraph>
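As an illustration of how the left-modifier term changes under this bigram model relative to the Model 1 sketch given earlier, here is a minimal sketch; the probability table p_l_bigram and all names are again hypothetical.

    # Sketch of the bigram (previous-modifier) model: each left modifier is
    # conditioned on the previously generated modifier as well as on P, h
    # and H.  p_l_bigram is a pre-estimated table; names are illustrative.

    STOP, NULL = ("STOP", None), ("NULL", None)

    def left_modifier_probability(parent, head_word, head_label,
                                  left_mods, p_l_bigram):
        """Probability of the left-modifier sequence L_1..L_n plus STOP,
        with each L_i conditioned on L_{i-1} (and L_0 = NULL)."""
        prob = 1.0
        previous = NULL
        for mod in left_mods + [STOP]:
            prob *= p_l_bigram[(mod, previous, parent, head_word, head_label)]
            previous = mod
        return prob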
<Paragraph position="11"> Table 1: the role of each character position in the 13-character Czech POS tag. 1. main part of speech; 2. detailed part of speech; 3. gender; 4. number; 5. case; 6. possessor's gender; 7. possessor's number; 8. person; 9. tense; 10. degree of comparison; 11. negativeness; 12. voice; 13. variant/register.</Paragraph> </Section> <Section position="3" start_page="509" end_page="511" type="sub_section"> <SectionTitle> 4.4 Alternative Part-of-Speech Tagsets </SectionTitle> <Paragraph position="0"> Part-of-speech (POS) tags serve an important role in statistical parsing by providing the model with a level of generalization as to how classes of words tend to behave, what roles they play in sentences, and what other classes they tend to combine with.</Paragraph> <Paragraph position="1"> Statistical parsers of English typically make use of the roughly 50 POS tags used in the Penn Treebank corpus, but the Czech PDT corpus provides a much richer set of POS tags, with over 3000 possible tags defined by the tagging system and over 1000 tags actually found in the corpus. Using that large a tagset with a training corpus of only 19,000 sentences would lead to serious sparse-data problems.</Paragraph> <Paragraph position="2"> It is also clear that some of the distinctions being made by the tags are more important than others for parsing. We therefore explored different ways of extracting smaller but still maximally informative POS tagsets.</Paragraph> <Paragraph position="3"> 4.4.1 Description of the Czech Tagset</Paragraph> <Paragraph position="4"> The POS tags in the Czech PDT corpus (Hajič and Hladká, 1997) are encoded in 13-character strings. Table 1 shows the role of each character. For example, the tag NNMP1-----A-- would be used for a word that had &quot;noun&quot; as both its main and detailed part of speech, that was masculine, plural, nominative (case 1), and whose negativeness value was &quot;affirmative&quot;.</Paragraph> <Paragraph position="5"> Within the corpus, each word was annotated with all of the POS tags that would be possible given its spelling, using the output of a morphological analysis program, and also with the single one of those tags that a statistical POS tagging program had predicted to be the correct tag (Hajič and Hladká, 1998). Table 2 shows a phrase from the corpus, with the alternative possible tags and the machine-selected tag for each word. In the training portion of the corpus, the correct tag as judged by human annotators was also provided.</Paragraph> <Paragraph position="6"> [Table 2: a phrase from the corpus (&quot;... of the Parliament approved&quot;) with the alternative possible tags and the machine-selected tag for each word.]</Paragraph> <Paragraph position="7"> In the baseline approach, the first letter, or &quot;main part of speech&quot;, of the full POS strings was used as the tag. This resulted in a tagset with 13 possible values.</Paragraph> <Paragraph position="8"> A number of alternative, richer tagsets were explored, using various combinations of character positions from the tag string. The most successful alternative was a two-letter tag whose first letter was always the main POS, and whose second letter was the case field if the main POS was one that displays case, while otherwise the second letter was the detailed POS. (The detailed POS was used for the main POS values D, J, V, and X; the case field was used for the other possible main POS values.)</Paragraph>
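A minimal sketch of this two-letter reduction, assuming each full tag is a 13-character string laid out as in Table 1 (position 1 = main POS, position 2 = detailed POS, position 5 = case); the function name and the exact string indexing are illustrative.

    # Sketch of the "two-letter" tagset reduction (section 4.4): keep the
    # main POS, plus either the case field or the detailed POS depending on
    # the main POS.  Tag layout follows Table 1; indexing is illustrative.

    DETAILED_POS_CATEGORIES = {"D", "J", "V", "X"}   # use detailed POS here

    def two_letter_tag(full_tag):
        """Reduce a 13-character PDT tag to a two-letter tag."""
        main_pos = full_tag[0]        # position 1: main part of speech
        detailed_pos = full_tag[1]    # position 2: detailed part of speech
        case = full_tag[4]            # position 5: case
        if main_pos in DETAILED_POS_CATEGORIES:
            return main_pos + detailed_pos
        return main_pos + case

    # e.g. the noun tag from the text, nominative case -> "N1"
    print(two_letter_tag("NNMP1-----A--"))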
<Paragraph position="9"> This two-letter scheme resulted in 58 tags, and provided about a 1.1% parsing improvement over the baseline on the development set.</Paragraph> <Paragraph position="10"> Even richer tagsets that also included the person, gender, and number values were tested without yielding any further improvement, presumably because the damage from sparse data outweighed the value of the additional information present.</Paragraph> <Paragraph position="11"> An entirely different approach, rather than searching by hand for effective tagsets, would be to use clustering to derive them automatically. We explored two different methods, bottom-up and top-down, for automatically deriving POS tagsets based on counts of governing and dependent tags extracted from the parse trees that the parser constructs from the training data. Neither tested approach resulted in any improvement in parsing performance compared to the hand-designed &quot;two-letter&quot; tagset, but the implementations of each were still only preliminary, and a clustered tagset more adroitly derived might do better.</Paragraph> <Paragraph position="12"> One final issue regarding POS tags was how to deal with the ambiguity between possible tags, both in training and test. In the training data, there was a choice between using the output of the POS tagger or the human annotator's judgment as to the correct tag. In test data, the correct answer was not available, but the POS tagger output could be used if desired. This turns out to matter only for unknown words, as the parser is designed to do its own tagging for words that it has seen in training at least 5 times, ignoring any tag supplied with the input. For &quot;unknown&quot; words (seen less than 5 times), the parser can be set either to believe the tag supplied by the POS tagger or to allow equally any of the dictionary-derived possible tags for the word, effectively allowing the parse context to make the choice. (Note that the rich inflectional morphology of Czech leads to a higher rate of &quot;unknown&quot; word forms than would be true in English; in one test, 29.5% of the words in test data were &quot;unknown&quot;.) Our tests indicated that if unknown words are treated by believing the POS tagger's suggestion, then scores are better if the parser is also trained on the POS tagger's suggestions, rather than on the human annotator's correct tags. Training on the correct tags results in 1% worse performance. Even though the POS tagger's tags are less accurate, they are more like what the parser will be using in the test data, and that turns out to be the key point. On the other hand, if the parser allows all possible dictionary tags for unknown words in test material, then it pays to train on the actual correct tags.</Paragraph> <Paragraph position="13"> In initial tests, this combination of training on the correct tags and allowing all dictionary tags for unknown test words somewhat outperformed the alternative of using the POS tagger's predictions both for training and for unknown test words. When tested with the final version of the parser on the full development set, those two strategies performed at the same level.</Paragraph>
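A minimal sketch of the unknown-word handling described above, assuming the training word counts, the tagger's single suggestion, and the dictionary-derived tag set are available as inputs; the function and data layout are hypothetical.

    # Sketch of the tag-assignment policy for test words: the parser tags
    # known words itself, and for "unknown" words (seen fewer than 5 times
    # in training) either trusts the POS tagger or allows all dictionary
    # tags.  All names and data formats are illustrative.

    KNOWN_THRESHOLD = 5

    def candidate_tags(word, train_word_counts, train_word_tags,
                       tagger_tag, dictionary_tags, trust_tagger=True):
        """Return the set of tags the parser may consider for a test word."""
        if train_word_counts.get(word, 0) >= KNOWN_THRESHOLD:
            # Known word: the parser does its own tagging from training
            # data, ignoring any tag supplied with the input.
            return train_word_tags[word]
        if trust_tagger:
            return {tagger_tag}          # believe the POS tagger's suggestion
        return set(dictionary_tags)      # let the parse context decide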
<Paragraph position="14"> 5 Results</Paragraph> <Paragraph position="15"> We ran three versions of the parser over the final test set: the baseline version, the full model with all additions, and the full model with everything but the bigram model. The baseline system on the final test set achieved 72.3% accuracy. The final system achieved 80.0% accuracy: a 7.7% absolute improvement and a 27.8% relative improvement (i.e., a 27.8% reduction in the error rate).</Paragraph> <Paragraph position="16"> The development set showed very similar results: a baseline accuracy of 71.9% and a final accuracy of 79.3%. Table 3 shows the relative improvement of each component of the model. Table 4 shows the results on the development set by genre. It is interesting to see that the performance on newswire text is over 2% better than the averaged performance.</Paragraph> <Paragraph position="17"> The Science section of the development set is considerably harder to parse (presumably because of longer sentences and more open vocabulary). Note that although the Science section only contributes 25% of the sentences in test data, it contains much longer sentences than the other sections and therefore accounts for 38% of the dependencies in test data.</Paragraph> <Paragraph position="18"> Some sentences cannot be parsed because the search space becomes too large. The baseline system missed 5 sentences; the full system missed 21 sentences; the full system minus bigrams missed 2 sentences. To score the full system we took the output from the full system minus bigrams when the full system produced no output (to prevent a heavy penalty due to the 21 missed sentences). The remaining 2 unparsed sentences (5 in the baseline case) had all dependencies attached to the root.</Paragraph> <Paragraph position="19"> We were surprised to see the slight drop in accuracy in table 3 for the punctuation tree modification. Earlier tests on a different development set, with less training data and fewer other model alterations, had shown a good improvement for this feature.</Paragraph> </Section> <Section position="4" start_page="511" end_page="511" type="sub_section"> <SectionTitle> 5.1 Comparison to Previous Results </SectionTitle> <Paragraph position="0"> The main piece of previous work on parsing Czech that we are aware of is described in (Kuboň 99).</Paragraph> <Paragraph position="1"> This is a rule-based system built on a manually designed set of rules. The system's accuracy is not evaluated on a test corpus, so it is difficult to compare our results to theirs. We can, however, make some comparison of the results in this paper to those on parsing English. (Collins 99) describes results of 91% accuracy in recovering dependencies on section 0 of the Penn Wall Street Journal Treebank, using Model 2 of (Collins 97). This task is almost certainly easier for a number of reasons: there was more training data (40,000 sentences as opposed to 19,000); and the Wall Street Journal may be an easier domain than the PDT, as a reasonable proportion of its sentences come from a sub-domain, financial news, which is relatively restricted. Unlike Model 1, Model 2 of the parser takes subcategorization information into account, which gives some improvement on English and might well also improve results on Czech. Given these differences, it is difficult to make a direct comparison, but the overall conclusion seems to be that the Czech accuracy is approaching results on English, although it is still somewhat behind.</Paragraph> </Section> </Section> </Paper>