<?xml version="1.0" standalone="yes"?> <Paper uid="P06-1087"> <Title>Noun Phrase Chunking in Hebrew Influence of Lexical and Morphological Features</Title> <Section position="5" start_page="689" end_page="691" type="metho"> <SectionTitle> 3 Hebrew Simple NP Chunks </SectionTitle> <Paragraph position="0"> The standard definition of English base-NPs is any noun phrase that does not contain another noun phrase, with possessives treated as a special case: the possessive marker is viewed as the first word of a new base-NP (Ramshaw and Marcus, 1995). To evaluate the applicability of this definition to Hebrew, we tested it on the Hebrew TreeBank (Sima'an et al., 2001) published by the Hebrew Knowledge Center. We extracted all base-NPs from this TreeBank, which is similar in genre and content to the English one. This results in extremely simple chunks.</Paragraph> <Paragraph position="1"> Table 1 shows the average number of words in a base-NP for English and Hebrew. The Hebrew chunks are basically one-word groups around nouns, which is not useful for any practical purpose, and so we propose a new definition for Hebrew NP chunks, which allows for some nestedness. We call our chunks Simple NP chunks.</Paragraph> <Section position="1" start_page="689" end_page="690" type="sub_section"> <SectionTitle> 3.1 Syntax of NPs in Hebrew </SectionTitle> <Paragraph position="0"> One of the reasons the traditional base-NP definition fails for the Hebrew TreeBank is related to syntactic features of Hebrew - specifically, smixut (the construct state, used to express noun compounds), the definite marker and the expression of possessives. These differences are reflected to some extent by the tagging guidelines used to annotate the Hebrew TreeBank, and they result in trees which are in general less flat than the Penn TreeBank ones.</Paragraph> <Paragraph position="1"> Consider the example base noun phrase [The homeless people].
The Hebrew equivalent is</Paragraph> <Paragraph position="3"> which by the non-recursive NP definition will be bracketed as: , or, loosely translating back to English: [the home]less [people]. In this case, the fact that the bound morpheme less appears in Hebrew as a separate construct-state word with its own definite marker (ha-) would lead the chunker to create two separate NPs for a simple expression. We present below the syntactic properties of Hebrew which are relevant to NP chunking. We then present our definition of Simple NP chunks.</Paragraph> <Paragraph position="4"> Construct State: The Hebrew genitive case is achieved by placing two nouns next to each other. This is called &quot;noun construct&quot;, or smixut. The semantic interpretation of this construct is varied (Netzer and Elhadad, 1998), but it specifically covers possession. The second noun can be treated as an adjective modifying the first noun. The first noun is morphologically marked in a form known as the construct form (denoted by const). The definite article marker is placed on the second word of the construction:

misrad ro$ ha-mem$ala
Office-[const poss] head-[const] the-government
The prime-minister's office

Possessive: the smixut form can be used to indicate possession. Other ways to express possession include the possessive marker '$el' / 'of' (5), or adding a possessive suffix on the noun (6). The various forms can be mixed together, as in (7): The prime minister office

Adjective: Hebrew adjectives come after the noun, and agree with it in number, gender and definiteness:

(8) ha-tapu'ah ha-yarok / the-apple the-green
The green apple

Some aspects of the predicate structure in Hebrew directly affect the task of NP chunking, as they make the decision to &quot;split&quot; NPs more or less difficult than in English.</Paragraph> <Paragraph position="5"> Word order and the preposition 'et': Hebrew sentences can be either in SVO or VSO form.
In order to keep the object separate from the subject, definite direct objects are marked with the special preposition 'et', which has no analog in English.</Paragraph> <Paragraph position="6"> Possible null equative: The equative form in Hebrew can be null. Sentence (9) is a non-null equative, (10) a null equative, while (11) and (12) are predicative NPs, which look very similar to the null-equative form: The big house

Morphological Issues: In Hebrew morphology, several lexical units can be concatenated into a single textual unit. Most prepositions, the definite article marker and some conjunctions are concatenated as prefixes, and possessive pronouns and some adverbs are concatenated as suffixes. The Hebrew TreeBank is annotated over a segmented version of the text, in which prefixes and suffixes appear as separate lexical units. On the other hand, many bound morphemes in English appear as separate lexical units in Hebrew. For example, the English morphemes re-, ex-, un-, -less, -like, -able, appear in Hebrew as separate words.</Paragraph> <Paragraph position="8"> In our experiment, we use as input to the chunker the text after it has been morphologically disambiguated and segmented. Our analyzer provides segmentation and PoS tags with 92.5% accuracy and full morphology with 88.5% accuracy (Adler and Elhadad, 2006).</Paragraph> </Section> <Section position="2" start_page="690" end_page="691" type="sub_section"> <SectionTitle> 3.2 Defining Simple NPs </SectionTitle> <Paragraph position="0"> Our definition of Simple NPs is pragmatic. We want to tag phrases that are complete in their syntactic structure, avoid the requirement of tagging recursive structures that include full clauses (relative clauses, for example) and, in general, tag phrases that have a simple denotation. To establish our definition, we start with the most complex NPs, and break them into smaller parts by stating what should not appear inside a Simple NP.
This can be summarized by the following. Examples of some Simple NP chunks resulting from that definition:

1 Apposition structure is not annotated in the TreeBank. As a heuristic, we consider every comma inside a non-conjunctive NP which is not followed by an adjective or an adjective phrase to mark the beginning of an apposition.

2 As a special case, Adjectival Phrases and possessive conjunctions are considered to be inside the Simple NP.

[This phenomenon] was highlighted yesterday at [the labor and welfare committee-const of the Knesset] that dealt with [the topic-const of foreign workers employment-const].</Paragraph> <Paragraph position="1"> [The employers] do not expect to succeed in attracting [a significant number of Israeli workers] for [the fruit-picking] because of [the low salaries] paid for [this work].</Paragraph> <Paragraph position="2"> This definition can also yield some rather long and complex chunks, such as: According to [reports of local government officials], [factories] on [Tartar territory] earned in [the year] that passed [a sum of 3.7 billion Rb (2.2 billion dollars)], which [Moscow] took [almost all].
Note that Simple NPs are split, for example, by the preposition 'on' ([factories] on [Tartar territory]), and by a relative clause ([a sum of 3.7Bn Rb] which [Moscow] took [almost all]).</Paragraph> </Section> <Section position="3" start_page="691" end_page="691" type="sub_section"> <SectionTitle> 3.3 Hebrew Simple NPs are harder than English base NPs </SectionTitle> <Paragraph position="0"> The Simple NPs derived from our definition are highly coherent units, but are also more complex than the non-recursive English base NPs.</Paragraph> <Paragraph position="1"> As can be seen in Table 1, our definition of Simple NP yields chunks which are on average considerably longer than the English chunks, with about 20% of the chunks with 4 or more words (as opposed to about 10% in English) and a significant portion (6.22%) of chunks with 6 or more words (1.67% in English).</Paragraph> <Paragraph position="2"> Moreover, the baseline used at the CoNLL shared task4 (selecting the chunk tag which was most frequently associated with the current PoS)</Paragraph> </Section> </Section> <Section position="6" start_page="691" end_page="692" type="metho"> <SectionTitle> 3 For readers familiar with Hebrew who feel that is </SectionTitle> <Paragraph position="0"> an adjective and should be inside the NP, we note that this is not the case - here is actually a Verb in the Beinoni form and the definite marker is actually used as a relative marker.</Paragraph> <Section position="1" start_page="691" end_page="691" type="sub_section"> <SectionTitle> 4.1 Baseline Approaches </SectionTitle> <Paragraph position="0"> We have experimented with different known methods for English NP chunking, which yielded poor results for Hebrew. We describe here our experiment settings, and provide the best scores obtained for each method, in comparison to the reported scores for English.</Paragraph> <Paragraph position="1"> All tests were done on the corpus derived from the Hebrew TreeBank.
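The CoNLL baseline mentioned above (predicting, for each token, the chunk tag most frequently associated with its PoS in the training data) can be sketched in a few lines of Python; the tags in the toy example are illustrative, not drawn from the actual corpus:

```python
from collections import Counter, defaultdict

def train_baseline(tagged_corpus):
    """tagged_corpus: iterable of (pos_tag, iob_tag) pairs.
    Returns a map from each PoS tag to its most frequent IOB tag."""
    counts = defaultdict(Counter)
    for pos, iob in tagged_corpus:
        counts[pos][iob] += 1
    return {pos: c.most_common(1)[0][0] for pos, c in counts.items()}

def predict_baseline(model, pos_tags, default="O"):
    """Tag a sentence given only its PoS sequence; unseen PoS -> default."""
    return [model.get(pos, default) for pos in pos_tags]

# toy training data: (PoS, IOB) pairs
train = [("NOUN", "I-NP"), ("NOUN", "B-NP"), ("NOUN", "I-NP"),
         ("VERB", "O"), ("DEF_ART", "B-NP")]
model = train_baseline(train)
print(predict_baseline(model, ["DEF_ART", "NOUN", "VERB"]))
# ['B-NP', 'I-NP', 'O']
```

Because the predictor sees nothing but the current PoS, it cannot react to context or morphology, which is one reason it degrades on the longer Hebrew Simple NPs.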
The corpus contains 5,000 sentences, for a total of 120K tokens (agglutinated words) and 27K NP chunks (more details on the corpus appear below). The last 500 sentences were used as the test set, and all the other sentences were used for training. The results were evaluated using the CoNLL shared task evaluation tools5. The approaches tested were Error Driven Pruning (EDP) (Cardie and Pierce, 1998) and Transformation-Based Learning of IOB tagging (TBL) (Ramshaw and Marcus, 1995).</Paragraph> <Paragraph position="2"> The Error Driven Pruning method does not take into account lexical information and uses only the PoS tags. For the Transformation-Based method, we used both the PoS tag and the word itself, with the same templates as described in (Ramshaw and Marcus, 1995). We also tried the Transformation-Based method with more features than just the PoS and the word, but obtained lower performance. Our best results for these methods, as well as the CoNLL baseline (BASE), are presented in Table 3. These results confirm that the task of Simple NP chunking is harder in Hebrew than in English.</Paragraph> </Section> <Section position="2" start_page="691" end_page="692" type="sub_section"> <SectionTitle> 4.2 Support Vector Machines </SectionTitle> <Paragraph position="0"> We chose to adopt a tagging perspective for the Simple NP chunking task, in which each word is to be tagged as either B, I or O depending on whether it is at the Beginning, Inside, or Outside of the given chunk, an approach first taken by Ramshaw and Marcus (1995) which has become the de facto standard for this task. Using this tagging method, chunking becomes a classification problem: each token is predicted as being either I, O or B, given features from a predefined linguistic context (such as the words surrounding the given word, and their PoS tags).</Paragraph> <Paragraph position="1"> One model that allows for this prediction is Support Vector Machines - SVM (Vapnik, 1995).
SVM is a supervised machine learning algorithm which can gracefully handle a large set of overlapping features. SVMs learn binary classifiers, but the method can be extended to multi-class classification (Allwein et al., 2000; Kudo and Matsumoto, 2000).</Paragraph> <Paragraph position="2"> SVMs have been successfully applied to many NLP tasks since (Joachims, 1998), and specifically to base phrase chunking (Kudo and Matsumoto, 2000; 2003). They have also been used successfully for Arabic (Diab et al., 2004).</Paragraph> <Paragraph position="3"> The traditional SVM setting for chunking takes as the context of the token to be classified a window of two tokens around the word; the features are the PoS tags and lexical items (word forms) of all the tokens in the context. Some settings (Kudo and Matsumoto, 2000) also include the IOB tags of the two &quot;previously tagged&quot; tokens as features (see Fig. 1).</Paragraph> <Paragraph position="4"> This setting (including the last 2 IOB tags) performs nicely for Hebrew Simple NP chunking as well.</Paragraph> <Paragraph position="5"> Linguistic features are mapped to SVM feature vectors by translating each feature, such as &quot;PoS at location n-2 is NOUN&quot; or &quot;word at location n+1 is DOG&quot;, to a unique vector entry, and setting this entry to 1 if the feature occurs, and 0 otherwise. This results in extremely large yet extremely sparse feature vectors.</Paragraph> </Section> <Section position="3" start_page="692" end_page="692" type="sub_section"> <SectionTitle> 4.3 Augmentation of Morphological Features </SectionTitle> <Paragraph position="0"> Hebrew is a morphologically rich language. Recent PoS taggers and morphological analyzers for Hebrew (Adler and Elhadad, 2006) address this issue and provide for each word not only the PoS, but also full morphological features, such as Gender, Number, Person, Construct, Tense, and the affixes' properties.
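The mapping from linguistic features to sparse binary vectors described in Section 4.2, extended with morphological attributes, can be sketched as follows; the (offset, attribute, value) triple encoding is an illustrative assumption, not the actual toolkit representation:

```python
def build_feature_index(feature_sets):
    """Assign a unique vector dimension to every (offset, attribute,
    value) feature seen in training, e.g. (-2, 'pos', 'NOUN')."""
    index = {}
    for feats in feature_sets:
        for f in feats:
            if f not in index:
                index[f] = len(index)
    return index

def to_sparse_vector(feats, index):
    """Return the sorted active dimensions: an implicit binary vector
    with 1 at these positions and 0 everywhere else."""
    return sorted(index[f] for f in feats if f in index)

# features for one token: PoS in a window, plus morphology of the focus token
feats = [(-1, "pos", "DEF_ART"), (0, "pos", "NOUN"), (0, "gender", "M"),
         (0, "number", "S"), (0, "construct", "N"), (1, "pos", "ADJECTIVE")]
index = build_feature_index([feats])
print(to_sparse_vector(feats, index))  # [0, 1, 2, 3, 4, 5]
```

Each distinct attribute/value combination gets its own dimension, which is why the resulting vectors are huge but almost entirely zero; features unseen in training are simply dropped at prediction time.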
Our system currently computes these features with an accuracy of 88.5%.</Paragraph> <Paragraph position="1"> Our original intuition is that the difficulty of Simple NP chunking can be overcome by relying on morphological features in a small context.</Paragraph> <Paragraph position="2"> These features would help the classifier decide on agreement, and split NPs more accurately.</Paragraph> <Paragraph position="3"> Since SVMs can handle large feature sets, we utilize additional morphological features. In particular, we found the combination of the Number and the Construct features to be most effective in improving chunking results. Indeed, our experiments show that introducing morphological features improves chunking quality by as much as 3 points in F-measure when compared with lexical and PoS features only.</Paragraph> </Section> </Section> <Section position="7" start_page="692" end_page="693" type="metho"> <SectionTitle> 5 Experiment </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="692" end_page="693" type="sub_section"> <SectionTitle> 5.1 The Corpus </SectionTitle> <Paragraph position="0"> The Hebrew TreeBank6 consists of 4,995 hand-annotated sentences from the Ha'aretz newspaper. Besides the syntactic structure, every word is PoS-annotated and also includes morphological features. The words in the TreeBank are segmented.</Paragraph> <Paragraph position="1"> Our morphological analyzer also provides such segmentation.</Paragraph> <Paragraph position="2"> We derived the Simple NP structure from the TreeBank using the definition given in Section 3.2. We then converted the original Hebrew TreeBank tagset to the tagset of our PoS tagger. For each token, we specify its word form, its PoS, its morphological features, and its correct IOB tag. The result is the Hebrew Simple NP chunks corpus7. The corpus consists of 4,995 sentences, 27,226 chunks and 120,396 segmented tokens. 67,919 of these tokens are covered by NP chunks.
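Counts such as the 27,226 chunks and the 67,919 covered tokens above can be recovered from the IOB annotation with a short routine; a sketch assuming the B-NP/I-NP/O tags used in the corpus:

```python
def iob_chunks(tags):
    """Return (start, end) spans (end exclusive) of NP chunks
    in a sentence's IOB tag sequence."""
    chunks, start = [], None
    for i, t in enumerate(tags):
        if t == "B-NP" or (t == "I-NP" and start is None):
            if start is not None:        # a B-NP starts a new chunk,
                chunks.append((start, i))  # closing the previous one
            start = i
        elif t == "O":
            if start is not None:
                chunks.append((start, i))
                start = None
    if start is not None:                # flush a chunk ending the sentence
        chunks.append((start, len(tags)))
    return chunks

tags = ["O", "B-NP", "I-NP", "O", "B-NP", "B-NP", "I-NP"]
spans = iob_chunks(tags)
print(spans)                         # [(1, 3), (4, 5), (5, 7)]
print(sum(e - s for s, e in spans))  # tokens covered by NP chunks: 5
```

Summing span lengths over all sentences yields the covered-token count, and summing span counts yields the number of chunks.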
A sample annotated sentence is given in Fig. 2.</Paragraph> <Paragraph position="3"> PREPOSITION NA NA N NA N NA N NA NA O
DEF_ART NA NA N NA N NA N NA NA B-NP
NOUN M S N NA N NA N NA NA I-NP
AUXVERB M S N 3 N PAST N NA NA O
ADJECTIVE M S N NA N NA N NA NA O
ADVERB NA NA N NA N NA N NA NA O
VERB NA NA N NA Y TOINF N NA NA O
ET_PREP NA NA N NA N NA N NA NA B-NP
DEF_ART NA NA N NA N NA N NA NA I-NP
NOUN F S N NA N NA N NA NA I-NP
. PUNCTUATION NA NA N NA N NA N NA NA O</Paragraph> </Section> <Section position="2" start_page="693" end_page="693" type="sub_section"> <SectionTitle> 5.2 Morphological Features: </SectionTitle> <Paragraph position="0"> The PoS tagset we use consists of 22 tags:</Paragraph> </Section> </Section> <Section position="8" start_page="693" end_page="693" type="metho"> <SectionTitle> ADJECTIVE ADVERB ET_PREP AUXVERB CONJUNCTION DEF_ART DETERMINER EXISTENTIAL INTERJECTION INTEROGATIVE MODAL NEGATION PARTICLE NOUN NUMBER PRONOUN PREFIX PREPOSITION UNKNOWN PROPERNAME PUNCTUATION VERB </SectionTitle> <Paragraph position="0"> For each token, we also supply the following morphological features (in that order): dual plural, can be (ALL), (NA). As noted in (Rambow and Habash, 2005), one cannot use the same tagset for a Semitic language as for English. The tagset we have derived has been extensively validated through manual tagging by several testers and cross-checked for agreement.</Paragraph> <Section position="1" start_page="693" end_page="693" type="sub_section"> <SectionTitle> 5.3 Setup and Evaluation </SectionTitle> <Paragraph position="0"> For all the SVM chunking experiments, we use the YamCha8 toolkit (Kudo and Matsumoto, 2003). We use forward-moving tagging, with a standard SVM with a polynomial kernel of degree 2 and C=1. For the multiclass classification, we use pairwise voting.
8 http://chasen.org/~taku/software/yamcha/
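The pairwise (one-vs-one) voting used for the three-class B/I/O decision can be sketched as follows; the lambda classifiers are toy stand-ins for trained binary SVMs, included only to illustrate the voting logic:

```python
from itertools import combinations

def pairwise_vote(classifiers, x, labels=("B", "I", "O")):
    """classifiers: dict mapping a label pair (a, b) to a binary
    decision function returning a or b for input x. The label that
    wins the most pairwise contests is predicted."""
    votes = {label: 0 for label in labels}
    for pair in combinations(labels, 2):
        votes[classifiers[pair](x)] += 1
    return max(votes, key=votes.get)

# toy stand-in classifiers: fixed decisions instead of trained SVMs
clfs = {("B", "I"): lambda x: "I",
        ("B", "O"): lambda x: "B",
        ("I", "O"): lambda x: "I"}
print(pairwise_vote(clfs, x=None))  # 'I' wins 2 of the 3 contests
```

With three classes only three binary classifiers are needed, which keeps the one-vs-one scheme cheap for IOB tagging.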
For all the reported experiments, we chose the context to be a -2/+2 token window, centered at the current token.</Paragraph> <Paragraph position="1"> We use the standard metrics of accuracy (% of correctly tagged tokens), precision, recall and F-measure, except that we normalize all punctuation tokens in the data prior to evaluation, as the TreeBank is highly inconsistent regarding the bracketing of punctuation, and we don't consider the exclusion/inclusion of punctuation from our chunks to be errors (i.e., &quot;[a book ,] [an apple]&quot;, &quot;[a book] , [an apple]&quot; and &quot;[a book] [, an apple]&quot; are all equivalent chunkings in our view).</Paragraph> <Paragraph position="2"> All our development work was done with the first 500 sentences allocated for testing, and the rest for training. For evaluation, we used a 10-fold cross-validation scheme, each time with a different block of 500 consecutive sentences serving for testing and the rest for training.</Paragraph>
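The chunk-level precision, recall and F-measure can be computed directly from gold and predicted IOB sequences; a minimal sketch that omits the punctuation normalization described above:

```python
def chunk_spans(tags):
    """Set of (start, end) NP-chunk spans (end exclusive) in an IOB sequence."""
    spans, start = set(), None
    for i, t in enumerate(tags + ["O"]):   # sentinel flushes the last chunk
        if t.startswith("B") or (t.startswith("I") and start is None):
            if start is not None:
                spans.add((start, i))
            start = i
        elif t == "O" and start is not None:
            spans.add((start, i))
            start = None
    return spans

def chunk_f1(gold, pred):
    """Precision, recall and F-measure over exact-span chunk matches."""
    g, p = chunk_spans(gold), chunk_spans(pred)
    correct = len(g & p)
    prec = correct / len(p) if p else 0.0
    rec = correct / len(g) if g else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f

gold = ["B-NP", "I-NP", "O", "B-NP", "O"]
pred = ["B-NP", "I-NP", "O", "B-NP", "I-NP"]
print(chunk_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

A chunk counts as correct only if both its boundaries match the gold span exactly, which is why a single stray I-NP tag costs both precision and recall.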
The usefulness of feature sets was stable across all tests in the ten-fold cross validation scheme.</Paragraph> </Section> </Section> class="xml-element"></Paper>