<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-2123">
<Title>Segmentation</Title>
<Section position="3" start_page="961" end_page="963" type="metho">
<SectionTitle> 2 Chinese word segmentation framework </SectionTitle>
<Paragraph position="0"> Our word segmentation process is illustrated in Fig. 1. It is composed of three parts: a dictionary-based N-gram word segmentation for segmenting IV words, a maximum entropy subword-based tagger for recognizing OOVs, and a confidence-dependent word disambiguation that merges the results of the dictionary-based and the IOB-tagging-based segmentations. An example showing the output of each step is also given in the figure.</Paragraph>
<Section position="1" start_page="961" end_page="962" type="sub_section">
<SectionTitle> 2.1 Dictionary-based N-gram word segmentation </SectionTitle>
<Paragraph position="0"> This approach can achieve a very high R-iv, but it performs no OOV detection. We combined it with an N-gram language model (LM) to resolve segmentation ambiguities. For a given Chinese character sequence, C = c0 c1 c2 ... cN, the problem of word segmentation can be formalized as finding a word sequence

W = argmax_W P(wt0 wt1 ... wtM | c0 c1 ... cN) = argmax_W P(c0 c1 ... cN | wt0 wt1 ... wtM) P(wt0 wt1 ... wtM)   (1)
</Paragraph>
<Paragraph position="2"> We applied Bayes' law in the above derivation.</Paragraph>
<Paragraph position="3"> Because the word sequence must be consistent with the character sequence, P(C|W) is expanded into a product of Kronecker delta functions, δ(u,v), equal to 1 if both arguments are the same and 0 otherwise. P(wt0 wt1 ... wtM) is a language model that can be expanded by the chain rule.</Paragraph>
<Paragraph position="4"> If trigram LMs are used, we have

P(wt0 wt1 ... wtM) ≈ Π_{i=0..M} P(wi | wi-2 wi-1)
</Paragraph>
<Paragraph position="6"> where wi is a shorthand for wti.</Paragraph>
<Paragraph position="7"> Equation 1 describes the process of dictionary-based word segmentation. We looked up the lexicon to find all the IVs, and evaluated the candidate word sequences with the LMs. We used a beam search (Jelinek, 1998) instead of a Viterbi search to decode the best word sequence because we found that a beam search speeds up decoding. N-gram LMs were used to score all the hypotheses, and the one with the highest LM score is the final output. The experimental results are presented in Section 3.1, where we compare results as the order of the LMs is varied.</Paragraph>
</Section>
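To make the decoding procedure of Section 2.1 concrete, the following is a minimal Python sketch of dictionary-based segmentation with a beam search and a bigram LM. It is only an illustration under assumed interfaces, not the system used in the paper: the lexicon, the scoring function bigram_logprob, and all other names are hypothetical placeholders.

import heapq

def segment(chars, lexicon, bigram_logprob, max_word_len=6, beam_size=8):
    """Dictionary-based segmentation: extend partial hypotheses with
    in-vocabulary words (single characters are always allowed, so the
    decoder never dead-ends) and keep the best `beam_size` hypotheses
    at every character position."""
    # Each hypothesis is (LM log-probability, list of words produced so far).
    beams = {0: [(0.0, [])]}
    for pos in range(len(chars)):
        for score, words in beams.get(pos, []):
            for end in range(pos + 1, min(pos + max_word_len, len(chars)) + 1):
                cand = chars[pos:end]
                if end - pos > 1 and cand not in lexicon:
                    continue  # multi-character candidates must be IV words
                prev = words[-1] if words else "<s>"
                new = (score + bigram_logprob(prev, cand), words + [cand])
                beams.setdefault(end, []).append(new)
        if pos + 1 in beams:  # prune to the beam width before expanding further
            beams[pos + 1] = heapq.nlargest(beam_size, beams[pos + 1],
                                            key=lambda h: h[0])
    return max(beams[len(chars)], key=lambda h: h[0])[1]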
<Section position="2" start_page="962" end_page="963" type="sub_section">
<SectionTitle> 2.2 Subword-based IOB tagging </SectionTitle>
<Paragraph position="0"> There are several steps in training a subword-based IOB tagger. First, we extracted a word list from the training data, sorted in decreasing order of frequency in the training data. We chose all the single characters and the top multi-character words as a lexicon subset for the IOB tagging. If the subset consists of Chinese characters only, the tagger is a character-based IOB tagger. We regard the words in the subset as the subwords for the IOB tagging.</Paragraph>
<Paragraph position="1"> Second, we re-segmented the words in the training data into subwords of the subset, and assigned IOB tags to them. For the character-based IOB tagger, there is only one possible re-segmentation. However, there are multiple choices for the subword-based IOB tagger.</Paragraph>
<Paragraph position="2"> For example, &quot;Beijing-city&quot; can be re-segmented and tagged as &quot;Beijing/B city/I&quot; or as &quot;Bei/B jing/I city/I.&quot; In this work we used forward maximal match (FMM) for disambiguation.</Paragraph>
<Paragraph position="5"> Because we carried out FMM on each word of the manually segmented training data, the accuracy of FMM was much higher than when applying it to whole sentences. Of course, backward maximal match (BMM) or other approaches are also applicable. We did not conduct comparative experiments because the differences between the results of these approaches were trivial. In the third step, we used the maximum entropy (MaxEnt) approach (the results of CRFs are given in Section 3.4) to train the IOB tagger (Xue and Shen, 2003). The mathematical expression of the MaxEnt model is

P(t|h) = exp( Σ_i λi fi(h,t) ) / Z(h)
</Paragraph>
<Paragraph position="7"> where t is a tag, &quot;I, O, B,&quot; of the current word; h, the context surrounding the current word, including word and tag sequences; fi, a binary feature equal to 1 if the i-th defined feature is activated and 0 otherwise; Z, a normalization coefficient; and λi, the weight of the i-th feature.</Paragraph>
<Paragraph position="8"> Many kinds of features can be defined to improve the tagging accuracy. However, to conform to the constraints of the closed test in Bakeoff 2005, some features, such as syntactic information and character encodings for numbers and alphabetical characters, are not allowed. Therefore, we used only features available from the provided training corpus.</Paragraph>
<Paragraph position="10"> * Contextual words and tags, where w stands for word and t, for IOB tag.</Paragraph>
<Paragraph position="11"> The subscripts are position indicators, where 0 means the current word/tag; -1, -2, the first or second word/tag to the left; 1, 2, the first or second word/tag to the right.</Paragraph>
<Paragraph position="12"> * Prefixes and suffixes. These are very useful features. Using the same approach as in (Tseng et al., 2005), we extracted the most frequent words tagged with &quot;B&quot;, indicating a prefix, and the last words tagged with &quot;I&quot;, denoting a suffix. Features containing prefixes and suffixes were used in the following combinations with other features, where p stands for prefix; s, suffix; p0 means the current word is a prefix and s1 denotes that the right first word is a suffix, and so on.</Paragraph>
<Paragraph position="13"> p0w0, p-1w0, p1w0, s0w0, s-1w0, s1w0, p0w-1, p0w1, s0w-1, s0w-2
* Word length. This is defined as the number of characters in a word. The length of a Chinese word plays a discriminative role in word composition. For example, single-character words are more apt to form new words than are multiple-character words. Features using word length are listed below, where l0 means the word length of the current word. The others can be inferred similarly.</Paragraph>
<Paragraph position="14"> l0w0, l-1w0, l1w0, l-1l1, l0l-1, l0l1
As to feature selection, we simply adopted the absolute count of each feature in the training data as the metric, and defined a cutoff value for each feature type.</Paragraph>
<Paragraph position="15"> We used IIS to train the maximum entropy model. For details, refer to (Lafferty et al., 2001). The tagging algorithm is based on the beam-search method (Jelinek, 1998). After the IOB tagging, each word is tagged with a B/I/O tag, and the word segmentation is obtained immediately. The experimental results of the subword-based tagger and its comparison with the character-based tagger are presented in Section 3.2.</Paragraph>
</Section>
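As an illustration of the training-data preparation in Section 2.2, here is a small Python sketch of FMM re-segmentation inside a gold word and the assignment of IOB tags. It is a sketch under assumptions: the helper names are hypothetical, and it assumes, following the Figure 1 example, that a word consisting of a single subword is tagged &quot;O&quot; while a split word receives &quot;B&quot; on its first subword and &quot;I&quot; on the rest.

def fmm_subwords(word, subword_lexicon, max_len=6):
    """Forward maximal match inside one gold word: repeatedly take the
    longest prefix that is in the subword lexicon; single characters are
    always available as a fallback."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(min(len(word), i + max_len), i, -1):
            if j - i == 1 or word[i:j] in subword_lexicon:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

def word_to_iob(word, subword_lexicon):
    """Tag the subwords of one manually segmented word: 'O' for a word made
    of a single subword, 'B'/'I' for the first and following subwords of a
    word that was split (assumed scheme, per the Figure 1 example)."""
    pieces = fmm_subwords(word, subword_lexicon)
    if len(pieces) == 1:
        return [(pieces[0], "O")]
    return [(pieces[0], "B")] + [(p, "I") for p in pieces[1:]]

# Hypothetical usage on one gold sentence (a list of manually segmented words):
# training_pairs = [pair for w in sentence for pair in word_to_iob(w, lexicon)]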
<Section position="3" start_page="963" end_page="963" type="sub_section">
<SectionTitle> 2.3 Confidence-dependent word segmentation </SectionTitle>
<Paragraph position="0"> The last two steps produced two segmentation results: one by the dictionary-based approach and one by the IOB tagging. However, neither was perfect. The dictionary-based segmentation produced a result with a higher R-iv but a lower R-oov, while the IOB tagging yielded the opposite. In this section we introduce a confidence measure approach to combine the two results. We define a confidence measure, CM(tiob|w), that measures the confidence of the results produced by the IOB tagging by using the results from the dictionary-based segmentation. The confidence measure comes from two sources: the IOB tagging and the dictionary-based word segmentation. It is calculated as

CM(tiob|w) = α CMiob(tiob|w) + (1 - α) δ(tw, tiob)ng   (3)
</Paragraph>
<Paragraph position="2"> where tiob is the word w's IOB tag assigned by the IOB tagging, and tw is a prior IOB tag determined by the results of the dictionary-based segmentation. After the dictionary-based word segmentation, the words are re-segmented into subwords by FMM before being fed to the IOB tagging. Each subword is given a prior IOB tag, tw. CMiob(t|w) is a confidence probability derived in the process of IOB tagging; it is computed over the hypotheses hi kept in the beam search.</Paragraph>
<Paragraph position="5"> δ(tw, tiob)ng denotes the contribution of the dictionary-based segmentation.</Paragraph>
<Paragraph position="6"> δ(tw, tiob)ng is a Kronecker delta function defined as

δ(tw, tiob)ng = 1 if tw = tiob, and 0 otherwise.

In Eq. 3, α is a weighting between the IOB tagging and the dictionary-based word segmentation. We found 0.8 to be a good empirical value for α.</Paragraph>
<Paragraph position="7"> By Eq. 3 the results of the IOB tagging were re-evaluated. A confidence measure threshold, t, was defined for making a decision based on this value. If the value was lower than t, the IOB tag was rejected and the dictionary-based segmentation was used; otherwise, the IOB tagging segmentation was used, and a new OOV was thus created. In the two extreme cases, t = 0 corresponds to the IOB tagging alone while t = 1 corresponds to the dictionary-based approach alone. In Section 3.3 we present the experimental segmentation results of the confidence measure approach. In a real application, the confidence threshold can be changed to obtain a satisfactory balance between R-iv and R-oov.</Paragraph>
<Paragraph position="8"> An example is shown in Figure 1. In the IOB tagging stage, a confidence is attached to each word. In the confidence-based stage, a new confidence is computed after merging with the dictionary-based results, where all single-character words are labeled &quot;O&quot; by default except &quot;Beijing-city&quot;, which is labeled &quot;Beijing/B&quot; and &quot;city/I&quot;.</Paragraph>
</Section>
</Section>
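A minimal Python sketch of this confidence-dependent combination follows, assuming the tagger's confidence CMiob and the prior tag from the dictionary-based pass are already available for every subword; the data layout and names are illustrative, not the authors' implementation.

def combine(subwords, alpha=0.8, threshold=0.7):
    """Re-evaluate each IOB tag with the interpolated confidence of Eq. 3 and
    fall back to the dictionary-based tag when the confidence is too low.
    Each element of `subwords` is assumed to be a dict with:
      't_iob'  - tag proposed by the IOB tagger
      'cm_iob' - the tagger's confidence for that tag
      't_dict' - prior tag implied by the dictionary-based segmentation
    """
    final_tags = []
    for sw in subwords:
        delta = 1.0 if sw["t_dict"] == sw["t_iob"] else 0.0
        cm = alpha * sw["cm_iob"] + (1.0 - alpha) * delta   # Eq. 3
        # Keep the tagger's decision only when it is confident enough;
        # otherwise trust the dictionary-based segmentation.
        final_tags.append(sw["t_iob"] if cm >= threshold else sw["t_dict"])
    return final_tags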
<Section position="4" start_page="963" end_page="966" type="metho">
<SectionTitle> 3 Experiments </SectionTitle>
<Paragraph position="0"> We used the data provided by Sighan Bakeoff 2005 to test the approaches described in the previous sections. The data contain four corpora from different sources: Academia Sinica, City University of Hong Kong, Peking University, and Microsoft Research (Beijing). Statistics concerning the corpora are listed in Table 1. The corpora are provided in both Unicode and Big5/GB encodings; we used the Big5 and CP936 encodings. Since the main purpose of this work is to evaluate the proposed subword-based IOB tagging, we carried out the closed test only. Five metrics were used to evaluate the segmentation results: recall (R), precision (P), F-score (F), recall on OOV words (R-oov) and recall on IV words (R-iv). For a detailed explanation of these metrics, refer to (Sproat and Emerson, 2003).</Paragraph>
<Section position="1" start_page="964" end_page="964" type="sub_section">
<SectionTitle> 3.1 Effects of N-gram LMs </SectionTitle>
<Paragraph position="0"> We obtained a word list from the training data as the vocabulary for dictionary-based segmentation. N-gram LMs were generated using the SRI LM toolkit.</Paragraph>
<Paragraph position="1"> Table 2 shows the performance of N-gram segmentation as the order of the N-grams is changed.</Paragraph>
<Paragraph position="2"> We found that bigram LMs can improve segmentation over unigram LMs, though we observed no further effect from trigram LMs. For the PKU corpus, there was a relatively strong improvement from using bigrams rather than unigrams, possibly because the PKU corpus' training set is smaller than the others. For a sufficiently large training corpus, unigram LMs may be enough for segmentation. This experiment revealed that language models above bigrams do not improve word segmentation. Since some single-character words were present in the test data but not in the training data, the R-oov rates were not zero in this experiment. In fact, we did not use any OOV detection for the dictionary-based approach.</Paragraph>
</Section>
<Section position="2" start_page="964" end_page="964" type="sub_section">
<SectionTitle> 3.2 Comparisons of character-based and subword-based taggers </SectionTitle>
<Paragraph position="0"> In Section 2.2 we described the character-based and subword-based IOB tagging methods. The main difference between the two is the lexicon subset used for re-segmentation. For the subword-based IOB tagging, we need to add some multiple-character words to the lexicon subset. Since it is hard to decide the optimal number of words to add, we tested three different lexicon sizes, as shown in Table 3.</Paragraph>
<Paragraph position="1"> The first, s1, consisting of all the characters, is the character-based approach. The second, s2, added the top 2,500 words from the training data to the lexicon of s1. The third, s3, added another 2,500 top words to the lexicon of s2. All the added words were among the most frequent in the training corpora. After choosing the subwords, the training data were re-segmented into the subwords by FMM.</Paragraph>
<Paragraph position="2"> The final lexicons were then collected again, consisting of single-character words and multiple-character words. Table 3 shows the sizes of the final lexicons (s1 contains all the characters; s2 and s3 add some common words). Therefore, the difference between the lexicon sizes of s2 and s1 is not exactly 2,500.</Paragraph>
<Paragraph position="3"> The segmentation results obtained with the three lexicons are shown in Table 4. The numbers are separated by a &quot;/&quot; in the order &quot;s1/s2/s3.&quot; We found that although the subword-based approach significantly outperformed the character-based one, there was no obvious difference between the two subword-based lexicons, s2 and s3, which add 2,500 and 5,000 subwords to s1, respectively. The experiments show that we cannot find an optimal lexicon size between 2,500 and 5,000. However, there might be an optimal point below 2,500. We did not expend much effort to find the optimal point, and regarded 2,500 as an acceptable size for practical use.</Paragraph>
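The construction of the three subword lexicons can be sketched in Python as follows. This is only an illustration: the 2,500/5,000 cut-offs come from the text above, while the function and variable names are assumptions.

from collections import Counter

def build_subword_lexicons(train_words, cutoffs=(0, 2500, 5000)):
    """Build s1/s2/s3: s1 is all characters seen in training, while s2 and s3
    add the 2,500 and 5,000 most frequent training words, respectively.
    `train_words` is assumed to be a list of words from the training corpus."""
    word_counts = Counter(train_words)
    chars = {c for w in train_words for c in w}
    lexicons = []
    for n in cutoffs:
        top_words = {w for w, _ in word_counts.most_common(n)}
        # The top words may themselves be single characters, which is one
        # reason the final lexicon sizes do not differ by exactly 2,500.
        lexicons.append(chars | top_words)
    return lexicons   # [s1, s2, s3]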
<Paragraph position="4"> The F-scores of the IOB tagging shown in Table 4 are better than those of the N-gram word segmentation in Table 2, which shows that the IOB tagging is effective in recognizing OOVs. However, we found a large decrease in the R-ivs, which reveals the weakness of the IOB tagging approach. We use the confidence measure approach to deal with this problem in the next section.</Paragraph>
</Section>
<Section position="3" start_page="964" end_page="965" type="sub_section">
<SectionTitle> 3.3 Effects of the confidence measure </SectionTitle>
<Paragraph position="0"> Up to now we have had two segmentation results, obtained by the dictionary-based word segmentation and by the IOB tagging. In Section 2.3, we proposed a confidence measure approach to re-evaluate the results of the IOB tagging by combining the two results. The effects of the confidence measure are shown in Table 5, where we used α = 0.8 and a confidence threshold t = 0.7. These are empirical values, obtained as the optimal settings by multiple trials on held-out data. The numbers in the slots of Table 5 are divided by the separator &quot;/&quot; and displayed in the order &quot;s1/s2/s3&quot;, just as in Table 4. We found that the results in Table 5 were better than those in Tables 4 and 2, which shows that the confidence measure approach yields better performance than either the N-gram segmentation or the IOB tagging alone.</Paragraph>
<Paragraph position="1"> Even with the confidence measure, the subword-based IOB tagging still outperformed the character-based IOB tagging, showing that the proposed subword-based IOB tagging is very effective. Though the improvement under the confidence measure was smaller, it was still significant.</Paragraph>
<Paragraph position="2"> We can trade off R-oov and R-iv by changing the confidence threshold. The variation of R-oov and R-iv as the threshold changes is shown in Fig. 2, where R-oov and R-iv move in opposite directions.</Paragraph>
<Paragraph position="3"> When the confidence threshold t = 0, the case of the IOB tagging, the R-oovs are maximal. When t = 1, representing the dictionary-based segmentation, the R-oovs are minimal. The R-oovs and R-ivs varied greatly near the start and end points but little around the middle.</Paragraph>
</Section>
<Section position="4" start_page="965" end_page="966" type="sub_section">
<SectionTitle> 3.4 Subword-based tagging by CRFs </SectionTitle>
<Paragraph position="0"> The proposed approaches were presented and evaluated using the MaxEnt method in the previous sections. When we turned to CRF-based tagging, we found the same effect as with the MaxEnt method.</Paragraph>
<Paragraph position="1"> Our subword-based tagging by CRFs was implemented with the package &quot;CRF++&quot; from &quot;http://www.chasen.org/~taku/software.&quot; We repeated the experiments of the previous sections using the CRF approach, except that we used only one of the two subword lexicons, s3.</Paragraph>
<Paragraph position="2"> The same values of the confidence measure threshold and α were used. The results are shown in Table 6.</Paragraph>
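CRF++ reads training data in a simple column format: one token per line with the label in the last column and a blank line between sequences. A small Python sketch of writing the subword/IOB pairs in that format is given below; the function and file names are placeholders, and the feature template file that CRF++ also needs is not shown.

def write_crfpp_file(tagged_sentences, path):
    """Write subword/IOB pairs in the column format read by CRF++:
    one subword and its tag per line, sentences separated by blank lines."""
    with open(path, "w", encoding="utf-8") as out:
        for sentence in tagged_sentences:      # each sentence: [(subword, tag), ...]
            for subword, tag in sentence:
                out.write(f"{subword}\t{tag}\n")
            out.write("\n")

# Hypothetical usage:
# write_crfpp_file(tagged_corpus, "train.data")
# The model itself would then be trained with the CRF++ tool crf_learn,
# given a feature template file, and applied with crf_test.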
<Paragraph position="3"> We found that the results using the CRFs were much better than those of the MaxEnt models. However, the emphasis here is not a comparison of CRFs and MaxEnt but the effect of the subword-based IOB tagging. In Table 6, the results before the &quot;/&quot; are those of the character-based IOB tagging and those after the &quot;/&quot;, the subword-based. It is clear that the subword-based approach yielded better results than the character-based approach, though the improvement was not as large as with the MaxEnt models. There was no change in F-score for the AS corpus, but a better recall rate was found. Our results are better than the best ones of Bakeoff 2005 on the PKU, CITYU and MSR corpora.</Paragraph>
<Paragraph position="4"> Detailed descriptions of the subword tagging by CRFs can be found in our paper (Zhang et al., 2006).</Paragraph>
</Section>
</Section>
<Section position="5" start_page="966" end_page="966" type="metho">
<SectionTitle> 4 Discussion and Related works </SectionTitle>
<Paragraph position="0"> The IOB tagging approach adopted in this work is not a new idea. It was first applied to Chinese word segmentation by (Xue and Shen, 2003) using maximum entropy methods. Later, (Peng and McCallum, 2004) implemented the idea using a CRF-based approach, which yielded better results than the maximum entropy approach because it can solve the label bias problem (Lafferty et al., 2001). However, as we mentioned before, this approach does not take advantage of the prior knowledge of in-vocabulary words; it produced a higher R-oov but a lower R-iv. This problem was observed by some participants in Bakeoff 2005 (Asahara et al., 2005), who applied the IOB tagging to recognize OOVs and added the OOVs to the lexicon used in their HMM-based or CRF-based approaches. (Nakagawa, 2004) used hybrid HMM models to integrate word-level and character-level information seamlessly. We used a confidence measure to determine a better balance between R-oov and R-iv. The idea of using a confidence measure appeared in (Peng and McCallum, 2004), where it was used to recognize OOVs. In this work we used it for more than that: by way of the confidence measure we combined the results of the dictionary-based and the IOB-tagging-based segmentations, and as a result we could achieve the optimal performance.</Paragraph>
<Paragraph position="1"> Our main contribution is to extend the IOB tagging approach from a character-based to a subword-based one. We showed that the new approach enhanced word segmentation significantly in all the experiments: MaxEnt, CRFs, and with the confidence measure. We tested our approach on the standard Sighan Bakeoff 2005 data set in the closed test. In Table 7 we align our results with those of some top runners in Bakeoff 2005.</Paragraph>
<Paragraph position="2"> Our results were compared with the best performers' results in Bakeoff 2005. Two participants' results were chosen as baselines: No. 15-b, ranked first on the AS corpus, and No. 14, the best performer on CITYU, MSR and PKU. No. 14 used CRF-modeled IOB tagging while No. 15-b used MaxEnt-modeled IOB tagging. Our results produced by the MaxEnt are denoted &quot;ours(ME)&quot; and those of the CRF approach &quot;ours(CRF)&quot;. We achieved the highest F-scores on the three corpora other than AS. We think the proposed subword-based approach played an important role in achieving these good results.</Paragraph>
<Paragraph position="3"> A second advantage of the subword-based IOB tagging over the character-based one is its speed.
The subword-based approach is faster because fewer units need to be labeled than in the character-based approach. We observed a speed increase in both training and testing. In the training stage, the subword-based approach was almost twice as fast as the character-based one.</Paragraph>
</Section>
</Paper>