<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-1110">
  <Title>Term-list Translation using Mono-lingual Word Co-occurrence Vectors*</Title>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
1. Dictionary Lookup:
</SectionTitle>
    <Paragraph position="0"> For each word in the given term-list, all the alternative translations are retrieved from a bilingual dictionary.</Paragraph>
    <Paragraph position="1"> A translation candidate is defined as a combination of one translation for each input word. For example, if the input term-list consists of two words, say wl and w~, and their translation include wll for wl and w23 for w2, then (w11, w23) is a translation candidate. If wl and w~ have two and three alternatives respectively then there are 6 possible translation candidates.</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="670" type="metho">
    <SectionTitle>
2. Disambiguation:
</SectionTitle>
    <Paragraph position="0"> In this step, all possible translation candidates are ranked according to a measure that reflects the 'coherence' of each candidate. The top ranked candidate is the translated term-list.</Paragraph>
    <Paragraph position="1">  In the following sections we concentrate on the disambiguation step.</Paragraph>
  </Section>
  <Section position="6" start_page="670" end_page="670" type="metho">
    <SectionTitle>
3 Disambiguation Algorithm
</SectionTitle>
    <Paragraph position="0"> The underlying hypothesis of our disambiguation method is that a plausible combination of translation alternatives will be semantically coherent.</Paragraph>
    <Paragraph position="1"> In order to find the most coherent combination of words, we map words onto points in a multidimensional vector space where the 'proximity' of two vectors represents the level of coherence of the corresponding two words. The coherence of n words can be defined as the order of spatial 'concentration' of the vectors.</Paragraph>
    <Paragraph position="2"> The rest of this section formalizes this idea.</Paragraph>
    <Section position="1" start_page="670" end_page="670" type="sub_section">
      <SectionTitle>
3.1 Co-occurrence Vector Space: WORD
</SectionTitle>
      <Paragraph position="0"/>
    </Section>
  </Section>
  <Section position="7" start_page="670" end_page="670" type="metho">
    <SectionTitle>
SPACE
</SectionTitle>
    <Paragraph position="0"> We employed a multi-dimensional vector space, called WORD SPACE (Schuetze, 1997) for defining the coherence of words. The starting point of WORD SPACE is to represent a word with an n-dimensional vector whose i-th element is how many times the word wi occurs close to the word. For simplicity, we consider w~ and wj to occur close in context if and only if they appear within an m-word distance (i.e., the words occur within a window of m-word length), where m is a predetermined natural number.</Paragraph>
    <Paragraph position="1"> Table 1 shows an artificial example of co-occurrence statistics. The table shows that the word ginko (bank, where people deposit money) co-occurred with shikin (fund) 483 times and with hashi (bridge) 31 times. Thus the co-occurrence vector of ginko (money bank) contains 483 as its 89th element and 31 as its 468th element. In short, a word is mapped onto the row vector of the co-occurrence  Using this word representation, we define the proximity, proz, of two vectors, ~, b, as the cosine of the angle between them, given as follows.</Paragraph>
    <Paragraph position="3"> If two vectors have high proximity then the corresponding two words occur in similar context, and in our terms, are coherent.</Paragraph>
    <Paragraph position="4"> This simple definition, however, has problems, namely its high-dimensionality and sparseness of data. In order to solve these problems, the original co-occurrence vector space is converted into a condensed low dimensional real-valued matrix by using</Paragraph>
  </Section>
  <Section position="8" start_page="670" end_page="671" type="metho">
    <SectionTitle>
SVD (Singular Value Decomposition). For example,
</SectionTitle>
    <Paragraph position="0"> a 20000-by-1000 matrix can be reduced to a 20000-by-100 matrix. The resulting vector space is the WORD SPACE 2</Paragraph>
    <Section position="1" start_page="670" end_page="670" type="sub_section">
      <SectionTitle>
3.2 Coherence of Words
</SectionTitle>
      <Paragraph position="0"> We define the coherence of words in terms of a geometric relationship between the corresponding word vectors.</Paragraph>
      <Paragraph position="1"> As shown above, two vectors with high proximity are coherent with respect to their associative properties. We have extended this notion to n-words. That is, if a group of vectors are concentrated, then the corresponding words are defined to be coherent. Conversely, if vectors are scattered, the corresponding words are in-coherent. In this paper, the concentration of vectors is measured by the average proximity from their centroid vector.</Paragraph>
      <Paragraph position="2"> Formally, for a given word set W, its coherence coh(W) is defined as follows:</Paragraph>
      <Paragraph position="4"/>
    </Section>
    <Section position="2" start_page="670" end_page="670" type="sub_section">
      <SectionTitle>
3.3 Disambiguation Procedure
</SectionTitle>
      <Paragraph position="0"> Our disambiguation procedure is simply selecting the combination of translation alternatives that has the largest cob(W) defined above. The current implementation exhaustively calculates the coherence score for each combination of translation alternatives, then selects the combination with the highest score.</Paragraph>
    </Section>
    <Section position="3" start_page="670" end_page="671" type="sub_section">
      <SectionTitle>
3.4 Example
</SectionTitle>
      <Paragraph position="0"> Suppose the given term-list consists of bank and river. Our method first retrieves translation alternatives from the bilingual dictionary. Let the dictionary contain following translations.</Paragraph>
      <Paragraph position="1">  Combining these translation alternatives yields four translation candidates: (ginko, risoku), (ginko, kyoumi), (teibo, risoku), (teibo, kyoumi).</Paragraph>
      <Paragraph position="2"> Then the coherence score is calculated for each candidate.</Paragraph>
      <Paragraph position="3"> Table 2 shows scores calculated with the co-occurrence data used in the translation experiment (see. Section 4.4.2). The combination of ginko (bank:money) and risoku(interest:money) has the highest score. This is consistent with our intuition.</Paragraph>
    </Section>
  </Section>
  <Section position="9" start_page="671" end_page="671" type="metho">
    <SectionTitle>
4 Experiments
</SectionTitle>
    <Paragraph position="0"> We conducted two types of experiments: re-translation experiments and translation experiments. Each experiment includes comparison against the baseline algorithm, which is a unigram-based translation algorithm. This section presents the two types of experiments, plus the baseline algorithm, followed by experimental results.</Paragraph>
    <Section position="1" start_page="671" end_page="671" type="sub_section">
      <SectionTitle>
4.1 Two Types of Experiments
</SectionTitle>
      <Paragraph position="0"> In the translation experiment, term-lists in one language, e.g., English, were translated into another language, e.g., in Japanese. In this experiment, humans judged the correctness of outputs.</Paragraph>
      <Paragraph position="1">  Although the translation experiment recreates real applications, it requires human judgment 3. Thus we decided to conduct another type of experiment, called a re-translation experiment. This experiment translates given term-lists (e.g., in English) into a second language (e.g., Japanese) and maps them back onto the source language (e.g., in this case, English). Thus the correct translation of a term list, in the most strict sense, is the original term-list itself.</Paragraph>
    </Section>
  </Section>
  <Section position="10" start_page="671" end_page="673" type="metho">
    <SectionTitle>
3 If a bilingual parallel corpus is available, then correspond-
</SectionTitle>
    <Paragraph position="0"> ing translations could be used for correct results.</Paragraph>
    <Paragraph position="1"> This experiment uses two bilingual dictionaries: a forward dictionary and a backward dictionary.</Paragraph>
    <Paragraph position="2"> In this experiment, a word in the given term-list (e.g. in English) is first mapped to another language (e.g., Japanese) by using the forward dictionary. Each translated word is then mapped back into original language by referring to the backward dictionary. The union of the translations from the backward dictionary are the translation alternatives to be disambiguated.</Paragraph>
    <Section position="1" start_page="671" end_page="671" type="sub_section">
      <SectionTitle>
4.2 Baseline Algorithm
</SectionTitle>
      <Paragraph position="0"> The baseline algorithm against which our method was compared employs unigram probabilities for disambiguation. For each word in the given term-list, this algorithm chooses the translation alternative with the highest unigram probability in the target language. Note that each word is translated independently. null</Paragraph>
    </Section>
    <Section position="2" start_page="671" end_page="672" type="sub_section">
      <SectionTitle>
4.3 Experimental Data
</SectionTitle>
      <Paragraph position="0"> The source and the target languages of the translation experiments were English and Japanese respectively. The re-translation experiments were conducted for English term-lists using Japanese as the second language.</Paragraph>
      <Paragraph position="1"> The Japanese-to-English dictionary was EDICT(Breen, 1995) and the English-to-Japanese dictionary was an inversion of the Japanese-to-English dictionary.</Paragraph>
      <Paragraph position="2"> The co-occurrence statistics were extracted from the 1994 New York Times (420MB) for English and 1990 Nikkei Shinbun (Japanese newspaper) (150MB) for Japanese. The domains of these texts range from business to sports. Note that 400 articles were randomly separated from the former corpus as the test set.</Paragraph>
      <Paragraph position="3"> The initial size of each co-occurrence matrix was 20000-by-1000, where rows and columns correspond to the 20,000 and 1000 most frequent words in the corpus 4. Each initial matrix was then reduced by using SVD into a matrix of 20000-by-100 using SVD-PACKC(Berry et al., 1993).</Paragraph>
      <Paragraph position="4"> Term-lists for the experiments were automatically generated from texts, where a term-list of a document consists of the topmost n words ranked by their tf-idf scores 5. The relation between the length n of term-list and the disambiguation accuracy was also tested.</Paragraph>
      <Paragraph position="5"> We prepared two test sets of term-lists: those extracted from the 400 articles from the New York Times mentioned above, and those extracted from  where tfwis the occurrence of w in the text, N is the number of documents in the collection, and Nw is the number of documents containing w.</Paragraph>
      <Paragraph position="6">  articles in Reuters(Reuters, 1997), called Test-NYT, and Test-REU, respectively.</Paragraph>
    </Section>
    <Section position="3" start_page="672" end_page="672" type="sub_section">
      <SectionTitle>
4.4 Results
</SectionTitle>
      <Paragraph position="0"> The proposed method was applied to several sets of term-lists of different length. Results are shown in Table 3. In this table and the following tables, &amp;quot;ambiguous&amp;quot; and &amp;quot;success&amp;quot; correspond to the total number of ambiguous words, not term-lists, and the number of words that were successfully translated 6.</Paragraph>
      <Paragraph position="1"> The best results were obtained when the length of term-lists was 4 or 6. In general, the longer a term-list becomes, the more information it has. However, a long term-list tends to be less coherent (i.e., contain different topics). As far as our experiments are concerned, 4 or 6 was the point of compromise.</Paragraph>
      <Paragraph position="2">  Then we compared our method against the base-line algorithm that was trained on the same set of articles used to create the co-occurrence matrix for our algorithm (i.e., New York Times). Both are applied to term-lists of length 6 made from test-NYT.</Paragraph>
      <Paragraph position="3"> The results are shown in Table 4. Although the absolute value of the success rate is not satisfactory, our method significantly outperforms the baseline algorithm.</Paragraph>
      <Paragraph position="4">  We, then, applied the same method with the same parameters (i.e., cooccurence and unigram data) to Test-REU. As shown in Table 5, our method did better than the baseline algorithm although the success rate is lower than the previous result.</Paragraph>
      <Paragraph position="5">  The translation experiment from English to Japanese was carried out on Test-NYT. The training corpus for both proposed and baseline methods was the Nikkei corpus described above. Outputs were compared against the &amp;quot;correct data&amp;quot; which were manually created by removing incorrect alternatives from all possible alternatives. If all the translation alternatives in the bilingual dictionary were judged to be correct, then we counted this word as unambiguous. null The accuracy of our method and baseline algorithm are shown on Table6.</Paragraph>
      <Paragraph position="6"> The accuracy of our method was 80.8%, about 8 points higher than that of the baseline method. This shows our method is effective in improving translation accuracy when syntactic information is not available. In this experiment, 57% of input words were unambiguous. Thus the success rates for entire words were 91.8% (proposed) and 82.6% (baseline).</Paragraph>
    </Section>
    <Section position="4" start_page="672" end_page="673" type="sub_section">
      <SectionTitle>
4.5 Error Analysis
</SectionTitle>
      <Paragraph position="0"> The following are two major failure reasons relevant to our method 7 The first reason is that alternatives were semantically too similar to be discriminated. For example, &amp;quot;share&amp;quot; has at least two Japanese translations: &amp;quot;shea&amp;quot;(market share) and &amp;quot;kabu&amp;quot; (stock ). Both translations frequently occur in the same context in business articles, and moreover these two words sometimes co-occur in the same text. Thus, it is very difficult to discriminate them. In this case, the task is difficult also for humans unless the original text is presented.</Paragraph>
      <Paragraph position="1"> The second reason is more complicated. Some translation alternatives are polysemous in the target language. If a polysemous word has a very general meaning that co-occurs with various words, then this word is more likely to be chosen. This is because the corresponding vector has &amp;quot;average&amp;quot; value for each dimension and, thus, has high proximity with the centroid vector of multiple words.</Paragraph>
      <Paragraph position="2"> For example, alternative translations of &amp;quot;stock ~' includes two words: &amp;quot;kabu&amp;quot; (company share) and &amp;quot;dashz&amp;quot; (liquid used for food). The second translation &amp;quot;dashz&amp;quot; is also a conjugation form of the Japanese verb &amp;quot;dasff', which means &amp;quot;put out&amp;quot; and &amp;quot;start&amp;quot;. In this case, the word, &amp;quot;dash,&amp;quot;, has a cer7Other reasons came from errors in pre-processing including 1) ignoring compound words, 2) incorrect handling of capitalized words etc.</Paragraph>
      <Paragraph position="3">  tain amount of proximity because of the meaning irrelevant to the source word, e.g., stock.</Paragraph>
      <Paragraph position="4"> This problem was pointed out by (Dagan and Itai, 1994) and they suggested two solutions 1) increasing the size of the (mono-lingual) training corpora or 2) using bilingual corpora. Another possible solution is to resolve semantic ambiguities of the training corpora by using a mono-lingual disambiguation algorithm (e.g., (?)) before making the co-occurrence matrix.</Paragraph>
    </Section>
  </Section>
  <Section position="11" start_page="673" end_page="673" type="metho">
    <SectionTitle>
5 Related Work
</SectionTitle>
    <Paragraph position="0"> Dagan and Itai (1994) proposed a method for choosing target words using mono-lingual corpora. It first locates pairs of words in dependency relations (e.g., verb-object, modifier-noun, etc.), then for each pair, it chooses the most plausible combination of translation alternatives. The plausibility of a word-pair is measured by its co-occurence probability estimated from corpora in the target language.</Paragraph>
    <Paragraph position="1"> One major difference is that their method relies on co-occurrence statistics between tightly and locally related (i.e., syntactically dependent) word pairs, whereas ours relies on associative properties of loosely and more globally related (i.e., co-occurring within a certain distance) word groups.</Paragraph>
    <Paragraph position="2"> Although the former statistics could provide more accurate information for disambiguation, it requires huge amounts of data to cover inputs (the data sparseness problem).</Paragraph>
    <Paragraph position="3"> Another difference, which also relates to the data sparseness problem, is that their method uses &amp;quot;row&amp;quot; co-occurrence statistics, whereas ours uses statistics converted with SVD. The converted matrix has the advantage that it represents the co-occurrence relationship between two words that share similar contexts but do not co-occur in the same text s. SVD conversion may, however, weaken co-occurrence relations which actually exist in the corpus.</Paragraph>
    <Paragraph position="4"> Tanaka and Iwasaki (1996) also proposed a method for choosing translations that solely relies on co-occurrence statistics in the target language. The main difference with our approach lies in the plausibility measure of a translation candidate. Instead of using a &amp;quot;coherence score&amp;quot;, their method employs proximity, or inverse distance, between the two co-occurrence matrices: one from the corpus (in the target language) and the other from the translation candidate. The distance measure of two matrices given in the paper is the sum of the absolute distance of each corresponding element. This definition seems to lead the measure to be insensitive to the candidate when the co-occurrence matrix is filled with large numbers.</Paragraph>
    <Paragraph position="5"> s&amp;quot;Second order co-occurrence&amp;quot;. See (Schuetze, 1997)</Paragraph>
  </Section>
  <Section position="12" start_page="673" end_page="673" type="metho">
    <SectionTitle>
6 Concluding Remarks
</SectionTitle>
    <Paragraph position="0"> In this paper, we have presented a method for translating term-lists using mono-lingual corpora.</Paragraph>
    <Paragraph position="1"> The proposed method is evaluated by translation and re-translation experiments and showed a translation accuracy of 82% for term-lists extracted from articles ranging from business to sports.</Paragraph>
    <Paragraph position="2"> We are planning to apply the proposed method to cross-linguistic information retrieval (CLIR). Since the method does not rely on syntactic analysis, it is applicable to translating users' queries as well as translating term-lists extracted from documents.</Paragraph>
    <Paragraph position="3"> A future issue is further evaluation of the proposed method using more data and various criteria including overall performance of an application system (e.g., CLIR).</Paragraph>
  </Section>
class="xml-element"></Paper>