File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/91/p91-1024_metho.xml
Size: 28,770 bytes
Last Modified: 2025-10-06 14:12:47
<?xml version="1.0" standalone="yes"?> <Paper uid="P91-1024"> <Title>EXPERIMENTS AND PROSPECTS OF EXAMPLE-BASED MACHINE TRANSLATION</Title> <Section position="4" start_page="0" end_page="0" type="metho"> <SectionTitle> RBMT. 1 INTRODUCTION </SectionTitle> <Paragraph position="0"> Machine Translation requires handcmt~ and complicated large-scale knowledge (Nirenburg 1987).</Paragraph> <Paragraph position="1"> Conventional machine translation systems use rules as the knowledge. This framework is called Rule-Based Machine Translation (RBMT). It is difficult to scale up from a toy program to a practical system because of the problem of building such a lurge-scale rule-base. It is also difficult to improve translation performance because the effect of adding a new rule is hard to anticipate, and because translation using a large-scule rule-based system is time-consuming. Moreover, it is difficult to make use of situational or domain-specific information for translation.</Paragraph> <Paragraph position="2"> their translations) has been implemented as the knowledge (Nagao 1984; Sumita and Tsutsumi 1988; Sato and Nagao 1989; Sadler 1989a; Sumita et al. 1990a, b). The translation mechanism retrieves similar examples from the database, adapting the examples to Wanslate the new source text. This framework is called Example-Based Machine Translation (EBMT).</Paragraph> <Paragraph position="3"> This paper focuses on ATR's linguistic database of spoken Japanese with English translations. The corpus contains conversations about international conference registration (Ogura et al. 1989). Results of this study indicate that EBMT is a breakthrough in MT technology.</Paragraph> <Paragraph position="4"> Our pilot EBMT system translates Japanese noun phrases of the form '~1 x no N2&quot; into English noun phrases. About a 78% success rate on average has been achieved in the experiment, which i s considered to outperform RBMT. This rate cm be improved as discussed below.</Paragraph> <Paragraph position="5"> Section 2 explains the basic idea of EBMT.</Paragraph> <Paragraph position="6"> Section 3 discusses the broad applicability of EBMT and the advantages of integrating it with RBMT.</Paragraph> <Paragraph position="7"> Sections 4 and 5 give a rationale for section 3, i.e., section 4 illustrates the experiment of translating noun phrases of the form &quot;Nt no N2&quot; in detail, and section 5 studies other phenomena through actual dam from our corpus. Section 6 concludes this paper with detailed comparisons between RBMT and EBMT.</Paragraph> </Section> <Section position="5" start_page="0" end_page="185" type="metho"> <SectionTitle> 2 BASIC IDEA OF EBMT 2.1 BASIC FLOW </SectionTitle> <Paragraph position="0"> In this section, the basic idea of EBMT, which is general and applicable to many phenomena dealt with by machine translation, is shown.</Paragraph> <Paragraph position="1"> In order to conquer these problems in machine translation, a database of examples (pairs of source phrases, sentences, or texts and * Currently with Kyoto University Figure 1 shows the basic flow of EBMT using translation of &quot;kireru&quot;\[cut/be sharp\]. From here on, the literal English translations are bracketed.</Paragraph> <Paragraph position="2"> (1) and (2) me examples (pairs of Japanese sentences and their English translations) in the database.</Paragraph> <Paragraph position="3"> Examples similar to the Japanese input sentence are retrieved in the following manner. 
Syntactically, the input is similar to Japanese sentences (1) and (2). However, semantically, "kachou" [chief] is far from "houchou" [kitchen knife]. But "kachou" [chief] is semantically similar to "kanojo" [she] in that both are people. In other words, the input is similar to example sentence (2). By mimicking the similar example (2), we finally get "The chief is sharp".</Paragraph> <Paragraph position="4"> Although it is possible to obtain the same result by a word selection rule using fine-tuned semantic restrictions, note that the translation here is obtained by retrieving examples similar to the input.</Paragraph> </Section> <Section position="6" start_page="185" end_page="185" type="metho"> <SectionTitle> Figure 1: Basic Flow of EBMT </SectionTitle> <Paragraph position="0"> Example Database (data for "kireru" [cut / be sharp]): (1) houchou wa kireru -> The kitchen knife cuts. (2) kanojo wa kireru -> She is sharp. Input: kachou wa kireru -> The chief is sharp.</Paragraph> <Section position="1" start_page="185" end_page="185" type="sub_section"> <SectionTitle> 2.2 DISTANCE </SectionTitle> <Paragraph position="0"> Retrieving examples similar to the input is done by measuring the distance of the input to each of the examples. The smaller the distance, the more similar the example is to the input. Defining the best distance metric is a problem of EBMT that is not yet completely solved. However, one possible definition is shown in section 4.2.2.</Paragraph> <Paragraph position="1"> From the similar examples retrieved, EBMT generates the most likely translation with a reliability factor based on distance and frequency. If there is no similar example within the given threshold, EBMT tells the user that it cannot translate the input.</Paragraph> </Section> </Section> <Section position="7" start_page="185" end_page="186" type="metho"> <SectionTitle> 3 BROAD APPLICABILITY AND INTEGRATION </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="185" end_page="185" type="sub_section"> <SectionTitle> 3.1 BROAD APPLICABILITY </SectionTitle> <Paragraph position="0"> EBMT is applicable to many linguistic phenomena that are regarded as difficult to translate in conventional RBMT. Some are well known among researchers of natural language processing, and others have recently been given a great deal of attention.</Paragraph> <Paragraph position="1"> When one of the following conditions holds true for a linguistic phenomenon, RBMT is less suitable than EBMT.</Paragraph> <Paragraph position="2"> (Ca) Translation rule formation is difficult.</Paragraph> <Paragraph position="3"> (Cb) The general rule cannot accurately describe the phenomenon because it represents a special case, e.g., idioms.</Paragraph> <Paragraph position="4"> (Cc) Translation cannot be made in a compositional way from target words (Nagao 1984; Nitta 1986; Sadler 1989b).</Paragraph> <Paragraph position="5"> This is a list (not exhaustive) of phenomena in J-E translation that are suitable for EBMT:
* optional cases with a case particle ("~ de", "~ ni", ...)
* subordinate conjunctions ("~ ba ~", "~ nagara ~", "~ tara ~", ..., "~ baai ~", ...)
* noun phrases of the form "N1 no N2"
* sentences of the form "N1 wa N2 da"
* sentences lacking the main verb (e.g., sentences of the form "~ o-negaishimasu")
* fragmental expressions ("hai", "sou-desu", "wakarimashita", ...) (Furuse et al.
1990)
* modality represented by the sentence ending ("~ tainodesuga", "~ seteitadakimasu", ...) (Furuse et al. 1990)
* simple sentences (Sato and Nagao 1989)
This paper discusses a detailed experiment for "N1 no N2" in section 4 and prospects for other phenomena, "N1 wa N2 da" and "~ o-negaishimasu", in section 5.</Paragraph> <Paragraph position="6"> Similar phenomena in other language pairs can be found. For example, in Spanish to English translation, the Spanish preposition "de", with its broad usage like Japanese "no", is also effectively translated by EBMT. Likewise, in German to English translation, the German complex noun is also effectively translated by EBMT.</Paragraph> </Section> <Section position="2" start_page="185" end_page="186" type="sub_section"> <SectionTitle> 3.2 INTEGRATION </SectionTitle> <Paragraph position="0"> It is not yet clear whether EBMT can or should deal with the whole process of translation. We assume that there are many kinds of phenomena.</Paragraph> <Paragraph position="1"> Some are suitable for EBMT, while others are suitable for RBMT.</Paragraph> <Paragraph position="2"> Integrating EBMT with RBMT is expected to be useful. It would be more acceptable for users if RBMT were first introduced as a base system and its translation performance were then incrementally improved by attaching EBMT components. This is in line with the proposal in Nagao (1984). Subsequently, we proposed a practical method of integration in previous papers (Sumita et al. 1990a, b).</Paragraph> </Section> </Section> <Section position="8" start_page="186" end_page="186" type="metho"> <SectionTitle> 4 EBMT FOR "N1 no N2" 4.1 THE PROBLEM </SectionTitle> <Paragraph position="0"> "N1 no N2" is a common Japanese noun phrase form. "no" in "N1 no N2" is a Japanese adnominal particle. There are other variants, including "deno", "karano", "madeno" and so on. Roughly speaking, Japanese noun phrases of the form "N1 no N2" correspond to English noun phrases of the form "N2 of N1", as shown in the examples at the top of Figure 2.</Paragraph> <Paragraph position="1"> Japanese -> English
youka no gogo -> the afternoon of the 8th
kaigi no mokuteki -> the object of the conference
..........................................
kaigi no sankaryou -> the application fee for the conf. (?the application fee of the conf.)
kyouto deno kaigi -> the conf. in Kyoto (?the conf. of Kyoto)
isshukan no kyuka -> a week's holiday (?the holiday of a week)
mittsu no hoteru -> three hotels (*hotels of three)
Figure 2: Variations in Translation of "N1 no N2"</Paragraph> <Paragraph position="2"> However, "N2 of N1" does not always provide a natural translation, as shown in the lower examples in Figure 2. Some translations are too broad in meaning to interpret; others are almost ungrammatical. For example, the fourth one, "the conference of Kyoto", could be misconstrued as "the conference about Kyoto", and the last one, "hotels of three", is not English. Natural translations often require prepositions other than "of", or no preposition at all. In only about one-fifth of "N1 no N2" occurrences in our domain would "N2 of N1" be the most appropriate English translation.
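For illustration, pairs like those in Figure 2 might be stored with the translation pattern abstracted from the English side, where A stands for the translation of N1 and B for the translation of N2. The representation below is a hypothetical sketch, not the paper's actual data format:

    # Hypothetical storage of "N1 no N2" examples with their abstracted
    # translation patterns (A = translation of N1, B = translation of N2).
    examples = [
        (("youka", "no", "gogo"),      "B of A"),   # the afternoon of the 8th
        (("kaigi", "no", "sankaryou"), "B for A"),  # the application fee for the conf.
        (("kyouto", "deno", "kaigi"),  "B in A"),   # the conf. in Kyoto
        (("isshukan", "no", "kyuka"),  "A's B"),    # a week's holiday
        (("mittsu", "no", "hoteru"),   "A B"),      # three hotels
    ]

Even in this small sample, five different patterns appear.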
We cannot use any particular preposition as an effective default value.</Paragraph> <Paragraph position="4"> No rules for selecting the most appropriate translation for "N1 no N2" have yet been found. In other words, condition (Ca) in section 3.1 holds. Selecting the translation for "N1 no N2" is still an important and complicated problem in J-E translation.</Paragraph> <Paragraph position="5"> In contrast with the preceding research analyzing "N1 no N2" (Shimazu et al. 1987; Hirai and Kitahashi 1986), deep semantic analysis is avoided because it is assumed that translations appropriate for a given domain can be obtained using domain-specific examples (pairs of source and target expressions). EBMT has the advantage that it can directly return a translation by adapting examples without reasoning through a long chain of rules.</Paragraph> </Section> <Section position="9" start_page="186" end_page="187" type="metho"> <SectionTitle> 4.2 IMPLEMENTATION 4.2.1 OVERVIEW </SectionTitle> <Paragraph position="0"> The EBMT system consists of two databases, an example database and a thesaurus, and three translation modules: analysis, example-based transfer, and generation (Figure 3).</Paragraph> <Paragraph position="1"> Examples (pairs of source phrases and their translations) are extracted from ATR's linguistic database of spoken Japanese with English translations. The corpus contains conversations about registering for an international conference (Ogura et al. 1989). The thesaurus is used in calculating the semantic distance between the content words in the input and those in the examples. It is composed of a hierarchical structure in accordance with the thesaurus of everyday Japanese written by Ohno and Hamanishi (1984).</Paragraph> <Paragraph position="2"> Figure 4 illustrates the translation process with an actual sample. First, morphological analysis is performed on the input phrase, "kyouto [Kyoto] deno kaigi [conference]". In this case, syntactic analysis is not necessary. Second, similar examples are retrieved from the database. The top five similar examples are shown. Note that the top three examples have the same distance and that they are all translated with "in". Third, using this rationale, EBMT generates "the conference in Kyoto". 4.2.2 DISTANCE The distance metric used when retrieving examples is essential and is explained here in detail. We suppose that the input and the examples (I, E) in the database are represented in the same data structure, i.e., the list of the words' syntactic and semantic attribute values (referred to as Ii and Ei) for each phrase.</Paragraph> <Paragraph position="3"> The attributes of the current target, "N1 no N2", are as follows: 1) for the nouns "N1" and "N2": the lexical subcategory of the noun, the existence of a prefix or suffix, and its semantic code in the thesaurus; 2) for the adnominal particle "no": the kind of variant, "deno", "karano", "madeno" and so on. Here, for simplicity, only the semantic codes and the kind of adnominal particle are considered.</Paragraph> <Paragraph position="4"> Distances are calculated using the following two expressions (Sumita et al. 1990a, b):</Paragraph> <Paragraph position="5"> (1) d(I, E) = SUM_i ( d(Ii, Ei) * wi )</Paragraph> <Paragraph position="6"> (2) wi = sqrt( SUM over t.p. of ( freq. of t.p. when Ei = Ii )^2 )</Paragraph> <Paragraph position="7"> The attribute distance d(Ii, Ei) and the weight of the attribute wi are explained in the following sections. Each translation pattern (t.p.)
is abstracted from an example and is stored with the example in the example database [see Figure 6].</Paragraph> </Section> <Section position="10" start_page="187" end_page="188" type="metho"> <SectionTitle> (a) ATTRIBUTE DISTANCE </SectionTitle> <Paragraph position="0"> For the attribute of the adnominal particle "no", the distance is 0 or 1 depending on whether or not the values match exactly; for example, d("deno", "deno") = 0 and d("deno", "no") = 1. For semantic attributes, however, the distance varies between 0 and 1. The semantic distance d (0 <= d <= 1) is determined by the Most Specific Common Abstraction (MSCA) (Kolodner and Riesbeck 1989) obtained from the thesaurus abstraction hierarchy. When the thesaurus is (n+1)-layered, (k/n) is assigned to the concepts in the k-th layer from the bottom. For example, as shown with the broken line in Figure 5, the MSCA("kaigi" [conference], "taizai" [stay]) is "koudou" [actions] and the distance is 2/3. Of course, 0 is assigned when the MSCA is the bottom class, for instance, MSCA("kyouto" [Kyoto], "toukyou" [Tokyo]) = "timei" [place], or when the nouns are identical.</Paragraph> <Paragraph position="1"> (b) WEIGHT OF ATTRIBUTE The weight of the attribute is the degree to which the attribute influences the selection of the translation pattern (t.p.). We adopt expression (2), used by Stanfill and Waltz (1986) for memory-based reasoning, to implement this intuition.</Paragraph> <Paragraph position="2"> In Figure 6, all the examples whose E2 = "deno" are translated with the same preposition, "in". This implies that when E2 = "deno", E2 is an attribute which heavily influences the selection of the translation pattern. In contrast to this, the translation patterns of examples whose E1 = "timei" [place] are varied. This implies that when E1 = "timei" [place], E1 is an attribute which is less influential on the selection of the translation pattern.</Paragraph> <Paragraph position="3"> According to expression (2), the weights for the attributes of the input are computed from the frequencies of the translation patterns in the example database.</Paragraph> <Paragraph position="5"> The distance between the input and the first example shown in Figure 4 is calculated using the weights in section 4.2.2 (b), the attribute distances explained in section 4.2.2 (a), and expression (1) at the beginning of section 4.2.2.</Paragraph> <Paragraph position="6"> d("kyouto" [Kyoto] "deno" [in] "kaigi" [conference], ...)</Paragraph> <Section position="1" start_page="188" end_page="188" type="sub_section"> <SectionTitle> 4.3 EXPERIMENTS </SectionTitle> <Paragraph position="0"> The current number of words in the corpus is about 300,000 and the number of examples is 2,550.</Paragraph> <Paragraph position="1"> The collection of examples from another domain is in progress.</Paragraph> <Paragraph position="2"> In order to roughly estimate translation performance, a jackknife experiment was conducted. We partitioned the example database (2,550 examples) into groups of one hundred, then used one group as input (100) and translated it with the rest as the example database (2,450). This was repeated 25 times.</Paragraph> <Paragraph position="3"> The average success rate is 78%, the minimum 70%, and the maximum 89% [see section 4.3.4].</Paragraph> <Paragraph position="4"> It is difficult to fairly compare this result with the success rate of existing MT systems.
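Before turning to that comparison, the distance computation of section 4.2.2 can be made concrete with the following sketch. This is a hypothetical Python rendering (the reported system was written in Common Lisp); the attribute encoding such as ("timei", "deno", "koudou") and the thesaurus interface are assumptions made for illustration.

    import math
    from collections import Counter
    from typing import Callable, List, Sequence, Tuple

    # An example is (attribute values, translation pattern), e.g.
    # (("timei", "deno", "koudou"), "B in A") for "kyouto deno kaigi".
    Example = Tuple[Sequence[str], str]

    def semantic_distance(a: str, b: str,
                          msca_layer: Callable[[str, str], int], n: int) -> float:
        """k/n, where k is the layer (counted from the bottom) of the most
        specific common abstraction (MSCA) of a and b in an (n+1)-layer
        thesaurus; 0 when a and b are identical or share a bottom-layer class."""
        return msca_layer(a, b) / n

    def exact_distance(a: str, b: str) -> float:
        """For non-semantic attributes such as the adnominal particle: 0 or 1."""
        return 0.0 if a == b else 1.0

    def weight(value: str, attr: int, examples: List[Example]) -> float:
        """Expression (2): square root of the sum, over translation patterns,
        of the squared frequency of each pattern among the examples whose
        attribute attr equals the input value."""
        freqs = Counter(tp for attrs, tp in examples if attrs[attr] == value)
        return math.sqrt(sum(f * f for f in freqs.values()))

    def distance(inp: Sequence[str], ex: Sequence[str], examples: List[Example],
                 attr_dists: Sequence[Callable[[str, str], float]]) -> float:
        """Expression (1): d(I, E) = sum over i of d(Ii, Ei) * wi."""
        return sum(attr_dists[i](inp[i], ex[i]) * weight(inp[i], i, examples)
                   for i in range(len(inp)))

Applied to the input of Figure 4, such a metric makes the "deno" examples the nearest neighbours, all of which share the translation pattern "B in A".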
However, it is believed that current conventional systems can at best output the most common translation pattern, for example, "B of A", as the default. In this case, the average success rate may only be about 20%.</Paragraph> </Section> </Section> <Section position="11" start_page="188" end_page="189" type="metho"> <SectionTitle> 4.3.2 SUCCESS RATE PER NUMBER OF EXAMPLES </SectionTitle> <Paragraph position="0"> Figure 8 shows the relationship between the success rate and the number of examples. Of the twenty-five cases in the previous jackknife test, three are shown: maximum, average, and minimum. This graph shows that, in general, the more examples we have, the better the quality.</Paragraph> </Section> <Section position="12" start_page="189" end_page="189" type="metho"> <SectionTitle> 4.3.3 SUCCESS RATE PER DISTANCE </SectionTitle> <Paragraph position="0"> Figure 9 shows the relationship between the success rate and the distance between the input and the most similar examples retrieved.</Paragraph> <Paragraph position="1"> This graph shows that, in general, the smaller the distance, the better the quality.</Paragraph> <Paragraph position="2"> In other words, EBMT can assign the distance between the input and the retrieved examples as a reliability factor. The following represent successful results: (1) the noun phrase "kyouto-eki [Kyoto Station] no o-mise [store]" is translated according to the translation pattern "B at A", while the similar noun phrase "kyouto [Kyoto] no shiten [branch]" is translated according to the translation pattern "B in A"; (2) a noun phrase of the form "N1 no hou" is translated according to the translation pattern "A"; in other words, the second noun is omitted.</Paragraph> <Paragraph position="3"> We are now studying the results carefully and are striving to improve the success rate.</Paragraph> <Paragraph position="4"> (a) About half of the failures are caused by a lack of similar examples. They are easily solved by adding appropriate examples.</Paragraph> <Paragraph position="5"> (b) The rest are caused by the existence of similar examples: (1) equivalent but different examples are retrieved, for instance, those of the form "B of A" and "A B" for "roku-gatsu [June] no futsu-ka [second]". This is one of the main reasons the graphs (Figures 7 and 8) show an up-and-down pattern. These can be regarded as correct translations, or the distance calculation may be changed to handle the problem; (2) because the current distance calculation is inadequate, dissimilar examples are retrieved.</Paragraph> </Section> <Section position="13" start_page="189" end_page="190" type="metho"> <SectionTitle> 5 PHENOMENA OTHER THAN "N1 no N2" </SectionTitle> <Paragraph position="0"> This section studies the phenomena "N1 wa N2 da" and "~ o-negaishimasu" with the same corpus used in the previous section.</Paragraph> <Section position="1" start_page="189" end_page="190" type="sub_section"> <SectionTitle> 5.1 "N1 wa N2 da" </SectionTitle> <Paragraph position="0"> A sentence of the form "N1 wa N2 da" is called a "da" sentence.
Here "N1" and "N2" are nouns, "wa" is a topical particle, and "da" is a kind of verb which, roughly speaking, is the English copula "be".</Paragraph> <Paragraph position="1"> The correspondences between "da" sentences and their English equivalents are exemplified in Figure 10. Mainly, "N1 wa N2 da" corresponds to "N1 be N2", as in (a-1) - (a-4).</Paragraph> <Paragraph position="2"> However, sentences like (b) - (e) cannot be translated according to the translation pattern "N1 be N2". In example (d), there is no Japanese counterpart of "payment should be made ~ by". The English sentence has a modal, passive voice, the verb "make", and its object, "payment", while the Japanese sentence has no such correspondences. This translation cannot be made in a compositional way from target words selected from a normal dictionary. It is difficult to formulate rules for the translation and to explain how the translation is made. The conditions (Ca) and (Cc) in section 3.1 hold true.</Paragraph> <Paragraph position="3"> Conventional approaches lead to the understanding of "da" sentences using contextual and extra-linguistic information. However, many translations exist that are the result of human translators' understanding. Translation can be made by mimicking such similar examples.</Paragraph> <Paragraph position="4"> (e) N1 = saishuu-bi [final day], N2 = 10-gatsu 12-nichi [Oct. 12th] -> the conference will end on Oct. 12th (Figure 10: Examples of "N1 wa N2 da") The distribution of N1 and N2 in the examples of our corpus varies for each case. Attention should be given to 2-tuples of nouns, (N1, N2). The N2s of (a-4), (b) and (c) are similar, i.e., they all mean "prices". However, the N1s are not similar to each other. The N1s of (a-4) and (d) are similar, i.e., both mean "fee". However, the N2s are not similar to each other. Thus, EBMT is applicable.</Paragraph> </Section> <Section position="2" start_page="190" end_page="190" type="sub_section"> <SectionTitle> 5.2 "~ o-negaishimasu" </SectionTitle> <Paragraph position="0"> Sentences of the form "N o-negaishimasu" lack a main verb (see section 3.1). Examples of their English translations include: (a) may I speak to N; (b) please give me N; (c) please pay by N. Translations like (b) and (c) are not possible simply by finding substitutes in Japanese for "give me" and "pay by", respectively. The conditions (Ca) and (Cc) in section 3.1 hold. Usually, this kind of supplement is done by contextual analysis. However, the connection between the missing elements and the noun in the examples is strong enough to reuse, because it is the product of a combination of translator expertise and domain-specific restriction. Examples (a), (d) and (e) are idiomatic expressions. The condition (Cb) holds. The distribution of the noun and the particle in the examples of our corpus varies for each case, in the same way as in the "da" sentence. EBMT is applicable.</Paragraph> </Section> </Section> <Section position="14" start_page="190" end_page="190" type="metho"> <SectionTitle> 6 CONCLUDING REMARKS </SectionTitle> <Paragraph position="0"> Example-Based Machine Translation (EBMT) has been proposed. EBMT retrieves similar examples (pairs of source and target expressions), adapting them to translate a new source text.</Paragraph> <Paragraph position="1"> The feasibility of EBMT has been shown by implementing a system which translates Japanese noun phrases of the form "N1 no N2" into English noun phrases. The result of the experiment was encouraging. Broad applicability of EBMT was shown by studying the data from the text corpus.
The advantages of integrating EBMT with RBMT were also discussed. The system has been written in Common Lisp and is running on a Genera 7.2 Symbolics Lisp Machine at ATR.</Paragraph> </Section> <Section position="15" start_page="190" end_page="190" type="metho"> <SectionTitle> (1) IMPROVEMENT </SectionTitle> <Paragraph position="0"> The more elaborate RBMT becomes, the less expandable it is. Considerably complex rules concerning semantics, context, and the real world are required in machine translation. This is the notorious AI bottleneck: not only is it difficult to add a new rule to a database of rules that are mutually dependent, but it is also difficult to build such a rule database in the first place. Moreover, computation using this huge and complex rule database is so slow that it forces a developer to abandon efforts to improve the system. RBMT is not easily upgraded.</Paragraph> <Paragraph position="1"> However, EBMT has no rules, and the use of examples is relatively localized. Improvement is effected simply by inputting appropriate examples into the database. EBMT is easily upgraded, as the experiment in section 4.3.2 has shown: the more examples we have, the better the quality.</Paragraph> <Paragraph position="2"> (2) RELIABILITY FACTOR One of the main reasons users dislike RBMT systems is the so-called "poisoned cookie" problem. RBMT has no device to compute the reliability of the result. In other words, users of RBMT cannot trust any RBMT translation, because it may be wrong without any such indication from the system.</Paragraph> <Paragraph position="3"> Consider the case where all translation processes have been completed successfully, yet the result is incorrect.</Paragraph> <Paragraph position="4"> In EBMT, a reliability factor is assigned to the translation result according to the distance between the input and the similar examples found [see the experiment in section 4.3.3]. In addition to this, retrieved examples that are similar to the input convince users that the translation is accurate.</Paragraph> </Section> <Section position="16" start_page="190" end_page="191" type="metho"> <SectionTitle> (3) TRANSLATION SPEED </SectionTitle> <Paragraph position="0"> RBMT translates slowly in general because it is really a large-scale rule-based system, which consists of analysis, transfer, and generation modules using syntactic rules, semantic restrictions, structural transfer rules, word selection rules, generation rules, and so on. For example, the Mu system has about 2,000 rewriting and word selection rules for about 70,000 lexical items (Nagao et al. 1986).</Paragraph> <Paragraph position="1"> As recently pointed out (Furuse et al. 1990), conventional RBMT systems have been biased toward syntactic, semantic, and contextual analysis, which consumes considerable computing time.</Paragraph> <Paragraph position="2"> However, such deep analysis is not always necessary or useful for translation.</Paragraph> <Paragraph position="3"> In contrast with this, deep semantic analysis is avoided in EBMT because it is assumed that translations appropriate for a given domain can be obtained using domain-specific examples (pairs of source and target expressions). EBMT directly returns a translation without reasoning through a long chain of rules [see sections 2 and 4].</Paragraph> <Paragraph position="4"> There is a fear that retrieval from a large-scale example database will prove too slow.
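One common mitigation is to index the example database by a coarse key, so that only a small bucket of candidates needs an exact distance computation. The sketch below is a hypothetical illustration and is not the indexing method of Sumita and Tsutsumi (1988):

    from collections import defaultdict
    from typing import Dict, List, Sequence, Tuple

    Example = Tuple[Sequence[str], str]     # (attribute values, translation pattern)

    def build_index(examples: List[Example], key_attr: int) -> Dict[str, List[Example]]:
        """Bucket examples by one coarse attribute (e.g. the adnominal particle)."""
        index: Dict[str, List[Example]] = defaultdict(list)
        for ex in examples:
            index[ex[0][key_attr]].append(ex)
        return index

    def candidates(index: Dict[str, List[Example]], inp: Sequence[str],
                   key_attr: int, all_examples: List[Example]) -> List[Example]:
        """Return the bucket matching the input's key, falling back to a full
        scan when the bucket is empty; exact distances are then computed only
        over this subset."""
        return index.get(inp[key_attr]) or all_examples

Because the distance to each candidate is computed independently, the remaining work also parallelizes naturally.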
In the reported system, retrieval is accelerated effectively by both indexing (Sumita and Tsutsumi 1988) and parallel computing (Sumita and Iida 1991).</Paragraph> <Paragraph position="1"> These two accelerations multiply. Consequently, the computation of EBMT is acceptably efficient.</Paragraph> </Section> </Paper>