AN APPLICATION OF MONTAGUE GRAMMAR TO ENGLISH-JAPANESE MACHINE TRANSLATION 
Toyoaki NISHIDA and Shuji DOSHITA 
Dept. of Information Science 
Faculty of Engineering, Kyoto University 
Sakyo-ku, Kyoto 606, JAPAN 
ABSTRACT 
English-Japanese machine translation
requires a large amount of structural transfor-
mation at both the grammatical and the conceptual
level. In order to make its control structure
clearer and more understandable, this paper
proposes a model based on Montague Grammar. The
translation process is modeled as a data flow
computation process. Formal description tools are
developed and a prototype system is constructed.
Various problems which arise in this modeling,
and their solutions, are described. Results of
experiments are shown, and it is discussed how
far the initial goals are achieved.
I. GOAL OF INTERMEDIATE REPRESENTATION DESIGN 
Differences between English and Japanese
exist not only at the grammatical level but also
at the conceptual level. Examples are illustrated
in Fig.1. Accordingly, a large amount of trans-
formation at various levels is required in order
to obtain high quality translation. The goal of
this research is to provide a good framework for
carrying out those operations systematically.
The solution depends on the design of the inter-
mediate representation (IR). Basic requirements
on intermediate representation design are listed
below.
a) Accuracy: IR should retain the logical
content of the natural language expression. The
following distinctions, for example, should be
made at the IR level:
- partial/total negation
- any-exist/exist-any
- active/passive
- restrictive use/nonrestrictive use, etc.
In other words, the scope of operators should be
represented precisely.
GRAMMATICAL differences
a) Case Marking:
<E>: (relative position) + preposition
<J>: postposition (called JOSHI)
b) Word Order
i) simple sentence
<E>: S+V+O
<J>: S+O+V : WATASHI-WA ... -WO ...
ii) preposition vs. postposition
<E>: PREP+NP : "in the refrigerator"
<J>: NP+JOSHI
iii) order of modification
<E>: NP+POSTMODIFIER : "an apple on the box"
<J>: PRENOMINAL MODIFIER+NP : "HAKO NO UE NO RINGO"
LEXICAL differences (many-to-many word correspondences)
<E>            <J>
translate      HONYAKU SURU
interpret      KAISHAKU SURU
understand     RIKAI SURU
grasp          TSUKAMU
hold           TAMOTSU
keep           MAMORU
...
CONCEPTUAL differences
<E> her arrival makes him happy
    ... needs paraphrasing:
<J> KARE WA KANOJO GA TOUCHAKU SHITA NODE
    URESHII.
    (he becomes happy because she has arrived)
Fig.1. Examples of Differences between English
and Japanese.
<E>: English; <J>: Japanese.
b) Ability to represent semantic relations:
In English-Japanese translation, it is often the
case that a given English word must be translated
into different Japanese words or phrases if it
has more than one word meaning. But it is not
reasonable to capture this problem solely as a
problem of word meaning disambiguation in the
analysis phase; the needed depth of disambiguation
depends on the target language. So it is also
handled in the transfer phase. In general, the
meaning of a given word is recognized based on
its relation to other constituents in the sentence
or text which are semantically related to the
given word. To make this possible in the transfer
phase, IR must provide a link from a given item
to its semantically related constituents. For
example, the object of a verb should be accessible
from the verb at the IR level, even if the
relation is implicit in the surface structure
(e.g., passives, relative clauses, and their
combinations).
c) Prediction of control: given an IR expres-
sion, the model should be able to predict
explicitly what operations are to be done in what
order.
d) Lexicon driven: some sorts of transforma-
tion rules are word specific. The IR interpreta-
tion system should be designed to deal with those
word specific rules easily.
e) Computability: all processing should be
effectively computable. Any IR is useless if it
is not computable.
2. PRINCIPLE OF TRANSLATION
This section outlines our solution to the
requirements posed in the preceding section. We
employ Montague Grammar (Montague 1974, Dowty
1981) as the theoretical basis of the translation
model. The intermediate representation is designed
based on intensional logic. The intermediate
representation for a given natural language
expression is obtained by what we call functional
analysis.
2.1 Functional Analysis
In functional analysis, an input sentence is
decomposed into groups of constituents, and the
interrelationships among those groups are analyzed
in terms of function-argument relationships.
Suppose a sentence:
    I don't have a book. (1)
The functional analysis makes the following two
points:
a) (1) is decomposed as:
    "I have a book" + "not". (2)
b) In the decomposition (2), "not" is an
operator, or function, on "I have a book."
The result of this analysis can be depicted as
follows:
    "not" -> ["I have a book"] (3)
where "->" marks a function and [ ] marks its
argument. The role of "not" as a function is:
    "not" as a semantic operator:
        it negates a given proposition;
    "not" as a syntactic operator:
        it inserts an appropriate auxiliary verb
        and a lexical item "not" into the appro-
        priate position of its argument. (4)
This kind of analysis goes on further with
embedded sentences until the input is decomposed
into lexical units or even morphemes.
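The double role of "not" in (2)-(4) can be sketched as follows. This is our illustrative Python, not the paper's implementation (the prototype uses LISP-like forms), and the toy auxiliary-insertion step ignores agreement and tense:

```python
# A minimal sketch of functional analysis: "not" is a function applied to the
# proposition "I have a book", realizing the syntactic role in (4) by
# inserting the auxiliary "do" and the item "not".

def make_not():
    """Return "not" as an operator on a clause, per decomposition (2)-(4)."""
    def negate(clause):
        # syntactic role: insert an auxiliary and the lexical item "not"
        subject, verb, *rest = clause.split()
        return " ".join([subject, "do", "not", verb, *rest])
    return negate

not_op = make_not()
print(not_op("I have a book"))  # -> I do not have a book
```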
2.2 Montague Grammar as a Basic Theory
Montague Grammar (MG) gives a basis for func-
tional analysis. One of the advantages of MG
consists in its interpretation system for function
forms (or intensional logical forms). In MG, the
interpretation of an intensional logical formula
is a mapping I from intensional logical formulas
to a set theoretical domain. An important property
is that this mapping I is defined under the cons-
traint of compositionality, that is, I satisfies:
    I[f(a,b,...)] = I[f](I[a],I[b],...), (5)
without regard to what f, a, b, etc. are. This
property simplifies the control structure, and it
also specifies what operations are done in what
order.
For example, suppose the input data has a
functional structure like:
    f(a, b). (6)
For the sake of property (5), the interpretation
of (6) is done as a data flow computation process:
I[a] and I[b] are computed first, and then the
function I[f] is applied to them to yield
I[f(a,b)]. (7)
By this property, we can easily grasp the process-
ing stream. In particular, we can easily locate
trouble and the source of an abnormality when
debugging a system.
Due to the above property and others, in
particular due to its rigorous framework based on
logic, MG has been studied in the information
science field (Hobbs 1978, Friedman 1978, Yonezaki
1980, Nishida 1980, Landsbergen 1980, Moran 1982,
Moore 1981, Rosenschein 1982, ...). Application
of MG to machine translation has also been
attempted (Hauenschild 1979, Landsbergen 1982),
but those systems have only partially utilized
the power of MG. Our approach attempts to utilize
the full power of MG.
2.3 Application of Montague Grammar to
Machine Translation
In order to obtain a syntactic structure in
Japanese from an intensional logical form, in the
same way as the interpretation process of MG, we
change the semantic domain from a set theoretical
domain to a conceptual domain for Japanese. Each
conceptual unit contains its syntactic expression
in Japanese. The syntactic aspect is stressed for
generating syntactic structure in Japanese;
conceptual information is utilized for semantics-
based word choice and paraphrasing.
For example, the following function in the
Japanese syntactic domain is assigned to the
logical item "not":
    (LAMBDA (x) (SENTENCE x [AUX "NAI"])). (8)
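The effect of assignment (8) can be sketched as follows, assuming a simple nested-list encoding of phrase structures (the paper's CPS is richer): "not" wraps a sentence structure with the Japanese negative auxiliary "NAI".

```python
# A sketch of (8): the CPSF assigned to "not" in the Japanese syntactic domain.

def not_japanese(x):
    """(LAMBDA (x) (SENTENCE x [AUX "NAI"]))"""
    return ["SENTENCE", x, ["AUX", "NAI"]]

s = ["SENTENCE", "WATASHI-WA HON-WO MOTSU"]   # "I have a book"
print(not_japanese(s))
# -> ['SENTENCE', ['SENTENCE', 'WATASHI-WA HON-WO MOTSU'], ['AUX', 'NAI']]
```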
3. FORMAL TOOLS
Formal description tools have been developed
to provide a precise description of the ideas
mentioned in the last section.
3.1 Definition of Formal Tools
a) English-oriented Formal Representation (EFR)
is a version of intensional logic, and gives a
rigorous formalism for describing the results of
functional analysis. It is based on Cresswell's
lambda deep structure (Cresswell 1973). Each
expression has a uniquely defined type. A lambda
form is employed to denote a function itself.
b) Conceptual Phrase Structure (CPS) is a data
structure in which the syntactic and semantic
information of a Japanese lexical unit or phrase
structure is packed.
i) example of CPS for a lexical item:
    EIGO: [NP "EIGO" with CLASS=LANGUAGE; ...] (9)
    ; category; lexical item; conceptual info.
    ; "EIGO" means the English language.
ii) example of CPS for a phrase structure:
    [NP [ADJ "AKAI" with ...]
        [NOUN "RINGO" with ...] with ...] (10)
    ; "AKAI" means red, and "RINGO" means apple.
c) CPS Form (CPSF) is a form which denotes an
operation or function on the CPS domain. It is
used to describe mappings from EFR to CPS. The
constituents of CPSF are:
i) Constants: CPS.
ii) Variables: x, y, ...
    (indicated by lower case strings).
iii) Variables with constraints:
    e.g., (! SENTENCE x)
    ; a variable x which must be
      of category SENTENCE.
iv) Transformations:
    e.g., (+ TENSE (TENSE=PAST) x)
    ; indicator; operator name; parameters; argument.
v) CPS construction:
    e.g., <SENTENCE (x y) with ...>
    ; new category; descendants.
vi) Conditionals:
    [<condition>1 -> <CPSF>1; ...].
vii) Lambda forms:
    e.g., (LAMBDA (x) (+ PASSIVE () x)).
Using those description tools, the translation
process is modeled as a three-staged process:
stage 1 (analysis): analyzes the English
    sentence and extracts an EFR form;
stage 2 (transfer): substitutes a CPSF for
    each lexical item in the EFR form;
stage 3 (generation): evaluates the CPSF to
    get a CPS; generation of the surface structure
    from the CPS is straightforward.
The transfer-generation process for the sentence
(1) looks like:
    "I don't have a book"
        (ANALYSIS)
    not(["I have a book"])                        .. EFR ..
        (TRANSFER)
    (LAMBDA (x) {SENTENCE x [AUX "NAI"]})
      applied to [S WATASHI-WA HON-WO MOTSU]      .. CPSF ..
        (GENERATION)
    [S [S WATASHI-WA HON-WO MOTSU] [AUX "NAI"]]
      -> WATASHI-WA HON-WO MOTANAI                .. CPS ..
In order to give readers an overall pers-
pective, we illustrate an example in Fig.2: "He
does not always come late." Note that the example
includes partial negation; thus the operator
"not" is given wider scope than "always", and the
EFR is not(always(he(late(comes)))) ("it is not
the case that he always comes late"). The deriva-
tion proceeds from EFR through CPSF to CPS; prefix
notation is used for CPSF, and the syntactic
aspect is emphasized.
Fig.2. Example of Translation Process
described using Formal Tools.
In the remaining part of this section
we describe how to extract an EFR expression
from a given sentence. Then we discuss the
problem which arises in evaluating CPSF, and give
a possible solution.
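The three stages can be sketched end to end for sentence (1). This is our own simplification, with EFR as nested tuples, CPSFs as Python callables, and CPS as nested lists; the stage-1 output is assumed, not parsed:

```python
# A toy sketch of the three-staged process: analysis output (EFR), transfer
# (lexical items replaced by CPSFs), and generation (data flow evaluation).

EFR = ("not", ("have", "WATASHI", "HON"))          # stage 1 output (assumed)

# stage 2: a CPSF for each lexical item in the EFR form
CPSF = {
    "not":  lambda x: ["SENTENCE", x, ["AUX", "NAI"]],
    "have": lambda s, o: ["SENTENCE", s + "-WA", o + "-WO", "MOTSU"],
}

# stage 3: evaluate the CPSF bottom-up (data flow) to obtain a CPS
def evaluate(term):
    if isinstance(term, tuple):
        f, *args = term
        return CPSF[f](*[evaluate(a) for a in args])
    return term

print(evaluate(EFR))
# -> ['SENTENCE', ['SENTENCE', 'WATASHI-WA', 'HON-WO', 'MOTSU'], ['AUX', 'NAI']]
```

Surface generation from the resulting CPS (yielding "WATASHI-WA HON-WO MOTANAI") is then a tree walk plus morphological contraction.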
3.2 Extracting EFR Expressions from an Input Sentence
A rule for translating English into EFR form
is associated with each phrase structure rule.
For example, a rule looks like:
    NP -> DET+NOUN where <NP>=<DET>(<NOUN>) (11)
where <NP> stands for the EFR form assigned to
the NP node, etc. Rule (11) says that the EFR for
an NP is a form whose function section is the EFR
for the DET node and whose argument section is
the EFR for the NOUN node. This rule can be
incorporated into a conventional natural language
parser.
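Rule (11) is an instance of syntax-directed translation: each phrase structure rule carries an action building the parent's EFR from its children's EFRs. A sketch under our own tree encoding (the paper attaches such rules to a conventional parser):

```python
# EFR-building actions keyed by phrase structure rule, as in (11).

RULES = {
    ("NP", ("DET", "NOUN")): lambda det, noun: (det, noun),  # <NP> = <DET>(<NOUN>)
}

def build_efr(node):
    """node = (category, [children]) for phrases, (category, word) for leaves."""
    cat, body = node
    if isinstance(body, str):                    # lexical leaf: its own EFR
        return body
    kids = tuple(child[0] for child in body)
    action = RULES[(cat, kids)]                  # the rule attached to cat -> kids
    return action(*[build_efr(child) for child in body])

np = ("NP", [("DET", "no"), ("NOUN", "operand")])
print(build_efr(np))   # -> ('no', 'operand'), i.e., no(operand)
```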
3.3 Evaluation of CPSF
The evaluation process of CPSF is a sequence of
lambda conversions and tree transformations.
Evaluation of CPSF is done by a LISP-interpreter-
like algorithm. A problem which we call the higher
order problem arose in designing the evaluation
algorithm.
Higher Order Problem
By the higher order property we mean that there
exist functions which take other functions as
arguments (Henderson 1980). CPSF in fact has
this property. For example, an adjective "large"
is modeled as a function which takes a noun as
its argument:
    large(database),
    ; "large database" (12)
On the other hand, adverbs are modeled as
functions on adjectives, for example:
    very(large), extremely(large),
    comparatively(large), etc. (13)
The difficulty with higher order functions
consists in modification to a function. For expla-
nation, let our temporary goal be regeneration of
English from EFR. Suppose we assign to "large" a
lambda form like:
    (LAMBDA (x) {NOUN [ADJ "LARGE"] x}) (14)
which takes a noun and returns a complex noun by
attaching the adjective "large". If the adjective
is modified by an adverb, say "very", we have to
modify (14); we have to transform (14) into a
lambda form like:
    (LAMBDA (x)
      {NOUN [ADJ [ADV "VERY"]
                 [ADJ "LARGE"]] x}), (15)
which attaches the complex adjective "very large"
to a given noun. As is easily expected, it is
too tedious, or even impossible, to do this task
in general. Accordingly, we take an alternative
assignment instead of (14), namely:
    large <= [ADJ "LARGE"]. (16)
Since this decision causes a form:
    [ADJ "LARGE"]([NOUN "DATABASE"]) (17)
to be created in the course of evaluation, we
specify what to do in such a case. The rule is
defined as follows:
    [ADJ]([NOUN]) = [NOUN [ADJ] [NOUN]]. (18)
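The gain of (16)-(18) can be sketched as follows, in our own list encoding: since "large" is a CPS constant rather than a lambda form, an adverb can wrap it directly, and a single generic application rule covers both cases:

```python
# A sketch of assignments (16)-(18): the adjective is a constant, and the
# application rule (18) builds the complex noun when [ADJ]([NOUN]) arises.

def apply_cps(fn, arg):
    """Application rule (18): [ADJ]([NOUN]) = [NOUN [ADJ] [NOUN]]."""
    if fn[0] == "ADJ" and arg[0] == "NOUN":
        return ["NOUN", fn, arg]
    raise ValueError("no application rule for %s(%s)" % (fn[0], arg[0]))

large = ["ADJ", "LARGE"]                                   # (16)
very_large = ["ADJ", ["ADV", "VERY"], ["ADJ", "LARGE"]]    # adverb wraps the constant

print(apply_cps(large, ["NOUN", "DATABASE"]))
# -> ['NOUN', ['ADJ', 'LARGE'], ['NOUN', 'DATABASE']]
print(apply_cps(very_large, ["NOUN", "DATABASE"]))
# -> ['NOUN', ['ADJ', ['ADV', 'VERY'], ['ADJ', 'LARGE']], ['NOUN', 'DATABASE']]
```

No lambda body ever needs to be rewritten to accommodate "very", which is exactly the difficulty that (14)-(15) ran into.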
In general, evaluation of a lambda form
itself results in a function value (function as a
value). This causes a difficulty of the kind
described above. Unfortunately, we can't dispense
with lambda forms; lambda variables are needed to
link a gap and its antecedent in a relative
clause, a verb and its dependents (subject,
object, etc.), a preposition and its object, etc.
For example, in our model, a complex noun modified
by a PP:
    "block on the table" (19)
is assigned the following EFR:
    λy[(the(table))
       (λx[(((*ap(on))(y))(block))(x)])], (20)
    ; which may read: λy: [there is a uniquely
      specified object x, referred to by the NP
      "the table", such that y is a block which is
      restricted to be located on x.]
This lambda form is too complicated for the tree
transformation procedure to manipulate. So it
should be transformed into an equivalent CPS if
one exists. The type of the lambda form is known
from the context, namely one-place predicate. So
if we apply the lambda form (20) to a "known"
entity, say "it", we can obtain a sentence struc-
ture like:
    [SENTENCE [NP SORE WA]
              [PRED [NOUN [MODIFIER TSUKUE NO UE NO]
                          [NOUN BLOCK]] DEARU]]
    ; SORE WA TSUKUE NO UE NO BLOCK DEARU
      (it is a block on the table) (21)
From this result, we can infer that the lambda
form (20) is equivalent to a noun:
    [NOUN [MODIFIER TSUKUE NO UE NO]
          [NOUN BLOCK]]
    (block on the table) (22)
The extraction rule can be written as a pattern
matching rule like:
    [SENTENCE [NP SORE WA] x:NOUN DEARU] => x
    ; (it is an x) (23)
This rule is called an application rule.
Of course, this way of processing is not
desirable; it introduces extra complexity. But
this is a trade-off of employing formal seman-
tics; the same sort of processing is also done,
by rather opaque procedures, in conventional MT
systems.
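The application-rule trick for one-place-predicate lambda forms can be sketched as follows, under our simplified encoding: apply the predicate to the dummy entity "SORE" ("it"), then pattern match the resulting copular sentence as in (23) to extract the noun:

```python
# A sketch of application rule (23): convert a one-place-predicate function
# into a noun CPS by applying it to "SORE" and matching the copular frame.

def extract_noun(pred):
    """pred: a function from an NP CPS to a sentence CPS."""
    s = pred(["NP", "SORE", "WA"])                  # apply to the "known" entity
    # pattern (23): [SENTENCE [NP SORE WA] x:NOUN "DEARU"] => x
    if s[0] == "SENTENCE" and s[1] == ["NP", "SORE", "WA"] and s[3] == "DEARU":
        return s[2]
    raise ValueError("not a copular sentence")

block_on_table = lambda np: ["SENTENCE", np,
                             ["NOUN", ["MODIFIER", "TSUKUE NO UE NO"],
                                      ["NOUN", "BLOCK"]],
                             "DEARU"]

print(extract_noun(block_on_table))
# -> ['NOUN', ['MODIFIER', 'TSUKUE NO UE NO'], ['NOUN', 'BLOCK']]
```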
4. MODELING TRANSLATION PROCESS
This section illustrates how the English-
Japanese translation process is modeled using the
formal tools. Firstly, the treatment of several
basic linguistic constructions is described, and
then the mechanism for word choice is presented.
4.1 Translating Basic Constructions of English
a) Sentence: a sentence consists of an NP and a
VP. The VP is analyzed as a one-place predicate,
which constructs a proposition out of the indivi-
dual referred to by the subject. The VP is further
decomposed into an intransitive verb, or a transi-
tive verb + object. Intransitive verbs and transi-
tive verbs are analyzed as one-place predicates
and two-place predicates, respectively. A one-
place predicate is assigned a CPSF function which
generates a sentence out of an individual, and a
two-place predicate one which generates a sentence
out of a pair of individuals. Thus, a transitive
verb "constructs" is assigned a CPSF form:
    (LAMBDA (x y)
      (SENTENCE
        (+ CASE-MARKER (CASE=AGENT) x)
        (+ CASE-MARKER (CASE=OBJ) y)
        [PREDICATE [VERB "SAKUSEI-SURU"]])), (24)
    ; given two individuals, this function attaches
      to each argument a case marker (corresponding
      to JOSHI, or Japanese postfix) and then gener-
      ates a sentence structure.
The assignment (24) may be extended later to
incorporate the word choice mechanism.
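CPSF (24) can be sketched as follows in our simplified encoding; the case markers are realized as the Japanese postpositions GA (agent) and WO (object), and the argument NPs below are hypothetical, for illustration only:

```python
# A sketch of CPSF (24) for the transitive verb "constructs" (SAKUSEI-SURU).

def case_marker(case, np):
    """(+ CASE-MARKER (CASE=...) x): attach a JOSHI to an NP."""
    joshi = {"AGENT": "GA", "OBJ": "WO"}[case]
    return ["NP", np, ["JOSHI", joshi]]

def constructs(x, y):
    """(LAMBDA (x y) (SENTENCE (+CM AGENT x) (+CM OBJ y) [PRED [VERB "SAKUSEI-SURU"]]))"""
    return ["SENTENCE",
            case_marker("AGENT", x),
            case_marker("OBJ", y),
            ["PREDICATE", ["VERB", "SAKUSEI-SURU"]]]

print(constructs("KAISHA", "SYSTEM"))     # hypothetical agent/object
# -> ['SENTENCE', ['NP', 'KAISHA', ['JOSHI', 'GA']],
#     ['NP', 'SYSTEM', ['JOSHI', 'WO']],
#     ['PREDICATE', ['VERB', 'SAKUSEI-SURU']]]
```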
The treatment of NP in Montague-based semantics
is significant in that the EFR expression for an
NP is given wider scope than that for a VP. Thus
the EFR form for an NP-VP construction looks like:
    <NP>(<VP>), (25)
where <x> means the EFR form for x, x = NP, ... .
The reason is to provide an appropriate model for
English quantifiers, which are syntactically local
but semantically global. For example, the first
order logical form for a sentence:
    "this command needs no operand" (26)
looks like:
    not(there-exists x
      [needs("this-command",x) &
       operand(x)]), (27)
where the operator "not", which comes from the
determiner "no", is given wider scope than
"needs". This translation is straightforward in
our model; the following EFR is extracted from
(26):
    (this(command))
      (λx[(no(operand))(λy[needs(x,y)])]). (28)
If we make appropriate assignments including:
    no <= (LAMBDA (p)
            (LAMBDA (q)
              "not(there-exists x
                [p(x) & q(x)])")), (29)
we can get (27) from (28).
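The derivation of (27) from (28) via (29) can be sketched as follows. This is our own illustration: predicates are modeled as functions from a variable name to a formula string, and "this" is simplified to naming the individual directly:

```python
# A sketch of (28)-(29): the determiner "no" as a higher-order function
# yielding the negated existential of (27).

def no(p):
    # (29): no(p)(q) = "not(there-exists x [p(x) & q(x)])"
    return lambda q: "not(there-exists x [%s & %s])" % (p("x"), q("x"))

def this(noun):
    # simplified: apply the predicate to the named individual
    return lambda pred: pred('"this-%s"' % noun)

operand = lambda v: "operand(%s)" % v
needs = lambda x: (lambda y: "needs(%s,%s)" % (x, y))

# EFR (28): (this(command))(lambda x. (no(operand))(lambda y. needs(x,y)))
print(this("command")(lambda x: no(operand)(lambda y: needs(x)(y))))
# -> not(there-exists x [operand(x) & needs("this-command",x)])
```

Note how "no", though syntactically local to the object NP, ends up contributing the outermost operator of the formula.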
In English-Japanese machine translation,
this treatment gives an elegant solution to the
translation of prenominal negation, partial nega-
tion, etc. Since the Japanese language does not
have a syntactic device for prenominal negation,
"no" must be translated into mainly two separate
constituents: one is a RENTAISHI (Japanese deter-
miner) and the other is an auxiliary verb of nega-
tion. One possible assignment of CPSF looks like:
    no <= (LAMBDA (p)
            (LAMBDA (q)
              (+ NEG ()
                (q <NP "DONNA" (! NOUN p) "MO">)))).
    (30)
In general, the correspondence of an NP and an
individual is indirect in EFR. The association of
an NP with its referent x is indicated as follows:
    <NP>(λx[ ... x ... ]). (31)
    ; the whole form is of sentence type, and
      λx[ ... x ... ] is of one-place predicate
      type; <NP> stands for the EFR expression
      for the NP.
Most other NPs correspond to their referents
more directly. The application rule reflecting
this fact is:
    [NP]([ONE-PLACE-PRED]) = [ONE-PLACE-PRED]([NP*]),
    (32)
where [x] stands for a CPS for x.
b) Internal structure of NP: the following illus-
trates the structure of the EFR expression assigned
to an NP:
    <DET>(<MODIFIER>(...(<MODIFIER>(<NOUN>))...)).
    (33)
By <MODIFIER> we mean modification of the noun by
adjectives, prepositional phrases, infinitives,
present/past participles, etc. The translation
process is determined by the CPSF assigned to
<DET>. In the cases of "the" or "a/an", the trans-
lation process is a bit complicated. It is almost
the same as the process described in detail in
section 3: firstly the <MODIFIER>s and <NOUN> are
applied to an individual like "the thing" (the) or
"something" (a/an) and a sentence is obtained;
then a noun structure is extracted and an appro-
priate RENTAISHI, or Japanese determiner, is
attached.
c) Other cases: some other cases are illust-
rated by examples in Fig.3.
1) subordinate clause:
    "When S1, S2"
    (when(<S1>))(<S2>)
    {"TOKI"} [S1]
    [[S1] "TOKI"] [S2]
    [[S1] "TOKI" [S2]]
2) tense, aspect, modal:
    "I bought a car"
    did(<I buy a car>)
    {"TA"} "WATASHI-WA JIDOUSHA-WO KAU"
    "WATASHI-WA JIDOUSHA-WO KAU TA"
    -> KATTA
3) passive:
    " ... is broken ... "
    ... en(break) ...
    {λx λy x "-GA" y "-WO KOWASU"} ...
    {λy y "-GA KOWASARERU"} ...
    ; function "en" transforms a CPSF for
      a transitive verb into an intransitive one.
4) interrogative:
    "Do you have a car?"
    #ques(whether(<you have a car>))
    +MKSENTENCE {"KADOUKA"} "ANATA-WA JIDOUSHA-WO MOTSU"
    "ANATA-WA JIDOUSHA-WO MOTSU-KADOUKA"
    "ANATA-WA JIDOUSHA-WO MOTSU-KA"
    "Which car do you have?"
    #ques((which(car))(λy[<you have y>]))
    +MKSENTENCE {λp p("DONO-JIDOUSHA") "KA"}
                {λy "ANATA-WA" y "-WO MOTSU"}
    "ANATA-WA DONO-JIDOUSHA-WO MOTSU-KA"
    ; the indirect question is generated first,
      then it is transformed into a sentence.
Fig.3. Examples of Translation of Basic English
Constructions. <x>, {x}, [x] and "x" stand
for the EFR for x, the CPSF for x, the CPS for x,
and the CPS for the Japanese string x, respectively.
4.2 Word Choice Mechanism
In order to obtain high quality translation,
a word choice mechanism must be incorporated at
least for handling cases like:
    verb, in accordance with its object or its agent,
    adjective-noun,
    adverb-verb, and
    preposition.
Word choice is partially solved in the analysis
phase as word meaning disambiguation. So the
design problem is to determine to what degree
word sense is disambiguated in the analysis phase
and what kinds of ambiguities are left until the
transfer-generation phase. Suppose we are to
translate a given preposition. The occurrence of
a preposition is classified as:
(a) when it is governed by verbs or nouns:
    (a-1) when government is strong:
        e.g., study on, belong to, provide for;
    (a-2) when government is weak:
        e.g., buy ... at store;
(b) otherwise:
    (b-1) idiomatic:
        e.g., in particular, in addition;
    (b-2) related to its object:
        e.g., by bus, with high probability,
        without+ING.
We treat (a) and (b-1) as an analysis problem and
handle them in the analysis phase. (b-2) is more
difficult and is treated in the transfer-
generation phase, where partial semantic interpre-
tation is done.
Word choice in the transfer-generation phase is
done by using conditional expressions and the
attributive information included in CPS. For
example, a transitive verb "develop" is translated
differently according to its object:
    develop + (object: system) ... KAIHATSU-SURU
            + (object: film)   ... GENZOU-SURU. (34)
The following assignment of CPSF makes this choice
possible:
    develop
      <= (LAMBDA (x y)
           [(CLASS y)=SYSTEM ->
              {"x-GA y-WO KAIHATSU-SURU"};
            (CLASS y)=FILM ->
              {"x-GA y-WO GENZOU-SURU"};
            ... ]), (35)
    operating-system
      <= [NOUN "OS" with CLASS=SYSTEM; ...], (36)
    film
      <= [NOUN "FUIRUMU" with CLASS=FILM; ...].
    (37)
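The conditional word choice of (34)-(37) can be sketched as follows, in our simplified encoding where a CPS carries its attributes in a dict; the agent "WAREWARE" ("we") is a hypothetical argument for illustration:

```python
# A sketch of (35): the CPSF for "develop" inspects the CLASS attribute of
# its object CPS to choose the Japanese verb.

def develop(x, y):
    """Branch on (CLASS y), as in conditional CPSF (35)."""
    verb = {"SYSTEM": "KAIHATSU-SURU", "FILM": "GENZOU-SURU"}[y["CLASS"]]
    return "%s-GA %s-WO %s" % (x["lex"], y["lex"], verb)

operating_system = {"lex": "OS", "CLASS": "SYSTEM"}    # (36)
film = {"lex": "FUIRUMU", "CLASS": "FILM"}             # (37)
we = {"lex": "WAREWARE"}                               # hypothetical agent

print(develop(we, operating_system))   # -> WAREWARE-GA OS-WO KAIHATSU-SURU
print(develop(we, film))               # -> WAREWARE-GA FUIRUMU-WO GENZOU-SURU
```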
To make this type of processing possible in
cases where the deep object is moved from the
surface object position by transformations, link
information between a verb and its (deep) object
should be represented explicitly. The below shows
how it is done in the case of a relative clause.
Phrase structure (for restrictive use):
    NP: (which(λx[ ... x ... ]))(<noun>)
    ; the lambda variable x is the place holder,
      linked to the head noun.
CPSF assignment:
    which <= (LAMBDA (P) (LAMBDA (Q)
               {NOUN (+ MK-MODIFIER ()
                       (P (+ MK-NULL-NP () Q)))
                Q})).
At the EFR level, the lambda variable x is explicit-
ly used as a place holder for the gap. A functor
"which" dominates both the EFR for the embedded
sentence and that for the head noun. The CPSF
assigned to the functor "which" sends conceptual
information of the head noun to the gap as
follows: firstly it creates a null NP out of the
head noun, then the null NP is substituted into
the lambda variable for the gap.
In word choice, or semantics-based translation
in general, various kinds of transformations are
carried out on the target language structure. For
example,
    her arrival makes him happy (38)
must be paraphrased into:
    he becomes happy because she has arrived (39)
since an inanimate agent is unnatural in Japanese.
In order to retrieve the appropriate lexical item
of the target language for such a transformation,
the mutual relations among lexical items are
organized using a network formalism (the lexical
net). A node represents a lexical item, and a
link represents an association, with a specifica-
tion of what operation causes that link to be
passed through. It also contains a description of
the case transformation needed in order to map
the case structure appropriately. For example,
part of the lexical net links the verb HONYAKU
SURU ("translate"; agent: GA, source: KARA, dest:
E/NI) to the related noun HONYAKU ("translation";
agent: NO/NIYORU, obj: NO, source: KARA-NO, dest:
E-NO) and to the adjectival HONYAKU KANOU-NA
("translatable"), together with the mapping
between their case markers.
5. EXPERIMENTS
We have constructed a prototype system.
It is simpler than a practical system in that:
- it has only a limited vocabulary,
- interactive disambiguation is done instead
  of automatic disambiguation, and
- the word choice mechanism is limited to typical
  cases, since the overall definition of the rules
  has not yet been completed.
Sample texts are taken from real computer
manuals and abstracts of computer journals.
Initially, four sample texts (40 sentences) were
chosen. Currently this has been extended to 10
texts (72 sentences).
Additional features have been introduced in order
to make the system more practical.
a) Parser: declarative rules are inefficient
for dealing with sentences in real texts. The
parser uses production-type rules, each of which
is classified according to its invocation condi-
tion. The declarative rules are manually converted
into this rule type.
b) Automatic posteditor: the transfer process
defined so far concentrates on local processing.
Even if certain kinds of ambiguities are re-
solved in this phase, there still remains a
possibility that new ambiguity is introduced in
the generation phase. Instead of incorporating
into the transfer-generation phase a sophisticated
mechanism for filtering out ambiguities, we
attach a postprocessor which "reforms" any
phrase structure yielding ambiguous output. Tree-
to-tree transformation rules are utilized here.
Current results of our machine translation
system are shown in the Appendix.
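The posteditor's tree-to-tree transformation can be sketched as follows. This is our own illustration; the single rule below (flattening a doubly nested sentence) is hypothetical, purely to show the rewrite mechanism:

```python
# A sketch of tree-to-tree transformation rules applied bottom-up, as used by
# the automatic posteditor to "reform" ambiguity-prone phrase structures.

def transform(tree, rules):
    """Apply the first matching rule at each node, children first."""
    if not isinstance(tree, list):
        return tree
    tree = [tree[0]] + [transform(t, rules) for t in tree[1:]]
    for match, rewrite in rules:
        if match(tree):
            return rewrite(tree)
    return tree

# hypothetical rule: splice a nested S into its parent S
rules = [(lambda t: t[0] == "S" and isinstance(t[1], list) and t[1][0] == "S",
          lambda t: ["S"] + t[1][1:] + t[2:])]

print(transform(["S", ["S", "WATASHI-WA", "HON-WO", "MOTSU"], ["AUX", "NAI"]], rules))
# -> ['S', 'WATASHI-WA', 'HON-WO', 'MOTSU', ['AUX', 'NAI']]
```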
6. DISCUSSION
Having completed the initial experiments, it has
been shown that our framework is applicable to
real texts under plausible assumptions. The proto-
type system has a clear architecture; the central
rule interpreter contains no complicated parts.
Although several errors occurred in the implementa-
tion of the translation rules, they were easily
detected and eliminated thanks to the data flow
property.
The initial requirements for the intermediate
representation are fulfilled in the following ways:
Requirement a: precise representation based
    on intensional logic;
Requirement b: use of lambda variables and
    scope rules;
Requirement c: the data flow computing model
    based on compositionality;
Requirement d: any CPSF can be assigned
    to a given lexical item
    if its type agrees;
Requirement e: the fact that the computer model
    has been implemented.
Some essential problems are left unsolved.
1) Scope analysis: correct analysis of the scope
of words is crucial but difficult. For example,
the scope relation of an auxiliary and "not"
differs case by case:
    he can't swim
    -> not(can(<he>,<swim>)) (40)
    you should not eat the banana
    -> should(not(<eat the banana>)) (41)
    it may not be him
    -> may(not( <it=he> )) (42)
    you may not eat the banana
    -> not(may( <you eat banana>)) (43)
2) Logic vs. machine translation: the sentence
(44) is logically equivalent to (45), but that
paraphrasing is bad in machine translation:
    he reads and writes English. (44)
    he reads English and he writes English. (45)
7. CONCLUSION
The application of formal semantics to machine
translation brings about a new phase of machine
translation. It makes the translation process
clearer than in conventional systems. The theory
has been tested by implementing a prototype,
which can translate real texts with plausible
human assistance.
APPENDIX: Translation of a Sample Text.
INPUT TEXT
Ethernet is a system for local communication
among computing stations. Our experimental
Ethernet uses tapped coaxial cable to carry
variable-length digital data packets among, for
example, personal minicomputers, printing facili-
ties, large file storage devices, magnetic tape
backup stations, larger central computers, and
longer-haul communications equipment.
The shared communication facility, a branching
Ether, is passive. A station's Ethernet interface
connects bit-serially through an interface cable
to a transceiver which in turn taps into the
passing Ether. A packet is broadcast onto the
Ether, is heard by all stations, and is copied
from the Ether by destinations which select it
according to the packet's leading address bits.
This is broadcast packet switching and should be
distinguished from store-and-forward packet
switching, in which routing is performed by
intermediate processing elements. To handle the
demands of growth, an Ethernet can be extended
using packet repeaters for signal regeneration,
packet filters for traffic localization, and
packet gateways for internetwork address extension.
Control is completely distributed among stations,
with packet transmissions coordinated through
statistical arbitration. Transmissions initiated
by a station defer to any which may already be in
progress. Once started, if interference with
other packets is detected, a transmission is
aborted and rescheduled by its source station.
After a certain period of interference-free
transmission, a packet is heard by all stations
and will run to completion without interference.
Ethernet controllers in colliding stations each
generate random retransmission intervals to avoid
repeated collisions. The mean of a packet's
retransmission intervals is adjusted as a function
of collision history, to keep Ether utilization
near the optimum with changing network load.
Even when transmitted without source-detected
interference, a packet may still not reach its
destination without error: thus, packets are
delivered only with high probability. Stations
requiring a residual error rate lower than that
provided by the bare Ethernet packet transport
mechanism must follow mutually agreed upon packet
protocols.
Cited from: Metcalfe, R.M. and Boggs, D.R. (1975):
Ethernet: Distributed Packet Switching for Local
Computer Networks, CSL-75-7, Xerox.
OUTPUT TEXT
(Japanese. Translation is carried out sentence by
sentence; the result is assembled by hand. The
Japanese output is not reproducible here.)

REFERENCES

Cresswell, M.J. (1973): Logics and Languages,
Methuen and Company.

Dowty, D. et al. (1981): Introduction to Montague
Semantics, Reidel.

Friedman, J. (1978): Evaluating English Sentences
in a Logical Model, Abstract 16, COLING 78.

Hauenschild, C. et al. (1979): SALAT: Machine
Translation Via Semantic Representation, in
Bauerle et al. (eds.): Semantics From Different
Points of View, Springer-Verlag, 324-352.

Henderson, P. (1980): Functional Programming --
Application and Implementation, Prentice-Hall.

Hobbs, J.R. and Rosenschein, S.J. (1978): Making
Computational Sense of Montague's Intensional
Logic, Artificial Intelligence 9, 287-306.

Landsbergen, J. (1980): Adaptation of Montague
Grammar to the Requirements of Question
Answering, Proc. COLING 80, 211-212.

Landsbergen, J. (1982): Machine Translation Based
on Logically Isomorphic Montague Grammars,
Proc. COLING 82.

Montague, R. (1974): The Proper Treatment of
Quantification in Ordinary English, in Thomason
(ed.): Formal Philosophy, Yale University Press,
247-270.

Moore, R.C. (1981): Problems in Logical Form,
Proc. 19th Annual Meeting of the ACL, 117-124.

Moran, D.B. (1982): The Representation of
Inconsistent Information in a Dynamic Model-
Theoretic Semantics, Proc. 20th Annual Meeting
of the ACL, 16-18.

Nishida, T. and Doshita, S. (1980): Hierarchical
Meaning Representation and Analysis of
Natural Language Documents, Proc. COLING 80,
85-92.

Rosenschein, S.J. and Shieber, S.M. (1982):
Translating English into Logical Form, Proc.
20th Annual Meeting of the ACL, 1-8.

Yonezaki, H. and Enomoto, H. (1980): Database
System Based on Intensional Logic, Proc. COLING
80, 220-227.
