INVESTIGATING THE POSSIBILITY OF A MICROPROCESSOR-BASED
MACHINE TRANSLATION SYSTEM
Harold L. Somers 
Centre for Computational Linguistics 
University of Manchester Institute of Science and Technology
PO Box 88, Manchester M60 1QD, England
ABSTRACT 
This paper describes an on-going research
project being carried out by staff and students at
the Centre for Computational Linguistics to
examine the feasibility of Machine Translation
(MT) in a microprocessor environment. The system
incorporates as far as possible features of large-
scale MT systems that have proved desirable or
effective: it is multilingual, algorithms and
data are strictly separated, and the system is
highly modular. Problems of terminological
polysemy and syntactic complexity are reduced via
the notions of controlled vocabulary and
restricted syntax. Given these constraints, it
seems feasible to achieve translation via an
'interlingua', avoiding any language-pair oriented
'transfer' stage. The paper concentrates on a
description of the separate modules in the
translation process as they are currently
envisaged, and details some of the problems
specific to the microprocessor-based approach to
MT that have so far come to light.
I. BACKGROUND AND OVERVIEW
This paper describes preliminary research in
the design of Bede, a limited-syntax controlled-
vocabulary Machine Translation system to run on a
microprocessor, translating between English,
French, German and Dutch. Our experimental corpus
is a car-radio manual. Bede (named after the 7th
Century English linguist) is essentially a
research project: we are not immediately concerned
with commercial applications, though such are
clearly possible if the research proves fruitful.
Work on Bede at this stage though is primarily
experimental. The aim at the moment is to
investigate the extent to which a microprocessor-
based MT system of advanced design is possible,
and the limitations that have to be imposed in
order to achieve a working system. This paper
describes the overall system design specification
to which we are currently working.
In the basic design of the system we attempt to
incorporate as far as possible features of large-
scale MT systems that have proved to be desirable
or effective. Thus, Bede is multilingual by
design, algorithms and linguistic data are
strictly separated, and the system is designed in
more or less independent modules.
The microprocessor environment means that
considerations of size are paramount: data structures
both dynamic (created by and manipulated during
the translation process) and static (dictionaries
and linguistic rule packages) are constrained to
be as economical in terms of storage space and
access procedures as possible. Limitations on in-
core and peripheral storage are important
considerations in the system design.
In large general purpose MT systems, it is
necessary to assume that failure to translate the
given input correctly is generally not due to
incorrectly formed input, but to insufficiently
elaborated translation algorithms. This is
particularly due to two problems: the lexical
problem of choice of appropriate translation
equivalents, and the strategic problem of
effective analysis of the wide range of syntactic
patterns found in natural language. The reduction
of these problems via the notions of controlled
vocabulary and restricted syntax seems
particularly appropriate in the microprocessor
environment, since the alternative of making a
system infinitely extendable is probably not
feasible.
Given these constraints, it seems feasible to
achieve translation via an Interlingua, in which
the canonical structures from the source language
are mapped directly onto those of the target
language(s), avoiding any language-pair oriented
'transfer' stage. Translation thus takes place in
two phases: analysis of source text and synthesis
of target text.
A. Incorporation of recent design principles
Modern MT system design can be characterised by
three principles that have proved to be desirable
and effective (Lehmann et al, 1980:1-3): each of
these is adhered to in the design of Bede.
Bede is multilingual by design: early MT
systems were designed with specific language-pairs
in mind, and translation algorithms were
elaborated on this basis. The main consequence of
this was that source language analysis was
effected within the perspective of the given
target language, and was therefore often of little
or no use on the addition into the system of a
further language (cf. King, 1981:12; King &
Perschke, 1982:28).
In Bede, there is a strict separation of
algorithms and linguistic data: early MT systems
were quite simply 'translation programs', and any
underlying linguistic theory which might have been
present was inextricably bound up with the program
itself. This clearly entailed the disadvantage
that any modification of the system had to be done
by a skilled programmer (cf. Johnson, 1980:140).
Furthermore, the side-effects of apparently quite
innocent modifications were often quite far-
reaching and difficult to trace (see for example
Bostad, 1982:130). Although this has only
recently become an issue in MT (e.g. Vauquois,
1979:1.3; 1981:10), it has of course for a long
time been standard practice in other areas of
knowledge-based programming (Newell, 1973; Davis &
King, 1977).
The third principle now current in MT and to be 
incorporated in Bede is that the translation 
process should be modular. This approach was a 
feature of the earliest 'second generation' 
systems (cf. Vauquois, 1975:33), and is
characterised by the general notion that any 
complicated computational task is best tackled by 
dividing it up into smaller more or less 
independent sub-casks which communicate only by 
means of a strictly defined interface protocol 
(Aho et al, 1974). This is typically achieved in 
the MT environment by a gross division of the
translation process into analysis of source 
language and synthesis of target language, 
possibly with an intermediate transfer stage (see
I.D below), with these phases in turn sub-divided,
for example into morphological, lexical and 
syntactico-semantic modules. This modularity may
be reflected both in the linguistic organisation 
of the translation process and in the provision of 
software devices specifically tailored to the 
relevant sub-task (Vauquois, 1975:33). This is 
the case in Bede, where for each sub-task a 
grammar interpreter is provided which has the 
property of being no more powerful than necessary 
for the task in question. This contrasts with the 
approach taken in TAUM-Météo (TAUM, 1973), where a
single general-purpose device (Colmerauer's (1970) 
'Q-Systems') is provided, with the associated
disadvantage that for some 'simple' tasks the 
superfluous power of the device means that 
processes are seriously uneconomical. Bede 
incorporates five such 'grammar types' with 
associated individual formalisms and processors: 
these are described in detail in the second half 
of this paper. 
B. The microprocessor environment
It is in the microprocessor basis that the
principal interest in this system lies, and, as
mentioned above, the main concern is the effects
of the restrictions that the environment imposes.
Development of the Bede prototype is presently
taking place on Z80-based machines which provide
64k bytes of in-core memory and 720k bytes of
peripheral store on two 5-1/4" double-sided
double-density floppy disks. The intention is
that any commercial version of Bede would run on
more powerful processors with larger address
space, since we feel that such machines will soon
rival the popularity of the less powerful Z80's as
the standard desk-top hardware. Programming so
far has been in Pascal/M (Sorcim, 1979), a Pascal
dialect closely resembling UCSD Pascal, but we are
conscious of the fact that both C (Kernighan &
Ritchie, 1978) and BCPL (Richards & Whitby-
Strevens, 1979) may be more suitable for some of
the software elements, and do not rule out
completing the prototype in a number of languages.
This adds the burden of designing compatible data-
structures and interfaces, and we are currently
investigating the relative merits of these
languages. Portability and efficiency seem to be
in conflict here.
Microprocessor-based MT contrasts sharply with
the mainframe-based activity, where the
significance of problems of economy of storage and
efficiency of programs has decreased in recent
years. The possibility of introducing an element
of human interaction with the system (cf. Kay,
1980; Melby, 1981) is also highlighted in this
environment. Contrast systems like SYSTRAN (Toma,
1977) and GETA (Vauquois, 1975, 1979; Boitet &
Nedobejkine, 1980) which work on the principle of
large-scale processing in batch mode.
Our experience so far is that economy and
efficiency in data-structure design and in the
elaboration of interactions between programs and
data and between different modules are of paramount
importance. While it is relatively evident that
large-scale MT can be simulated in the
microprocessor environment, the cost in real time
is tremendous: entirely new design and
implementation strategies seem to be called for.
The ancient skills of the programmer that have
become eroded by the generosity afforded by modern
mainframe configurations become highly valued in
this microprocessor application.
C. Controlled vocabulary and restricted syntax
The state of the art of language processing is
such that the analysis of a significant range of
syntactic patterns has been shown to be possible,
and by means of a number of different approaches.
Research in this area nowadays is concentrated on
the treatment of more problematic constructions
(e.g. Marcus, 1980). This observation has led us
to believe that a degree of success in a small-
scale MT project can be achieved via the notion of
restricting the complexity of acceptable input, so
that only constructions that are sure to be
correctly analysed are permitted. This notion of
restricted syntax has been tried with some
success in larger systems (cf. Elliston, 1979;
Lawson, 1979:81f; Somers & McNaught, 1980:49),
resulting both in more accurate translation, and
in increased legibility from the human point of
view. As Elliston points out, the development of
strict guidelines for writers leads not only to
the use of simpler constructions, but also to the
avoidance of potentially ambiguous text. In
either case, the benefits for MT are obvious.
Less obvious however is the acceptability of such
constraints; yet 'restricted syntax' need not
imply 'baby talk', and a reasonably extensive
range of constructions can be included.
Just as problems of syntactic analysis can be
alleviated by imposing some degree of control over
the syntactic complexity of the input, so the
corresponding problem of lexical disambiguation
that large-scale MT systems are faced with can be
eased by the notion of controlled vocabulary. A
major problem for MT is the choice of appropriate
translation equivalents at the lexical level, a
choice often determined by a variety of factors at
all linguistic levels (syntax, semantics,
pragmatics). In the field of multilingual
terminology, this problem has been tackled via the
concept of terminological equivalence (Wüster,
1971): for a given concept in one language, a
translation in another language is established,
these being considered by definition to be in one-
to-one correspondence. In the case of Bede, where
the subject-matter of the texts to be translated
is fixed, such an approach for the 'technical
terms' in the corpus is clearly feasible; the
notion is extended as far as possible to general
vocabulary as well. For each concept a single
term only is permitted, and although the resulting
style may appear less mature (since the use of
near synonyms for the sake of variety is not
permitted), the problems described above are
somewhat alleviated. Polysemy is not entirely
avoidable, but if reduced to a bare minimum, and
permitted only in specific and acknowledged
circumstances, the problem becomes more easily
manageable.
D. Interlingua
A significant dichotomy in MT is between the
'transfer' and 'interlingua' approaches. The
former can be characterised by the use of
bilingual transfer modules which convert the
results of the analysis of the source language
into a representation appropriate for a specific
target language. This contrasts with the
interlingua approach in which the result of
analysis is passed directly to the appropriate
synthesis module.
It is beyond the scope of the present paper to
discuss in detail the relative merits of the two
approaches (see Vauquois, 1975:142ff; Hutchins,
1978). We should however consider some of the
major obstacles inherent in the interlingua
approach.
The development of an Interlingua for various
purposes (not only translation) has been the
subject of philosophical debate for some years,
and proposals for MT have included the use of
formalized natural language (e.g. Mel'čuk, 1974;
Andreev, 1967), artificial languages (like
Esperanto), or various symbolic representations,
whether linear (e.g. Bülting, 1961) or otherwise
(e.g. Wilks, 1973). Most of these approaches are
problematic however (for a thorough discussion of
the interlingua approach to MT, see Otten & Pacak
(1971) and Barnes (1983)). Nevertheless, some
interlingua-based MT systems have been developed
to a considerable degree: for example, the
Grenoble team's first attempts at MT took this
approach (Veillon, 1968), while the TITUS system
still in use at the Institut Textile de France
(Ducrot, 1972; Zingel, 1978) is claimed to be
interlingua-based.
It seems that it can be assumed a priori that
an entirely language-independent theoretical
representation of a given text is for all
practical purposes impossible. A more realistic
target seems to be a representation in which
significant syntactic differences between the
languages in question are neutralized, so that the
best one can aim for is a languages-specific (sic)
representation. This approach implies the
definition of an Interlingua which takes advantage
of anything the languages in the system have in
common, while accommodating their idiosyncrasies.
This means that for a system which involves
several fairly closely related languages the
interlingua approach is at least feasible, on the
understanding that the introduction of a
significantly different type of language may
involve the complete redefinition of the
Interlingua (Barnes, 1983). From the point of
view of Bede, then, the common base of the
languages involved can be used to great advantage.
The notion of restricted syntax described above
can be employed to filter out constructions that
cause particular problems for the chosen
Interlingua representation.
There remains however the problem of the
representation of lexical items in the
Interlingua. Theoretical approaches to this
problem (e.g. Andreev, 1967) seem quite
unsatisfactory. But the notion of controlled
vocabulary seems to offer a solution. If a one-
to-one equivalence of 'technical' terms can be
achieved, this leaves only a relatively small area
of vocabulary for which an interlingual
representation must be devised. It seems
reasonable, on a small scale, to treat general
vocabulary in an analogous way to technical
vocabulary, in particular treating lexical items
in one language that are ambiguous with respect to
any of the other languages as 'homographs'. Their
'disambiguation' must take place in Analysis, as
there is no bilingual 'Transfer' phase, and
Synthesis is purely deterministic. While this
approach would be quite unsuitable for a large-
scale general purpose MT system, in the present
context - where the problem can be minimised - it
seems to be a reasonable approach.
Our own model for the Bede Interlingua has not
yet been finalised. We believe this to be an area
for research and experimentation once the system
software has been more fully developed. Our
current hypothesis is that the Interlingua will
take the form of a canonical representation of the
text in which valency-boundness and (deep) case
will play a significant role. Sentential features
such as tense and aspect will be captured by a
'universal' system of values for the languages
involved. This conception of an Interlingua
clearly falls short of the language-independent
pivot representation typically envisaged (cf.
Boitet & Nedobejkine, 1980:2), but we hope to
demonstrate that it is sufficient for the
languages in our system, and that it could be
adapted without significant difficulties to cater
for the introduction of other (related) Western
European languages. We feel that research in this
area will, when the time comes, be a significant
and valuable by-product of the project as a whole.
II. DESCRIPTION OF THE SYSTEM DESIGN 
In this second half of the paper we present a 
description of the translation process in Bede, as 
it is currently envisaged. The process is divided 
broadly into two parts, analysis and synthesis, 
the interface between the two being provided by 
the Interlingua. The analysis module uses a 
Chart-like structure (cf. Kaplan, 1973) and a 
series of grammars to produce from the source text 
the Interlingua tree structure which serves as
input to synthesis, where it is rearranged into a 
valid surface structure for the target language. 
The 'translation unit' (TU) is taken co be the 
sentence, or equivalent (e.g. section heading, 
title, figure caption). Full details of the rule 
formalisms are given in Somers (1981).
A. String segmentation
The TU is first subjected to a two-stage
string-segmentation and 'lemmatisation' analysis.
In the first stage it is compared word by word
with a 'stop-list' of frequently occurring words
(mostly function words); words not found in the
stop-list undergo string-segmentation analysis,
again on a word by word basis. String-
segmentation rules form a finite-state grammar of
affix-stripping rules ('A-rules') which handle
mostly inflectional morphology. The output is a
Chart with labelled arcs indicating lexical unit
(LU) and possible interpretation of the stripped
affixes, this 'hypothesis' to be confirmed by
dictionary look-up. By way of example, consider
(1), a possible French rule, which takes any word
ending in -issons (e.g. finissons or hérissons)
and constructs an arc on the Chart recording the
hypothesis that the word is an inflected form of
an '-ir' verb (i.e. finir or *hérir).
(1) V + "-ISSONS" -> V + "-IR"
    [PERS=1 & NUM=PLUR & TENSE=PRES & MOOD=INDIC]
At the end of dictionary look-up, a temporary 
'sentence dictionary' is created, consisting of 
copies of the dictionary entries for (only) those 
LUs found in the current TU. This is purely an 
efficiency measure. The sentence dictionary may 
of course include entries for homographs which 
will later be rejected. 
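By way of illustration only, the effect of an A-rule such as (1) can be sketched as follows; this is a hedged Python sketch of the general idea, and the rule table, feature names and function are our own illustrative assumptions, not Bede's actual formalism or code.

```python
# Illustrative sketch of affix-stripping in the spirit of A-rule (1).
A_RULES = [
    # (surface ending, hypothesised lemma ending, feature hypothesis)
    ("issons", "ir", {"PERS": 1, "NUM": "PLUR",
                      "TENSE": "PRES", "MOOD": "INDIC"}),
]

def segment(word):
    """Return (lemma, features) hypotheses for one word, to be recorded
    as labelled arcs on the Chart and later confirmed or rejected by
    dictionary look-up."""
    hypotheses = []
    for ending, lemma_ending, feats in A_RULES:
        if word.endswith(ending):
            lemma = word[: -len(ending)] + lemma_ending
            hypotheses.append((lemma, feats))
    return hypotheses

# finissons yields the lemma "finir"; hérissons yields the spurious
# lemma "*hérir", which dictionary look-up would then reject.
```

The point of the sketch is that the A-rule stage itself is deliberately dumb: it over-generates hypotheses cheaply, leaving the dictionary to filter them.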
B. Structural analysis 
1. 'P-rules'
The chart then undergoes a two-stage structural 
analysis. In the first stage, context-sensitive
augmented phrase-structure rules ('P-rules') work 
towards creating a single arc spanning the entire 
TU. Arcs are labelled with appropriate syntactic 
class and syncactico-semantic feature information 
and a trace of the lower arcs which have been 
subsumed from which the parse tree can be simply 
extracted. The trivial P-rule (2) is provided as
an example.
(2) <NUM(DET)=NUM(N) & GDR(DET).INT.GDR(N)>
    DET + N -> NP
    <GDR(NP):=GDR(N) & NUM(NP):=NUM(N)>
P-rules consist of 'condition stipulations', a 
'geometry', and 'assignment stipulations'. The 
nodes of the Chart are by default identified by 
the value of the associated variable CLASS, though 
it is also possible to refer to a node by a local 
variable name and test for or assign the value of 
CLASS in the stipulations. Our rule formalisms 
are quite deliberately designed to reflect the 
formalisms of traditional linguistics. 
This formalism allows experimentation with a 
large number of different context-free parsing 
algorithms. We are in fact still experimenting in 
this area. For a similar investigation, though on 
a machine with significantly different time and 
space constraints, see Slocum (1981). 
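To make the condition/geometry/assignment division concrete, the effect of P-rule (2) on two adjacent Chart arcs can be sketched as below. The arc representation as Python dicts is our own assumption for exposition, not Bede's data structure.

```python
# A hedged sketch of P-rule (2): a DET arc followed by an N arc is
# subsumed under a new NP arc when number agrees and the gender sets
# intersect.
def apply_np_rule(det, n):
    """Return an NP arc spanning det and n, or None if the
    condition stipulations fail."""
    if det["NUM"] != n["NUM"]:
        return None
    if not set(det["GDR"]) & set(n["GDR"]):   # genders must intersect
        return None
    # Assignment stipulations, plus a trace of the subsumed arcs
    # from which the parse tree can later be extracted.
    return {"CLASS": "NP", "GDR": n["GDR"], "NUM": n["NUM"],
            "trace": [det, n]}

det = {"CLASS": "DET", "LU": "le", "NUM": "SING", "GDR": ["MASC"]}
n   = {"CLASS": "N", "LU": "poste", "NUM": "SING", "GDR": ["MASC"]}
np  = apply_np_rule(det, n)
```

A parsing algorithm then simply tries such rule functions over adjacent arcs until a single arc spans the whole TU.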
2. 'T-rules' 
In the second stage of structural analysis, the 
tree structure implied by the labels and traces on 
these arcs is disjoined from the Chart and
undergoes general tree-to-tree transductions as
described by 'T-rules', resulting in a single tree 
structure representing the canonical form of the 
TU. 
The formalism for the T-rules is similar to
that for the P-rules, except in the geometry part, 
where tree structures rather than arc sequences 
are defined. Consider the necessarily more 
complex (though still simplified) example (3),
which regularises a simple English passive. 
(3) <LU(AUX)="BE" & PART(V)=PASTPART &
     LU(PREP)="BY" & CASE(NP{2})=AGENT>
    S(NP{1} + AUX + V + NP{2}(PREP + $)) ->
    S(NP{2}($) + V + NP{1})
    <DSF(NP{2}):=DSUJ & VOICE(V):=PASSV &
     DSF(NP{1}):=DOBJ>
Notice the necessity to 'disambiguate' the two
NPs via curly-bracketted disambiguators; the
possibility of defining a partial geometry via the
'dummy' symbol ($); and how the AUX and PREP are
eliminated in the resulting tree structure.
Labellings for nodes are copied over by default
unless specifically suppressed.
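The transduction that rule (3) describes can be sketched as a function over nested trees. The encoding of trees as Python dicts and the shape of the example sentence are our own illustrative assumptions; only the feature names (DSF = deep syntactic function, etc.) follow the rule text.

```python
# A hedged sketch of T-rule (3) as a tree-to-tree transduction.
def regularise_passive(s):
    """Rewrite S(NP1 + AUX + V + NP2(PREP + $)) as S(NP2($) + V + NP1)."""
    if len(s["children"]) != 4:
        return s
    np1, aux, v, np2 = s["children"]
    if not (aux.get("LU") == "BE" and v.get("PART") == "PASTPART"
            and np2.get("CASE") == "AGENT"
            and np2["children"][0].get("LU") == "BY"):
        return s                                  # conditions fail: no change
    # Assignment stipulations; AUX and PREP are eliminated.
    np2 = dict(np2, children=np2["children"][1:], DSF="DSUJ")
    v = dict(v, VOICE="PASSV")
    np1 = dict(np1, DSF="DOBJ")
    return {"CLASS": "S", "children": [np2, v, np1]}

# e.g. "the radio was made by Bosch" (structure assumed for illustration)
passive = {"CLASS": "S", "children": [
    {"CLASS": "NP", "LU": "radio"},
    {"CLASS": "AUX", "LU": "BE"},
    {"CLASS": "V", "LU": "make", "PART": "PASTPART"},
    {"CLASS": "NP", "CASE": "AGENT",
     "children": [{"CLASS": "PREP", "LU": "BY"},
                  {"CLASS": "NP", "LU": "Bosch"}]},
]}
canonical = regularise_passive(passive)
```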
With source-language LUs replaced by unique
multilingual-dictionary addresses, this canonical
representation is the Interlingua which is passed
for synthesis into the target language(s).
C. Synthesis 
Assuming the analysis has been correctly 
performed, synthesis is a relatively straight- 
forward deterministic process. Synthesis 
commences with the application of further T-rules 
which assign new order and structure to the
Interlingua as appropriate. The synthesis T-rules 
for a given language can be viewed as analogues of
the T-rules that are used for analysis of that
language, though it is unlikely that for synthesis
the analysis rules could be simply reversed.
Once the desired structure has been arrived at, 
the trees undergo a series of context-sensitive 
rules used to assign mainly syntactic features to
the leaves ('L-rules'), for example for the 
purpose of assigning number and gender concord 
(etc.). The formalism for the L-rules is again
similar to that for the P-rules and T-rules, the
geometry part this time defining a single tree
structure with no structural modification 
implied. A simple example for German is provided 
here (4). 
(4) <SF(NP)=SUBJ>
    NP(DET + N)
    <CASE(DET):=NOM & CASE(N):=NOM &
     NUM(DET):=NUM(NP) & GDR(DET):=GDR(N)>
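Since L-rules leave the structure untouched and only decorate the leaves, rule (4) can be sketched as a simple in-place feature assignment. The tree encoding and the German example NP are our own assumptions; the feature names follow the rule.

```python
# A hedged sketch of L-rule (4): case, number and gender concord is
# assigned to the leaves of an NP judged to be the subject.
def assign_subject_concord(np):
    """Apply rule (4) in place when the condition SF(NP)=SUBJ holds."""
    if np.get("SF") != "SUBJ":
        return np
    det, n = np["children"]
    det["CASE"] = n["CASE"] = "NOM"     # nominative case on both leaves
    det["NUM"] = np["NUM"]              # number copied down from the NP
    det["GDR"] = n["GDR"]               # gender concord with the noun
    return np

np = {"CLASS": "NP", "SF": "SUBJ", "NUM": "SING",
      "children": [{"CLASS": "DET", "LU": "der"},
                   {"CLASS": "N", "LU": "Empfänger", "GDR": "MASC"}]}
assign_subject_concord(np)
```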
The list of labelled leaves resulting from the
application of L-rules is passed to morphological
synthesis (the superior branches are no longer
needed), where a finite-state grammar of
morphographemic and affixation rules ('M-rules')
is applied to produce the target string. The
formalism for M-rules is much less complex than
the A-rule formalism, the grammar being again
straightforwardly deterministic. The only taxing
requirement of the M-rule formalism (which, at the
time of writing, has not been finalised) is that
it must permit a wide variety of string
manipulations to be described, and that it must
define a transparent interface with the
dictionary. A typical rule for French for example
might consist of stipulations concerning
information found both on the leaf in question and
in the dictionary, as in (5).
(5) leaf info.: CLASS=V; TENSE=PRES; NUM=SING;
    PERS=3; MOOD=INDIC
    dict. info.: CONJ(V)=IRREG
    assign: Affix "-T" to STEM1(V)
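The deterministic flavour of M-rules can be sketched as below; the rule table, the dictionary entry and the stem numbering are our own illustrative assumptions, not the (unfinalised) Bede formalism.

```python
# A hedged sketch of M-rule (5): affix "-t" to the first stem of an
# irregular verb in the 3rd person singular present indicative.
M_RULES = [
    # (stipulations on combined leaf + dictionary info, affixation action)
    ({"CLASS": "V", "TENSE": "PRES", "NUM": "SING",
      "PERS": 3, "MOOD": "INDIC", "CONJ": "IRREG"},
     lambda entry: entry["STEM1"] + "t"),
]

def synthesise(leaf, entry):
    """Apply the first M-rule whose stipulations all match the
    combined leaf and dictionary information."""
    info = dict(leaf, CONJ=entry["CONJ"])
    for conditions, action in M_RULES:
        if all(info.get(k) == v for k, v in conditions.items()):
            return action(entry)
    return entry["STEM1"]               # default: bare stem

leaf = {"CLASS": "V", "TENSE": "PRES", "NUM": "SING",
        "PERS": 3, "MOOD": "INDIC"}
entry = {"CONJ": "IRREG", "STEM1": "fai"}   # e.g. faire -> "fait"
```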
D. General comments on system design 
The general modularity of the system will have 
been quite evident. A key factor, as mentioned 
above, is that each of these grammars is just
powerful enough for the task required of it: thus
no computing power is 'wasted' at any of the 
intermediate stages. 
At each interface between grammars only a small 
part of the data structures used by the donating 
module is required by the receiving module. The 
'unwanted' data structures are written to 
peripheral store to enable recovery of partial
structures in the case of failure or
mistranslation, though automatic backtracking to 
previous modules by the system as such is not 
envisaged as a major component. 
The 'static' data used by the system consist of
the different sets of linguistic rule packages,
plus the dictionary. The system essentially has
one large multilingual dictionary from which
numerous software packages generate various
subdictionaries as required either in the
translation process itself, or for lexicographers
working on the system. Alphabetical or other
structured language-specific listings can be 
produced, while of course dictionary updating and 
editing packages are also provided. 
The system as a whole can be viewed as a
collection of Production Systems (PSs) (Newell,
1973; Davis & King, 1977; see also Ashman (1982)
on the use of PSs in MT) in the way that the rule
packages (which, incidentally, as an efficiency
measure, undergo separate syntax verification and
'compilation' into interpretable 'code') operate
on the data structure. The system differs from
the classical PS setup in distributing its static
data over two databases: the rule packages and the
dictionary. The combination of the rule packages
and the dictionary, the software interfacing
these, and the rule interpreter can however be
considered as analogous to the rule interpreter of
a classical PS.
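The classical PS control cycle alluded to here can be sketched in a few lines: match condition-action rules against a working data structure and fire until quiescence. This is a minimal sketch of the general idea only; Bede's compiled rule packages and their interpreters are of course far more elaborate, and the rule and data shapes below are our own assumptions.

```python
# A minimal production-system loop of the kind described by Newell
# (1973) and Davis & King (1977).
def run_production_system(rules, data):
    """Apply (condition, action) pairs to `data` until no rule fires."""
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            if condition(data):
                action(data)
                fired = True
                break       # conflict resolution: first matching rule wins
    return data
```

For instance, a rule whose condition tests a counter and whose action increments it will fire repeatedly until the condition fails, at which point the system is quiescent.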
III. CONCLUSION
As an experimental research project, Bede 
provides us with an extremely varied range of 
computational linguistics problems, ranging from 
the principally linguistic task of rule-writing, 
to the essentially computational work of software 
implementation, with lexicography and terminology
playing their part along the way. 
But we hope too that Bede is more than an
academic exercise, and that we are making a 
significant contribution to applied computational
linguistics research.
IV. ACKNOWLEDGEMENTS
I present this paper only as spokesman for a
large group of people who have worked, are
working, or will work on Bede. Therefore I would
like to thank colleagues and students at C.C.L.,
past, present, and future for their work on the
project, and in particular Rod Johnson, Jock
McNaught, Pete Whitelock, Kieran Gilby, Toni
Barnes, Paul Bennett and Beverley Ashman for help
with this write-up. I of course accept
responsibility for any errors that slipped through
that tight net.

REFERENCES 

Aho, Alfred V., John E. Hopcroft & Jeffrey D.
Ullman. The design and analysis of computer
algorithms. Reading, Mass.: Addison-Wesley,
1974.

Andreev, N.D. The intermediary language as the
focal point of machine translation. In A.D.
Booth (ed), Machine Translation, Amsterdam:
North-Holland, 1967, 1-27.

Ashman, Beverley D. Production Systems and their
application to Machine Translation: Transfer
Report (CCL/UMIST Report No. 82/01). Manchester:
Centre for Computational Linguistics, University
of Manchester Institute of Science and
Technology, 1982.

Barnes, Antonia M.N. An investigation into the 
syntactic structures of abstracts, and the 
definition of an 'interlingua' for their 
translation by machine. MSc thesis. Centre for 
Computational Linguistics, University of 
Manchester Institute of Science and Technology, 
1983. 

Boitet, C. & N. Nedobejkine. Russian-French at
GETA: Outline of the method and detailed example
(Rapport de Recherche No. 219). Grenoble: GETA,
1980.

Bülting, Rudolph. A double intermediate language
for Machine Translation. In Allen Kent (ed),
Information Retrieval and Machine Translation,
Part 2 (Advances in Documentation and Library
Science, Volume III), New York: Interscience,
1961, 1139-1144.

Bostad, Dale A. Quality control procedures in
modification of the Air Force Russian-English MT 
system. In Veronica Lawson (ed), Practical 
Experience of Machine Translation, Amsterdam: 
North-Holland, 1982, 129-133. 

Colmerauer, Alain. Les systèmes-Q: ou un
formalisme pour analyser et synthétiser des
phrases sur ordinateur (Publication interne no.
43). Montréal: Projet de Traduction Automatique
de l'Université de Montréal, 1970.

Davis, Randall & Jonathan King. An overview of
Production Systems. In E.W. Elcock & D. Michie
(eds), Machine Intelligence Volume 8: Machine
representation of knowledge, New York: Halsted,
1977, 300-332.

Ducrot, J.M. Research for an automatic
translation system for the diffusion of
scientific and technical textile documentation
in English speaking countries: Final report.
Boulogne-Billancourt: Institut Textile de
France, 1972.

Kernighan, Brian W. & Dennis M. Ritchie. The C
programming language. Englewood Cliffs, NJ:
Prentice-Hall, 1978.

King, M. EUROTRA - a European system for machine
translation. Lebende Sprachen, 1981, 26:12-14.

King, M. & S. Perschke. EUROTRA and its object-
ives. Multilingua, 1982, 1:27-32.

Lawson, Veronica. Tigers and polar bears, or:
translating and the computer. The Incorporated
Linguist, 1979, 18:81-85.

Lehmann, Winfred P., Winfield S. Bennett, Jonathan
Slocum, Howard Smith, Solveig M.V. Pfluger &
Sandra A. Eveland. The METAL system (RADC-TR-
80-374). Austin, TX: Linguistics Research
Center, University of Texas, 1980.

Marcus, Mitchell P. A theory of syntactic 
recognition for natural language, Cambridge, MA: 
MIT Press, 1980. 

Melby, Alan K. Translators and machines - can 
they cooperate? META, 1981, 26:23-34. 

Mel'čuk, I.A. Grammatical meanings in
interlinguas for automatic translation and the
concept of grammatical meaning. In V. Ju.
Rozencvejg (ed), Machine Translation and Applied
Linguistics, Volume I, Frankfurt am Main:
Athenaion Verlag, 1974, 95-113.

Newell, A. Production systems: Models of control
structures. In William G. Chase (ed), Visual
information processing, New York: Academic
Press, 1973, 463-526.

Otten, Michael & Milos G. Pacak. Intermediate
languages for automatic language processing. In
Julius T. Tou (ed), Software Engineering: COINS
III, Volume 2, New York: Academic Press, 1971,
105-118.

Elliston, John S.G. Computer aided translation: a
business viewpoint. In Barbara M. Snell (ed),
Translating and the computer, Amsterdam: North-
Holland, 1979, 149-158.

Johnson, Rod. Contemporary perspectives in
machine translation. In Suzanne Hanon & Viggo
Hjørnager Pedersen (eds), Human translation
machine translation (Noter og Kommentarer 39).
Odense: Romansk Institut, Odense Universitet,
1980, 134-147.

Hutchins, W.J. Machine translation and machine
aided translation. Journal of Documentation, 
1978, 34:119-159. 

Kaplan, Ronald N. A general syntactic processor.
In Randall Rustin (ed), Natural Language
Processing (Courant Computer Science Symposium
8), New York: Algorithmics Press, 1973, 193-241.

Kay, Martin. The proper place of men and machines
in language translation (Report no. CSL-80-11).
Palo Alto, CA: Xerox, 1980.

Richards, Martin & Colin Whitby-Strevens. BCPL -
the language and its compiler. Cambridge:
Cambridge University Press, 1979.

Slocum, Jonathan. A practical comparison of
parsing strategies for Machine Translation and
other Natural Language Processing purposes. PhD
dissertation, University of Texas at Austin,
1981. [= Technical Report NL-41, Department of
Computer Sciences, The University of Texas,
Austin, TX.]

Somers, H.L. Bede - the CCL/UMIST Machine
Translation system: Rule-writing formalism (3rd
revision) (CCL/UMIST Report No. 81/5).
Manchester: Centre for Computational
Linguistics, University of Manchester Institute
of Science and Technology, 1981.

Somers, H.L. & J. McNaught. The translator as
computer user. The Incorporated Linguist, 1980,
19:49-53.

Sorcim. Pascal/M user's reference manual. Walnut
Creek, CA: Digital Marketing, 1979.

TAUM. Le système de traduction automatique de
l'Université de Montréal (TAUM). META, 1973,
18:227-240.

Toma, P.P. SYSTRAN as a multilingual Machine
Translation system. In Commission of the
European Communities, Third European Congress on
Information Systems and Networks: Overcoming the
language barrier, Volume 1, München: Verlag
Dokumentation, 1977, 569-581.

Vauquois, Bernard. La traduction automatique à
Grenoble (Documents de Linguistique Quantitative
24), Paris: Dunod, 1975.

Vauquois, B. Aspects of mechanical translation in
1979 (Conference for Japan IBM Scientific
Program). Grenoble: Groupe d'Etudes pour la
Traduction Automatique, 1979.

Vauquois, Bernard. L'informatique au service de
la traduction. META, 1981, 26:8-17.

Veillon, G. Description du langage pivot du
système de traduction automatique du C.E.T.A.
T.A. Informations, 1968, 1:8-17.

Wilks, Yorick. An Artificial Intelligence
approach to Machine Translation. In Roger C.
Schank & Kenneth Mark Colby (eds), Computer
models of thought and language, San Francisco:
Freeman, 1973, 114-151.

Wüster, Eugen. Begriffs- und Themaklassifik-
ationen: Unterschiede in ihrem Wesen und in
ihrer Anwendung. Nachrichten für Dokument-
ation, 1971, 22:98-104.

Zingel, Hermann-Josef. Experiences with TITUS II.
International Classification, 1978, 5:33-37.
