CRITTER: a translation system for agricultural market reports

Pierre Isabelle
Marc Dymetman
Elliott Macklovitch

Centre Canadien de Recherches sur
l'Informatisation du Travail
1575 Boul. Chomedey
Laval, Quebec
CANADA H7V 2X2
ABSTRACT
The CRITTER system is being developed to translate agricultural
market reports between English and French. It is based on a
transfer model, and designed to be reversible. The source and
target language texts are described by means of: a) a surface
syntactic representation consisting of a tree annotated with feature
structures, built by an extraposition grammar; and b) a semantic
representation exhibiting predicate-argument structures and
constrained by type checking, built in parallel with the syntactic
structure in compositional fashion. CRITTER's implementation
is still incomplete, but results obtained so far are promising.
TOPIC AREAS: Machine translation, Logic Grammars.
1. Our approach to the translation problem
We are currently developing a translation system with two 
main objectives in mind: (a) to effectively translate from English to 
French (and conversely) a real life corpus of texts in a restricted 
sublanguage <Kittredge & Lehrberger, 1982>; and (b) to provide a 
testbed for a theoretically motivated translation model: insofar as
possible, design choices should integrate recent advances in 
linguistic and semantic theory. 
The corpus that we are using for our current
experimentation comprises weekly reports produced by the
Canadian Department of Agriculture, describing the situation of the
livestock and meat trade market in the different Canadian provinces.
The following excerpts provide a short sample of the language of 
these reports: 
Imports of slaughter cattle from the United States last week
dropped 62% compared to the previous week, totalling 334 steers
and 50 heifers.

La semaine dernière, les importations de bovins d'abattage
ont chuté de 62% en regard de la semaine précédente, totalisant 334
bouvillons et 50 taures.
In terms of its general structure, our translation model may
be viewed as being composed of three abstract relations:

(i) the source analysis/synthesis relation:
anasynt_s(T_S, SurfSyn_S, Sem_S)
which defines a set of well-formed triples, where T_S is a source
language text, SurfSyn_S and Sem_S are respectively a surface
syntactic structure and a semantic structure for this text, both being
source language dependent;

(ii) the target analysis/synthesis relation:
anasynt_t(T_T, SurfSyn_T, Sem_T)
which is the analogue of anasynt_s for the target language; and

(iii) the transfer relation:
tr(Sem_S, Sem_T)
which defines a set of couples, where Sem_S and Sem_T are
respectively source and target semantic structures that are
considered to be translationally equivalent.
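By way of illustration, the composition of these three relations can be sketched as follows. This is a Python fragment of ours, not CRITTER code: the one-element relations, the example strings and the function names are all invented for the example.

```python
# Illustrative sketch (not CRITTER code): the three abstract relations
# modelled as finite sets of tuples, joined on the semantic structures.
anasynt_s = {("text_en", "surf_en", "sem_en")}   # (T_S, SurfSyn_S, Sem_S)
anasynt_t = {("texte_fr", "surf_fr", "sem_fr")}  # (T_T, SurfSyn_T, Sem_T)
tr        = {("sem_en", "sem_fr")}               # (Sem_S, Sem_T)

def translate(t_s):
    """Compose the relations: source analysis, transfer, target synthesis."""
    return [t_t
            for (ts, _, sem_s) in anasynt_s if ts == t_s
            for (s, sem_t) in tr if s == sem_s
            for (t_t, _, st) in anasynt_t if st == sem_t]

def translate_back(t_t):
    """Reversibility: the same relations, consulted in the other direction."""
    return [t_s
            for (tt, _, sem_t) in anasynt_t if tt == t_t
            for (sem_s, st) in tr if st == sem_t
            for (t_s, _, s) in anasynt_s if s == sem_s]
```

Reversibility here amounts to nothing more than consulting the same relations in either direction; in CRITTER it is the grammars themselves that play this role.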
The anasynt_s and anasynt_t relations are formally and
computationally described using the framework of extraposition
grammars <Pereira, 1981>, while the tr relation is defined through a
set of definite clauses. An important feature of our approach is that
each of these anasynt relations is in fact reversible (cf. the
reversibility condition of <Landsbergen, 1987>). Practically
speaking, this means that a single system is usable for both English
to French and French to English translation. From a theoretical
viewpoint, reversibility is a strong criterion of linguistic adequacy
for a grammar. Typical existing parsers are based on grammars that
(sometimes grossly) overgenerate: the grammar writer assumes a
high degree of well-formedness in the input text. Conversely,
typical existing generators tend to undergenerate: the grammar
writer makes arbitrary choices in the paraphrase system of the
language. A reversible grammar is of necessity closer to
observational adequacy.
2. Representations 
The CRITTER system assigns each textual unit a 
representation that describes both its form (graphological, 
morphological, syntactic) and its semantic content. 
2.1 Syntactic Representations 
The syntactic representation associated with a textual object
is a fairly standard surface structure tree which may include traces
in places where a (long-distance) dependency holds between some
displaced phrase and a gap. Since we adopt a monostratal view of
syntax, no other level of syntactic representation is provided for.
The representation scheme is based on a variant of the feature
structure approach <Sag et al., 1986>. Each node of the surface tree
is represented as a feature structure which includes, among others,
cat and daughters attributes.
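For illustration, one such node might be rendered as a nested structure along the following lines. This is a Python sketch of ours, not the system's actual encoding; the lex attribute and the feature names beyond cat and daughters are invented.

```python
# Hypothetical rendering (ours) of a surface-tree node as a feature
# structure: each node carries at least 'cat' and 'daughters' attributes.
node = {
    "cat": "np",
    "nbr": "plur",
    "daughters": [
        {"cat": "n", "lex": "hog", "daughters": []},
        {"cat": "n", "lex": "prices", "nbr": "plur", "daughters": []},
    ],
}

def leaves(n):
    """Collect the lexical leaves of a surface tree, left to right."""
    if not n["daughters"]:
        return [n.get("lex")]
    return [w for d in n["daughters"] for w in leaves(d)]
```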
Using familiar tree notation, our current grammars would
assign sentence (2.a) the representation (2.b):
(2.a) Last week, hog prices in Saskatchewan increased 5% at
$69.00.
(2.b)
s
  adv_p: "last week"
  np <NBR: plur, ...>: "hog prices in Saskatchewan"
    n: "hog"
    n <NBR: plur, ...>: "prices"
    pp: "in Saskatchewan"  (n <TYPE: propN, ...>)
  v' <NBR: plur, TENSE: past, SUBCAT: ...>: "increased"
    meas_p: "5%"
    pp: "at $69.00"
This structure is more or less in line with current syntactic theories. 
Note that it reflects a three-level X-bar convention. Occasionally,
idiosyncratic features are adopted so as to account for the
peculiarities of the sublanguage we are dealing with. This is the
case for the complements meas_p and pp under v', which do not
correspond to the usual subcategorization pattern for the verb
'increase'.
2.2 Semantic representations 
Formally, our semantic representations are trees -- or more 
exactly, directed acyclic graphs, for structure-sharing is allowed in 
cases of coreference -- in which nodes are labelled with semantic 
units that often, but need not, correspond to the lexemes of the 
language represented. We introduce abstract semantic units to 
account for some lexical gaps, morphologically marked semantic 
notions, etc. The arcs are labelled either by argument numbers (1),
(2), (3), ..., or by "inverse" argument numbers (inv-1), (inv-2), ...
The interpretation of this notation is made easier if one 
considers as an example the semantic structure in Figure (a), which 
is associated with sentence (2.a): "Last week, hog prices in 
Saskatchewan increased 5% at $69": 
increase
  (inv-1) lastweek
  (1) price
      (1) hog
      (inv-1) At
          (2) saskatchewan
  (2) 5%
  (3) 69$

Figure (a)
In this structure:
- 'At', '5%', '69$' are abstract semantic units;
- 'lastweek' is treated as a single unit, which is justified by the
fact that it plays the role of a frozen indexical in our sublanguage
(as 'yesterday' does in the standard language);
- the (1), (2), (3) labels correspond to argument positions, either
of predicates (like 'increase', treated as a 3-place predicate) or of
functions (such as 'price', which takes a commodity as first
argument);
- the (inv-1) labels correspond to "inverted" argument relations,
which implies that 'increase' is in first argument position relative to
'lastweek', and that 'price' is in first argument position relative to
'At' ('Saskatchewan' being the second argument of 'At').
Labels of the "inv" kind are a notational device which 
makes it possible to simultaneously read two representational levels 
off a single semantic structure: a first level which expresses 
predicate-argument relations; and a second level which is 
reminiscent of the subordination of syntactic groups. Thus 
'lastweek' is a syntactic dependent of 'increased', and 'in 
Saskatchewan' is a syntactic dependent of 'prices'. There are two 
reasons for choosing to reflect subordination in the semantic 
structure: first, we want to maintain a treelike character for the 
semantic structure (unique root, no cycles). This is technically 
related to the fact that transfer crucially depends on a root-to-leaves
recursive traversal of semantic structures. Second and much more
important is the fact that subordination does have semantic
import, although in a way which is not currently very well
understood.
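The two readings can be made concrete with a small sketch, ours rather than the system's Prolog; the (unit, arcs) tuple encoding is invented for the example.

```python
# Sketch (ours) of the semantic structure of (2.a) as nested (unit, arcs)
# pairs; arc labels '1','2','3' are argument positions, 'inv-1' an
# inverted one.
sem = ("increase", {
    "inv-1": ("lastweek", {}),
    "1": ("price", {"1": ("hog", {}),
                    "inv-1": ("At", {"2": ("saskatchewan", {})})}),
    "2": ("5%", {}),
    "3": ("69$", {}),
})

def pred_args(node, facts=None):
    """First reading: predicate-argument relations. An 'inv-n' arc from
    P down to D means P fills argument n of D, not the other way round."""
    facts = [] if facts is None else facts
    unit, arcs = node
    for label, child in arcs.items():
        if label.startswith("inv-"):
            facts.append((child[0], label[4:], unit))  # child takes unit as arg
        else:
            facts.append((unit, label, child[0]))      # unit takes child as arg
        pred_args(child, facts)
    return facts

def dependents(node):
    """Second reading: subordination, read directly off the tree arcs."""
    unit, arcs = node
    return [(unit, child[0]) for child in arcs.values()]
```

Both levels are read off the same structure: pred_args recovers, e.g., that 'increase' is the first argument of 'lastweek', while dependents makes 'lastweek' a subordinate of 'increase'.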
Semantic structures have to obey a well-formedness
criterion, which consists of the checking of semantic type
agreement between a predicate (or functional) node in the structure
and its argument nodes. Defining semantic well-formedness
involves a semantic lexicon, a semantic type subsumption
hierarchy, and semantic well-formedness rules. These are briefly
described in sections 3.3 and 4.3 below.
3. The lexicon
CRITTER's lexical component is made up of a basic
dictionary of morpho-syntactic lexical units; a rule component
which extends the morpho-syntactic dictionary; and a dictionary of
semantic-level units.
3.1 The morpho-syntactic dictionary
This dictionary lists lexical items in citation form and
assigns them morphological and syntactic properties. It is also
responsible for effecting the mapping of these lexical units onto the
semantic-level units, whose properties are described in a separate
dictionary. Morphological properties include an inflectional class
and an indication of any morpho-syntactic idiosyncrasies.
Syntactic properties are comprised of a subcategorization frame and
a collection of syntactic features. The subcategorization frame of a
lexical head describes the number and syntactic type of the phrases
governed by that head. These frames refer directly to positions in
surface structure, since this is the only level of syntactic
representation admitted in our system. Verbs, for example, can be
marked for a maximum of three positions: a subject and up to two
complements.
The mapping onto semantic units is effected by associating
each lexical entry with a semantic schema. This schema is made up
of a semantic unit (represented as a functor with a fixed arity), and
an indication of the relationship of the arguments to the lexical
unit's syntactic dependents.
In each lexical entry, this complex of morphological,
syntactic and semantic information is specified as a feature
structure. This feature structure is encoded as a Prolog term that
we describe indirectly, by means of predicates which access the
relevant attribute values. For example, the syntactic and semantic
properties of the verb 'promise' could be represented as a term T,
described as follows:
(3.a)
citation_form(T, promise),
subcat(T, [NP1, NP2, VCOMP]),
cat(NP1, np),
sem_form(NP1, A),
cat(NP2, np),
sem_form(NP2, B),
cat(VCOMP, vp),
vform(VCOMP, infinitive),
sem_form(VCOMP, C),
control(VCOMP, NP1),
sem_form(T, promise'(A, B, C))
The predicates citation_form, cat, subcat and sem_form
simply access the value of an attribute of the same name in the term
T. The subcat attribute has a value of the list type, as in <Pollard &
Sag, 1988>. Syntactic rules will unify the elements of this list
with the complements of the lexical head, thereby enforcing the
appropriate subcategorization restrictions (e.g. the "cat" of the
second complement of 'promise' is vp).
Taken together, the subcat and sem_form attributes account
for an essential part of the syntax/semantics mapping. According to
the description given for 'promise', the semantic objects associated
with the subject, the object and the infinitival complement are
respectively mapped onto the first, second and third argument of
the semantic predicate promise'.
All the resources of clausal logic can be invoked to enforce
complex relationships between several feature structures. For
example, the predicate control used in (3.a) above is defined in
such a way as to ensure that the sem_form and agree values of the
controller match those of the subject slot in the subcat of the
controlled verb phrase:
(3.b)
control(CONTROLLEE, CONTROLLER) :-
    subcat(CONTROLLEE, [SUJ, X, Y]),
    sem_form(SUJ, SF),
    sem_form(CONTROLLER, SF),
    agree(SUJ, AG),
    agree(CONTROLLER, AG).
The control statement of (3.a) will thereby ensure that the
subject of 'promise' and the understood subject of its infinitival
complement are the same entity.
3.2 Morphological and lexical rules
The morpho-syntactic dictionary is extended by three sets
of rules that handle inflection, derivation and lexical
transformations. Our description of French inflection is based on
the work of Bourbeau & Pinard (1986), which provides an
exhaustive specification of the inflectional properties of more than
50,000 French lexical items. We have also developed a parallel
description for English inflection.
We currently employ a rule-based treatment of derivational
morphology only for the most productive classes, such as
comparative and superlative adjectives, '-ly' adverbs, etc. On the
other hand, we make extensive use of lexical transformations to
handle phenomena such as passivization, subject-to-subject and
subject-to-object raising, intransitivation, dative-shift, etc. Given
the scheme described above for lexical subcategorization, most
lexical transformations can be seen as simply altering the
subcategorization pattern of the lexical entry.
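The idea that a lexical transformation merely rewrites the subcategorization pattern can be sketched as follows. This is illustrative Python, not CRITTER code: the entry format and the passive rule shown are invented.

```python
# Illustrative sketch (ours): a lexical transformation as an operation on
# subcategorization frames. The entry format and rule are hypothetical.
active_give = {"citation_form": "give",
               "subcat": ["np_subj", "np_obj", "pp_to"],
               "sem_args": ["A", "B", "C"]}

def passivize(entry):
    """Promote the object to subject and demote the subject to an
    optional by-phrase; the semantic side is left untouched."""
    subj, obj, *rest = entry["subcat"]
    return {**entry, "subcat": [obj, *rest, "pp_by_opt"]}
```

The "virtual" entry produced this way coexists with the basic one, exactly as (3.e) and (3.f) below coexist for 'believe'.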
For example, given the lexical specification (3.c) for the
object-raising verb 'believe', rule (3.d) has the effect of generating
the two "virtual" dictionary entries (3.e) and (3.f).
(3.c)
dict(T) :-
    citation_form(T, believe),
    subj_to_obj_raising_verb(T, A, B),
    sem_form(T, believe(A, B)).
(3.d)
subj_to_obj_raising_verb(T, A, B) :-
    % standard form
    subcat(T, [NP, S, []]),
    cat(NP, np),
    sem_form(NP, A),
    cat(S, sbar),
    complementizer(S, that),
    sem_form(S, B).
subj_to_obj_raising_verb(T, A, B) :-
    % raising form
    subcat(T, [NP1, NP2, VCOMP]),
    cat(NP1, np),
    sem_form(NP1, A),
    cat(NP2, np),
    sem_form(VCOMP, B),
    cat(VCOMP, vp),
    vform(VCOMP, infinitive),
    control(VCOMP, NP2).
(3.e)
citation_form(T, believe),
subcat(T, [NP, S, []]),
cat(NP, np),
sem_form(NP, A),
cat(S, sbar),
complementizer(S, that),
sem_form(S, B),
sem_form(T, believe(A, B)).
"Tom believes that Bill is dishonest."

(3.f)
citation_form(T, believe),
subcat(T, [NP1, NP2, VCOMP]),
cat(NP1, np),
sem_form(NP1, A),
cat(NP2, np),
sem_form(VCOMP, B),
cat(VCOMP, vp),
vform(VCOMP, infinitive),
control(VCOMP, NP2),
sem_form(T, believe(A, B)).
"Tom believes Bill to be dishonest."
3.3 The semantic lexicon 
The semantic lexicon defines a set of semantic units for 
each language (whether directly realized by a lexeme or more
abstract); describes a subsumption hierarchy of semantic types (a
partial order of types <Sowa, 1983>); and associates with each
semantic unit SU an initial semantic type, this having the 
consequence that SU belongs implicitly to all higher types in the 
hierarchy. The semantic lexicon also defines a set of validating 
predicate-argument schemas, of which valid predicate-argument 
structures have to be instances. An example of such a schema is: 
MOVEMENT( MEASURE-FUNCTION, INCREMENT, MEASURE) 
where MOVEMENT, MEASURE-FUNCTION, INCREMENT and 
MEASURE are semantic types. 
The use of the semantic lexicon to test semantic structure 
well-formedness is briefly explained in section 4.3. 
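The role of the subsumption hierarchy can be illustrated as follows. This is a Python sketch of ours; the hierarchy fragment and the initial types shown are a hypothetical sample, not the actual lexicon.

```python
# Hypothetical fragment (ours) of the semantic lexicon: initial types for
# a few units, and a child -> parent view of the subsumption hierarchy.
PARENT = {
    "MOVEMENT": "EVENT",
    "PRICE": "MEASURE-FUNCTION",
    "PERCENTAGE": "INCREMENT",
    "PRICE-MEASURE": "MEASURE",
    "WEEK": "TIME-POINT",
}
INITIAL_TYPE = {"increase": "MOVEMENT", "price": "PRICE",
                "5%": "PERCENTAGE", "69$": "PRICE-MEASURE"}

def subsumes(sup, sub):
    """True if type sup subsumes sub, i.e. sub lies at or below sup."""
    while sub is not None:
        if sub == sup:
            return True
        sub = PARENT.get(sub)
    return False

def has_type(unit, t):
    """A unit implicitly belongs to every type above its initial type."""
    return subsumes(t, INITIAL_TYPE[unit])
```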
4. The Grammars 
4.1 Syntactic Rules 
CRITTER's grammars assign textual units a feature 
structure describing both their syntactic form and semantic content. 
As an example, consider rule (4.a): 
(4.a)
vbar(VBAR) -->
    vb(VB),
    complement(CO1),
    complement(CO2),
    { cat(VBAR, vbar),
      subcat(VB, [SUJ, CO1, CO2]),
      head_of(VBAR, VB) }.
The constituent vbar is expanded as a verb and two 
possible complements. 
Generally speaking, a complement can be any of a wide 
range of phrases: 
(4.b)
complement([]) --> [].
complement(NP) --> np(NP).
complement(PP) --> pp(PP).
complement(VCOMP) --> vp(VCOMP).
Most of the syntactic rules that we use are, like (4.a), 
based on the simple context-free skeleton of definite clause 
grammars, with the same augmentation mechanisms: non-terminals 
have arguments and additional PROLOG goals (enclosed in braces) 
can be stated. The non-terminals in our rules are uniformly 
assigned a single argument, whose content is a feature structure, 
and the PROLOG goals are used to state mutual constraints 
between these feature structures. 
In example (4.a), the two complements of the verb are 
unified with the second and third elements of the subcat list of the 
verb, thereby enforcing its lexical subcategorization requirements. 
The head_of predicate is defined so as to unify the head features of
the lexical head with those of the larger verb phrase. Since 
sem_form is a head feature, the lexical value of sem_form for the 
verb will be assigned to the verb phrase. In the process, arguments 
of the semantic predicate associated with the verb will become 
instantiated to the semantic objects associated with the complements 
of the verb. 
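What the rule accomplishes can be caricatured as follows. This is a Python sketch of ours rather than the actual unification machinery; the entry shapes and all names below are invented simplifications.

```python
# Illustrative sketch (ours, not CRITTER's unification machinery) of what
# rule (4.a) accomplishes: binding the complements to the verb's subcat
# slots also fills the argument slots of the verb's semantic predicate.
increase = {
    "subcat": [{"cat": "np"}, {"cat": "meas_p"}, {"cat": "pp"}],
    "sem_form": ("increase", ["A", "B", "C"]),  # A, B, C: unfilled slots
}

def build_vbar(verb_entry, co1, co2):
    """Check the complements against subcat slots 2 and 3, then build the
    vbar node, inheriting the head's sem_form with its argument slots
    instantiated to the complements' semantic forms (roughly, head_of)."""
    _subj_slot, slot1, slot2 = verb_entry["subcat"]
    if co1["cat"] != slot1["cat"] or co2["cat"] != slot2["cat"]:
        return None  # subcategorization violation
    pred, _ = verb_entry["sem_form"]
    return {"cat": "vbar",
            "sem_form": (pred, ["SUBJ", co1["sem_form"], co2["sem_form"]])}
```

In the real system this flows through Prolog unification, so the "filling in" happens by variable instantiation rather than by explicit construction.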
In order to deal with certain more complex syntactic
phenomena, such as unbounded dependencies, we take advantage 
of the special facilities built into the extraposition grammar 
formalism. 
4.2 Syntactic processing 
Although the format of our grammatical rules closely 
resembles the format of definite clause grammars (DCGs) and 
extraposition grammars (XGs), there are some important 
differences. 
Because of their direct relationship with clausal logic, 
DCGs and XGs have two distinct interpretations: on the one hand, 
a declarative interpretation in which they can be viewed as defining 
a relation between strings and structural descriptions; and on the 
other hand, a procedural interpretation in which they may be 
viewed indifferently as parsers or synthesizers. 
However, given the standard compilers for these 
formalisms, the procedural interpretation of any given set of rules 
can rarely be used for both analysis and synthesis tasks. For 
example, any DCG containing left-recursive rules will produce
infinite loops when applied to analysis tasks, although the same 
grammar may well be suitable for synthesis tasks. Moreover, in 
order to obtain reasonably efficient parsers and synthesizers, it is 
necessary to control the order in which goals are called in each 
mode. 
Our solution to these problems is to retain the use of DCG- 
like rules which have a well-defined declarative semantics; 
however, we enrich these rules with control annotations which, 
while not affecting their semantics, provide a rule compiler with the 
information needed to produce both an analysis-oriented and a 
synthesis-oriented version of the rule. Left-recursion is eliminated
in the analysis version, and both versions typically display a 
different ordering of the goals. The result is that we can actually 
derive fairly efficient parsers and synthesizers from one and the 
same grammar. 
For further details on this double-compilation approach,
see Dymetman & Isabelle (1988). 
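The double-compilation idea can be caricatured as follows. This is illustrative Python of ours, not the actual rule compiler; the annotation scheme shown is invented, and the real compiler additionally eliminates left-recursion in the analysis version.

```python
# Toy illustration (ours) of double compilation: one declarative rule body
# whose goals carry an invented per-mode ordering annotation.
rule_goals = [
    ("parse_np",        {"analysis": 0, "synthesis": 2}),
    ("check_agreement", {"analysis": 2, "synthesis": 1}),
    ("build_sem",       {"analysis": 1, "synthesis": 0}),
]

def compile_rule(goals, mode):
    """Order the goals for one mode; the declarative content (the set of
    goals, hence the relation defined) is identical in both versions."""
    return [g for g, ann in sorted(goals, key=lambda ga: ga[1][mode])]
```

The two compiled versions contain exactly the same goals, so they define the same relation; only the control order differs.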
4.3 Checking of semantic well-formedness
In order for a semantic structure built by this 
compositional process to be accepted as valid, it must pass a 
semantic well-formedness check, which involves semantic 
constraints and the type subsumption hierarchy. 
This check can be briefly described as follows: for each
predicative (or functional) node pn in the semantic structure, having
an1, an2, ... as argument nodes, one tries to find a validating schema
(see §3.3) PT(AT1, AT2, ...) such that PT is a type subsuming pn,
AT1 a type subsuming an1, AT2 a type subsuming an2, etc.
For instance, given the semantic structure of section 2.2,
partially annotating it with the types of each node yields (4.c).
(4.c)
increase {MOVEMENT, EVENT}
  (inv-1) lastweek {WEEK, TIME-POINT}
  (1) price {PRICE, MEASURE-FUNCTION}
      (1) hog {COMMODITY}
      (inv-1) At {LOCATIVE, STATE}
          (2) saskatchewan {MARKET, LOCATION}
  (2) 5% {PERCENTAGE, INCREMENT}
  (3) 69$ {PRICE-MEASURE, MEASURE}
The checking of this structure then involves looking for: 
- a validating schema for 'lastweek', in this case the schema 
TIME-POINT(EVENT) 
- a validating schema for 'increase', in this case the schema 
MOVEMENT(MEASURE-FUNCTION,INCREMENT,MEASURE) 
- a validating schema for 'At', in this case the schema 
AT(PRICE,MARKET) 
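The search for a validating schema can be sketched as follows. This is illustrative Python of ours, not CRITTER code; the hierarchy fragment below repeats only what is visible in (4.c) and is otherwise hypothetical.

```python
# Sketch (ours) of the check on one node of (4.c): look for a validating
# schema whose types subsume the predicate's type and each argument's
# type, position by position.
PARENT = {"MOVEMENT": "EVENT", "PRICE": "MEASURE-FUNCTION",
          "PERCENTAGE": "INCREMENT", "PRICE-MEASURE": "MEASURE"}
SCHEMAS = [("MOVEMENT", ["MEASURE-FUNCTION", "INCREMENT", "MEASURE"])]

def subsumes(sup, sub):
    """True if type sup subsumes sub in the (child -> parent) hierarchy."""
    while sub is not None:
        if sub == sup:
            return True
        sub = PARENT.get(sub)
    return False

def validated(pred_type, arg_types):
    """True if some schema PT(AT1, AT2, ...) validates the node."""
    return any(subsumes(pt, pred_type)
               and len(ats) == len(arg_types)
               and all(subsumes(at, t) for at, t in zip(ats, arg_types))
               for pt, ats in SCHEMAS)
```

On the 'increase' node of (4.c), the schema MOVEMENT(MEASURE-FUNCTION, INCREMENT, MEASURE) succeeds; permuting the arguments makes the check fail.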
5. Transfer 
As we have seen, the transfer component implements a 
relation between two language-dependent semantic structures. The 
decision to restrict the input and output of transfer to such semantic 
structures is motivated by a number of considerations. Pre-
theoretically, the very notion of translation implies a linguistic
reformulation which preserves essential meaning. Such abstract
intermediate structures also have the practical advantage of
simplifying the transfer component. That is to say, we assume that
the analysis component is powerful enough to neutralize certain
source language transformations and that a full-fledged synthesis
component can take care of such details of target language
realization as governed prepositions.
The transfer component itself is essentially lexical, with all
relevant knowledge expressed in a transfer lexicon, a sample of
which appears in (5.a):
(5.a)
(i) eat <-> manger.
(ii) miss( 1: X, 2: Y ) <-> manquer( 1: Y', 2: X' ).
(iii) walk( inv-1: across( 2: X ) )
      <-> traverser( 2: X', inv-1: $manner( 2: apied ) ).
Entry (i) is straightforward; entry (ii) expresses an
"argument conversion": john misses mary <=> mary manque à
john; entry (iii) expresses a more complex correspondence: john
walks across the street <=> john traverse la rue à pied.
This lexicon is compiled into a set of Prolog clauses. The
transfer algorithm then performs a simultaneous recursive root-to-
leaves traversal of source and target semantic structures, making
use of these clauses to maintain translational equivalence of the
source and target structures. Practically speaking, the result is that
when translating from English to French, for example, as the
transfer algorithm traverses the English semantic structure, the
French semantic structure is constructed in parallel, by progressive
instantiations.
In this way, the transfer process may effect certain
restructurings, but these are lexically triggered: we do not foresee
the need for an independent structural transfer component, as in
ARIANE-78 for example <cf. Boitet & Nedobejkine, 1981>.
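The traversal can be sketched as follows. This is illustrative Python of ours, not the compiled Prolog clauses; the entry encoding is invented, and only entries of the kind of (i) and (ii) are modelled.

```python
# Illustrative sketch (ours) of lexically driven transfer over the
# tree-shaped semantic structures of section 2.2, nodes as (unit, arcs)
# pairs. Each invented entry maps the head unit and says where each
# source argument lands on the target side.
LEXICON = {
    "eat":  ("manger",  {"1": "1", "2": "2"}),   # straightforward entry
    "miss": ("manquer", {"1": "2", "2": "1"}),   # argument conversion
    "john": ("john", {}), "mary": ("mary", {}),
}

def transfer(node):
    """Root-to-leaves traversal: translate the head unit, then recursively
    transfer each argument into its target-side position."""
    unit, arcs = node
    tgt_unit, arg_map = LEXICON[unit]
    return (tgt_unit, {arg_map[l]: transfer(c) for l, c in arcs.items()})
```

On 'john misses mary', the argument conversion of entry (ii) falls out of the traversal: the source first argument becomes the target second argument, and conversely. Restructurings of the kind of entry (iii), which insert target-side material, would need a richer entry format than this sketch.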
6. Conclusion 
CRITTER is currently being implemented in QUINTUS
Prolog on SUN-3 workstations. At the time of writing, the status
of the prototype is as follows:
- morphological descriptions for both English and French
are running, including exhaustive descriptions of the inflectional
systems; the dictionaries include approximately 500 lexemes in each
language, and are expected to go beyond the 1000 mark;
- syntactic descriptions already cover a significant part of 
the constructions found in the sublanguage, although much work 
remains to be done to deal adequately with ellipsis, complex 
coordination, etc.; furthermore, the grammars of English and
French are actually used in a reversible manner, although at the time
of writing, they tend to overgenerate in the synthesis mode;
- a simple version of type-checking has been implemented,
but work remains to be done on defining an adequate hierarchy of
semantic types;
- an initial implementation of the transfer component 
(dictionary and programs) is under way and the first translations 
should be produced within a few weeks. 
7. Acknowledgments 
The authors wish to thank Lucie Cusson, Bruno Godbout, 
Francois Perreault and Michel Simard, who have made important 
contributions to the work reported here. 

REFERENCES 

Boitet, C. & Nedobejkine, N. (1981) "Recent developments in
Russian-French machine translation at Grenoble". In: 19, 199-271.

Bourbeau, L. & Pinard, F. (1986) Dictionnaire Micro-informatisé
du français. Montreal, Canada.

Dymetman, M. & Isabelle, P. (1988) "Reversible Logic Grammars
for Machine Translation". In Proceedings of the International
Conference on Theoretical and Methodological Issues in Machine
Translation of Natural Languages, Carnegie-Mellon University.

Gazdar, G., Klein, E., Pullum, G., Sag, I. (1985) Generalized
Phrase Structure Grammar. Oxford: Blackwell.

Isabelle, P. & Macklovitch, E. (1986) "Transfer and MT
Modularity". In Proceedings of Coling 86, Bonn, FRG.

Kittredge, R. & Lehrberger, J., eds. (1982) Sublanguage: Studies
of Language in Restricted Semantic Domains. Berlin: de Gruyter.

Landsbergen, J. (1987) Montague Grammar and Machine
Translation. Philips Research M.S. 14.026, Eindhoven, The
Netherlands.

Pereira, F. (1981) "Extraposition Grammars". In: Computational
Linguistics 7.4, 243-256.

Pollard, C. & Sag, I. (1988) Information-Based Syntax and
Semantics, vol. 1: Fundamentals. CSLI Lecture Notes no. 13,
CSLI, Stanford University.

Sag, I., Kaplan, R., Karttunen, L., Kay, M., Pollard, C., Shieber,
S., Zaenen, A. (1986) "Unification and Grammatical Theory". In
Proceedings of the West Coast Conference on Formal Linguistics,
Stanford Linguistics Association, Stanford University.

Sowa, J. (1983) Conceptual Structures: Information Processing in
Mind and Machine. Reading, Mass.: Addison-Wesley.
