Structured Meanings in Computational Linguistics 
Kees van Deemter
Institute for Perception Research
P.O. Box 513, 5600 MB Eindhoven
The Netherlands
22 March 1990 
1 Introduction 
Many natural language processing systems employ truth conditional knowledge representations ('t-representations') to represent meanings of natural language expressions. T-representations have their strong and their weak sides. A strong side is logic: a relation of logical consequence can be defined between such representations. A weak side is expressive power: the capacity of t-representations to convey the subtleties of natural language is limited. For instance, let SL be a sentence that is true on purely logical grounds; then it is predicted that any sentence S is synonymous with "S and SL". This deficiency comes out clearest in propositional attitude constructions, i.e. constructions of the form 'x V that S', where V is an epistemic verb ('knows', 'believes') and S a sentence. Truth conditional accounts of meaning (including intensional ones such as [Montague 1974]) predict wrongly that anybody who knows that S is bound to also know that "S and SL", since the two sentences are t-indistinguishable ([Peters and Saarinen 1982]). The same lack of expressive power dooms, for example, automatic translation on the basis of t-representations to failure: t-representations contain only information that is relevant for the truth or falsity of a sentence, dismissing all other information, such as mood, topic-comment structure, etc. ([van Deemter 1989]).
This paper investigates a remedy for the expressive poverty of t-representations, namely to let syntactic structure participate in the notion of meaning. This old and persistent idea ([Carnap 1947], [Lewis 1972], [Cresswell 1985]) was recently taken up in the Rosetta automatic translation program. We will show how Rosetta's concept of meaning overcomes some weaknesses of earlier proposals and how a relation of logical consequence can be defined on top of it.
2 An Old Idea: Structured Meanings
It has been argued that no theory of meaning that is compositional and truth conditional can deal with propositional attitudes. For, whenever two expressions with different compositional (syntactic) structures boil down -- via the semantic operations connected with their respective structures -- to the same meaning, a person can fail to see the equivalence: he can carry out the operations in the wrong way, or too slowly ([Cresswell 1985]). Cresswell and others have concluded that syntactic structure has to take part in meaning representations: t-indistinguishable expressions may still have different meanings, due to differences in syntactic structure. D. Lewis, for instance, used semantically interpreted phrase markers (roughly: syntax trees with logical formulas attached to the nodes) as meanings for natural language expressions ([Lewis 1972]). However, this leads to an extremely strict notion of synonymy:
    Perhaps we would cut thereby meanings too finely. For instance,
    we will be unable to agree with someone who says that a double
    negation has the same meaning as the corresponding affirmative.
    ([Lewis 1972])
Also, no relation of logical consequence has seen the light for any notion of structured meaning. In the sequel we will deal with the notion of meaning inherent in the Rosetta automatic translation project (e.g. [Landsbergen 1982], [Landsbergen 1985], [Landsbergen 1987], or [de Jong and Appelo 1987]). This notion of meaning -- essentially an elaboration of the one proposed by Lewis -- allows a suitably weaker notion of synonymy, and can also be provided with a notion of logical consequence. Thus, some of the weak sides of older "structured meanings" proposals are compensated for.
3 Structured Meanings in Rosetta
Rosetta uses a variant of Montague grammar ([Montague 1974]), in which each syntax rule has a semantic counterpart. Each node in the syntactic derivation tree (D-tree) for a sentence is associated with a semantic rule. Thus, each D-tree is associated with a semantic tree (M-tree), whose nodes are semantic rules and whose leaves are non-logical constants. By applying the semantic rules to their arguments, a logical formula can be calculated for each node in the M-tree that stands for its truth-conditional meaning. We will call this formula the corresponding formula of the node. Now in Rosetta, a sentence meaning is not, as in [Montague 1974], identified with the formula that corresponds to the top node of an M-tree, but with the entire tree. Thus, syntactic structure in Rosetta becomes a part of meaning in much the same way as proposed by D. Lewis (see above). For instance, the English Noun Phrase 'Italian girl' and its Spanish equivalent 'muchacha italiana' might, if we simplify, both be represented by the same M-tree:
M1 
/ \ 
M2 M3 
where M2 stands for the set of Italians, M3 stands for the set of girls and M1 stands for the operation of set intersection. M1 is expressed by different syntax rules in English and Spanish:

    R_E: If α is an Adjective and β is a Noun,
    then αβ is a Nom.

    R_S: If α is an Adjective and β is a Noun,
    then βα′ is a Nom, where α′ is the adjective
    α, adjusted to number and gender of the
    noun β.
By mapping both of these rules onto M1, the two NPs are designated as translations of each other. Now M-trees in Rosetta are used as vehicles for inter-lingual translation, but we will view them as "general purpose" representations for the meanings of natural language expressions. Viewed in this way, the following definition of synonymy (notation: '≅') between D-trees (and, derivatively, for natural language expressions) is forthcoming:

    Synonymy (first version): D1 ≅ D2 ⇔_def
    - D1 = R1(a1,...,an) and D2 = R2(b1,...,bn), where R1 and R2 map
      onto the same meaning rule, and where it holds for all 1 ≤ i ≤ n
      that ai ≅ bi, or
    - D1 and D2 are basic expressions which map onto the same basic
      meaning.

(Definition of synonymy for M-trees, at this stage, comes down to simple equality of the trees.) This notion of meaning takes syntactic structure into account, but does not "cut meanings too finely", since any two linguistic constructions can be designated as synonymous. For instance, Lewis' "double negation" problem can be countered as follows: the syntax rules of double negation (R-double-negation) and plain affirmation (R-affirmative) can be mapped onto one and the same meaning rule, if the grammar writer decides that they are semantically indistinguishable. Alternatively, the semantic relation between a D-tree of the form
    R-negation
        |
    R-negation
        |
        D
and its constituent tree D may be accounted for if both trees are mapped onto one and the same M-tree. Effectively, this would come down to an extension of Rosetta with "rules of synonymy" for entire trees, rather than for individual syntax rules.
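The mapping from syntax rules to meaning rules, and the first version of synonymy it induces, can be sketched in a few lines of Python. The rule names and the small lexicon below are illustrative stand-ins of our own, not Rosetta's actual grammar tables:

```python
# Hypothetical rule and lexicon tables (illustrative stand-ins,
# not Rosetta's actual grammar).
MEANING_RULE = {
    'R_E': 'M1',                  # English: Adjective + Noun -> Nom
    'R_S': 'M1',                  # Spanish: Noun + inflected Adjective -> Nom
    'R-double-negation': 'M-id',  # both mapped onto one meaning rule,
    'R-affirmative': 'M-id',      # as suggested in the text
}
BASIC_MEANING = {'Italian': 'M2', 'italiana': 'M2',
                 'girl': 'M3', 'muchacha': 'M3'}

def synonymous(d1, d2):
    """Synonymy, first version: a D-tree is a basic expression (string)
    or a (syntax_rule, [argument D-trees]) pair; two D-trees are
    synonymous iff their rules map onto the same meaning rule and their
    arguments are pairwise synonymous."""
    if isinstance(d1, str) and isinstance(d2, str):
        return BASIC_MEANING[d1] == BASIC_MEANING[d2]
    if isinstance(d1, str) or isinstance(d2, str):
        return False
    (r1, args1), (r2, args2) = d1, d2
    return (MEANING_RULE[r1] == MEANING_RULE[r2]
            and len(args1) == len(args2)
            and all(synonymous(a, b) for a, b in zip(args1, args2)))

# 'Italian girl' and 'muchacha italiana', built by different syntax
# rules that map onto the same meaning rule M1:
english = ('R_E', ['Italian', 'girl'])
spanish = ('R_S', ['italiana', 'muchacha'])
```

On this sketch, `synonymous(english, spanish)` holds because R_E and R_S map onto M1 and the basic expressions map onto the same basic meanings, mirroring the way Rosetta designates the two NPs as translations of each other.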
4 Inference with M-trees 
Arguably, our grip on the notion of meaning is incomplete if only the limiting case of structural equivalence is dealt with, leaving aside the more general case of structural consequence (⊨_M). Under what conditions does, for instance, one belief follow from another? Rosetta's isomorphy-based notion of meaning seems ill-equipped to deal with inference, but we claim that an extrapolation is possible.

A natural boundary condition on ⊨_M is logical validity: no rule may lead from true premises to a false conclusion. Writing '⊨' for the relation holding between M-trees if the formulas corresponding to their top-nodes stand in the relation of logical consequence, this gives:

    Validity: Ta ⊨_M Tb only if Ta ⊨ Tb.
Given validity as an upper bound, we seek reasonable lower bounds on structural inference. It is not generally valid to allow that a tree has all its subtrees as structural consequences. (For instance, the negation of a tree T does not have the subtree T as a consequence.) However, a solution can be found if we take the dual nature of our concept of meaning into account: M-trees combine structural and logical information. Therefore, if one tree is a subtree of another tree, and also a purely logical consequence of the bigger tree, then the inference is indisputable; for the inference is logically correct and there can be no difference in syntactic structure:

    Subtree Principle (1st version): If (i) Ta ⊨ Tb, and (ii) Tb is a
    subtree of Ta, then Ta ⊨_M Tb.
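To make the principle concrete, here is a minimal sketch for a propositional fragment, in which sentential M-trees are nested tuples such as ('and', 'p', 'q') and logical consequence is decided by truth tables. The representation and the helper names are our own, not the paper's:

```python
from itertools import product

def atoms(t):
    """Proposition letters occurring in a sentential M-tree."""
    return {t} if isinstance(t, str) else set().union(*(atoms(c) for c in t[1:]))

def ev(t, v):
    """Evaluate a tree under a truth assignment v."""
    if isinstance(t, str):
        return v[t]
    op, *args = t
    if op == 'not':
        return not ev(args[0], v)
    if op == 'and':
        return ev(args[0], v) and ev(args[1], v)
    return ev(args[0], v) or ev(args[1], v)     # 'or'

def entails(ta, tb):
    """Logical consequence |= between the corresponding formulas."""
    ps = sorted(atoms(ta) | atoms(tb))
    return all(ev(tb, dict(zip(ps, bits)))
               for bits in product([False, True], repeat=len(ps))
               if ev(ta, dict(zip(ps, bits))))

def subtree(s, t):
    """True iff s occurs as a subtree of t."""
    return s == t or (isinstance(t, tuple) and any(subtree(s, c) for c in t[1:]))

def entails_M(ta, tb):
    """Subtree Principle, 1st version:
    Ta |=M Tb iff (i) Ta |= Tb and (ii) Tb is a subtree of Ta."""
    return entails(ta, tb) and subtree(tb, ta)
```

On this sketch `entails_M(('and','p','q'), 'q')` holds, while `entails_M('p', ('or','p','q'))` fails: the conclusion is a logical consequence of the premise but not a subtree of it. Likewise, a negated tree does not structurally imply its negand, since clause (i) fails.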
However, we have to exclude as "pathological" cases all those situations in which it is not one and the same subtree Tb that takes care of the logical and the structural side: we cannot allow inferences such as the following -- where S̄ abbreviates a "paraphrase" of S, namely a sentence that is logically, but not structurally, equivalent to S (see below) -- even though they fulfil both conditions of the Subtree Principle:
    (1) ¬¬S̄ ⊨_M S, (2) ¬¬(S̄1 ∨ S̄2) ⊨_M S1 ∨ S2.
These inferences are not structurally valid, given the structural differences between the conclusion and the relevant part of the premise. Let an atomic sentential fragment (asf) be a sentential M-tree no proper part of which is sentential itself. To be on the safe side, we might forbid that "paraphrases" of asf's from the conclusion occur in the premise:

    Subtree Principle (2nd version): If (i) Ta ⊨ Tb, and (ii) Tb is a
    subtree of Ta, and (iii) if T1 is an asf that occurs essentially in
    Ta and T2 is an asf that occurs essentially in Tb, then T1 is not a
    paraphrase of T2, then Ta ⊨_M Tb,

where a paraphrase is a logical equivalent that falls short of structural equivalence:

    Paraphrase (1st version): T1 is a paraphrase of T2 ⇔_def T1 ⊨ T2
    and T2 ⊨ T1, but none of the two is a subtree of the other.
The resulting logic is quite uncommon unless stronger lower bounds are given. For instance, if (ii) is a necessary condition, there cannot be any tree T such that ⊨_M T. Consequently, the Deduction Theorem will not hold. Also, if (iii) is a necessary condition, then Conjunction Elimination fails to hold. In fact, it holds for all S that S & S̄ ⊭_M S. To remedy this defect, (iii) may be weakened to allow logically inessential occurrences of paraphrases:
    Inessential occurrence: An occurrence of T in the premise
    (conclusion) of an inference is inessential if the inference goes
    through when T is replaced by an arbitrary T′ everywhere in the
    premise (conclusion).
For instance, the occurrence of S̄ in (1) and (2) is essential, but its occurrence in S & S̄ ⊨_M S is inessential and therefore harmless. As a result of this change, a restricted version of Conjunction Elimination holds, to the effect that a conjunction will structurally imply any of its conjuncts, provided the conclusion conjunct does not contain two asf's that are paraphrases of each other. This concludes our formalization of the "subtree" intuition. If we want to cover more ground, we need a more liberal concept than the structural notion of one tree being a subtree of another. First, a more subtle structural notion may be employed. For instance, an inference from Each dog barks loudly to Each black dog barks must be allowed, it seems, even though none of the two M-trees is a part of the other.
Therefore, a relation of constituent-wise comparability (≈c, definition follows) is called for. It is important to note that the "direction" of the comparison (which of the two subsumes which) is irrelevant, since the logical requirement (i) determines the direction of the inference:

    Subtree Principle (3rd version): If (i) Ta ⊨ Tb, and (ii) Ta ≈c Tb,
    and (iii) (as above), then Ta ⊨_M Tb.
If the notation ≈ stands for the symmetrical relation that holds between two trees if one of them is a subtree of the other, this is the definition of the relation:

    Comparability: Ta ≈c Tb ⇔_def ∃ m, n > 0 such that
    Ta = <Ta1, ..., Tan> and Tb = <Tb1, ..., Tbm>, where either for
    every Tai there is a Tbj such that Tai ≈ Tbj, or for every Tbj
    there is a Tai such that Tbj ≈ Tai.

Here, Ta = <Ta1, ..., Tan> means that Ta can be decomposed (at an arbitrary level of the tree) as the sequence Ta1, ..., Tan.
Example: The M-trees for Each black dog barks and Each dog barks stand in the relations ≈c and ≈, while the M-trees for Each dog barks loudly and Each black dog barks do not stand in the relation ≈, but they do stand in the relation ≈c. They are constituent-wise comparable, so since the first logically implies (⊨) the second, the first must also have the second as a structural consequence (⊨_M):
Each dog barks loudly:

              Ta
            /    \
        Ta1        Ta2
       /   \      /    \
     B1     B2   B3     B4
    Each   dog  barks  loudly

Each black dog barks:

              Tb
            /    \
        Tb1        B3
       /   \      barks
     B1     Tb12
    Each   /    \
         B5      B2
        black    dog

Here, Ta ≈c Tb holds, for Ta = <B1, B2, Ta2> and Tb = <B1, Tb12, B3>, while B2 is a subtree of Tb12 and B3 is a subtree of Ta2. End of Example.
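The relations ≈ and ≈c admit a direct sketch over unlabeled nested tuples. This is one of several ways to make the informal notions precise; in particular, 'subtree' is read strictly as node occurrence, and decompositions are taken to be frontier cuts of the tree:

```python
from itertools import product

def subtree(s, t):
    """True iff s occurs as a node of t (trees are leaves or tuples)."""
    return s == t or (isinstance(t, tuple) and any(subtree(s, c) for c in t))

def related(a, b):
    """The symmetrical relation: one tree is a subtree of the other."""
    return subtree(a, b) or subtree(b, a)

def cuts(t):
    """All decompositions of t into a left-to-right sequence of
    subtrees, obtained by cutting the tree at an arbitrary level."""
    yield [t]
    if isinstance(t, tuple):
        for parts in product(*(list(cuts(c)) for c in t)):
            yield [piece for part in parts for piece in part]

def comparable(ta, tb):
    """Constituent-wise comparability: some decompositions
    <Ta1..Tan>, <Tb1..Tbm> match element-wise in one direction
    or the other."""
    for ca, cb in product(cuts(ta), cuts(tb)):
        if all(any(related(x, y) for y in cb) for x in ca) or \
           all(any(related(y, x) for x in ca) for y in cb):
            return True
    return False

# The example trees: B1 = Each, B2 = dog, B3 = barks, B4 = loudly, B5 = black
ta = (('Each', 'dog'), ('barks', 'loudly'))    # Each dog barks loudly
tb = (('Each', ('black', 'dog')), 'barks')     # Each black dog barks
```

Here `related(ta, tb)` is false (neither tree is a part of the other), but `comparable(ta, tb)` holds via the decompositions <B1, B2, Ta2> and <B1, Tb12, B3>, just as in the example.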
Note that, by replacing the subtree notion by the symmetrical notion ≈c, we now allow a conclusion to introduce asf's that do not occur in the premise. For instance, under appropriate assumptions, it will hold that S ⊨_M "S and SL", for logically true SL. This defect can be remedied simply if we add a clause that prevents a conclusion from containing any novel asf's (see (iv), below).
So far, the Subtree Principle still formalizes a strictly structural approach. But there ought to be more than that. In the idiolect of a given language user, two grammar rules, or two lexical items, may be semantically related without any strictly structural notion being involved. Within the bounds of Validity and the Subtree Principle, the grammar writer is free to designate certain pairs of syntax rules or lexical items as semantically related. Since, again, the direction of the relation is irrelevant, this refinement can easily be built into the definition of ≈. If this is done, the relation ≈c will also hold between Each mammal barks loudly and Each black dog barks, assuming that 'mammal' and 'dog' are semantically related. Note, however, that these stipulations need not be the same for all language users: different stipulations of structural relatedness may reflect differences in linguistic competence ([Partee 1982]). In short, our proposal implements the hypothesis that structural relations hold for everyone, while linguistic relations allow individual variation.
If all the suggested improvements on the Subtree Principle are taken into account, one might venture the following definition of structural consequence:

    Subtree Principle (final version): Ta ⊨_M Tb ⇔_def (i) Ta ⊨ Tb, and
    (ii) Ta ≈c Tb, and (iii) if T1 is an asf that occurs essentially in
    Ta and T2 is an asf that occurs essentially in Tb, then T1 is not a
    paraphrase of T2, and (iv) all asf's of Tb occur in Ta.
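For a small propositional fragment (sentential M-trees as nested tuples, a representation of our own), the four clauses can be lined up as follows. The comparability and paraphrase relations are passed in as predicates, and clause (iii) is approximated by checking all asf pairs rather than only essential occurrences, a stricter stand-in for the definition in the text:

```python
from itertools import product

def atoms(t):
    """Atomic sentential fragments: here, the proposition letters of t."""
    return {t} if isinstance(t, str) else set().union(*(atoms(c) for c in t[1:]))

def ev(t, v):
    """Evaluate a tree under a truth assignment v."""
    if isinstance(t, str):
        return v[t]
    op, *args = t
    if op == 'not':
        return not ev(args[0], v)
    if op == 'and':
        return ev(args[0], v) and ev(args[1], v)
    return ev(args[0], v) or ev(args[1], v)     # 'or'

def entails(ta, tb):
    """Clause (i): logical consequence |= via truth tables."""
    ps = sorted(atoms(ta) | atoms(tb))
    return all(ev(tb, dict(zip(ps, bits)))
               for bits in product([False, True], repeat=len(ps))
               if ev(ta, dict(zip(ps, bits))))

def entails_M(ta, tb, comparable, paraphrase):
    """Subtree Principle, final version, with the comparability and
    paraphrase relations supplied as predicates."""
    return (entails(ta, tb)                                       # (i)  validity
            and comparable(ta, tb)                                # (ii) ~c
            and not any(paraphrase(t1, t2)                        # (iii), over all
                        for t1 in atoms(ta) for t2 in atoms(tb))  # asf pairs
            and atoms(tb) <= atoms(ta))                           # (iv) no novel asf's
```

With trivial predicates (comparability always true, no paraphrases), ('and','p','q') structurally implies 'q', but 'p' does not structurally imply ('or','p','q'): clause (iv) rejects the novel asf q, which is the "S ⊨_M S and SL" defect the clause was added for.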
Since the notion of a subtree has now been replaced by constituent-wise comparability, the notion of a paraphrase must be redefined:

    Paraphrase (final version): T1 is a paraphrase of T2 ⇔_def
    T1 ⊨ T2 and T2 ⊨ T1, but not T1 ≈c T2.

Assuming that a notion of inference has been established along these lines, synonymy between M-trees can now be defined as mutual structural consequence (synonymy of D-trees is analogous):

    Synonymy (final version): T1 and T2 are synonymous ⇔_def
    T1 ⊨_M T2 and T2 ⊨_M T1.
If the clauses in the first or the second version of the Subtree Principle are taken as collectively sufficient and necessary, the defined notion of synonymy coincides with the original Rosetta notion of "having the same M-tree". (In this case, Ta ⊨_M Tb and Tb ⊨_M Ta ⇔ Ta = Tb.) This conveniently simple situation breaks down in later versions of the Subtree Principle, where the relation of constituent-wise comparability is used. A simple example:

    (a) John walks and John walks slowly, and
    (b) John walks slowly and John walks.

The Subtree Principle (3rd or final version) implies that (Ta ⊨_M Tb) & (Tb ⊨_M Ta), and therefore, (a) and (b) are predicted to be synonymous, despite the difference between their corresponding M-trees -- which would have made them nonsynonymous in Rosetta's original notion of synonymy.
5 Applications and Limitations
In section 4, we presented one among several possible ways in which a notion of structural consequence can be defined on the basis of Rosetta's M-trees. Now we will indicate briefly how M-trees can be applied to propositional attitudes and to natural language generation outside the context of automatic translation. But there is a caveat, discussed under the header of "mixed inference".
Propositional Attitudes. Given that meanings are M-trees, the natural solution to the problem of "de dicto" propositional attitudes is to let epistemic attitudes denote a relation between an individual and an M-tree. Consequently, if a person x knows that S, while S′ and S share the same M-tree, then x is predicted to also know that S′. Since this amounts to a much stronger relation of synonymy than logical equivalence (t-equivalence), the problems noted in the introduction do not arise. The general situation is that if x knows that S and S ⊨_M S′′, then it is predicted that x also knows that S′′.
Natural Language Generation. Even outside the domain of automatic translation, M-trees can be used for natural language generation. For example, in a natural language question-answering application, the M-tree derived from the input question can serve as a basis for generation, after some operations on the original M-tree, in which a yes-no question is changed into an affirmative or negative answer, for instance. In most applications where there is no M-tree available, other means than M-trees can be used. For instance, when the user of a query system asks assistance from the computer's help facility, pre-stored natural language text can replace M-trees.
Mixed Inference. We have seen that inference on the basis of M-trees is feasible, but how about inference on the basis of premises, some of which are purely logical while others are fully dressed M-trees? Two obvious approaches (where ⊨_m denotes mixed inference, and Tφ is a variable over those M-trees having φ corresponding to their top nodes) are
    (i) φ, T ⊨_m ψ ⇔_def φ, χ ⊨ ψ (χ is the corresponding formula of
    T's top node),
    (ii) φ, T ⊨_m Tb ⇔_def for all Tφ: Tφ, T ⊨_M Tb.
Neither solution is satisfying: the first leaves T's linguistic structure unused; and the second, which quantifies over all the possible ways in which φ can be expressed, is computationally intractable. The problem with mixed inference illustrates the one weak side of structured meanings: they let linguistic structure contribute to the meaning of an expression, but it is impossible to say (in model theoretic terms) what it contributes.

References 

[Carnap 1947] Carnap, R. Meaning and Necessity. University of Chicago Press, Chicago.

[Cresswell 1985] Cresswell, M.J. Structured Meanings. The Semantics of Propositional Attitudes. MIT Press, Cambridge, Mass.

[van Deemter 1989] van Deemter, K. Structured Meanings Revisited. IPO Manuscript 693-II, to appear in Bunt and van Hour (eds.), Language Technology. Foris, Dordrecht.

[de Jong and Appelo 1987] de Jong, F. and Appelo, L. Synonymy and Translation. In: Proceedings of the 6th Amsterdam Colloquium, 1987.

[Landsbergen 1982] Landsbergen, J. Machine Translation Based on Logically Isomorphic Montague Grammars. In: Proceedings COLING 1982.

[Landsbergen 1985] Landsbergen, J. Isomorphic Grammars and their Use in the ROSETTA Translation System. Philips Research M.S. 12.950. In: King, M. (ed.), Machine Translation: the State of the Art. Edinburgh University Press.

[Landsbergen 1987] Landsbergen, J. Montague Grammar and Machine Translation. In: Whitelock, P. et al. (eds.), Linguistic Theory and Computer Applications. Academic Press, London.

[Lewis 1972] Lewis, D. General Semantics. In: D. Davidson and G. Harman (eds.), Semantics of Natural Language. Reidel, Dordrecht.

[Montague 1974] Montague, R. The Proper Treatment of Quantification in Ordinary English. In: R.H. Thomason (ed.), Formal Philosophy. Yale University Press, New Haven and London.

[Partee 1982] Partee, B. Belief Sentences and the Limits of Semantics. In: Peters and Saarinen (eds.).

[Peters and Saarinen 1982] Peters, S. and Saarinen, E. (eds.) Processes, Beliefs and Questions. D. Reidel, Dordrecht.
