A Tradeoff between Compositionality and Complexity in the Semantics of Dimensional Adjectives

2 Compositionality in the Semantics of Adjectives

There is a vast amount of linguistic data on which a formal semantics of adjectives can be evaluated, such as the interaction of comparative and equative complements with scope-bearing operators: quantifiers, logical connectives, modal operators and negative polarity items (e.g. John is taller than I will ever be). A good theory must also account for the phenomenon of markedness, i.e. the semantic asymmetry of the antonyms (see [Lyons, 1977, Sect. 9.1]). However, I will ignore these issues in order to focus on the matter of compositionality. Thus I classify the existing theories of adjective meaning very coarsely as 'compositional' or 'non-compositional'. Note that these labels indicate only whether or not the treatment of difference and factor terms is compositional; in other respects, all of the theories mentioned below are compositional.

To begin with, I presuppose a component of dimensional designation that determines which property of an object is described by an adjective, thus determining that short conference describes a duration but short stick describes the length of the stick's elongated axis.[1] Each class of properties (duration, length, etc.) is assumed to be associated with a set of degrees reflecting their magnitudes. I will simply use the function expression amount(p(x)) to denote the degree to which entity x exhibits property p. Each set of degrees is assumed to be ordered, and I will use the symbols ⊐ and ⊒ for the ordering relations. Most authors assume measurement theory ([Krantz et al., 1971]) as the axiomatic basis in the formal semantics of linguistic measurement expressions (cf. [Klein, 1991]). For measurement expressions such as 3 cm, I simply use a tuple (3, cm) denoting a degree. Finally, I follow [Bierwisch, 1989] in using the symbol Nc(a) for the 'norm' expected for amount a in context C. This reflects the usual assumption that the positive expresses a relation to a context-dependent standard.

Table 1: Non-compositional approach
  a. Positive      amount(length(board)) {⊐/⊏} Nc(length(board))
  b. Comparative   amount(length(board)) {⊐/⊏} amount(width(table))
  c. Equative      amount(length(board)) ⊒ amount(width(table))
  d. Measurement   amount(length(board)) = (50, cm)

Table 2: Compositional approach
  a. Positive      amount(length(board)) = Nc(length(board)) ± D
  b. Comparative   amount(length(board)) = amount(width(table)) ± D
  c. Equative      amount(length(board)) ⊒ n × amount(width(table))
  d. Measurement   amount(length(board)) = (50, cm)

[1] I have only recently become acquainted with Eero Hyvönen's "tolerance propagation" (TP) approach to constraint propagation over intervals (see [Hyvönen, 1992]), which in some circumstances can compute solutions that are superior to those of the Waltz algorithm, but at the price of increased complexity. I comment on this briefly in section 3.2.
In this paper, I will restrict my attention to norms that are typical for the categories named in the sentence, such as tall for an adult Dutchman, slow for a sports car, etc.[2]

The class of theories that I am referring to as 'non-compositional' includes those of [Cresswell, 1976], [Hoeksema, 1983] and [Pinkal, 1990], who propose formulas similar to those in Table 1 as interpretations of the sentences in (3). The relation used in place of the expression {⊐/⊏} is ⊐ for the unmarked case (e.g. tall) and ⊏ for the marked case (short).[3] I call this approach non-compositional because interpretations of the differential comparative (6 cm longer than) and of the equative with factor term (three times as long as) are not derivable from the formulas shown in lines (b) and (c) (the same can be said of [Kamp, 1975] and [Klein, 1980]).

The compositional approach is taken by [Hellan, 1981], [von Stechow, 1984] and [Bierwisch, 1989], whose renderings of (3) are, in simplified form, something like those in Table 2. The symbol '±' is + in the unmarked case and − in the marked case, and '×' stands for scalar multiplication.[4] In the case of the positive and the ordinary comparative, the difference term D is existentially quantified, as is the factor term n in the case of the ordinary equative (with the additional condition that n is greater than or equal to one). But if the difference or factor term is realized in the sentence surface, then its contribution to (b) and (c) in Table 2 is embedded compositionally.[5]

[2] Clearly, there are many other kinds of norms. Jan is tall may mean tall for his age, taller than I expected, etc. [Sapir, 1944] is still one of the best surveys of the norms employed in natural language, while Bierwisch has a more modern analysis.

[3] Of course, Tables 1 and 2 are strong simplifications that fail to reflect important differences between the authors mentioned that are unrelated to the issue of compositionality.

[4] In measurement theory, the '+' operation is interpreted as concatenation in the empirical domain, and scalar multiplication is interpreted as repeated concatenation. Krantz et al. [1971] show that under proper axiomatization, concatenation is homomorphic to addition on the reals.

[5] Bierwisch [1989] differs from the other authors advocating a compositional approach in that he does not assume the interpretation of the equative shown in Table 2. He points out (p. 85) that this analysis does not account for the fact that the equative is norm-related in the unmarked case: Fritz is as short as Hans presupposes that Fritz and Hans are short. Moreover, it is not clear whether this approach can capture the duality of comparatives and equatives: Fritz is taller than Hans should be semantically equivalent to Hans is not as tall as Fritz. However, Bierwisch does assume a representation like this for equatives with realized factor terms.

For the computational analysis, we will need to classify the relations shown in Tables 1 and 2, since these relations form the input to a knowledge base. But to do so, we must first decide what sorts of entities the difference and factor terms denote. I assume that they do not denote constants, since we may be just as uncertain of their magnitudes as we are of the other magnitudes mentioned in the sentences. Thus it should be possible to treat each of the mini-discourses in (4)-(6) in a similar fashion:

(4) a. The board is 90 to 100 cm long.
    b. In fact, it is about 95 cm long.
(5) a. The board is longer than the table is wide.
    b. In fact, it is about 6 cm longer.

(6) a. The board is five to ten times as long as the table is wide.
    b. In fact, it is about seven times as long.

The information given in the (b) sentences of (4)-(6) can be accounted for by simply modifying the terms introduced in (a). Hence, the difference and factor terms, like the 'amount' terms in Tables 1 and 2, denote uncertain quantities whose magnitudes may be constrained by sets of sentences. I will refer to these terms generally as 'parameters'.

With this assumption, we can classify the relations in Tables 1 and 2 as follows:

(7) Non-compositional
    a. Ordering relations (Positive, Comparative, Equative)
    b. Linear relations of the form amount(x) + D ⊒ amount(y) (Differential Comparative)
    c. Product relations of the form n × amount(x) ⊒ amount(y) (Equative with factor term)

(8) Compositional
    a. Linear relations (Positive, Comparative, Differential Comparative)
    b. Product relations (Equative with and without factor term)

In both approaches, measurements simply serve to identify the degree to which an object exhibits the property in question.

Under the compositional approach, it is possible to assume a single semantic representation in the lexicon for each adjective stem and each morphosyntactic category, such that the formulas in Table 2 are generated from those lexical entries. Bierwisch [1989], for example, proposes lexical entries of the following form for each dimensional adjective:

    λc λx [amount(p(x)) = v ± c]

where c is a difference value and v is a comparison value (see [Bierwisch, 1989] for details).

But the elegance of the compositional approach comes at the price of lexical semantic representations that include addition and multiplication operators, which is precisely what Pinkal [1990] and Klein [1991] have criticized: they find the assumption of mathematical operations as basic constituents of lexical meaning uncomfortably strong. This is one of the reasons why Pinkal proposes separate lexical entries for each morphosyntactic form of an adjective.
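To make the classification in (7) and (8) concrete before turning to the computational analysis, the mini-discourses (4)-(6) can be spelled out as interval-labelled parameters and constraints. The following sketch is illustrative only; the parameter names, the constraint encodings, and the intervals assumed for 'about 95 cm' and 'about seven times' are my own choices, not part of the cited analyses.

def meet(a, b):
    # intersection of two interval labels; fails if inconsistent
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    assert lo <= hi, "inconsistent information"
    return (lo, hi)

INF = float("inf")
labels = {
    "board_length": (0.0, INF),   # amount(length(board)), in cm
    "table_width":  (0.0, INF),   # amount(width(table)), in cm
    "D":            (0.0, INF),   # difference term of (5b)
    "n":            (1.0, INF),   # factor term of (6a), with n >= 1
}

# (4a) "90 to 100 cm long", then (4b) "about 95 cm" (read here as +/- 1 cm)
labels["board_length"] = meet(labels["board_length"], (90.0, 100.0))
labels["board_length"] = meet(labels["board_length"], (94.0, 96.0))

# (6a) "five to ten times", then (6b) "about seven times" (+/- 0.5 assumed)
labels["n"] = meet(labels["n"], (5.0, 10.0))
labels["n"] = meet(labels["n"], (6.5, 7.5))

constraints = [
    ("ordering", "board_length", "table_width"),       # (5a), cf. (7a)
    ("linear",   "table_width", "D", "board_length"),  # (5b), cf. (7b)/(8a)
    ("product",  "n", "table_width", "board_length"),  # (6a), cf. (7c)/(8b)
]
print(labels)

The further-specifying (b) sentences thus simply tighten the labels of parameters introduced by the (a) sentences; no new machinery is needed.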
3 The Complexity of Constraint Propagation

The objection to the complexity of the lexical meaning representations required for the compositional approach appeals to intuitions of parsimony, and is in part a matter of philosophical opinion that may be difficult to resolve. Perhaps a decision could be made on the basis of psycholinguistic experimentation, but I will pose a more utilitarian question in this section by examining whether the increase in representational complexity in the transition from Table 1 to Table 2 entails an increase in the computational complexity of reasoning for a knowledge base containing those representations.

The reasoning paradigm to be investigated is constraint propagation (sometimes called constraint satisfaction) over real-valued intervals. Intervals are intended to account for uncertainty in quantitative knowledge. For example, the measurement of a parameter at 20 units on some scale with a possible measurement error of ±0.5 units is represented as [19.5, 20.5], to be interpreted as meaning that the unknown measurement value in question lies somewhere in the set {x | 19.5 ≤ x ≤ 20.5}. Additional knowledge about the relations that hold between parameters constrains their possible values to smaller sets (hence the term 'constraints' for the propositions in a knowledge base expressing such relations).

Constraint propagation over intervals has been applied in spatial reasoning ([McDermott and Davis, 1984; Davis, 1986; Brooks, 1981; Simmons, 1992]), temporal reasoning (e.g. [Dean, 1987; Allen and Kautz, 1985]) and in systems of qualitative physics (see [Weld and de Kleer, 1990; Bobrow, 1985]). Intervals have a very obvious weakness in that the highly precise choice of endpoints can rarely be well motivated in natural domains such as these. In particular, the reasoner may draw very different inferences, e.g. about whether two intervals overlap, if the endpoint of some interval is changed by what seems to be an insignificant amount. Thus, as McDermott and Davis [1984] note, such a system must not only be able to report whether two intervals overlap, but also "how close" they come to overlapping:

    If they do come close ..., then ... [the reasoner] must decide whether to act on the suspect information or work to gather more, which is really the only interesting decision in a case like this. Eventually, when all possible information has been gathered, if things are still close to the borderline then a decision maker must just use some arbitrary criterion to make a decision. We don't see how anyone can escape this. [McDermott and Davis, 1984, p. 114]

A formalism such as fuzzy logic attempts to alleviate the problem of sharp borderlines by using infinitely many intermediate truth values for vague predicates. I happen to have reservations about the adequacy of fuzzy logic for this task,[6] but I have chosen to study constraint propagation mainly because its computational properties are well researched and are attractive for applications in which the potential overprecision of endpoints can be tolerated. Thus it provides a sound basis for comparing the semantic analyses presented in section 2.

[6] This is not because I object to the notion of truth measurement, but rather because I believe that the fuzzy logicians' assumption that the connectives of a logic of vagueness are truth-functional is contradicted by the facts of human reasoning about vague concepts (as argued by [Pinkal, to appear]). In my opinion, a formalism for truth measurement would have to be more like probability theory.
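A minimal sketch of this interval representation (my own encoding, not taken from the systems cited above): labels are pairs of reals, tightening is intersection, and a crude 'how close' measure addresses the borderline problem that McDermott and Davis describe.

def intersect(a, b):
    """Tighten: the values consistent with both interval labels."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None   # None signals inconsistency

def gap(a, b):
    """Distance between two disjoint intervals (0 if they overlap);
    a crude stand-in for McDermott and Davis's 'how close'."""
    return max(0.0, b[0] - a[1], a[0] - b[1])

m = (19.5, 20.5)                     # 20 units, measurement error +/- 0.5
print(intersect(m, (20.0, 25.0)))    # -> (20.0, 20.5)
print(gap(m, (20.6, 21.0)))          # -> ~0.1: disjoint, but barely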
3.1 Syntax and Semantics

In the following, I briefly review some definitions from [Davis, 1987, Appendix B] (with slight modifications).

Syntax. Assume a set of symbols X = {X1, ..., Xp} called parameters. A label is written [x−, x+] with real numbers 0 ≤ x− ≤ x+;[7] the symbol ∞ may also be used for x− and x+. A labelling L for X is a function from parameters to labels. If L is understood, we write Xi = [x−, x+] for L(Xi) = [x−, x+]. A constraint is a formula over parameters in X in some accepted notation (e.g. X1 × X2 = X3). A constraint system consists of a set X of parameters, a set C of constraints over X, and a labelling L for X.

[7] I assume the non-negative reals for simplicity, because most of the physical properties mentioned in the examples have non-negative measurement scales. Even some of the exceptions, such as the common temperature scales, are in fact equivalent to a scale of non-negative values.

Semantics. A valuation V for X is a function from the parameters to the reals. The denotation of a label [x−, x+] is the set D([x−, x+]) = {x | x− ≤ x ≤ x+} if x+ ≠ ∞; D([x−, ∞]) = {x | x− ≤ x} if x− ≠ ∞; and D([∞, ∞]) = {∞} otherwise. A labelling L is interpreted as restricting the set of possible valuations for X to those V such that for all Xi ∈ X, if L(Xi) = [x−, x+], then V(Xi) ∈ D([x−, x+]). Thus we may view L as denoting a set of valuations on the parameters; we refer to this set as V(L). A constraint Cj denotes the largest set of valuations that are consistent with the relation expressed by Cj; call this set V(Cj).
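These denotational definitions can be read off directly as executable checks. The following sketch is my own encoding of Davis' definitions, with two illustrative constraint shapes; it decides whether a given valuation lies in V(L) and in V(Cj).

import math

# Sketch only: labels are pairs, valuations are dicts from parameter
# names to reals; math.inf plays the role of the symbol oo.

def in_label(v, label):
    """V(Xi) in D([x-, x+])."""
    lo, hi = label
    return lo <= v <= hi

def satisfies_labelling(V, L):
    """V in V(L): every parameter's value lies in its label."""
    return all(in_label(V[x], L[x]) for x in L)

def satisfies_constraint(V, c):
    """V in V(Cj), for two illustrative constraint shapes."""
    kind, x1, x2, x3 = c
    if kind == "prod":                 # X1 * X2 = X3
        return math.isclose(V[x1] * V[x2], V[x3])
    if kind == "sum":                  # X1 + X2 = X3
        return math.isclose(V[x1] + V[x2], V[x3])
    raise ValueError(kind)

L = {"X1": (1.0, 2.0), "X2": (0.0, math.inf), "X3": (3.0, 8.0)}
V = {"X1": 2.0, "X2": 2.0, "X3": 4.0}
print(satisfies_labelling(V, L))                             # True
print(satisfies_constraint(V, ("prod", "X1", "X2", "X3")))   # True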
3.2 Constraint Propagation Algorithms

The task of a constraint propagation algorithm (CPA) is to tighten the interval labels in an attempt either (1) to find a labelling that is just tight enough to be consistent with the constraints and the initial labelling, or (2) to signal inconsistency. Constraint propagation separates a stage of assimilation, during which intervals are tightened, from querying, during which the tightened values are reported. It is also possible to infer previously unknown relations between the parameters in the querying stage by inspecting the tightened intervals. This method of reasoning may be applied in the linguistic application under study, for example to derive the sentences in (2) above from (1).

A CPA is sound if V(C1) ∩ ... ∩ V(Cn) ∩ V(L1) ⊆ V(L) for every labelling L returned by the algorithm, where {C1, ..., Cn} is the set of constraints in the system and L1 is the initial labelling. It is complete if V(L) ⊆ V(C1) ∩ ... ∩ V(Cn) ∩ V(L1) for every L that it returns. In other words, the algorithm is sound if it does not eliminate any values that are consistent with the starting state of the system, and complete if it returns only such values.

As we will see, CPAs for intervals can only be complete under very restricted circumstances. Thus Davis defines a weaker form of completeness for the assimilation process. A CPA is complete for assimilation if every labelling L that it returns assigns labels Xi = [x−, x+] such that for every vi ∈ D([x−, x+]) there is some valuation Vi ∈ V(C1) ∩ ... ∩ V(Cn) with Vi(Xi) = vi. That is, the label assigned to each parameter accurately reflects the range of values it may attain given the constraints in the system.

The Waltz algorithm, which is stated below, is superior to many other CPAs in these respects. It is a sound algorithm, unlike the Monte Carlo method used by [Davis, 1986] and the hill-climber used by [McDermott and Davis, 1984]. Moreover, for constraint systems containing restricted types of constraints, the Waltz algorithm is complete for assimilation and terminates very quickly. In contrast, Davis reports that the hill-climbers used by [McDermott and Davis, 1984] were prohibitively slow and unreliable.

The algorithm is based on an operation called refinement, defined as follows. Given a constraint Cj, a parameter Xi appearing in Cj, and a labelling L, define

    REFINE(Cj, Xi, L) = {V(Xi) | V ∈ V(Cj) ∩ V(L)}

This is the set of values of Xi that are consistent with both the labelling and the constraint.

The two refinement operators for a constraint Cj and parameter Xi are functions from labellings to labellings, written R−(Xi, Cj) and R+(Xi, Cj). If L(Xi) = [x−, x+], then R−(Xi, Cj)(L) is formed by replacing x− in L with the lower bound of REFINE(Cj, Xi, L), and R+(Xi, Cj)(L) is formed by replacing x+ in L with the upper bound of REFINE(Cj, Xi, L). We say that these refinements are based on Cj. If the upper and lower bounds of REFINE are computable, then refinement is by definition a sound operation.

For a constraint system C = (X, {C1, ..., Cn}, L), L is quiescent for a set R of refinement operators if no operator in R changes L. The solution to C (if it exists) is the labelling L′ denoting the largest set of valuations V(L′) ⊆ V(L) ∩ V(C1) ∩ ... ∩ V(Cn) such that L′ is quiescent for any set of refinements based on the constraints in the system. If no such solution exists, then C is inconsistent.

The Waltz algorithm repeatedly executes refinements until the system is quiescent, and returns the solution (or signals inconsistency) if it terminates (cf. [Davis, 1987, p. 286]).

procedure WALTZ
    L ← the initial labelling
    Q ← a queue of all constraints
    while Q ≠ ∅ do begin
        remove constraint C from Q
        for each Xi appearing in C
            if REFINE(C, Xi, L) = ∅
            then return INCONSISTENCY
            else L ← the result of executing R−(Xi, C) and R+(Xi, C) on L
        for each Xi whose label was changed
            for each constraint C′ ≠ C in which Xi appears
                add C′ to Q
    end

Since refinement is a sound operation, the Waltz algorithm is sound. The completeness, termination and time complexity of the algorithm depend on what kinds of relations appear as constraints in the system, and on the order in which constraints are taken off the queue. The results for systems consisting exclusively of one of the three kinds of relations mentioned in (7)-(8) in section 2 are given in Table 3, under the assumption that constraints are selected in FIFO order or a fixed sequential order (other orderings lead to worse results). Time complexity is measured as the number of iterations through the main loop of the algorithm. For comparison, Table 3 also gives the best known times for complete solutions to systems of such relations.[8]

[Table 3 appears here: completeness, termination and time complexity of the Waltz algorithm for the three types of relations, alongside the best known times for complete solutions. † Terminates in arbitrarily long (finite) time if the system is inconsistent. ‡ May not terminate if the solution is inadmissible (see text).]

[8] Hyvönen's [Hyvönen, 1992] tolerance propagation (TP) approach is similar to the Waltz algorithm, but it uses a queue of solution functions from interval arithmetic [Alefeld and Herzberger, 1983] rather than refinement operations. The "global TP" method computes complete solutions, but at the price of increased complexity. In the "local" mode, tolerance propagation is very similar to the Waltz algorithm in its computational properties.

In the linguistic application proposed here, the term S in Table 3 (the sum of the lengths of all of the constraints) is proportional to c (the number of constraints), since there are no more than three parameters in each constraint. Hence, O(pS) is O(pc) in this application.
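To make the procedure concrete, here is a minimal runnable sketch of the Waltz loop, assuming constraints of the form X1 + X2 = X3 over non-negative intervals; the REFINE bounds are computed by interval arithmetic, which is exact for this constraint shape. The encoding and names are my own and illustrate the control structure, not Davis' implementation.

from collections import deque

INF = float("inf")

def refine_sum(c, xi, L):
    """REFINE(C, Xi, L) for C of the form a + b = s: the interval of
    values of Xi consistent with the constraint and the other labels."""
    a, b, s = c
    (al, au), (bl, bu), (sl, su) = L[a], L[b], L[s]
    if xi == s:
        lo, hi = al + bl, au + bu
    elif xi == a:
        lo, hi = sl - bu, su - bl
    else:  # xi == b
        lo, hi = sl - au, su - al
    # intersect with the current label, staying on the non-negative reals
    cl, cu = L[xi]
    return (max(lo, cl, 0.0), min(hi, cu))

def waltz(L, constraints):
    """Refine labels until quiescence; return labelling, or None
    to signal inconsistency."""
    L = dict(L)
    Q = deque(constraints)
    while Q:
        c = Q.popleft()
        for xi in c:
            lo, hi = refine_sum(c, xi, L)
            if lo > hi:
                return None                  # REFINE is empty: INCONSISTENCY
            if (lo, hi) != L[xi]:
                L[xi] = (lo, hi)             # R-(Xi, C) and R+(Xi, C)
                # requeue every other constraint mentioning Xi
                for c2 in constraints:
                    if c2 is not c and xi in c2 and c2 not in Q:
                        Q.append(c2)
    return L

# (5b) as a constraint: table_width + D = board_length
L0 = {"table_width": (0.0, INF), "D": (5.0, 7.0),
      "board_length": (94.0, 96.0)}
print(waltz(L0, [("table_width", "D", "board_length")]))
# -> table_width tightened to (87.0, 91.0)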
Note that Table 3 gives results for linear inequalities with unit coefficients (of the form p ≤ Σi Xi − Σj Xj ≤ q, where no coefficients differ from 1 or −1). These are the only kind of linear inequalities under consideration in the linguistic application. In general, the Waltz algorithm breaks down if the system contains more complex relations, such as linear inequalities with arbitrary coefficients or product relations, since it may go into infinite loops even if the starting state of the system was consistent. Consider, for example, the set of constraints {n1 × X = Y, n2 × X = Y} with the starting labels n1 = [1, 1], n2 = [2, 2], X = [0, 100] and Y = [0, 100]. The system continually bisects the upper bounds of X and Y without ever being able to reach the solution, which is X = [0, 0] and Y = [0, 0]; similarly, if the labels have positive lower bounds and no finite upper bounds, the lower bounds are continually doubled without reaching the solution X = [∞, ∞] and Y = [∞, ∞].

However, it is shown in [Simmons, 1993] that this happens only if the solution contains labels of this kind. Define a label as admissible if it is not equal to [0, 0] or [∞, ∞]; otherwise, it is inadmissible. A labelling L is admissible if it assigns only admissible labels; otherwise, L is inadmissible. Then it can be shown that if a system of product constraints is consistent and its solution is admissible, the Waltz algorithm terminates in O(pS) time. Moreover, if the system is inconsistent, the algorithm will find the inconsistency in finite but arbitrarily long time. Unfortunately, the proof is too long to include in the present paper, but a brief outline of the argument is given in the Appendix.

Systems with linear inequalities or product constraints are liable to enter infinite or very long loops if the starting state is inconsistent (or if the solution is inadmissible in the case of products). Davis [1987, pp. 305-306] suggests a strong heuristic for detecting and terminating such long loops: stop if we have been through the queue p times (for p parameters). He is not clear about what he means by "having been through the queue p times", but I interpret him as meaning that we should stop if any constraint has been taken off the queue more than p times. The rationale is the observation that in practice, most systems that do terminate normally seem to do so before this condition is fulfilled, much sooner than the worst-case time predicted by the complexity analysis. The reliability of such a heuristic is one of the topics of the next subsection.
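The bisection behaviour and the queue-count heuristic are easy to reproduce. The sketch below (again my own minimal encoding) runs product-constraint refinement on the example above and stops once some constraint has been taken off the queue more than 2p times, the relaxed form of Davis' cutoff discussed in section 3.3.

from collections import deque

def refine_product(c, L):
    """One refinement pass for a constraint n * X = Y: tighten X and Y
    against each other's labels (the lower bound of n is assumed > 0)."""
    n, x, y = c
    (nl, nu), (xl, xu), (yl, yu) = L[n], L[x], L[y]
    changed = False
    # Y must lie in n*X; X must lie in Y/n
    for p, lo, hi in ((y, nl * xl, nu * xu), (x, yl / nu, yu / nl)):
        nlo, nhi = max(L[p][0], lo), min(L[p][1], hi)
        if (nlo, nhi) != L[p]:
            L[p] = (nlo, nhi)
            changed = True
    return changed

L = {"n1": (1.0, 1.0), "n2": (2.0, 2.0),
     "X": (0.0, 100.0), "Y": (0.0, 100.0)}
constraints = [("n1", "X", "Y"), ("n2", "X", "Y")]

p = len(L)                           # number of parameters
counts = {i: 0 for i in range(len(constraints))}
Q = deque(range(len(constraints)))
while Q:
    i = Q.popleft()
    counts[i] += 1
    if counts[i] > 2 * p:            # Davis' heuristic, relaxed to 2p
        print("terminated by heuristic at", L)
        break
    if refine_product(constraints[i], L):
        # requeue the other constraints (here both mention X and Y)
        Q.extend(j for j in range(len(constraints)) if j != i and j not in Q)
# The upper bounds of X and Y halve on every pass toward the
# inadmissible solution [0, 0], so the heuristic eventually fires.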
3.3 Empirical Testing

The analytic results given in the previous subsection leave two important questions open:

* What is the complexity of constraint propagation if the system contains different kinds of constraints?
* How reliable is Davis' heuristic for terminating infinite (or very long) loops?

The first question lends itself to an analytic answer, but the results are not known at present. We can, however, seek empirical evidence by running the algorithm on mixed systems of constraints to see whether the time to termination is significantly greater than the complexity expected for systems containing just the most complex type of relation in the system. If this does not happen for a number of representative systems, we may conjecture that the combination of constraints has not made the problem more complex. The second question can only be answered empirically, by testing whether the heuristic tends to terminate the algorithm too soon (i.e. whether it terminates refinement of systems that might have terminated normally in a short time).

Empirical investigations of these questions are reported in [Simmons, 1993], and described briefly here. To investigate the first question, the algorithm was run on a number of large, consistent constraint systems with admissible solutions in which the three types of constraints shown in Table 3 appeared in approximately equal numbers. On each run, the constraints in the initial queue were permuted randomly to suppress the possible effects of ordering. None of these runs required more time to termination than is predicted by the O(pS) result for systems containing just unit linear inequalities or just product constraints.

To investigate the second question, I attempted to build consistent constraint systems with admissible solutions that are terminated by Davis' heuristic sooner than they would have been normally. It turns out that the algorithm runs to completion on almost all systems that were tested long before any constraint is taken off the queue p times, although there are systems for which refinement is terminated too soon by this heuristic. If the limit is increased by a constant factor, e.g. if assimilation is stopped only after some constraint has been processed 2p times, then the risk of early termination is greatly reduced.

In all, the empirical results on the open questions mentioned above have been encouraging. It is an admitted weakness of these tests, however, that they were performed on systems built by hand, not on constraint systems that occur "naturally" as part of an NL interface to a KR system.