
<?xml version="1.0" standalone="yes"?>
<Paper uid="E93-1013">
  <Title>LFG Semantics via Constraints</Title>
  <Section position="3" start_page="98" end_page="100" type="metho">
    <SectionTitle>
2 Theoretical preliminaries
</SectionTitle>
    <Paragraph position="0"> In the following, we describe two linguistic assumptions that underlie this work. First, we assume that various aspects of linguistic structure (phonological, syntactic, semantic, and other aspects) are formally represented as projections and are related to one another by means of functional correspondences. We also assume that the relation between the thematic roles of a verb and the grammatical functions that realize them are specified by means of mapping principles which apply postlexically.</Paragraph>
    <Paragraph position="1"> Projections. We adopt the projection architecture proposed by Kaplan \[1987\] and Halvorsen and Kaplan \[1988\] to relate f-structures to representations of their meaning: f-structures are put in functional correspondence with semantic representations, similar to the correspondence between nodes of the constituent structure tree and f-structures. The semantic projection of an f-structure, written with a subscript ~r, is a representation of the meaning of that f-structure.</Paragraph>
    <Paragraph position="2"> Thus, the notation 'Ta' in the lexical entries given in Figure 1 stands for the semantic projection of the f-structure 'T'; similarly, '(T svBJ)a' is the semantic projection of (i&amp;quot; StTBJ). The equation T,= Bill indicates that the semantic projection of 1&amp;quot;, the f-structure introduced by the NP Bill, is Bill. The lexical entry for Hillary is analogous. When a lexical entry is used, the metavariable '~&amp;quot; is instantiated and replaced with an actual variable corresponding to an f-structure f,~ \[Kaplan and Bresnan, 1982, page 183\].</Paragraph>
    <Paragraph position="3"> Similarly, the metavariable 'T~' is instantiated to a logic variable corresponding to the meaning of the f-structure. In other words, the equation T~,= Bill is instantiated as fna = Bill for some logic variable  fnatecedents). It arises from transferring to linear logic the ideas underlying the concurrent constraint programming scheme of Saraswat \[1989\] -- an explicit formulation for the higher-order version of the linear concurrent constraint programming scheme is given in Saraswat and Lincoln \[1992\]. A nice tutorial introduction to linear logic itself may be found in Scedrov \[1990\].</Paragraph>
    <Paragraph position="4"> We have used the multiplicative conjunction (r) and linear implication -o connectives of linear logic, rather than the analogous conjunction A and implication ~ of classical logic. For the present, we can think of the linear and classical connectives as being identical. Similarly, the of course connective '!' of linear logic can be ignored for now. Below, we will discuss respects in which the linear logic connectives have properties that are crucially different from their counterparts in classical logics.</Paragraph>
    <Paragraph position="5"> Mapping principles. We follow Bresnan and Kanerva \[1989\], Alsina \[1993\], Butt \[1993\] and others in assuming that verbs specify an association between each of their arguments and a particular thematic role, and that mapping principles associate these thematic roles with surface grammatical functions; this assumption, while not necessary for the treatment of simple examples such as the one discussed in Section 3, is linguistically well-motivated and enables us to provide a nice treatment of complex predicates, to be discussed in Section 5.</Paragraph>
    <Paragraph position="6"> The lexical entry for kiss specifies the denotation of (T PRED): it requires two arguments which we will label agent and theme. Mapping principles ensure that each of these arguments is associated with some grammatical function: here, the SUBa of kiss (Bill) is interpreted as the agent, and the OBa of kiss (Hillary) is interpreted as the theme. The specific mapping principles that we assume are given in Figure 2.</Paragraph>
    <Paragraph position="7"> The function of the mapping principles is to specify the set of possible associations between grammatical functions and thematic roles. This is done by means of implication. Grammatical functions always appear on the left side of a mapping principle implication, and the thematic roles with which those grammatical functions are associated appear on the right side. Mapping principle (1), for example, relates the thematic roles of agent and theme designated by a two-argument verb like kiss to the grammatical functions that realize these arguments: it states that if a suBJ and an osa are present, this permits the deduction that the thematic role of agent is associated with the suBJ and the thematic role of theme is associated with the oBJ. (Other associations are encoded by means of other mapping principles; the mapping principles given in Figure 2 encodes only two of the possibilities.) We make implicit appeal to an independentlygiven, fully-worked-out theory of argument mapping, from which mapping principles such as those given in Figure 2 can be shown to follow. It is important to note that we do not intend any claims about the correctness of the specific details of the mapping principles given in Figure 2; rather, our claim is that mapping principles should be of the general form illustrated there, specifying possible relations between thematic roles and grammatical functions.</Paragraph>
    <Paragraph position="8"> In particular, no theoretical significance should be</Paragraph>
    <Paragraph position="10"> (i) !(Vf, X,Y. ((f SUBJ). = X) (r) ((f OBJ). = Y) -0 agent(( I PRED).,X) (r)theme((f PRED).,Y)) (2) I(VI, X, Y, Z. ((f SUB.I). = X) (r) ((f OBJ). = Y) (r) ((f onJ2). = Z) -o permitter((f PRED)., X) (r) agellt((f PRED)., Z) (r) theme(( I PRED)., Y))  bill: (f2o = Bill) hillary : (fa. = Hillarv) kiss: (VX, Y. agent(f1., X) (r) theme(f1., Y) --0 f4. = kiss(X, g)) mappingl : (VX, Y. (f2. = X) (r) (f3. = Y) --o agent(f1., X) (r) theme(f1., g))) (bill (r) hillary (r) kissed (r) mappingl) --o agent(ft., Bill) (r) ~heme(fl., H illarv) (r) kissed --o f4. = kiss(Bill, Hillarv)  attached to the choice of thematic role labels used here; for the verb kiss, for example, labels such as 'kisser' and 'kissed' would do as well. We require only that the thematic roles designated in the lexical entries of individual verbs are specified in enough detail for mapping principles such as those illustrated in Figure 2 to apply successfully.</Paragraph>
    <Paragraph position="11">  The meaning associated with the f-structure may be derived by logical deduction, as shown in Figure 3. a degAn alternative derivation, not using mapping principles, is also possible. In that case, the lexical entry for kissed would require a SUBJ and an OBJ rather than an agent and a theme, and the derivation would proceed in The first three lines contain the information contributed by the lexical entries for Bill, tIillarv, and kissed, abbreviated as bill, hillary, and kissed. The verb kissed requires two pieces of information, an agent and a theme, in no particular order, to produce a meaning for the sentence, f4,. The mapping principle needed for associating the syntactic arguments of transitive verbs with the agent/theme argument structure is given on the fourth line and abbreviated as mapping1. Mapping principles are assumed to be a part of the background theory, rather than being introduced by particular lexical items. Each mapping principle can, then, be used as many or as few times as necessary.</Paragraph>
    <Paragraph position="12"> The premises--i.e., the lexical entries and mapping principle are restated as the first step of the derivation, labeled 'Premises'. The second step is derived from the premises by Universal Instantiation and Modus Ponens. The last step is then derived from this result by Universal Instantiation and Modus Ponens.</Paragraph>
    <Paragraph position="13"> To summarize: a variable is introduced for the meaning corresponding to each f-structure in the this way: ((f~</Paragraph>
    <Paragraph position="15"> syntactic representation. These variables form the scaffolding that guides the assembly of the meaning.</Paragraph>
    <Paragraph position="16"> Further information is then introduced: information associated with each lexical entry is made available, as are all the mapping rules. Once all this information is present, we look for a logical deduction of a meaning of the sentence from that information.</Paragraph>
    <Paragraph position="17"> The use of linear logic provides certain advantages, since it allows us to capture the intuition that lexical items and phrases contribute uniquely to the meaning of a sentence. As noted by Klein and Sag \[1985, page 172\]: Translation rules in Montague semantics have the property that the translation of each component of a complex expression occurs exactly once in the translation of the whole .... That is to say, we do not want the set S \[of semantic representations of a phrase\] to contain all meaningful expressions of IL which can be built up from the elements of S, but only those which use each element exactly once.</Paragraph>
    <Paragraph position="18"> Similar observations underlie the work of Lambek \[1958\] on categorial grammars and the recent work of van Benthem \[1991\] and others on dynamic logics. It is this 'resource-conscious' property of natural language semantics - a meaning is used once and once only in a semantic derivation - that linear logic allows us to capture. The basic insight underlying linear logic is to treat logical formulas as finite resources, which are consumed in the process of deduction. This gives rise to a notion of linear implication --o which is resource-conscious: the formula A --o B can be thought of as an action that can consume (one copy of) A to produce (one copy of) B.</Paragraph>
    <Paragraph position="19"> Thus, the formula A(r) (A --o B) linearly implies Bbut not A (r) B (because the deduction consumes A), and not (A --o B) (r) B (because the linear implication is also consumed in doing the deduction). The resource consciousness not only disallows arbitrary duplication of formulas, but also arbitrary deletion of formulas. This causes the notion of conjunction we use ((r)) to be sensitive to the multiplicity of formulas: A(r)A is not equivalent to A (the former has two copies of the formula A). For example, the formula A (r) A (r) (A -o B) does linearly imply A (r) B (there is still one A left over) -- but does not linearly imply B (there must still be one A present). Thus, linear logic checks that a formula is used once and only once in a deduction, reflecting the resource-consciousness of natural language semantics. Finally, linear logic has an of course connective ! which turns off accounting for its formula. That is, !A linearly implies an arbitrary number copies of A, including none. We use this connective on the background theory of mapping principles to indicate that they are not subject to accounting; they can be used as often or seldom as necessary.</Paragraph>
    <Paragraph position="20"> A primary advantage of the use of linear logic is that it enables a clean semantic definition of completeness and coherence. 4 In the present setting, the feature structure f corresponding to the utterance is associated with the ((r)) conjunction C/ of all the formulas associated with the lexical items in the utterance. The conjunction is said to be complete and coherent iff Th t- C/ --o fa = t (for some term t), where Th is the background theory containing, e.g., the mapping principles. Each t is to be thought of as a valid meaning for the sentence. This guarantees that the entries are used exactly once in building up the denotation of the utterance: no syntactic or semantic requirements may be left unfulfilled, and no meaning may remain unused.</Paragraph>
  </Section>
  <Section position="4" start_page="100" end_page="101" type="metho">
    <SectionTitle>
4 Modification
</SectionTitle>
    <Paragraph position="0"> Another primary advantage of the use of linear logic 'glue' in the derivation of meanings of sentences is that it enables a clear treatment of modification.</Paragraph>
    <Paragraph position="1"> Consider the following sentence, containing the sentential modifier obviously: (4) Bill obviously kissed Hillary.</Paragraph>
    <Paragraph position="2"> We make the standard assumption that the verb kissed is the main syntactic predicate of this sentence. The following is the f-structure for example  We also assume that the meaning of the sentence can be represented by the following formula:</Paragraph>
    <Paragraph position="4"> It is clear that there is a 'mismatch' of sorts between the syntactic representation and the meaning of the sentence; syntactically, the verb is the main functor; while the main semantic functor is the adverb. 5 Consider now the lexical entry for obviously given in Figure 4. The semantic equation associated with 4'An f-structure is locally complete if and only if it contains all the governable grammatical functions that its predicate governs. An f-structure is complete if and only if all its subsidiary f-structures are locally complete. An f-structure is locally coherent if and only if all the governable grammatical functions that it contains are governed by a local predicate. An f-structure is coherent if and only if all its subsidiary f-structures are locally coherent.' \[Kaplan and Bresnan, 1982, pages 211-212\] 5The related phenomenon of head switching, discussed in connection with machine translation by Kaplan et al.</Paragraph>
    <Paragraph position="5"> \[1989\] and Kaplan and Wedekind \[1993\], is also amenable to treatment along the lines presented here.</Paragraph>
    <Paragraph position="7"> obviously makes use of 'inside-out functional uncertainty' \[Halvorsen and Kaplan, 1988\]. The expression (MODS T) denotes an f-structure through which there is a path MODS leading to T. For example, if T is the f-structure labeled f5 above, then (MODS T) is the f-structure labeled f4, and (MODS T)a is the semantic projection of f4- Thus, the lexical entry for obviously specifies the semantic representation of the f-structure that it modifies, an f-structure in which it is properly contained.</Paragraph>
    <Paragraph position="8"> Recall that linear logic enables a coherent notion of consumption and production of meanings. We claim that the semantic function of adverbs (and, indeed, of modifiers in general) is to consume the meaning of the structure they modify, producing a new, modified meaning. Note in particular that the meaning of the modified structure, (MOPS T)a, appears on both sides of -o ; the unmodified meaning is consumed, and the modified meaning is produced.</Paragraph>
    <Paragraph position="9"> The derivation of the meaning of example 4 is shown in Figure 5. The first part of the derivation is the same as the derivation shown in Figure 3 for the sentence Bill kissed Hillary. The crucial difference is the presence of information introduced by obviously, shown in the fourth line and abbreviated as obviously. In the last step in the derivation, the linear implication introduced by obviously consumes the previous value for f4a and produces the new and final value.</Paragraph>
    <Paragraph position="10"> By using linear logic, each step of the derivation keeps track of what 'resources' have been consumed by linear implications. As mentioned above, the value for f4C/ is a meaning for this sentence only if there is no other information left. Thus, the derivation could not stop at the next to last step, because the linear implication introduced by obviously was still left. The final step provides the only complete and coherent meaning derivable for the utterance.</Paragraph>
  </Section>
  <Section position="5" start_page="101" end_page="103" type="metho">
    <SectionTitle>
5 Valence-changing operations
</SectionTitle>
    <Paragraph position="0"> We have seen that modifiers can be treated as 'consuming' the meaning of the structure that they modify, producing a new, modified meaning. A similar, although syntactically more complex, case arises with complex predicates, as Butt \[1990; 1993\] shows.</Paragraph>
    <Paragraph position="1"> Butt discusses the 'permissive construction' in Urdu, illustrated in 7:</Paragraph>
    <Paragraph position="3"> 'Hillary let Bill write a letter.' She shows that although the permissive construction is seemingly biclausal, it actually involves a complex predicate: a syntactically monoclausal predicate formed in the presence of the verb diyaa 'let'. In the case at hand, the presence of diyaa requires an  VX, Y. agent( ( T PRED)o, X) (r) theme( ( T FRED)o, Y) --o To = write(X, Y) diyaa V VX, P. permitter((T PREO)o, X)(r) To= P --o T~= let(X, P)  additional argument which we will label 'permitter', in addition to the arguments required by the verb likhne 'write'. In general, the verb diyaa 'let' modifies the argument structure of the verb with which it combines, requiring in addition to the original inventory of arguments the presence of a permitter. The f-structure for example 7 is:</Paragraph>
    <Paragraph position="5"> As Butt points out, the verbs participating in the formation of the permissive construction need not form a syntactic constituent; in example 7, the verbs likhne and diyaa are not even next to each other.</Paragraph>
    <Paragraph position="6"> This shows that complex predicate formation cannot be analyzed as taking place in the lexicon; a method of dynamically creating a complex predicate in the syntax is needed. That is, sentences such as 7 have, in essence, two syntactic heads, which dynamically combine to produce a single syntactic argument structure.</Paragraph>
    <Paragraph position="7"> We claim that the function of a verb such as permissive diyaa is somewhat analogous to that of a modifier: diyaa consumes the meaning of the original verb and its arguments, producing a new permissive meaning and requiring an additional argument, the permitter. Mapping principles apply to this new, augmented argument structure to associate the new thematic argument structure with the appropriate set of syntactic roles. We illustrate the derivation of the meaning of example 7 in Figure 7.</Paragraph>
    <Paragraph position="8"> The lexical entries necessary for example 7 can be found in Figure 6. The instantiated information from these lexical entries appears in the first five lines of Figure 7. Mapping principle (2) in Figure 2, abbreviated as mapping2, links the permitter, agent, and theme of the (derived) argument structure to the syntactic arguments of a permissive construction; the mapping principle is given in the sixth line of Figure 7. 6 The premises of the derivation are, as above, information given by lexical entries and the mapping principle. By means of mapping principle mapping2, information about the possible array of thematic roles required by the complex predicate let-write can be eRecall that in our framework, all the mapping principles are present to be used as needed. In the derivation of the meaning of example 7, shown in Figure 7, we have omnisciently provided the one that will be needed.</Paragraph>
    <Paragraph position="9">  derived; this step uses Universal Instantiation and Modus Ponens.</Paragraph>
    <Paragraph position="10"> Next, a (preliminary) meaning for f-structure fs, write(Bill, letter), is derived by Universal Instantiation and Modus Ponens. At this point, the requirements imposed by diyaa 'let', labeled let, are met: a permitter (Hillary) is present, and a complete meaning for f-structure f5 has been produced.</Paragraph>
    <Paragraph position="11"> These meanings can be consumed, and a new meaning produced, as represented in the final line of the derivation. Again, this meaning is the only one available, since completeness and coherence obtains only when all requirements are fulfilled and no extra information remains. As with the case of modifiers, the final step provides the only complete and coherent meaning derivable for the utterance.</Paragraph>
    <Paragraph position="12"> Notice that the meaning of the complex predicate is not derived by composition of verb meanings: the permissive verb diyaa does not combine with the verb likhne 'write' to form a new verb meaning. Instead, permissive diyaa requires a (preliminary) sentence meaning, write(Bill, letter) in the example above, in addition to the presence of a permitter argument.</Paragraph>
    <Paragraph position="13"> More generally, this approach treats linguistic phenomena such as modification and complex predicate formation function by operating on semantic entities that have combined with all of their arguments, producing a modified meaning and (in the case of complex predicate formation) introducing further arguments. While it would be possible to extend our approach to operate on semantic entities that have not combined with all their arguments, we have not yet encountered a compelling reason to do so. Our current restriction is not so confining as it might appear; most operations that can be performed on semantic entities that have not combined with all their arguments have analogues that operate on fully combined entities. In further research, we plan to explore this characteristic of our analysis more fully.</Paragraph>
  </Section>
class="xml-element"></Paper>
Download Original XML