<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-1512">
  <Title>Semantic Interpretation of Unrealized Syntactic Material in LTAG</Title>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2 LTAG Semantics with Semantic Unification
</SectionTitle>
    <Paragraph position="0"> fication In LTAG framework (Joshi and Schabes 1997), the basic units are (elementary) trees, which can be combined into bigger trees by substitution or adjunction. LTAG derivations are represented by derivation trees that record the history of how the elementary trees are put together. Given that derivation steps in LTAG correspond to predicate-argument applications, it is usually assumed that LTAG semantics is based on the derivation tree, rather than the derived tree (Kallmeyer and Joshi 2003).</Paragraph>
    <Paragraph position="1"> Semantic composition which we adopt is based on LTAG semantics with semantic unification (Kallmeyer and Romero 2004). In the derivation tree, elementary trees are replaced by their semantic representations and corresponding feature structures. Semantic representations are as defined in Kallmeyer and Joshi 2003, except that they do not have argument variables. These representations consist of a set of formulas (typed lexpressions with labels) and a set of scope constraints. null Each semantic representation is linked to a feature structure. Feature structures, as illustrated by different examples below, include a feature i whose values are individual variables and features p and MaxS, whose values are propositional labels. Semantic composition consists of feature unification. After having performed all unifications, the union of all semantic representations is built.</Paragraph>
    <Paragraph position="2"> Consider, for example, the semantic representations and feature structures associated with the elementary trees of the sentence shown in (3).</Paragraph>
    <Paragraph position="3">  Semantic composition proceeds on the derivation tree and consists of feature unification:</Paragraph>
    <Paragraph position="5"> [i: x] [i: y] Performing two unifications, v1=x, v2=y, we arrive at the final interpretation of this sentence: l1: date(x, y), bill(y), mary(x). This representation is interpreted conjunctively, with free variables being existentially bound.</Paragraph>
    <Paragraph position="6"> Quantificational NPs are analyzed as multi-component TAGs, where the scope part of the quantifier introduces the proposition containing the quantifier, and the predicate-argument part introduces the restrictive clause (see Kallmeyer  and Joshi 2003).</Paragraph>
    <Paragraph position="7"> (5) Every student likes some course . S* S  NP [i: x, p: P1] NP VP [p: l1, i: v1] every student likes NP</Paragraph>
    <Paragraph position="9"> Some course Final representation The final representation of this sentence is underspecified for scope, given that there are no constraints which restrict the relative scope of every and some. In order to obtain one of the readings, a disambiguation mapping is needed: Disambiguations:</Paragraph>
    <Paragraph position="11"> every(x, student(x), some(y, course(y), like(x, y)) Disambiguations are functions from propositional variables to propositional labels that respect the scope constraints, such that after having applied this mapping, the transitive closure of the resulting scope is a partial order.</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="92" type="metho">
    <SectionTitle>
3 The Problem of Ellipsis Resolution in LTAG Semantics
</SectionTitle>
    <Paragraph position="0"> Given LTAG semantics, there are two possible approaches to resolution of the elided material: reconstruction can be done as part of the unification process or as part of the disambiguation procedure. If reconstruction was done as unification, the semantic representation of the elided material would be disambiguated in the final representation. On the other hand, it is well known that resolution of ellipsises and gaps can be ambiguous. For example, the sentence in (6), discussed in Siegel 1987 and Johnson 2003 among others,  As this example shows, the gap in (6) can be reconstructed by selecting either the verb or the negated modal as its antecedent. The two interpretations represent different scope readings between the conjunction and negation, which should be analyzed as underspecified in LTAG semantics. Resolution of gaps, therefore, cannot be done as part of unification, since it depends on the disambiguated interpretation. The question is whether it is possible to define an underspecified representation of these two readings, and what kind of resolution mechanism can be used to disambiguate these interpretations? 1 Other cases of ambiguous interpretations of the elided material are discussed in section 7.</Paragraph>
    <Paragraph position="2"/>
  </Section>
  <Section position="6" start_page="92" end_page="92" type="metho">
    <SectionTitle>
4 LTAG Semantics of Gapping
</SectionTitle>
    <Paragraph position="0"> In LTAG semantics, semantic representations are introduced by lexicalized trees. In order to account for the analysis of gapping and VP ellipsis, this paper proposes that semantics should be defined on both lexicalized and non-lexicalized trees. Specifically, we propose that Interpretation of a gap (or elided VP) is the semantic interpretation of a non-lexicalized S tree.</Paragraph>
    <Paragraph position="1"> The semantic representations of lexicalized S trees under this new approach are derived compositionally, given the meaning of a nonlexicalized S tree and the meaning of a verb.</Paragraph>
    <Paragraph position="3"> [Ag: v, Pat: u, MaxS: C] l2: lulv.C (v2)(v1) Non-lexicalized trees introduce a propositional label and a propositional variable, illustrated by l2 and C above. If a tree is a transitive S-tree, there are two lambda bound variables, which correspond to the Agent and Patient features of the verb. Performing feature unifications (v3=v, v4=u,C1=C) and scope constraint disambiguations (C-&gt;l0), the proposition l2 will be reduced to: lu.lv.date(v, u)(v2)(v1)= date(v1, v2).</Paragraph>
    <Paragraph position="4"> Given this proposal, we suggest that the semantics of gaps, VPE and other types of elided material is introduced by non-lexicalized trees.</Paragraph>
    <Paragraph position="5"> For example, the analysis of the sentence in (2) is shown in (7). Performing feature unifications (l2=P1, l3=P2, v=v1=v2, u=u1=u2, C=C1=C2) yields the final representation, where l2 and l3 are underspecified. There is only one disambiguation of the variable C in this sentence: C -&gt; l0, which gives us the desired interpretation of the sentence: null</Paragraph>
    <Paragraph position="7"> Resolution of the gap in this sentence is enforced by the feature structure of 'and', which unifies MaxS as well as Agent and Patient features. This analysis therefore accounts for the fact that gapping &amp;quot;is intimately entangled with the syntax of coordination (as opposed to VP ellipsis)&amp;quot; (Johnson 2003). On the other hand, as the next example illustrates, it is crucial that propositional variables introduced by non-lexicalized trees are not unified during semantic composition, but rather are identified with their antecedents as part of the disambiguation procedure. null  The sentence in (8), shown below, differs from the previous one in the presence of a negated modal. The interpretation of this modal introduces a proposition l9: can't(N9) and a constraint</Paragraph>
    <Paragraph position="9"> l0, the final representation has two constraints on the variable l0: l0[?] C and l0[?] N9, and therefore two possible disambiguations. In the disambiguation 1, C is mapped to l0, introduced by the verb  'eat', and propositions l2 and l3 are reduced to eat(x, y) and eat(z, w). In the disambiguation 2, the variable C is mapped to l9, introduced by the modal, and l2 and l3 are reduced to can't(eat(x, y)) and can't(eat(z, w)). These disambiguations yield the desired interpretations of this sentence. (8) Ward can't eat caviar and his guests -- dried</Paragraph>
    <Paragraph position="11"> Resolution of gaps under this analysis is done as part of the scope resolution procedure on under-specified representations. A crucial feature of this analysis is that the propositions l2 and l3 are 'underspecified' in the final representation and the variable C is computed during the disambiguation, i.e. when all scope ambiguities are being resolved. In this respect this analysis differs from previous approaches, where the final representation did not include any variables, except for the arguments of quantifiers or other scopal elements.2 2 However, see Babko-Malaya 2004, where a similar analysis is proposed to account for the semantics of coordinated structures with quantified NPs.</Paragraph>
  </Section>
  <Section position="7" start_page="92" end_page="94" type="metho">
    <SectionTitle>
5 LTAG Analysis of VP Ellipsis
</SectionTitle>
    <Paragraph position="0"> The analysis of gapping presented above can be easily extended to the analysis of VP ellipsis.</Paragraph>
    <Paragraph position="1"> VPE differs from gapping in that it is not restricted to coordinated structures. Whereas in the examples above resolution of gaps was enforced by the feature structure of 'and', in the case of VPE, a similar unification, forced by pragmatic constraints, results in recovering the elided material. null As the example in (9) illustrates, our analysis of VPE assumes the following modification of the semantics of non-lexicalized trees: propositions introduced by non-lexicalized trees have one lambda-bound variable, so that each argument is introduced by a separate proposition. For example, the interpretation of a transitive tree below has two propositions l1 and l2, and two propositional variables C1 and C2. The proposition l2 corresponds to the meaning of a VP, which is missing in the standard TAG-based analyses. This decomposition of the meaning of a nonlexicalized tree, therefore, can be independently motivated by the existence of modifiers which predicate of VPs. We further assume that the MaxS feature of the S tree corresponds to the variable introduced by the agent (or the highest-ranked argument).</Paragraph>
    <Paragraph position="2">  This sentence introduces an intransitive tree and one propositional variable C3. This variable is not constrained within the sentence, and parallel to other pro-forms, it gets its interpretation from the previous discourse. Specifically, the interpretation of the second sentence is derived by unification of the S features of the second and the first S-trees in (9): C3=C1, v3=v. Given that C1 is mapped to l2 above, it corresponds to the proposition being reconstructed: C3(=C1) -&gt; l2 l3: lv.like(v, u) (r) = like(r, u)</Paragraph>
  </Section>
  <Section position="8" start_page="94" end_page="95" type="metho">
    <SectionTitle>
6 Scope Parallelism
</SectionTitle>
    <Paragraph position="0"> Many previous approaches impose parallelism constraints on the interpretation of the elided material (e.g. Fox 2000, Asher et al 2001 among others). Under the present analysis, scope parallelism comes for free. Consider, for example, the following sentence discussed in Dalrymple et al 1991, among others, where ambiguity is resolved in the same way in both the antecedent and at the ellipsis site: John gave every student a test, and Bill did too. The final interpretation of the first sentence is given in (10) and has 2 possible disambiguations. null (10) John gave every student a test.</Paragraph>
    <Paragraph position="1"> The surface reading (every &gt;&gt; some) is derived by the following mapping: C3-&gt;l0, C2-&gt;l3, R7-&gt;l8,</Paragraph>
    <Paragraph position="3"> The interpretation of the second sentence is derived by unifying the S-features of the S-trees (as shown in the previous section). As the result, the variables C3 and v3 are unified with the variables C1 and v. Given that C1 is being mapped to the proposition l7 above, C3 is being reconstructed as the proposition every(y, student(y), some(x, test(x), give(v, y, z)) and l3 corresponds to the desired reading of this sentence: (11) Bill did too.</Paragraph>
    <Paragraph position="5"> The inverse reading (where some&gt;&gt;every) can be obtained by the following mapping C3-&gt;l0, C2-&gt;l3, R7-&gt;l8, N7 -&gt; l2, C1-&gt; l5, R5-&gt;l9, N5 -&gt; l7 l2: give(v, y, z) l7: every(y, student(y), give(v, y, z)) l5:some(x,test(x),every(y,student(y),give(v,y, z))) l1: some(x,test(x),every(y,student(y),give(x, y, z))) Now, when the second sentence is interpreted, C3 is unified with C1, which is being mapped to l5: C3(=C1) -&gt; l5. The proposition l3, then, is reduced to: lv.some(x, test(x), every(y, student(y), give(v, y, z))) (r) = some(x, test(x), every(y, student(y), give(r, y, z))) As this example illustrates, scope parallelism follows from the present analysis, given that C3 is unified with a disambiguated interpretation of a VP. It can also be shown that the wide scope puzzle (Sag 1980), shown in (12) is not unexpected under this approach, however, the analysis of this phenomenon is beyond the scope of this paper. 3 (12) A nurse saw every patient. Dr.Smith did too. some(x, nurse(x), every(y, patient(y), see(x, y))) *every(y, patient(y), some(x, nurse(x), see(x, y))) 3 As Hirschbuhler 1982, Fox 2000 among others noted, there are constructions where subjects of VPE can have narrow scope relative to nonsubjects. For example, the sentence A Canadian flag was hanging in front of every building. An American flag was too has a reading in which each building has both an American and a Canadian flag standing in front of it. The existence of such readings does not present a problem for the present analysis, if we adopt an analysis of quantificational NPs proposed in Babko-</Paragraph>
  </Section>
class="xml-element"></Paper>