<?xml version="1.0" standalone="yes"?> <Paper uid="P93-1011"> <Title>NP VP Det N' V NP Det A</Title> <Section position="4" start_page="78" end_page="79" type="metho"> <SectionTitle> SCOPE DISAMBIGUATION FACTORS </SectionTitle> <Paragraph position="0"> Most proposals on scope disambiguation were developed to account for the general preference for the leftmost quantified phrase to take wide scope in simple active sentences like (7): (7) Every kid climbed a tree.</Paragraph> <Paragraph position="1"> Lakoff \[27\] proposed that this preference is due to the fact that sentences are parsed from left to right; &quot;every kid&quot; takes scope over &quot;a tree&quot; because it is processed first. (Kurtzman and MacDonald called this the Left-to-Right principle.) Ioup \[20\] argued instead that &quot;...in natural language, order has little to do with the determination of quantifier scope&quot; (\[20\], p.37). The preferred reading of (8), for example, is the one in which the NP &quot;each child&quot; takes wide scope.</Paragraph> <Paragraph position="2"> (8) I saw a picture of each child. \[20\] According to Ioup, the relative scope of quantifiers is determined by the interaction of two factors. First of all, quantifiers such as &quot;each&quot; or &quot;the&quot; have the inherent property of taking wide scope over indefinites, which, in turn, are lexically marked to take scope over plural quantifiers like &quot;all.&quot; This hypothesis is motivated by contrasts such as those in (9), and accounts for cases such as (8). 4 (9) a. I saw a picture of each child.</Paragraph> <Paragraph position="3"> b. I saw a picture of all the children.</Paragraph> <Paragraph position="4"> Secondly, Ioup proposed that a hierarchy exists among grammatical functions, such that listeners tend to assign NPs in subject position wide scope over NPs in indirect object position, which in turn tend to take wide scope over NPs in object position.
The hierarchy between grammatical functions accounts for the preferred reading of (7).</Paragraph> <Paragraph position="5"> Ioup also observed that NPs in topic position tend to take wide scope. This is especially obvious in languages that have a specific grammatical category for topic, like Japanese or Korean. The Japanese sentence (10b) is ambiguous, but the reading in which the NP in subject position, &quot;most students,&quot; takes scope over the NP in object position, &quot;every language,&quot; is preferred. This preference is maintained if the 4Van Lehn \[35\] and Hendrix \[14\] also studied the effect of lexical preferences, or 'strengths' as they are also called.</Paragraph> <Paragraph position="6"> NP in object position is scrambled into sentence-initial position, as in (10c) (another counterexample to Lakoff's left-to-right principle). If, however, the NP is marked with the topic-marking suffix &quot;wa,&quot; as in (10d), the preferred reading of the sentence suddenly becomes the one in which &quot;every language&quot; takes wide scope. 5 (10) a. Most students speak every language.</Paragraph> <Paragraph position="7"> b. Hotondo-no gakusei-ga subete-no gengo-o hanasu most-gen student-nom every language-acc speak c. Subete-no gengo-o hotondo-no gakusei-ga hanasu every language-acc most-gen student-nom speak d. Subete-no gengo-wa hotondo-no gakusei-ga hanasu every language-TOP most-gen student-nom speak Several proposals attribute an important role to structural factors in assigning a scope to operators. Jackendoff \[21\] and Reinhart (\[32\], ch.
3 and 9) propose to account for the preferred reading of (7) by means of a c-command principle according to which a quantified expression is allowed to take scope over another quantified expression only if the latter is c-commanded by the former at surface structure.</Paragraph> <Paragraph position="8"> Structural explanations (in the form of constraints on syntactic movement) have also been proposed to explain the constraint that prevents a quantifier from taking scope outside the clause in which it appears, first observed by May \[28\] and called the Scope Constraint by Heim \[13\]. This constraint is exemplified by the contrast in (11): whereas (11a) has a reading in which &quot;every department&quot; is allowed to take wide scope over &quot;a student,&quot; this reading is not available for (11b).</Paragraph> <Paragraph position="9"> (11) a. A student from every department was at the party.</Paragraph> <Paragraph position="10"> b. A student who was from every department was at the party.</Paragraph> <Paragraph position="11"> Lexical semantics and commonsense knowledge also play an important role in determining the scope of operators. The contrast between the preferred readings of (12a) and (12b) can only be explained in terms of lexical semantics: (12) a. A workstation serves many users.</Paragraph> <Paragraph position="12"> b. A workstation can be found in many offices.</Paragraph> <Paragraph position="13"> Kurtzman and MacDonald \[26\] set out to verify the empirical validity of several of these principles. The most crucial result is that none of the principles they set out to verify can account for all the observed effects; in fact, counterexamples to all of them--including the quantifier hierarchy--can be found. No evidence for a Left-to-Right processing principle was found.
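For concreteness, the two factors Ioup proposed (the lexical quantifier hierarchy and the grammatical function hierarchy) can be rendered as a toy preference scorer. This is purely an illustration of how the two hierarchies would interact, with ranks and names of my own choosing; the counterexamples just mentioned are precisely the data such a scorer cannot capture.

```python
# Toy rendering of Ioup's two scope factors (ranks are illustrative assumptions).
# Quantifier hierarchy: "each"/"the" > indefinites > plural quantifiers like "all".
QUANTIFIER_RANK = {"each": 3, "every": 3, "the": 3, "a": 2, "all": 1}

# Grammatical function hierarchy: subject > indirect object > object.
FUNCTION_RANK = {"subject": 3, "indirect_object": 2, "object": 1}

def wide_scope_score(determiner, function):
    """Sum the two factor ranks; the NP with the higher total is
    predicted to take wide scope."""
    return QUANTIFIER_RANK[determiner] + FUNCTION_RANK[function]

# (7) "Every kid climbed a tree": subject "every kid" vs. object "a tree".
kid = wide_scope_score("every", "subject")
tree = wide_scope_score("a", "object")
assert kid > tree  # the scorer predicts wide scope for "every kid"
```

In this encoding the two factors simply add up; a sentence where they pull in opposite directions with equal strength would come out as ambiguous, which is one way to read Ioup's interaction claim.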
Kurtzman and MacDonald hypothesize that &quot;...processes that are not strictly dedicated to the interpretation of scope relations may nonetheless influence the interpretation of quantifier scope ambiguities&quot; (\[26\], p.22). They conclude that &quot;...the results leave open the question of whether the building and selection of representations of scope are mandatory processes&quot; (\[26\], p.45). 6 5Arguably, the closest thing to an explicit topic marker in English is certain uses of definite descriptions and the topicalization construction; in both cases, the topically marked NP tends to take wide scope.</Paragraph> </Section> <Section position="5" start_page="79" end_page="80" type="metho"> <SectionTitle> OVERVIEW OF THE PROPOSAL </SectionTitle> <Paragraph position="0"> Scope Disambiguation as Construction of an Event Structure It is commonly assumed in the psycholinguistic literature on sentence interpretation that hearers interpret sentences by constructing a model of the situation described by the sentence \[10, 22\]. I propose that the scope assigned to the operators contained in a sentence is determined by the characteristics of the model construction procedure. The model being constructed, which I call an event structure, consists of a set of situation descriptions, one for each operator, together with dependency relations between them. The task of the model construction procedure is to identify these situations and to establish dependency relations. The scope assigned by a hearer to an operator depends on the position of the situation associated with that operator in the event structure. For example, I propose that the scope assigned to quantifiers depends on how their resource situation \[3, 8\] is identified. It is well-known that a sentence like (13): (13) Everybody is asleep.</Paragraph> <Paragraph position="1"> is not interpreted as meaning that every single human being is asleep, but only that a certain contextually relevant subset is.
The process of identifying the set of individuals over which an operator quantifies is usually called domain restriction. In the case of, say, (7), whether &quot;every kid&quot; or &quot;a tree&quot; takes wide scope depends on how the listener builds a model of the sentence. If she starts by first identifying a situation containing the group of kids that &quot;every&quot; is quantifying over, and then proceeds to 'build' for each of these kids a situation which contains a tree the kid is climbing, then &quot;every kid&quot; will take wide scope. In other words, I propose that a listener has a preferred reading for a sentence if she is able to identify the resource situation of one or more of the operators in that sentence ('to picture some objects in her mind'), and to hypothesize dependency relations between these situations. If this process cannot take place, the sentence is perceived as 'ambiguous' or 'hard to understand.' The less context is available, the more the establishment of dependency relations between situations depends on the order in which the model is built, i.e., on the order in which the situations associated with the different operators and events are identified. This order depends in part on which NPs are perceived to be 'in topic,' and in part on general principles for building the conceptual representation of events (see below). In addition, some operators (e.g., definite descriptions) impose constraints on their resource situation. A Model Construction Procedure: The DRT Algorithm In order to make the intuition more concrete we need the details of the model construction procedure. Ideally, one would want to adopt an existing procedure and show that the desired results fall out automatically. Unfortunately, the model construction procedures presented in the psycholinguistic literature are not very detailed; often it is not even clear what these researchers intend as a model.
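The construction order just described can be sketched in a few lines. This is a minimal illustration with my own data-structure choices (dicts for situations, string labels for operators), not a claim about the paper's actual procedure: the operator whose resource situation is identified first ends up with wide scope, because the other operator's situations are built dependent on it.

```python
# Sketch: scope falls out of the order in which resource situations are
# identified during model construction.  Names and structures are illustrative.

def build_model(first_np, second_np, domain_first, make_dependent):
    """Identify a resource situation for the operator processed first,
    then build, for each individual in it, a dependent situation for the
    second operator.  The first operator thereby takes wide scope."""
    model = {"situations": [], "dependencies": []}
    s1 = {"operator": first_np, "individuals": list(domain_first)}
    model["situations"].append(s1)
    for x in domain_first:
        s = {"operator": second_np, "anchor": x,
             "individuals": [make_dependent(x)]}
        model["situations"].append(s)
        model["dependencies"].append((first_np, second_np))
    return model

# "Every kid climbed a tree", with the kids' situation identified first:
m = build_model("every kid", "a tree", ["kid1", "kid2"],
                lambda kid: f"tree-of-{kid}")
assert len(m["situations"]) == 3   # one kids' situation + one tree situation per kid
```

Reversing the construction order (identify one tree situation first, then one climbing situation per kid anchored to it) would yield the other reading, which is the point of the proposal.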
There is, however, a discourse interpretation procedure that is specified in detail and has some of the characteristics of the model construction procedure I have in mind: the DRS construction algorithm \[23, 24\].</Paragraph> <Paragraph position="2"> The DRS construction algorithm consists of a set of rules that map discourses belonging to the language into certain &quot;interpretive structures&quot;. The output structures are called &quot;Discourse Representation Structures&quot; or &quot;DRSs.&quot; A DRS is a pair consisting of a set of discourse referents and a set of conditions (= predicates on the discourse referents). The construction algorithm works by first adding the syntactic structure of the sentence to the 'root' DRS representing the discourse up to that point, then applying the rules to the syntactic structure, thus adding discourse referents and conditions to the DRS. Consider how the algorithm is applied to obtain an interpretation for (14): (14) Every kid climbed the tree.</Paragraph> <Paragraph position="3"> The initial interpretation of (14) is the tree shown in (15).</Paragraph> <Section position="1" start_page="80" end_page="80" type="sub_section"> <SectionTitle> Det A </SectionTitle> <Paragraph position="0"> Every kid climbed the tree The DRS construction rules for definites and universal quantification are as follows: (Definite Descriptions) When a syntactic configuration containing a definite NP is met in a DRS K, 1. Add a new discourse referent x to the root DRS, 2. Add a new condition to the root DRS representing the restriction on the definite NP, 3. Replace the NP with x in the syntactic configuration. (Universal Quantification) When a syntactic configuration containing an NP with determiner &quot;every&quot; is met in a DRS K, 1. Add a complex condition K1 ---> K2 to K, 2. Add a new discourse referent x to K1, 3. Add a new condition to K1 representing the restriction on the NP, 4.
Replace the NP with the discourse referent in the syntactic configuration, 5. Move the syntactic configuration inside K2. Both the rule for definites and the rule for universal quantification are triggered by (15). Two hypotheses are obtained; the one obtained by applying first the rule for definite descriptions is shown in (16). Both of these hypotheses contain operators whose DRS construction rules haven't been applied yet: this algorithm comes with a built-in notion of partial hypothesis--a partial hypothesis is a DRS some of whose operators still have to be 'interpreted' in the sense just mentioned.</Paragraph> <Paragraph position="2"> Every kid climbed The two partial hypotheses are made into complete hypotheses by applying the remaining rules; the complete hypothesis with the definite taking wide scope is shown in (17).</Paragraph> <Paragraph position="4"> Modifying the DRS Construction Algorithm Because the DRS construction rules depend on syntactic patterns, the role of structural factors in disambiguation can be taken into account--and a lot of data about disambiguation preferences can be explained without any further machinery. The Scope Constraint, for example, is embedded in the very semantics of DRT; and one can 'build into' the construction rules principles such as the c-command principle.</Paragraph> <Paragraph position="5"> (Kamp and Reyle do just that in \[24\].) The limitations of this approach are shown by examples in which the choice of an interpretation does not depend on the structure, like (12).
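The two construction rules above can be sketched directly. The representation below (dicts for DRSs, strings for conditions) is my own simplification for illustration, not Kamp and Reyle's formalism; applying the definite rule first, as in (16)-(17), puts the definite's referent in the root DRS and hence gives it wide scope over the universal's complex condition.

```python
# Minimal sketch of the two DRS construction rules described above.
import itertools

_counter = itertools.count()

def new_referent():
    return f"x{next(_counter)}"

def empty_drs():
    return {"referents": [], "conditions": []}

def apply_definite(root, noun):
    """Definite rule: add a referent and its restriction to the ROOT DRS."""
    x = new_referent()
    root["referents"].append(x)
    root["conditions"].append(f"{noun}({x})")
    return x

def apply_universal(host, noun):
    """Universal rule: add a complex condition K1 ---> K2 to the host DRS,
    with the new referent and its restriction inside K1."""
    k1, k2 = empty_drs(), empty_drs()
    x = new_referent()
    k1["referents"].append(x)
    k1["conditions"].append(f"{noun}({x})")
    host["conditions"].append(("--->", k1, k2))
    return x, k2

# Hypothesis with the definite taking wide scope, as in (17):
root = empty_drs()
t = apply_definite(root, "tree")        # "the tree" interpreted first
k, k2 = apply_universal(root, "kid")    # "every kid" interpreted second
k2["conditions"].append(f"climb({k},{t})")
assert t in root["referents"]           # the definite outscopes the universal
```

Applying the rules in the other order would leave the definite's referent accessible only from inside K2, i.e., the narrow-scope hypothesis.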
Also, the rule for definites as just formulated is too restrictive: in cases like (18), for example, it predicts the correct reading for the definite NP &quot;the meeting,&quot; but the wrong one for &quot;the principal,&quot; which, intuitively, takes narrow scope with respect to &quot;every school:&quot; (18) Every school sent the principal to the meeting.</Paragraph> <Paragraph position="6"> I propose that the role of lexical semantics, as well as the data accounted for in the literature by introducing principles such as the grammatical function hierarchy, the topic principle, and the quantifier hierarchy, can be accounted for by making the activation of the DRS construction rules depend on factors other than the syntactic structure of the sentence.</Paragraph> <Paragraph position="7"> The factors I propose to incorporate are (i) the semantics of lexical items, (ii) the results of the interpretation of operators in context, and (iii) the way the representation of events is built in memory.</Paragraph> <Paragraph position="8"> In order to achieve this goal, I propose two main modifications to the standard DRS construction algorithm. First of all, I propose that the input to the algorithm is a logical form--a structure isomorphic to the s-structure which, however, carries information about the semantic interpretation of lexical items. In this way, the role of semantic factors in interpretation can be taken into account; in addition, a semantic value can be assigned to a representation containing unresolved conditions or partial hypotheses. Secondly, I propose to make the application of the DRS construction rules depend on the identification of certain contextually dependent elements of the interpretation. The ingredients of the account thus include: a proposal about the input to the model construction procedure; a notion of what an event structure is; and an account of discourse interpretation.
I discuss these issues in turn in the next sections.</Paragraph> </Section> </Section> <Section position="6" start_page="80" end_page="82" type="metho"> <SectionTitle> THE LOGICAL FORM </SectionTitle> <Paragraph position="0"> As said above, the first difference between the interpretation procedure proposed here and the DRS construction algorithm illustrated above is that the rules I propose rely on semantic and contextual factors. I propose to do this by adding to standard DRT a new class of conditions, which I call 'logical forms.' Logical forms include semantic information about the lexical items occurring in the sentence. The logical form representation is the interface between the parser and the model construction algorithm, and can be compositionally obtained by a GPSG parser \[11, 18\] that couples a context-free grammar with rules of semantic interpretation. I first describe the language used to characterize the semantics of lexical items, SEL (for Simple Episodic Logic), then the syntax and interpretation of logical forms.</Paragraph> <Paragraph position="1"> Consider the interpretation I propose to assign to (18), repeated here for convenience: (18) Every school sent the principal to the meeting.</Paragraph> <Paragraph position="2"> The truth conditions usually assigned to (18) in a language with restricted quantification, and ignoring tense, are shown in (19); I propose instead to assign to (18) the interpretation specified by (20).</Paragraph> <Paragraph position="3"> (20) reads: there exists a unique m that is a meeting in a contextually specified resource situation s'1, and for all s that are schools in a contextually specified resource situation s'2, the unique p such that p is the principal of s participates in m. The intent of the expression used for the quantifier restrictions in (20) is to make it explicit that the situations from which the quantified elements are 'picked up' need not be the complete set of objects and relations at which the truth of (20) is evaluated.
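The idea that quantified elements are 'picked up' from resource situations rather than from the whole model can be made concrete with a small check. The representation below (situations as fact sets, a `supports` predicate) is an illustrative sketch under my own naming, not the SEL formalism itself.

```python
# Sketch: an atomic restriction is checked against a particular resource
# situation, not against the whole model.  Data and names are illustrative.

def supports(situation, predicate, individual):
    """Does the situation contain the fact predicate(individual)?"""
    return (predicate, individual) in situation["facts"]

# Resource situations for (18): the meeting comes from s'1, the schools from s'2.
s1 = {"facts": {("meeting", "m")}}
s2 = {"facts": {("school", "sch1"), ("school", "sch2")}}

assert supports(s1, "meeting", "m")
assert not supports(s2, "meeting", "m")   # s'2 says nothing about meetings
schools = [x for p, x in s2["facts"] if p == "school"]
assert sorted(schools) == ["sch1", "sch2"]  # the universal ranges over s'2 only
```

The point the sketch makes is just the one in the text: evaluating the restriction of &quot;every school&quot; in s'2 can succeed even when the situation at which the whole formula is evaluated is much larger.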
This is accomplished by introducing into the language an explicit relation |= ('supports') to represent 'truth at a situation' \[8\]. A statement of the form \[s1 |= MEETING(x)\] evaluates to true in a situation s if the object--say, m--assigned to the variable x is a meeting in the situation s1. A situation is a set of objects and facts about these objects \[8, 18\]. I assume a language which allows us to make statements about situations, and an ontology in which situations are objects in the universe. Episodic Logic provides such a language and such an ontology \[19, 18\]; where not otherwise noted, the reader should assume that an expression of SEL has the semantics of the identical expression in Episodic Logic.</Paragraph> <Paragraph position="4"> The restriction of the existential quantifier in (20) contains a parameter ŝ. Parameters are used in SEL to translate anaphoric expressions of English. A parameter behaves semantically as an open variable, a value for which has to be provided by context. 7 I have assumed the following translations for the lexical items &quot;every,&quot; &quot;meeting,&quot; and &quot;sent&quot; (I have again ignored tense): &quot;every&quot; ~> lambda P lambda Q (forall x \[s'1 |= P(x)\] Q(x))</Paragraph> <Paragraph position="6"> The semantics assigned to definite descriptions needs a bit of an explanation. According to the location theory \[12, 4\] the major uses of definite NPs, as well as the contrast between definites, indefinites, and demonstratives, can be accounted for by stipulating that a speaker, when using a definite article, 1. instructs the hearer to locate the referent in some shared set of objects, and 2. refers to the totality of the objects/mass within this set that satisfy the restriction.</Paragraph> <Paragraph position="7"> I formalize this idea in \[30\] by associating to definite descriptions the translation below.
A situation is 'shared' between x and y if every fact supported by that situation is mutually believed by x and y (see \[30\] for details). &quot;the meeting&quot; ~> lambda P (the x: (\[ŝ |= MEETING(x)\] /\</Paragraph> <Paragraph position="9"> Syntax and Interpretation of the Logical Form The translations seen above, together with the obvious context-free rules, result in the following LF for (18) (I have 7See \[29\] for details. The idea is to add to the parameters of evaluation an anchoring function a that provides the values for parameters, and thus plays the role of 'context' in Heim's proposal. The reader should be aware that while the notation and terminology I have adopted are borrowed from Situation Theory, parameters have a different semantic interpretation there \[8\].</Paragraph> <Paragraph position="10"> used here, and elsewhere in the paper, a linear notation to save space): each node of (21) is labeled with a phrase category; the leaves are labeled with expressions of the form 'a, where a is an expression of SEL (and has therefore a 'standard' model-theoretic denotation). I use the phrase structure system largely adopted in the Government and Binding literature, according to which the sentence is the maximal projection of an Infl node and is therefore labeled IP \[34\]. I also assume the existence of a maximal projection of the complementizer, CP, above IP. Because I don't discuss relatives here, I use the following simplified notation for NPs with determiners, such as &quot;every school&quot;: \[NP lambda Q (forall x \[s1 |= SCHOOL(x)\] Q(x))\] LFs like (21) are usually treated in the natural language processing literature as uninterpreted data structures from which to 'extract' the readings \[16, 17\]. However, it has recently been proposed \[31, 2, 33\] that it is possible (and indeed desirable) to assign a denotation to expressions like (21).
The reason is that in this way one can define a notion of sound inference--that is, one can specify what can and cannot properly be inferred from an expression like (21) prior to disambiguation--and therefore a notion of 'monotone disambiguation.' I do not assume disambiguation to work monotonically, but I want to be able to treat expressions like (21) as full-fledged conditions, so that a DRS containing a condition of this kind can be interpreted, and I need to be able to characterize a disambiguation step as compatible in the sense that it does not introduce any new readings. To do this I need LFs to have an interpretation.</Paragraph> <Paragraph position="11"> Were it not for the problem that more than one interpretation can be associated with a single LF, one could easily define a recursive mapping EXT from logical forms to truth-theoretical denotations (functions from situations to truth values) in terms of the usual \[\[ \]\] function, as follows:</Paragraph> <Paragraph position="13"> Once this is done, one can reformulate the semantics of DRSs in terms of situations and situation extensions instead of embeddings and embedding extensions, and interpret all conditions as functions from situations to truth values. (See \[29\] for details.) Matters get more complicated when expressions with more than one reading, like (21), are considered. Different ways of assigning a denotation to expressions with more than one interpretation have been proposed \[2, 31\]; my proposal derives from \[31\]. I use a Cooper storage mechanism \[5\] to define EXT in such a way as to allow an LF to have more than one 'indirect interpretation.' Briefly, Cooper's idea is to have a syntactic tree denote a set of sequences, each sequence representing a distinct 'order of application' in computing the interpretation of the sentence.
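The storage idea can be illustrated with a toy enumerator: each quantifier goes into a store, and each retrieval order yields one scoping of the nuclear formula. This is a simplified rendering of Cooper's mechanism for illustration only; the string-based formulas and variable names are my own assumptions.

```python
# Toy Cooper storage: every retrieval order of the stored quantifiers
# produces one reading (the last quantifier retrieved takes widest scope).
from itertools import permutations

def readings(store, nucleus):
    """Enumerate the interpretations obtained from every retrieval order
    of the stored quantifiers over the nuclear scope."""
    out = []
    for order in permutations(store):
        formula = nucleus
        for quantifier, var, restriction in order:
            formula = f"({quantifier} {var}: {restriction}) {formula}"
        out.append(formula)
    return out

store = [("every", "x", "school(x)"), ("the", "y", "principal(y)")]
rs = readings(store, "send(x,y)")
assert len(rs) == 2   # two retrieval orders, two scopings
```

Each element of `rs` corresponds to one of the 'sequences' in the text: a distinct order of application, hence a distinct reading of the same LF.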
For example, because in interpreting (22) one can either apply the translation of tense immediately or wait, EXT maps (22) onto a set of two sequences, shown in (23).</Paragraph> <Paragraph position="14"> (22) \[V&quot; 'P \[NP lambda Q (det x R(x)) Q(x)\] \]</Paragraph> <Paragraph position="16"> The full definition of EXT, using Cooper storage, is rather complex. For the current purposes, it is enough to understand that EXT associates to</Paragraph> <Paragraph position="18"> Having done this, we can say that a DRS condition like (21) verifies the current situation s if one of the functions denoted by (21) maps s into 1.</Paragraph> </Section> <Section position="7" start_page="82" end_page="84" type="metho"> <SectionTitle> BUILDING EVENT STRUCTURES </SectionTitle> <Paragraph position="0"> Not all assertions in a narrative or conversation are going to be about the same situation. In the conversations with the TRAINS system, for example, the participants can discuss both the state of the world and the state of the plan being developed. Maintaining this separation is crucial for the proper interpretation of definite descriptions, for example. The separation between the situations that are the topic of different sentences is achieved by translating sentences as situation descriptions. A situation description is a condition of the form:</Paragraph> <Paragraph position="2"> whose intuitive interpretation is that K provides a partial characterization of the situation s.
The semantics of situation descriptions is defined as follows, using a semantics of DRSs in terms of situation extensions, as discussed in the previous section, and interpreting discourse markers as constituents of situations: The condition s:K is satisfied wrt the situation s' iff K is satisfied wrt the value assigned to s in s'.</Paragraph> <Paragraph position="3"> I also propose the following constraint on the model construction rules: Constraint on Interpretation: with the exception of the discourse markers interpreted over situations and of the situation descriptions themselves, every discourse marker and condition has to be part of a situation description.</Paragraph> <Paragraph position="4"> Situation descriptions are added to the model by rules triggered by an LF whose root is a CP node. The rules (not shown for lack of space) delete the complementizer and its whole projection, and introduce a situation structure. The result is shown in (26).</Paragraph> <Paragraph position="5"> The constraint on discourse interpretation proposed above is implemented by forcing the rules that build situation structures to be triggered before any other rule; this is done by having every other rule be triggered by LFs whose root node is an IP. The result of this constraint is that a discourse model consists of a set of situation descriptions: (27) s:K The DRSs produced by the standard DRT algorithm are semantically equivalent to the special case of a set of situation descriptions all describing the same situation s.</Paragraph> <Paragraph position="6"> Models like the one in (27) enable the formalization of processes of resource situation identification like that described in \[30\]. I illustrate how my rules for interpreting operators differ from those of standard DRT, and how the interaction between model construction rules and discourse interpretation works, by means of the model construction rule for definites.
The rule MCR-DD is triggered by the configuration in (28), and results in the configuration in (29). The notation used for the pattern indicates that this rule applies to a definite NP in any position within a syntactic tree whose maximal projection is an IP node, without any intervening CP node. The key observation is that the application of this rule, as well as of any other NP rule, depends on the hearer's previous identification of a resource situation for the definite description. The statement ANCHOR(ŝ, s'), constraining the interpretation of ŝ, is added to the situation structure by the processes that identify the referent of the definite description; I describe these processes in detail in \[30\]. 8 Finally, I propose that, when context is missing, a default model construction procedure operates. It has been suggested \[6\] that the conceptualization of events follows an order reflected in the thematic hierarchy AGENT < LOCATION, SOURCE, GOAL < THEME proposed to account for phenomena like passivization \[21\]. Briefly, the idea is that 'the normal procedure for building an event description' is to follow the order in the hierarchy: first identify the agent, then the location, then the theme. This proposal can be formalized in the current framework by having rules that operate in case no other rule has, and that modify the model by introducing a resource situation for an operator and establishing anchoring connections. These rules depend both on the semantics of the verb and on the syntactic configuration. The rule that identifies the AGENT, for example, is triggered by the configuration in (30), and results in the configuration in (31), which allows the rule for the NP to operate in that the resource situation of the operator has been anchored: 8A more conventional situation-theoretic framework is used there, but the analysis carries over to the framework in this paper.
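The default construction order induced by the thematic hierarchy can be sketched as a simple sort. The numeric ranks and role labels below are my own encoding of the hierarchy AGENT < LOCATION, SOURCE, GOAL < THEME, for illustration only.

```python
# Sketch of the default construction order: with no contextual clues, NP
# rules fire in thematic-hierarchy order.  Rank values are illustrative.
THEMATIC_RANK = {"AGENT": 0, "LOCATION": 1, "SOURCE": 1, "GOAL": 1, "THEME": 2}

def default_order(role_assignments):
    """Return the NPs in the order their model construction rules fire.
    role_assignments: list of (np, thematic_role) pairs."""
    return [np for np, role in
            sorted(role_assignments, key=lambda pair: THEMATIC_RANK[pair[1]])]

# (18) "Every school sent the principal to the meeting":
order = default_order([("the principal", "THEME"),
                       ("every school", "AGENT"),
                       ("the meeting", "GOAL")])
assert order[0] == "every school"   # the agent's situation is identified first
```

Since the agent's resource situation is introduced first, the situations for the goal and theme NPs are established relative to it, which is how the default order translates into scope preferences on this account.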
These rules can of course conflict with the results of other discourse interpretation processes. I assume the following conflict resolution rule: when two rules produce conflicting hypotheses, assume the result of the more specific rule. In general, the discourse interpretation rules are more specific than the default rules for constructing event representations, so they will be preferred.</Paragraph> <Paragraph position="7"> Although lack of space prevents me from giving examples, rules relating the construction of the model to lexical semantics, such as those accounting for data like (12), can also be formulated.</Paragraph> </Section> <Section position="8" start_page="84" end_page="85" type="metho"> <SectionTitle> AN EXAMPLE </SectionTitle> <Paragraph position="0"> We can now discuss in more detail the process of disambiguation of (18). I have presented the logical form for (18) above, as (21).</Paragraph> <Paragraph position="1"> (18) Every school sent the principal to the meeting.</Paragraph> <Paragraph position="2"> After identifying the situation descriptions, various interpretation processes take place, like those performing definite description interpretation described in \[30\]. These processes generate hypotheses about the anchoring of resource situations. Without entering into details, I assume that the context for (18) is provided by (32), which introduces into the model the situation description in (33), containing a group of schools and a meeting.</Paragraph> <Paragraph position="3"> Given this context, the discourse interpretation processes identify s as the resource situation for the NPs &quot;every school&quot; and &quot;the meeting.&quot; However, no unique principal can be identified in s.
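The anchoring step just described can be sketched as follows. The helper `anchor` and its uniqueness flag are illustrative assumptions of mine, not the paper's rules: the context situation supplies referents for &quot;every school&quot; and &quot;the meeting,&quot; but no unique principal, so that NP remains unanchored at this stage.

```python
# Sketch of resource situation anchoring for (18).  Names are illustrative.

def anchor(np, predicate, situation, unique=False):
    """Return an anchoring hypothesis if the situation supplies suitable
    referents (exactly one, if the NP is a definite singular)."""
    matches = [x for p, x in situation["facts"] if p == predicate]
    if matches and (not unique or len(matches) == 1):
        return (np, "anchored-to-s")
    return (np, "unanchored")

# Context (33): a group of schools and a meeting, but no unique principal.
s = {"facts": {("school", "sch1"), ("school", "sch2"), ("meeting", "m")}}

assert anchor("every school", "school", s)[1] == "anchored-to-s"
assert anchor("the meeting", "meeting", s, unique=True)[1] == "anchored-to-s"
assert anchor("the principal", "principal", s, unique=True)[1] == "unanchored"
```

The unanchored definite is exactly the case the modified algorithm handles by resolving its parametric component later, once the universal has introduced a school referent to anchor to.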
The activation of the model construction rules for universal quantification and definite descriptions results in the partial model in (34), in which s'1 and s'2 have been anchored. The model construction rule applied to the universal &quot;every school&quot; introduces a complex condition K1 ---> K2 as usual, but both the restriction and the nuclear scope include situation descriptions. The situation description in the restriction, s2, is a subsituation of the situation at which the restriction is evaluated (denoted by the indexical constant THIS_SITUATION). The situation description in the nuclear scope, s3, is an extension of s2.</Paragraph> <Paragraph position="4"> Now that a situation description for the resource situation of the universal and a discourse marker for the school have been introduced (s2 and z, respectively), the rules for resolving the parametric component x̂ of the interpretation of &quot;the principal&quot; can apply. The result is that z is chosen as the antecedent of x̂, and s2 is chosen as the resource situation for &quot;the principal.&quot; The model construction rule updates s3 accordingly; the resulting event structure is equivalent to the interpretation of (21) specified by (20).</Paragraph> </Section> <Section position="9" start_page="85" end_page="85" type="metho"> <SectionTitle> ACCOUNTING FOR THE DISAMBIGUATION DATA </SectionTitle> <Paragraph position="0"> I briefly return here to the disambiguation principles, to show how the proposal just presented accounts for them. First of all, I'll note that, under simple assumptions about the mapping between grammatical functions and theta-roles, there is a striking resemblance between the grammatical function hierarchy proposed by Ioup and the thematic hierarchy proposed by Jackendoff to account for facts about passives and reflexives.
The facts accounted for by the grammatical function hierarchy principle can also be explained if we assume that the description of an event is constructed by identifying the filler of each thematic role in the order specified by Jackendoff's thematic hierarchy.</Paragraph> <Paragraph position="1"> Consider now the case of the other disambiguation factor proposed by Ioup, the lexically encoded preference for certain operators to take wide scope. Definite descriptions are the paradigmatic case of an operator that tends to take wide scope. This preference can be explained in terms of the model construction hypothesis as follows. The choice of a resource situation for definite descriptions is restricted by the constraint that this resource situation be either shared among the conversational participants, or related to shared knowledge by shared relations \[12, 4\]. In our dialogues, for example, definite descriptions are usually interpreted with respect to the 'situation' corresponding to the current visual scene, which is independent from other situations. It follows that a definite description will be assigned narrow scope relative to another operator only if (i) the resource situation of the definite is perceived to depend on this other resource situation, and (ii) this dependency relation is known to be shared.</Paragraph> <Paragraph position="2"> As for the tendency for NPs in topic to take wide scope, an element of a sentence is said to be in topic if it is considered to be part of the background information on which the new information in the sentence depends. As the interpretation of the 'new' information in the sentence depends on the background information, it is plausible to assume that, in constructing a model for the sentence, the listener begins by applying the model construction rules for the operators perceived to be in topic (or explicitly marked as being in topic, in the case of Japanese).
The interpretation of the operators not in topic, when determined at all, will depend on the interpretation of the operators in topic, resulting in the dependency relations between the related situations that I have assumed to be the way scope is represented.</Paragraph> <Paragraph position="3"> Finally, I'll note that, in the absence of contextual clues, whether a completely disambiguated event structure is actually constructed depends on how strong the model construction rules are supposed to be; it's perfectly possible that the activation of these rules is controlled by additional factors, such as the specific needs of a task to be performed.</Paragraph> </Section> <Section position="10" start_page="85" end_page="85" type="metho"> <SectionTitle> ACKNOWLEDGMENTS </SectionTitle> <Paragraph position="0"> I wish to thank my advisor Len Schubert and James Allen, Howard Kurtzman, Peter Lasersohn, and Uwe Reyle for several suggestions, technical help, and constructive criticism.</Paragraph> <Paragraph position="1"> This work was supported by the US Air Force - Rome Laboratory.</Paragraph> </Section> class="xml-element"></Paper>