<?xml version="1.0" standalone="yes"?> <Paper uid="J79-1027"> <Title>American Journal of Computational Linguistics Microfiche 27 A FORMAL PSYCHOLINGUISTIC MODEL OF SENTENCE COMPREHENSION</Title> <Section position="1" start_page="0" end_page="0" type="metho"> <SectionTitle> A FORMAL PSYCHOLINGUISTIC MODEL OF SENTENCE COMPREHENSION PETER REIMOLD </SectionTitle> <Paragraph position="0"> Copyright © 1975 Association for Computational Linguistics. This paper outlines a psychologically constrained theory of sentence comprehension. The most prominent features of the theory are that: (1) syntactic structure is discarded clause by clause (where the traditional notion of clause is modified in certain respects so as to conform to short-term memory requirements); (2) the syntactic and semantic processors work in parallel.</Paragraph> <Paragraph position="1"> The semantic analysis proceeds from the preliminary semantic representation (PSR) via the intermediate SR (ISR) to the final SR (FSR), making crucial use of an encyclopedia which codes semantic knowledge.</Paragraph> <Paragraph position="2"> The three stages of the semantic analysis are discussed.</Paragraph> <Paragraph position="3"> Concatenation Rules establish the PSR, Meaning Rules and Encyclopedic Rules the ISR, and Semantic Linking Strategies the FSR. At every stage, the semantic representations are in terms of a modified predicate calculus notation.</Paragraph> <Paragraph position="4"> Syntax-free as well as syntax-sensitive Linking Strategies are presented for clause-internal linking.
Finally, syntax-free linking of constituent clauses of complex sentences is described.</Paragraph> </Section> <Section position="2" start_page="0" end_page="13" type="metho"> <SectionTitle> TABLE OF CONTENTS </SectionTitle> <Paragraph position="0"/> </Section> <Section position="3" start_page="13" end_page="15" type="metho"> <SectionTitle> IV. A MODIFIED PREDICATE CALCULUS NOTATION FOR SEMANTIC REPRESENTATIONS </SectionTitle> <Paragraph position="0"/> </Section> <Section position="4" start_page="15" end_page="30" type="metho"> <SectionTitle> V. THE PSR: CONCATENATION RULES VI. THE ISR: SEMANTIC KNOWLEDGE RULES VII. THE FSR: SEMANTIC LINKING STRATEGIES VIII. DIFFERENT MODES OF PROCESSING </SectionTitle> <Paragraph position="0"/> </Section> <Section position="5" start_page="30" end_page="30" type="metho"> <SectionTitle> I. SOME PSYCHOLOGICAL CONSTRAINTS ON SENTENCE COMPREHENSION MODELS </SectionTitle> <Paragraph position="0"> In this paper I consider the question of how an automatic sentence recognizer would have to look in order to be compatible with present psycholinguistic knowledge about speech comprehension. The basic premise is that psycholinguistic considerations are of potential interest to computational theories (see, e.g., Schank (1972)). Let me begin by summarizing some characteristics of speech processing which we know either from experiments, or which are intuitively clear.</Paragraph> <Paragraph position="1"> First, there is some evidence that the clause is a unit of processing. For instance, Kaplan (1972) showed that after a clause boundary is passed, the constituent words of the completed clause are relatively inaccessible, as measured by word recognition latency. The effect was independent of the serial position of the word for which recognition time was tested.
This suggests that sentences are processed clause by clause, with only the semantic content regularly retained after the clause boundary is passed. The surface words (and a fortiori the syntactic structure) of the clause would tend to be erased after each clause boundary.1 *This paper is based on chapter VII of my doctoral dissertation (Reimold (forthcoming)). I wish to thank Thomas G. Bever, James Higginbotham, and D. Terence Langendoen for helpful suggestions.</Paragraph> <Paragraph position="2"> 1The &quot;a fortiori&quot; refers to the fact that the syntactic structure presumably contains surface words as terminal nodes. Hence if the syntax were regularly preserved the surface words should remain easily accessible, too. Another study supporting the clause as unit of processing is Abrams &amp; Bever (1969). These authors found that reaction time to short bursts of noise (&quot;clicks&quot;) superimposed on sentences was longer for clause-final clicks than for clause-initial ones. This would point to the clause as unit of perception, under the assumption that processing is more intensive towards the end of a perceptual unit, and that reaction time to external stimuli is a valid indicator of the intensity of internal processing. (For a review of other studies in support of the clausal processing theory, the reader is referred to Fodor, Bever &amp; Garrett (1974), where arguments are also given for the clause as a decision point across which ambiguities are, normally at least, not carried.) Secondly, it seems that as we listen to speech, we simultaneously have access to both the syntactic and semantic properties of what we hear. That is, there appears to be parallel processing of the syntax and the semantics of a clause. One finding explained by this assumption is that so-called &quot;irreversible&quot; passive sentences like (1) are perceptually no more complex than their active counterparts (The girl picked the flower, in this case). By contrast, &quot;reversible&quot; passives like (2) take longer to verify vis-a-vis pictures than the corresponding active sentences (Slobin (1966)).</Paragraph> <Paragraph position="4"> (1) The flower was picked by the girl. (irreversible) (2) The boy was kicked by the girl. (reversible) It appears that the syntactic complexity introduced by the passive construction is somehow circumvented by a predominantly semantic method of analysis in the case of irreversible passives.2 We thus get a picture of speech processing as in Fig.1.3 3Cf. Schank (1972), who believes that the function of syntax is &quot;as a pointer to semantic information rather than as a first step to semantic analysis&quot; (p.555). Similarly, Winograd (1971) allows parallel operation of syntactic and semantic analysis. However, the syntactic and semantic processors in Winograd's system have full power, in principle, to question each other about their respective success before proceeding with their part of the analysis. This powerful device has been severely restricted in the theory described here (for details, see Reimold (forthcoming)). The main reasons for this are the greater reliance, in my theory, on &quot;syntax-free&quot; semantic interpretation, and the generally shorter life-span of syntactic structure (see the discussion of &quot;peripheral clauses&quot; below). Woods (1973) also discusses a system with certain facilities for parallel processing, for instance, the &quot;Selective Modifier Placement&quot; facility (SMP). The function of SMP is to select from the list of syntactically admissible alternatives the one which is semantically most appropriate, and return only that alternative to the parser before going on to analyze the rest of the sentence.
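Woods' SMP facility, as just described, can be sketched in miniature. Everything concrete below (the attachment tuples, the plausibility table, the function names) is invented for illustration; only the select-the-semantically-best-alternative control flow follows the text.

```python
# Sketch of a Selective-Modifier-Placement-style step: among the
# syntactically admissible attachments of a modifier, keep only the
# semantically most appropriate one and return just that alternative.

def score_attachment(attachment, plausibility):
    """Look up how semantically plausible a (modifier, head) pairing is."""
    return plausibility.get(attachment, 0.0)

def select_modifier_placement(attachments, plausibility):
    """Return only the best-scoring alternative, SMP-style."""
    return max(attachments, key=lambda a: score_attachment(a, plausibility))

# Hypothetical ambiguity: does "with the telescope" modify "saw" or "man"?
alternatives = [("with the telescope", "saw"), ("with the telescope", "man")]
plausibility = {("with the telescope", "saw"): 0.9,
                ("with the telescope", "man"): 0.2}
best = select_modifier_placement(alternatives, plausibility)
```

Only `best` would be handed back to the parser; the discarded alternative is never re-examined.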
The most important difference between Woods' proposal and the one presented here is that his semantic processor only chooses among syntactically structured alternatives (and is, in that sense, a fully syntax-sensitive method), whereas my theory postulates syntax-free linking between modifiers and their heads.</Paragraph> <Paragraph position="5"> Let me return now to the principle of clause-by-clause processing. If we assume that &quot;immediate processing&quot; takes place in short term memory, then we must automatically require that the unit of processing must not exceed the known limits of short term memory. Now since that limit is generally taken to be about 5 words, the clause-by-clause principle cannot be literally true. For instance, (3) lists some &quot;clauses&quot; longer than 5 words. It seems to me, therefore, that we have to revise the traditional concept of clause. (3) a) John and Bill and Otto stroked and hugged the goat and the goose.</Paragraph> <Paragraph position="6"> b) The man with the dog with the collar with the bell laughed. c) John met his friends yesterday morning around ten o'clock in a little cafe near Broadway.</Paragraph> <Paragraph position="7"> I propose to take the underlined phrases in (3) out of the sentence proper and process them as if they were separate clauses. That is, I draw a distinction between the &quot;nuclear&quot; clause and &quot;peripheral&quot; clauses. The non-underlined portions in (3) are nuclear clauses. Peripheral clauses include: Prep-clauses (&quot;with the collar&quot;), Comparison-clauses (&quot;than the old colonel&quot;).
Post-clauses (&quot;yesterday,&quot; &quot;around ten o'clock,&quot; &quot;in a little cafe&quot;), and Coordinate-clauses (&quot;and Bill,&quot; &quot;and hugged&quot;). This treatment of certain phrases as peripheral clauses seems plausible too, if we consider that &quot;adnominal&quot; Prep-phrases, for instance, are semantically like relative clauses, as shown in (4), and that adverbs are parallel to certain &quot;adverbial&quot; clauses, as indicated in (5).</Paragraph> <Paragraph position="8"> (4) a girl {with a green hat / who wore a green hat} (5) {afterwards / after the guests left}.</Paragraph> <Paragraph position="9"> Evidently, with a green hat in (4) is related to who wore a green hat, and the adverb afterwards in (5) can be replaced by full adverbial clauses like after the guests left.</Paragraph> <Paragraph position="10"> We are presently testing the validity of this notion of peripheral clause. We use sentence pairs like (6a-b): (6a) The officer threatened to give the woman * a ticket. (clause-internal position of click &quot;*&quot;) (6b) The officer threatened to fine the woman * without a license. (clause-final position of click &quot;*&quot;) Our goal is to determine, using a click detection paradigm, whether or not there is a &quot;clause boundary effect&quot; before the final peripheral clause without a license in (6b). Notice that according to my hypothesis, there is a clause boundary after woman in (6b), but not in (6a). It has been shown in a number of studies that clause boundaries (but not phrase boundaries, in general) have certain measurable behavioral effects (cf. the review in Fodor, Bever &amp; Garrett (1974)), so this should apply here too, if peripheral clauses are indeed psychologically real clauses. Now, the last principle I want to discuss is that in understanding an utterance, people make creative use of their knowledge about the world.</Paragraph> <Paragraph position="12"> (7) The cat just caught a -- I can immediately guess that the last word was something like bird or mouse. Similarly, if you say: (8) Put the freezer in the turkey.</Paragraph> <Paragraph position="13"> I know that you really meant &quot;put the turkey in the freezer.&quot; This general point has been made, in one form or another, by many authors. For instance, Winograd (1971) notes that correct understanding of they in &quot;The city councilmen refused to give the women a permit for a demonstration because they feared violence&quot; and &quot;The city councilmen refused to give the women a permit for a demonstration because they advocated revolution&quot; needs the &quot;information and reasoning power to realize that city councilmen are usually staunch advocates of law and order, but are hardly likely to be revolutionaries&quot; (p.11). Similarly, Schank (1972) envisages a theory of natural language understanding which &quot;has a conceptual base that consists of a formal structure&quot; and &quot;can make predictions on the basis of this conceptual structure&quot; (p.556). The principal differences between these approaches and mine have to do with (1) the form of the stored semantic information (PLANNER and &quot;conceptual case network&quot; representations vs. predicate calculus representations) and (2) the proposed access mechanisms to this information.
Schank's theory relies on lexical decomposition, while I use the &quot;meaning postulates&quot; method. Winograd opts for a broad procedural approach, representing &quot;knowledge in the form of procedures rather than tables of rules or lists of patterns&quot; (p.21). By contrast, my proposal remains closer to the traditional &quot;declarative&quot; approach, as will become clear. I can understand a sentence like (8) because I know something about turkeys and freezers. No model excluding the possibility of matching speech against stored knowledge of the world can explain such facts. In this connection, consider also the sentences in (9): (9a) I'm leaving the door open so I won't forget to wind it up. (it = the clock; there was no previous mention of a clock in the dialogue, but the speaker was looking at a grandfather clock with open door) (9b) They published Wodehouse immediately he came over.</Paragraph> <Paragraph position="14"> (= published books written by Wodehouse) (9c) Italy was sitting in the first row, and France in the second. (= people from Italy and France) (9d) We'd better put in 20 minutes. (= money for 20 minutes; speaking about a parking meter) (9e) He's sitting by his plate that isn't there. (= by where he wishes his plate were, by his plate in his wish-world; speaking of a cat) These sentences can all be understood without difficulty, and the way we understand them is by using our general semantic knowledge.</Paragraph> <Paragraph position="15"> What this means, then, is that the comprehension model needs to incorporate an encyclopedia which somehow codes semantic knowledge.</Paragraph> <Paragraph position="16"> In sum, to be compatible with the psychological model, the automatic sentence recognizer should have the following properties: (1) it should be a clause-by-clause processor, where my notion of &quot;clause&quot; includes some things traditionally regarded as phrases; as soon as the interpretation of a clause is completed, its syntactic structure is erased; (2) there should be parallel syntactic and semantic processing of each clause; (3) the recognizer must make systematic use of an encyclopedia which codes knowledge about the world.</Paragraph> </Section> <Section position="6" start_page="30" end_page="30" type="metho"> <SectionTitle> II. TRADITIONAL LINGUISTIC APPROACHES </SectionTitle> <Paragraph position="0"> Putting together the above observations, one can already see that current linguistic theories are not very helpful for the solution of our problem. For instance, linguistic theory would claim that sentence (10) has the syntactic structure in (11), which then undergoes various syntactic transformations until it is finally mapped onto its appropriate semantic structure.</Paragraph> <Paragraph position="1"> The sentence recognizers most directly meeting this description are probably those developed by Stanley Petrick (see Petrick (1969, 1973)). With some modifications, however, this description also fits the theories presented in Winograd (1971) and Woods (1973).
While these systems are feature-manipulating rather than transformational, they nonetheless assume that the life-span of syntax extends over an entire sentence, and they make crucial use of integrated syntactic structures for complex sentences. For instance, Winograd (1971) presents an integrated syntactic structure for the sentence &quot;Pick up anything green, at least three of the blocks, and either a box or a sphere which is bigger than any brick on the table.&quot;</Paragraph> <Paragraph position="2"> Recognizers using an inverse &quot;Generative Semantics&quot; grammar would also fall under this description.</Paragraph> <Paragraph position="3"> (10) The man with the beard claimed fiercely that he was innocent.</Paragraph> <Paragraph position="4"> (11) [integrated syntactic tree for sentence (10); diagram not recoverable] But in the view I have just sketched, sentence (10) never has any integrated syntactic structure like (11). Instead, as shown in (12), the string with the beard, for instance, is processed as a separate clause, and as soon as its meaning has been extracted and added as qualifier to the preceding noun phrase the man, its syntactic structure is erased.</Paragraph> <Paragraph position="5"> (12) [successive &quot;perceptual clauses&quot; for sentence (10); tree diagrams not recoverable, e.g. claimed marked MVB] Similarly, the &quot;Post-clause&quot; fiercely and the entire complement clause that he was innocent must be linked to the main clause without referring to the syntactic structure of the latter, which is assumed to be erased as soon as the word claimed has been semantically integrated. This would seem to be a more economical procedure, because it minimizes the size of the syntactic ballast that has to be carried along.
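The clause-by-clause economy just described can be made concrete with a toy loop: each perceptual clause gets a small tree, the tree's meaning is retained, and the tree itself is erased at the clause boundary. The parser and interpreter below are invented stand-ins, not the paper's actual components.

```python
# Toy clause-by-clause loop: only semantic content survives a clause
# boundary; the syntactic tree for each clause is discarded immediately.

def parse_clause(words):
    """Stand-in parser: build a trivial surface tree for one clause."""
    return {"clause": words}

def extract_meaning(tree):
    """Stand-in interpreter: reduce a tree to a meaning token."""
    return " ".join(tree["clause"]).upper()

def comprehend(clauses):
    semantics = []        # retained clause by clause
    tree = None           # at most one small tree is alive at a time
    for words in clauses:
        tree = parse_clause(words)               # build the clause's tree
        semantics.append(extract_meaning(tree))  # integrate its meaning
        tree = None                              # erase syntax at the boundary
    return semantics, tree

meanings, leftover = comprehend(
    [["the", "man"], ["with", "the", "beard"], ["claimed"]])
```

At every point in the loop, the only syntactic ballast held is the current clause's small tree.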
Compare, for instance, the size of the chunk in (11) to the size of the little chunks in (12).</Paragraph> <Paragraph position="6"> Secondly, transformational grammar is hardly compatible with the principle of Parallel Processing of the syntax and semantics of a clause. The reason is that according to transformational grammar, the syntactic analysis precedes and determines the semantic analysis. By contrast, Parallel Processing means that at least some of the semantic interpretation rules must be syntax-free.</Paragraph> <Paragraph position="7"> III. A THREE-STAGE THEORY OF SEMANTIC ANALYSIS Let us return for a moment to Fig.1. That figure contained a box labelled syntactic processor, and another box labelled semantic processor. As I have stated, these components cannot be identified with the syntactic and semantic components of current transformational grammar. The syntactic processor will not be discussed in detail here (see Reimold (forthcoming) for a fuller discussion). It is a predictive parser using dependency notation. There are no syntactic transformations at all, but the output is a simple surface tree for each clause, with certain nodes marked by functional features like SUBJect, OBJ1, OBJ2, or MainVerB. The trees in (12) above are examples. For the remainder, let me concentrate on the semantic box.</Paragraph> <Paragraph position="8"> I suggest that there are three stages in the semantic analysis, as shown in Fig.2, namely a preliminary, intermediate, and final semantic representation (PSR, ISR, and FSR).</Paragraph> <Paragraph position="9"> The PSR corresponds to a simple combination of the lexical meanings of the words. Clearly, as we hear the words in a sentence, we immediately grasp their individual meaning, even though we may not be sure yet how they fit together. This, then, is the preliminary SR.</Paragraph> <Paragraph position="10"> But we also immediately have access to some of the implications of the words and phrases.
For instance, if I hear &quot;cat&quot; I immediately also know &quot;animal.&quot; Adding such implications derives the intermediate SR from the PSR. The final SR is like the preliminary one, except that the appropriate semantic roles have been assigned to all the constituents. An example for the three stages is given in (13). Before translating the structures in (13) into English let me remark on the form of semantic representations.</Paragraph> <Paragraph position="11"> IV. A MODIFIED PREDICATE CALCULUS NOTATION</Paragraph> </Section> <Section position="7" start_page="30" end_page="30" type="metho"> <SectionTitle> FOR SEMANTIC REPRESENTATIONS </SectionTitle> <Paragraph position="0"> Each semantic representation consists of a number of prefixes and a matrix, where the prefixes correspond roughly to the noun phrases of the sentence, and the matrix to the main predicate. For easier reference, I have marked this distinction in the text by always enclosing the matrix in square brackets &quot;[ ]&quot;.</Paragraph> <Paragraph position="1"> For instance, in (13) there are three prefixes, and the matrix is [LAUGH y t].</Paragraph> <Paragraph position="2"> Each prefix consists of a quantifier (e.g., THE, E --which reads &quot;there is at least one&quot;-- or ALL), followed by a variable (x, y, z, etc., represented by lower-case letters in the examples), and optionally followed by a backgrounded proposition. The linear notation used throughout here is an abbreviation defined over dependency structures. For details, see Reimold (forthcoming), where definitions are also given for translating these structures into standard predicate calculus. Backgrounded propositions are the expressions to the right of the colon within the prefixes. For instance, the first prefix in (13) contains the quantifier THE, the variable x, and a backgrounded proposition BOY x, and the entire prefix is read: &quot;The entity x such that x is a boy.&quot; We can now translate the structures in (13) into English.
The first, i.e., the preliminary SR, says: &quot;The x such that x is a boy is involved in some event such that there is some y and some time t which is PAST, and y is laughing at time t.&quot; Notice that this only asserts that the boy is somehow involved in this, but it does not specify just how. But in order to describe what the listener actually understands when hearing the boy laughed, we must of course specify which role the boy plays in this event.</Paragraph> <Paragraph position="3"> Now, looking at the final SR in (13), it can be seen that it is like the PSR, except that it also contains a role assignment (or link, as I will call it), namely x=y. That is, x, the boy, plays the role of y, who was the one who did the laughing. By executing this equation x=y, we can of course simplify the representation, which gives us the last line in (13). The intermediate SR in (13), furthermore, is like the preliminary SR, but in addition contains certain implications of the words. Thus we have: &quot;the x such that x is a boy and (by implication) human and not adult, etc.&quot; And in the matrix of the FSR we get &quot;y laughs at time t and, by implication, y is human and animate and alive-at-time-t.&quot; In other words, one cannot laugh unless one is human and alive. V. THE PSR: CONCATENATION RULES Let us return again to Fig.2. It shows three different blocks of rules which are responsible for deriving the three stages of the semantic analysis, namely, Concatenation Rules, Meaning Rules and Encyclopedic Rules (collectively referred to as Semantic Knowledge Rules), and finally Semantic Linking Strategies. They will occupy us in this order. The Concatenation Rules take the semantic definition of the most recent input word and add it to the current preliminary semantic structure.
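In the spirit of the rules just introduced, concatenation can be sketched as follows. The (prefixes, matrix) pairs are my own simplified rendering of the notation, and the entries for the boy and laughed are illustrative, not the paper's exact definitions.

```python
# Sketch of Concatenation: a semantic representation is a pair
# (prefixes, matrix); adding a word's definition appends prefixes
# behind prefixes and matrix behind matrix, left to right.

def join(sr, definition):
    """Joining: (X)[M1] + (Y)[M2] gives (X Y)[M1 M2]."""
    (x, m1), (y, m2) = sr, definition
    return (x + y, m1 + m2)

def build_psr(definitions):
    """Fold the word definitions into one preliminary SR."""
    sr = ([], [])                      # empty prefixes, empty matrix
    for d in definitions:
        sr = join(sr, d)
    return sr

the_boy = (["THE x: BOY x"], [])       # article folded into the noun prefix
laughed = (["E y", "E t: PAST t"], ["LAUGH y t"])
psr = build_psr([the_boy, laughed])
```

The result corresponds to the preliminary SR for the boy laughed: three prefixes followed by a single matrix.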
For instance, (14) lists the semantic definitions (namely for the, boy, and laughed) which are relevant for the example in (13) above.</Paragraph> <Paragraph position="4"> 7I have made the simplifying assumption that there are lower-level components providing the syntactic and semantic components with a lexically analyzed input string. This, of course, is almost certainly incorrect, and should be refined by making the matching process partly top-down. (In the case of the syntax this has been done to a certain extent, since it is based on a predictive analyzer. It has not yet been done for the semantics, but it seems that it can be built into the present system relatively easily.) See Nash-Webber (1974) for further</Paragraph> <Paragraph position="6"> discussion, especially her description of the SPEECHLIS system.</Paragraph> <Paragraph position="7"> (14) (a) [the DD]: (THE v: --)[--] (b) [boy N]: (Ex)[BOY x] (c) [laughed MVB PAST]: (Ey)(Et: PAST t)[LAUGH y t] Notice that each of the definitions consists again of a prefix and a matrix. There are two Concatenation Rules, namely Joining and Backgrounding. They are stated in abbreviated form in (15) and (16), and are illustrated in (17). (15) Joining: Let (X)[M1] be the current preliminary SR, and (Y)[M2] the semantic definition of the last input word (which may not be part of an NP), where (X) and (Y) are the prefixes, and [M1] and [M2] the matrixes. Then form (X)(Y)[M1 M2]. [(16) Backgrounding and the illustration in (17) are not recoverable here.] Note only that the only syntactic information needed for Concatenation is whether or not the input is part of an NP. Otherwise the semantic definitions of the words are added from left to right, prefixes behind prefixes, and matrix behind matrix.</Paragraph> <Paragraph position="8"> VI. THE ISR: SEMANTIC KNOWLEDGE RULES Going back to Fig.2, the next step was the intermediate SR, which is derived from the preliminary SR by applying Semantic Knowledge Rules, namely Meaning Rules and Encyclopedic Rules.
Meaning Rules deal with strict implication, while Encyclopedic Rules are typically probabilistic. For instance, Meaning Rules tell us that if somebody is a baker he must also be human and hence animate and hence concrete, etc.; while Encyclopedic Rules tell us that he tends to wear white clothes, tends to sell bread to people, and similar facts.9 I had stated earlier that speech involves using one's knowledge about the world. There are two separate problems with this kind of semantic knowledge: (1) how to code it; (2) how to retrieve it.</Paragraph> <Paragraph position="9"> Concerning the first problem, I assume encyclopedic 8In the present version of the theory, the semantic processor accepts the semantic definition of a word only if that word has matched the syntactic predictions. It is probably desirable to liberalize this procedure so as to handle ungrammatical input.</Paragraph> <Paragraph position="11"> 9The distinction between M-rules and E-rules is akin to Katz and Fodor's (1963) distinction between semantic markers and distinguishers, the main difference being that Katz and Fodor information to be coded essentially in the same form as the semantic representations themselves. This makes it easy to transfer to the encyclopedia information received in the current dialogue.
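The strict-implication chains that Meaning Rules express (a baker is human, hence animate, hence concrete) amount to a closure computation, which can be sketched as follows; the rule table is an invented fragment, not the paper's actual encyclopedia.

```python
# Sketch of Meaning-Rule application: starting from a word's own
# predicate, keep adding whatever the rules strictly imply until
# nothing new appears.

MEANING_RULES = {
    "BAKER":   ["HUMAN"],
    "HUMAN":   ["ANIMATE"],
    "ANIMATE": ["CONCRETE"],
}

def close_under_meaning_rules(predicates):
    """Return the predicate set closed under strict implication."""
    known = set(predicates)
    frontier = list(predicates)
    while frontier:
        p = frontier.pop()
        for q in MEANING_RULES.get(p, []):
            if q not in known:        # record each implication only once
                known.add(q)
                frontier.append(q)
    return known

isr_predicates = close_under_meaning_rules({"BAKER"})
```

Because Meaning Rules are obligatory and few per lexical entry, this exhaustive closure is cheap; probabilistic Encyclopedic Rules would instead be fetched selectively, by pointer.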
Such &quot;active&quot; information would be added continuously to the &quot;situational chapter&quot; of the encyclopedia, where this chapter is thought of as containing the linguistic and non-linguistic context of the current utterance.</Paragraph> <Paragraph position="12"> Concerning the retrieval problem, it is clear that the information in the encyclopedia must be extracted selectively.</Paragraph> <Paragraph position="13"> The solution chosen here is the one characteristic of networks. Each Knowledge Rule is given an address, and each lexical entry, as well as each Knowledge Rule, includes pointers to the addresses of some other relevant Knowledge Rules; only those rules are called up which are associated with the sentence constituents through some pointer.</Paragraph> <Paragraph position="14"> In the case of Meaning Rules this restriction seems sufficient, because there are only a few for each lexical entry.</Paragraph> <Paragraph position="15"> Not so in the case of Encyclopedic Rules. For instance, there are all kinds of things I know about bakers --say, that they tend to wear white clothes for work--, but most of which are irrelevant for understanding and verifying the sentence: (18) The boy was sold a nice cake by the baker.</Paragraph> <Paragraph position="16"> used features whereas I use meaning postulates. For practical purposes, the most important aspect of the distinction is that M-rules are applied obligatorily, while E-rules are applied selectively, according to the intersection technique (see the discussion below). For a discussion and critique of the &quot;feature&quot; approach, see Weinreich (1966).</Paragraph> <Paragraph position="17"> For instance, the sentence is perfectly true even if the baker happened to be wearing a fireman's uniform while selling the cake to the boy.</Paragraph> <Paragraph position="18"> A good way of restricting the number of Knowledge Rules called up for a given sentence seems to be the Intersection Strategy (cf. Quillian (1969)) in (19).</Paragraph> <Paragraph position="19"> (19) Intersection Strategy: If a clause contains two different constituents A and B both pointing to the same encyclopedic rule E, then call E. This rule says to call up only those Encyclopedic Rules which are associated with at least two constituents of the sentence. For instance, in (20): (20) This bread was sold to John by the Italian baker.</Paragraph> <Paragraph position="20"> baker, sell, and bread all point to the same encyclopedic &quot;pattern&quot; (21), which states that bakers typically sell baked goods to people. Hence, the Intersection Strategy would call up this pattern, which could then be used to help interpret the sentence. But the rule specifying that bakers typically wear white clothes for work would not be called up for sentence (20), because only the constituent baker in the sentence would point to this particular pattern.</Paragraph> <Paragraph position="21"> VII. THE FSR: SEMANTIC LINKING STRATEGIES Let me return once more to Fig.2. The last rule-block in this figure was labelled Semantic Linking Strategies. They are responsible for deriving the final SR, by assigning the appropriate semantic roles to various parts of the clause.</Paragraph> <Paragraph position="22"> There are two aspects to this: clause-internal linking, and clause-to-clause linking. I would like to discuss clause-internal linking first.</Paragraph> <Paragraph position="23"> It seems to me that when we listen to speech, we have a choice of how much attention we pay to syntactic details.</Paragraph> <Paragraph position="24"> This is what Parallel Processing is all about, and it means that there are syntax-free linking strategies besides syntax-sensitive ones. (22) gives a synopsis of the linking strategies. (22) Semantic Linking Strategies: 1. Linking by Variable Type; 2. Pattern Matching; 3. Contradiction Elimination; 4. Canonical Order Strategy; 5. Syntax-sensitive Rules; 6.
&quot;Alternative Linking.&quot; Types 1, 2, and 3 are syntax-free. Type 4, the &quot;Canonical Order&quot; strategy (cf. Bever (1970)), relies on the shallowest aspect of syntax, namely simple linear order of the major clause constituents. Type 5 is sensitive to &quot;functional&quot; features occurring in the syntactic surface tree. The 6th type, &quot;Alternative Linking Strategies,&quot; will be explained later on; they, too, are purely semantic, though they apply only after the syntax-sensitive rules have applied (and failed).</Paragraph> <Paragraph position="25"> Now, within this system of linking strategies, there seem to be different levels of detail. At the shallowest level we have Linking by Variable Type. Pattern Matching requires more semantic detail, and Contradiction Elimination is still more thorough. Furthermore, the syntax-free strategies may be assumed to be simpler than the syntax-sensitive ones, because the latter must keep track of two separate structures, namely the semantic and the syntactic structure. I assume that the strategies are ordered according to their relative simplicity, which would give us the order as listed in (22). Furthermore, I assume that once an acceptable reading has been derived for a clause, application of further strategies in the hierarchy becomes optional. This of course is subject to empirical tests. For instance, Pattern Matching would interpret the sentence: (23) The baker was sold some stale bread by the butcher.</Paragraph> <Paragraph position="26"> incorrectly as &quot;the baker sold the butcher some stale bread.&quot; If application of further strategies is indeed optional, such sentences should sometimes be misinterpreted.</Paragraph> <Paragraph position="27"> There is in fact some intuitive support for such a position. I think most of us have experienced situations where a slip of the tongue like put the freezer in the turkey passed unnoticed at first.
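The ordered, stop-early application of strategies described above can be sketched as a cascade. The three strategy functions are stand-ins with hard-wired behaviour; only the control structure (try cheap strategies first, stop at the first acceptable reading) follows the text.

```python
# Sketch of the strategy hierarchy: strategies are tried in order of
# increasing cost, and once one yields an acceptable reading the rest
# are skipped -- which is how a pattern-matched misreading can survive.

def link_by_variable_type(clause):
    return None                        # pretend this strategy fails here

def pattern_matching(clause):
    # pretend the clause fits a stored pattern, rightly or wrongly
    return "baker SELLS bread TO butcher"

def syntax_sensitive_rules(clause):
    return "butcher SELLS bread TO baker"   # never reached in this run

STRATEGIES = [link_by_variable_type, pattern_matching, syntax_sensitive_rules]

def derive_fsr(clause):
    """Apply strategies in order; stop at the first acceptable reading."""
    for strategy in STRATEGIES:
        reading = strategy(clause)
        if reading is not None:        # acceptable reading: stop early
            return reading, strategy.__name__
    return None, None

reading, used = derive_fsr("The baker was sold some stale bread by the butcher")
```

In this run the cascade never reaches the syntax-sensitive rules, so the plausible-but-wrong pattern reading is accepted, mirroring the misinterpretation predicted for sentence (23).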
Such slips are most naturally explained by assuming that the syntax never got a chance to apply to the sentence, because Pattern Matching had already resulted in an acceptable reading. Let me now discuss these Linking Strategies in more detail.</Paragraph> <Paragraph position="28"> The first type was Linking by Variable Type, which is stated in (24).</Paragraph> <Paragraph position="29"> (24) Linking by Variable Type: (1) Link the head-variable of each SR-prefix to the MVB-argument of the appropriate type, provided there is only one such appropriate MVB-argument, and it is not already linked to some other variable in the SR-matrix; (2) if it is an event-variable e, then add e=X, where X is the MVB plus its modifiers and arguments; (3) if it is a predicate-variable F, then substitute for F the MVB and its arguments.</Paragraph> <Paragraph position="31"> This rule says to link the &quot;head-variable&quot; (i.e., the leftmost variable) of each prefix to the MVB-argument which matches its type. Option (2) specifies that event-variables are linked to the event described by the main predicate, and option (3) deals with certain adverbs like slowly or softly.</Paragraph> <Paragraph position="32"> The rule will be explained using the example (25).</Paragraph> <Paragraph position="33"> (25) (a) # Yesterday # the father of the boy sang # horribly # in the bath #.</Paragraph> <Paragraph position="34"> (b) PSR: (THE t1: YESTERDAY t1)(THE x: (THE y: BOY y)(FATHER x y)) ... [remainder of the formula not recoverable]</Paragraph> <Paragraph position="36"> Looking at (25b), the PSR for sentence (25a), it can be seen that the variables in the prefixes are of several different types; there are the time-variables t1 and t2, individual-variables x, y, z, etc., a predicate-variable F, and an event-variable e. The main verb (MVB) is SING, and its arguments are x2 and t2. Now, time-variables can only be linked to other time-variables, and these links have the specific form tMVB ⊆ t (&quot;time-of-the-MVB is included in time-of-the-prefix&quot;), in our case t2 ⊆ t1.
Next there is an individual-variable x, which matches only the MVB-argument x2; hence the link is x=x2. Then there is the predicate-variable F which, according to option (3) of the strategy, is replaced by the MVB plus its arguments, yielding the combined reading HORRIBLY(SING x2 t2). Finally, the event-variable e is linked to the entire event described by the main predicate, namely that x2 sang horribly at time t2: e=HORRIBLY(SING x2 t2). This expands the matrix of (25b) into (26), giving us, after simplification, the final SR (27) for (25a).</Paragraph> <Paragraph position="37"> The entire procedure is syntax-free, with one qualification: the constituent corresponding to the main verb is marked in the semantic representation by a corresponding feature. That is, SING in (25b) is marked as MVB. This syntactic feature is immediately copied from the syntactic structure into the semantic one, where it is preserved until the entire sentence has been processed.</Paragraph> <Paragraph position="38"> For many constituent types, Linking by Variable Type is the only Linking Strategy that is needed. This is true particularly for adverbs, temporal phrases like last October, Prep-phrases like into the garden, and auxiliary verbs like will. In addition, if a clause has only one nuclear noun phrase, as is the case in sentence (25a), then the entire interpretation is normally taken care of by this strategy. The next strategy was Pattern Matching, which is stated in (28). It says that if certain clause constituents match a pattern, then they are linked as in the pattern.</Paragraph> <Paragraph position="39"> (28) Pattern Matching: (1) Let P be a pattern called up by a PSR whose MVB is B (v1 ... vn); (2) then for each SR-prefix head-variable ui matching exactly one description A(vi) in the pattern, add</Paragraph> </Section> </Paper>