<?xml version="1.0" standalone="yes"?> <Paper uid="W91-0215"> <Title>A model for the interaction of lexical and non-lexical knowledge in the determination of word meaning</Title> <Section position="4" start_page="165" end_page="168" type="metho"> <SectionTitle> 2 The variability of words </SectionTitle> <Paragraph position="0"> The way in which a word might contribute to the determination of the meaning of linguistic expressions is indeterminate in different ways. W. Labov calls the semantic potential of words that enables them to constitute various links between linguistic expressions and elements of the domain (semantic) variability. It is this potential which is responsible for the already mentioned context dependence of word meaning. An important goal of our model is to classify types of variability according to a set of more or less specific properties. The questions guiding this classification are: which kind of representation does the variability affect; by which means are the variants related; and how can the referential potential be restricted in order to single out the intended meaning? In the linguistic tradition, these variability phenomena fall into one of four classes which we will sketch out below.</Paragraph> <Section position="1" start_page="165" end_page="166" type="sub_section"> <SectionTitle> 2.1 Morphological ambiguity </SectionTitle> <Paragraph position="0"> This class represents cases of identical surface representations of words extracted from a discourse. Depending on the kind of representation in which the natural language input is encoded (i.e. orthographic, phonetic, ... form), cases of homography or homophony may or may not belong to this class. Homonymy as a specific instance of morphological ambiguity results from the identity of lexical base forms. In a lexicon using orthographic representations, as is normally the case in dictionaries (singular nouns, infinitive forms of verbs, ...), there are for example two homonymous entries for &quot;firm&quot; (the adjective and the noun variant). In addition, morphological ambiguity captures the more general case of an inflected form that is identical to a base form or to another inflected form. An example is the occurrence of &quot;saw&quot; which, depending on the context, can be understood as a noun, as the base form of the verb &quot;saw&quot; or as an inflected form derived from the base form &quot;see&quot;.</Paragraph> <Paragraph position="1"> It is a notorious problem in lexical semantics \[Kooij 71\] to justify the distinction between coincidental identity of forms (morphological ambiguity) and semantic variants of a single lexical item (chapter 2.2). In our approach it depends on the purpose of the system and, thus, is a design decision comparable to modelling conventions for the domain knowledge. Identical basic word forms which are in no relevant and transparent way related to the representation of knowledge about the domain will be represented by different lexical items. The question whether two identical forms 'collapse' into a single entry is then directed by the choice of the domain and the task, analogous to the way in which drawing a boundary between elements of the domain is motivated. Defining two word forms as homonymous has the consequence that once an occurrence in a discourse has been morphologically identified, and thus mapped onto the corresponding lexical item, it is no longer possible to switch to a different homonymous variant. 
Words which should not exhibit this behaviour should not be modelled as homonymous but as semantic variants of a single lexical item.</Paragraph> </Section> <Section position="2" start_page="166" end_page="166" type="sub_section"> <SectionTitle> 2.2 Polysemy and polyfunctionality </SectionTitle> <Paragraph position="0"> In the preceding chapter we outlined the situation where two lexical items realize the same word form. The potential of semantic variation encoded in a single lexical item is known as polysemy. It remains in effect once the appropriate lexical item has been identified by means of morphological processes. A special case of polysemy is what \[Weber 74\] calls polyfunctionality, referring to the situation where two variants of a lexical item belong to different syntactic categories. Polyfunctionality occurs very frequently since many lexical items allow identical realizations which belong to different syntactic categories, being related by means of conversion or other morphological processes which do not affect the word stem. Examples are the nominal and verbal reading of &quot;point&quot; and the variants of &quot;clean&quot; which are categorized as adjective, adverb and verb. The comparison with variants in a dictionary is not as effective as it was in chapter 2.1 because the entries in a dictionary tend to conflate phenomena we call polysemy and cases of variability outlined in chapter 2.3.</Paragraph> </Section> <Section position="3" start_page="166" end_page="167" type="sub_section"> <SectionTitle> 2.3 Metonymy and change of semantic type </SectionTitle> <Paragraph position="0"> In the previous chapter we mentioned that a polyfunctional expression can be analyzed as the realization of different categories in different contexts. The difference in semantic potential that arises from this variability sometimes not only involves the transition to a different semantic type in the sense of Montague grammar but may be paralleled by a more or less extensive shift in conceptual interpretation (cf. the one-place versus two-place predicate reading of &quot;drink&quot;). Apart from ambiguities which are reflected by morphosyntactic properties of a lexical item (e.g. its argument structure), there are instances of semantic variation which allow the interpretation of a class of linguistic items to be changed in a systematic manner. The crucial question is whether this class should be characterized by lexical information or on the basis of regularities found in the domain. Metonymy is a specific instance of this type of variability where the different readings are related by elements of a set of fundamental relationships such as 'part-whole', 'cause-effect', etc.</Paragraph> <Paragraph position="1"> \[Nunberg 78\] investigates more general mechanisms that license the use of a word in place of another in cases where both of them are related by means of a context-specific relation.</Paragraph> <Paragraph position="2"> The phenomena range from cases of systematic correspondence such as the 'newspaper example'1 to more idiosyncratic ones such as the famous 'ham-sandwich case'2. As Nunberg notes these are not cases of linguistic ambiguity because pointing to the sandwich would serve the same purpose as the utterance of the complex phrase. 
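As an informal illustration of the distinction drawn in chapters 2.1 and 2.2, the following Python sketch (our own, purely illustrative encoding; none of the identifiers stem from the paper's formalism) models homonymy as separate lexical items that happen to share a surface form, and polysemy or polyfunctionality as alternative variants listed inside a single item; once a surface occurrence has been mapped onto one item, only that item's variants remain accessible.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Variant:
    category: str   # syntactic category of this reading
    gloss: str      # informal semantic description

@dataclass
class LexicalItem:
    item_id: str
    base_form: str
    variants: List[Variant] = field(default_factory=list)  # polysemous/polyfunctional readings

# Homonymy: distinct items that merely share the surface form "saw".
saw_noun = LexicalItem("saw_1", "saw", [Variant("N", "cutting tool")])
saw_verb = LexicalItem("saw_2", "saw", [Variant("V", "cut with a saw")])
see_verb = LexicalItem("see_1", "see", [Variant("V", "visually perceive"),
                                        Variant("V", "realize, understand")])  # polysemy within one item

# Morphological lookup maps a surface occurrence onto its candidate lexical items;
# "saw" also qualifies as an inflected form of "see".
SURFACE_FORMS = {"saw": [saw_noun, saw_verb, see_verb]}

def candidate_items(surface: str) -> List[LexicalItem]:
    return SURFACE_FORMS.get(surface, [])

for item in candidate_items("saw"):
    print(item.item_id, [(v.category, v.gloss) for v in item.variants])

Whether two such entries 'collapse' into one item or stay apart is exactly the kind of design decision discussed above.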
Nevertheless, it is not clear whether such metonymic relations should be considered instances of lexical or encyclopaedic knowledge.</Paragraph> <Paragraph position="3"> An example similar to the 'newspaper case' is the potential of words such as &quot;school&quot;, &quot;opera&quot;, ... to select one of the alternative meaning variants 'building', 'process', 'institution', etc. The approach outlined in \[Bierwisch 82\] derives this potential from systematic relationships between concepts representing entities of the domain. The conceptual knowledge about social institutions has to provide the background information that &quot;the parliament is at the end of the street&quot; is semantically well-formed though &quot;?the government is at the end of the street&quot; is not, since the lexical specifications of &quot;parliament&quot; and &quot;government&quot; cannot account for the fact that the former is naturally associated with a specific building whereas a similar assignment is not possible for the latter. A comparable argument may be found for the 'substance' reading of words naming trees. The ill-formedness of &quot;?this table is made of plane&quot; in contrast to &quot;this table is made of oak&quot; results from its non-verifiability in the common-sense model of the domain. If an expert affirmed that it is quite common to use the wood of plane trees for the construction of tables, we would probably change our model of the domain and license the acceptability of the first sentence. This example is different from the 'school'/'newspaper' cases since in addition to the conceptual shift it involves a modification of semantic properties of the underlying lexical items. This in turn is an argument in favour of a lexicalist position which would classify this type of variability as a case of polysemy. \[Pustejovsky 90\] shows that a lexicalist approach to event structure allows a whole range of type-shifting phenomena to be characterized systematically, together with their consequences with respect to well-formedness conditions.</Paragraph> <Paragraph position="4"> As a consequence of seeking the portability of domain-specific knowledge we follow the lines of Bierwisch in distinguishing lexical and conceptual information. Yet we do not reject the lexicalist position since we consider semantic type-shifting effects as driven by regularities of the conceptual structure. In the following section, we generalize this position to a systematic distinction between linguistic and non-linguistic knowledge. This allows us to keep variants introduced by linguistic ambiguities systematically apart from those cases we classify as non-linguistic variations.</Paragraph> <Paragraph position="5"> Following \[Binnick 70\] we define polysemous variants of a word as those cases of variability which are (at least in principle) distinguishable on the basis of linguistic properties of the corresponding lexical item. Polysemous variants of a lexical item thus differ in at least one morphosyntactic or semantic property. For non-linguistic variants introduced by means of metonymy or type-shifting, the linguistic properties of a lexical item do not help to identify the intended reading because the variants have exactly the same linguistic properties. 
This situation calls for the disambiguation potential of contextual information in order to reduce the 'semantic scope'3.</Paragraph> </Section> <Section position="4" start_page="167" end_page="168" type="sub_section"> <SectionTitle> 2.4 Contextual relativity </SectionTitle> <Paragraph position="0"> Vagueness and indexicality also belong to the class of variability phenomena. They are usually associated with specific groups of linguistic expressions (gradable predicates in the case of vagueness and deictic expressions in the case of indexicality). In contrast to the effects introduced so far, in these cases the class of potential referents cannot simply be characterized by an enumeration of alternatives. 3 We consider the 'semantic scope' of a word as the possible range of interpretation implied by the literal use of a word. The more general notion of 'referential potential' additionally accounts for cases of conceptual shift as in the 'ham-sandwich' example.</Paragraph> <Paragraph position="1"> As \[Pinkal 80\] points out, it is an inherent property of vague predicates to provide a 'grey area' where the decision whether the predicate is applicable or not depends on the discourse context. It is even impossible to precisely delimit the area of positive or negative applicability. Following the lines of \[Bosch 83\] we do not consider vagueness as an isolated semantic property of a specific class of words but as an instance of the more general notion of context-dependence4. Since this is an aspect of the referential potential of words our model has to cover these phenomena as well. 4 In general the notion of context dependence applies to referential expressions such as definite nominal phrases. We will restrict our attention to cases of context-dependence which apply to single words.</Paragraph> <Paragraph position="2"> Indexicality is the potential of deictic expressions to select their meaning by exploiting peculiarities of the discourse situation. It is similar to vagueness since the implied referential indeterminacy cannot be resolved independently from the specific discourse context. Yet, even deictic expressions are not immune to other types of variability. For example, the pronoun 'I' may be used to refer to an entity which is somehow related to the speaker in a certain discourse situation. An example is the utterance of &quot;I am over there&quot; with the speaker pointing to a desk. The expression 'I' in this case can be used to refer to the place where the speaker usually works. This is an example of a systematic shift in meaning motivated by a conceptual relation. It thus belongs to the phenomena described in the previous chapter.</Paragraph> <Paragraph position="3"> Another instance of context relativity occurs in cases of privative opposition. This relativity results from the lack of semantic information for a specific word which could be provided by the use of a different word. According to \[Zwicky/Sadock 75\] &quot;dog&quot; is ambiguous between the readings 'male dog' and 'female dog' because it can be forced to provide both readings in sentences like &quot;that is a dog, but it isn't a dog&quot;. In contrast to &quot;?that is a lion, but it isn't a lion&quot;, it seems that a meaningful interpretation can be found for the former (the one which forces the selection of different variants for both occurrences of &quot;dog&quot;) whereas the latter leads to a contradiction. The choice of a variant could be forced by the use of &quot;bitch&quot; instead of &quot;dog&quot;, which is not possible for &quot;lion&quot; since there is no regular lexical specification for something like &quot;lioness&quot;. 
This illustrates the fact that lack of semantic information for a lexical item can, under certain circumstances, yield the same effect as a disjunction of alternative readings. This may occur whenever the semantic 'gap' can be filled by one of a small set of possible alternatives.</Paragraph> </Section> </Section> <Section position="5" start_page="168" end_page="170" type="metho"> <SectionTitle> 3 A classification of knowledge types </SectionTitle> <Paragraph position="0"> In order to have a precise representational basis for our model of word meaning, this chapter is intended to introduce the basic notions used to classify the relevant phenomena. The distinction between linguistic and non-linguistic knowledge mentioned in the preceding section constitutes the methodological basis of our model. Up to a certain degree, this distinction allows an independent examination of properties characteristic of only one type of knowledge. By assuming this distinction we do not claim that linguistic and non-linguistic knowledge are in any way fundamentally different. We use this distinction as a methodological tool that makes it possible to isolate certain aspects of word meaning not directly involving the whole range of both types of knowledge. In the course of stepwise extending the complexity of interrelations between linguistic and non-linguistic knowledge we will have to carefully analyze the tenability of this distinction.</Paragraph> <Paragraph position="1"> We account for possible similarities between both types of knowledge by using the same formalism for the representation of linguistic and non-linguistic knowledge. It is a variant of order-sorted predicate logic \[Nebel/Smolka 89\] which combines properties of the KL-ONE family of knowledge representation languages with properties of feature-based unification grammars such as HPSG \[Pollard/Sag 87\]. In order to concentrate on the description of our model we will not go into the details of our formalism here5.</Paragraph> <Paragraph position="2"> The central components of our model are the lexicon on the linguistic side and the ontology on the non-linguistic side. The lexicon and the ontology provide the 'basic building blocks' of linguistic and non-linguistic knowledge respectively. The elements of the lexicon are called categories; the elements of the ontology, concepts. It is important to note that the lexicon in our model integrates specifications of syntactic categories (i.e. N, V, ..., N', ..., AP, ...) with lexical items.</Paragraph> <Paragraph position="3"> The formal means for the description of categories and concepts are sorts which are related by means of attributes and rules. Attributes may be used to express characteristic properties or relationships motivating the choice of a specific distinction between sorts.</Paragraph> <Paragraph position="4"> Rules on the other hand are not considered as tools for the description of inherent and permanent properties but as representations of regularities which might arise under certain circumstances. Apart from that, the collection of attributive characterizations has to be consistent. That is not necessarily required for the system of rules as a whole. 
The fundamental organizational principle of subsumption relates categories and concepts, licensing the inheritance of attributes between sorts. The subsumption order only applies between elements of one and the same type of knowledge. The notation used for the description of sorts is a feature logic as in \[Shieber 86\] for categories and a simple relational notation for concepts. We represent the fact that A subsumes B as A ⊑ B.</Paragraph> <Paragraph position="5"> Rules of grammar and rules of inference are represented using simple predicate logic notation. Sorts, attributes and rules will be called knowledge elements. All of them put together constitute the knowledge base of our system.</Paragraph> <Paragraph position="6"> According to our argument in favour of a design strategy specifically tailored to the domain and the task the system is intended for, the structure of the ontology must be covered by an appropriate theory about entities of the domain, their inherent properties and the diverse aspects in which they are related. We reiterate this methodological claim here since a similar argument can be applied to the organization of the lexicon. The typical task of a text understanding system is to facilitate the analysis and production of textual input. It depends on the capabilities required whether certain aspects of this task involve restrictions on the set of relevant linguistic phenomena6. The choice of lexical items depends on the domain at issue since the lexical inventory should cover at least the range of non-linguistic phenomena represented in the non-linguistic component of the system.</Paragraph> <Paragraph position="7"> Categorial knowledge provided by the lexicon together with the rules of grammar constitutes the descriptional apparatus for the classification of expressions. On the one hand expressions serve as input for linguistic processing and on the other hand they represent sequential patterns of written or spoken language. This intermediate status makes them elements of discourse knowledge. 5 Most of our assumptions about the representation of linguistic and non-linguistic knowledge are based on experience gained from work in the LILOG project at IBM Stuttgart. A description of the formalisms and methods applied in this project can be found in \[Geurts 90\]. 6 One can for example reduce the computational complexity by limiting the relevant sentence-level constructions to simple propositional clauses if the system is not meant to deal with other types of modality. Even if this argument sounds quite trivial, the determination of a set of requirements for the linguistic component is as important as it is for the representation of domain knowledge.</Paragraph> <Paragraph position="8"> Expressions belong to the type of knowledge which serves as a kind of record for the registration of linguistic interactions together with their spatio-temporal specifications. It directly corresponds to episodic knowledge on the non-linguistic side. Episodic knowledge has the same intermediate status as discourse knowledge since on the one hand it serves as a record of 'statements' and other 'experiences' with respect to the domain and on the other hand it is used to characterize entities from the domain as individuals on the basis of conceptual knowledge. Conceptual knowledge combines the structural information conveyed by the ontology with the additional information expressed by rules of inference. 
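The organization just described — sorts ordered by subsumption, attributes inherited along that order, and a strict separation between the lexical and the ontological side — can be pictured with the following small Python sketch; it is a simplification under our own naming assumptions, not the order-sorted formalism of \[Nebel/Smolka 89\] actually used, and all sort and attribute names are invented.

class Sort:
    """A sort in either the lexicon (categories) or the ontology (concepts)."""
    def __init__(self, name, kind, parent=None, attributes=None):
        assert parent is None or parent.kind == kind, \
            "subsumption only relates elements of one and the same knowledge type"
        self.name, self.kind, self.parent = name, kind, parent
        self.local_attributes = attributes or {}

    def subsumes(self, other):
        # A subsumes B iff B is reachable from A along the parent chain
        while other is not None:
            if other is self:
                return True
            other = other.parent
        return False

    def attributes(self):
        # attributes are inherited along the subsumption order
        inherited = self.parent.attributes() if self.parent else {}
        return {**inherited, **self.local_attributes}

# Ontology (concepts)
entity   = Sort("ENTITY", "concept")
artefact = Sort("ARTEFACT", "concept", entity, {"has_spatial_extent": True})
building = Sort("BUILDING", "concept", artefact, {"fixed_location": True})

# Lexicon (categories)
lex_item = Sort("LEXICAL-ITEM", "category")
noun     = Sort("N", "category", lex_item, {"inflects_for": "number"})

print(entity.subsumes(building))   # True
print(building.attributes())       # inherits has_spatial_extent, adds fixed_location
print(entity.subsumes(noun))       # False: lexicon and ontology are kept apart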
Individuals which result from the processing of certain linguistic expressions are called referents since they are open to further reference by linguistic means. The figure below shows the whole classification assumed as the basis of our model.</Paragraph> </Section> <Section position="6" start_page="170" end_page="173" type="metho"> <SectionTitle> 4 Word meaning </SectionTitle> <Paragraph position="0"> The task of our model is to provide an approach to the various aspects of word meaning which are responsible for the variability effects described in section 2. In order to reduce the complexity of the linguistic domain we restrict the relevant linguistic expressions to those representing simple word forms which cannot be further decomposed by morphological processes other than inflection. As a consequence, the granularity for the representation of linguistic knowledge treats basic morphemes as minimal elements. Another consequence is the width of the temporal grid which specifies the minimal 'temporal distance' between elements of discourse knowledge. We introduce temporal indices, allowing the linguistic input to be subdivided into a chain of word-level segments. Each index uniquely identifies a gap between two words and directly corresponds to a set of intermediate results in the course of processing the input. These results are what we call the context. Formally speaking, a context is a set of factors marked by a temporal index specific to a certain stage of processing at which the system is observed. Factors are functions between knowledge elements or between instances thereof. They are elements of specific contexts and therefore differ from attributes because their existence is strictly tied to a certain stage of processing associated with a temporal index. As a result of iterated forwarding a factor may remain applicable during a sequence of processing steps. Factors may be classified according to their origin. Factors which are directly derived from knowledge elements are called primitive factors. Depending on the type of knowledge elements involved we distinguish two modes of origin. Primitive factors can be ...</Paragraph> <Paragraph position="1"> * selected as instances of attributes or * established by the application of rules.</Paragraph> <Paragraph position="2"> Complex factors are derived from primitive ones by one of the following operations: * the restriction of the domain and/or range of a primitive factor * the application of set-theoretical operations on primitive factors * the functional composition of primitive factors We present this classification because our analysis of word meaning crucially depends on the notion of contextual factors. It is the main goal of our project to reconstruct word meaning as the result of the interaction of processes which cope with an effective integration of various linguistic and non-linguistic factors, primitive and complex in nature. Since we investigate word meaning under the aspect of the potential of words to refer to representations of entities of the domain, word meaning in our terminology is a complex factor which links elements from discourse knowledge (expressions representing words) to elements from episodic knowledge (referents). The temporary status of factors is responsible for the fact that for the identification of the referential meaning of a word the whole context has to be taken into account. 
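To make this terminology more tangible, here is a small, purely illustrative Python sketch (our own; the paper does not prescribe an implementation) in which a factor is a partial mapping between knowledge elements tagged with a temporal index and an origin, a context is the set of factors available at a given index, and complex factors arise by restriction or functional composition. The identifiers cat_x and conc_y are placeholders in the style of the paper's labels.

class Factor:
    def __init__(self, mapping, index, origin):
        self.mapping = dict(mapping)   # domain element -> range element
        self.index = index             # temporal index: a gap between two words
        self.origin = origin           # 'attribute', 'rule', or 'complex'

    def apply(self, x):
        return self.mapping.get(x)

    def restrict(self, domain):
        # complex factor: restriction of the domain of a primitive factor
        return Factor({k: v for k, v in self.mapping.items() if k in domain},
                      self.index, "complex")

    def compose(self, other):
        # complex factor: functional composition (self after other)
        return Factor({k: self.apply(v) for k, v in other.mapping.items()
                       if self.apply(v) is not None},
                      max(self.index, other.index), "complex")

def context(factors, index):
    # the context at a temporal index is the set of factors available at that stage
    return [f for f in factors if f.index <= index]

# A categorization factor (established by a morphological rule) and a
# lexical-meaning factor (selected from an attribute).
categorize = Factor({"saw@t3": "cat_x"}, index=3, origin="rule")
lexical_meaning = Factor({"cat_x": "conc_y"}, index=4, origin="attribute")

word_meaning = lexical_meaning.compose(categorize)     # expression -> concept
print(word_meaning.apply("saw@t3"))                    # conc_y
print(len(context([categorize, lexical_meaning], 3)))  # 1: only the rule-based factor holds at t3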
The linguistic notion of 'word meaning' therefore derives from the analysis of subsets of factors that result from the intersection of contexts present in a sufficiently large group of different uses of the same word.</Paragraph> <Section position="1" start_page="171" end_page="173" type="sub_section"> <SectionTitle> 4.1 The constituents of reference </SectionTitle> <Paragraph position="0"> In order to characterize the interrelation between factors introducing variability effects (productive factors) and those limiting the 'semantic search space' (restrictive factors) we need to examine the way in which word meaning can be decomposed into a small number of factors7. A segmentation of the interpretation process according to our classification of knowledge types leads to three components which, by application of functional composition, constitute word meaning. Since components are derived by functional decomposition of a complex factor (word meaning), they are factors as well. Components may be further analyzed as the results of set-theoretic operations on basic factors, some of which limit and some of which extend the 'semantic scope' of a word form8. Factors extending the scope of interpretation are directly responsible for the variability effects described in section 2.</Paragraph> <Paragraph position="1"> Factors constraining the scope of interpretation are the topic of chapter 4.2. The following list introduces the three components of meaning together with examples of the relevant productive factors. An interesting criterion for the classification of productive factors is whether they are established by the application of rules or selected from attributes between knowledge elements.</Paragraph> <Paragraph position="2"> 7 We do not assume contexts to be finite, but our approach relies on the fact that a finite subset of the context suffices to describe word meaning precisely enough to demonstrate the requirements a system with reasonable disambiguation capabilities has to fulfil.</Paragraph> <Paragraph position="3"> 8 In fact the same function may in one stage of the interpretation process serve as a productive factor and in another as a restrictive factor. Thus, the property of productivity or restrictivity cannot be definitively assigned to specific factors. More precisely speaking, it is a property of factors dependent on the current stage of processing represented by the temporal index.</Paragraph> <Paragraph position="4"> (1) Categorization Categorization as the first component of the chain maps expressions representing words onto lexical items. Its relevant productive factor is established by the application of morphological rules, in some cases involving morphological ambiguity.</Paragraph> <Paragraph position="5"> The following categorization of &quot;saw&quot; is selected from the lexicon because of the identity of phonological forms: The semantic specification of a lexical item and the properties of the corresponding concept are related by means of the SEM value. Two basic factors which contribute to the relevant productive factor of lexical meaning originate from attributes in the knowledge base by means of selection. 
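The feature-structure displays that accompany this passage in the printed paper are not reproduced in this version of the text; the following invented Python fragment merely suggests the kind of linkage meant — a lexical item whose SEM value points into the ontology, so that its semantic specification can be related to the properties of the corresponding concept. All attribute names and identifiers are our own placeholders, not the paper's actual entries.

# Hypothetical lexical entry for a verbal reading of "saw"/"see"; the SEM value
# is the attribute that relates lexical semantics to a concept in the ontology.
lexical_item = {
    "PHON":   "see",
    "CAT":    "V",
    "SUBCAT": [{"arg": "external", "cat": "NP"},
               {"arg": "internal", "cat": "NP"}],
    "SEM":    "conc_see",                       # pointer into the ontology
}

ontology = {
    "conc_see": {"sort": "SITUATION",
                 "roles": {"actor": "PERSON", "theme": "CONCRETE-OBJECT"}},
}

# Selecting this attribute yields one of the basic factors of lexical meaning.
print(ontology[lexical_item["SEM"]]["roles"])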
The linguistic constituent of lexical meaning may involve polysemy or polyfunctionality if it provides a range of semantic alternatives and the corresponding morphosyntactic properties for a single lexical item.</Paragraph> <Paragraph position="6"> (2a) The linguistic constituent of lexical meaning The subcategorization entry of the lexical item selects the following three9 polysemous readings for cat14: The non-linguistic constituent of lexical meaning is responsible for variabilities originating from systematic relationships between different concepts related to a single lexical item. 9 As a matter of illustration the subcategorization frame does not exhaust the range of alternative readings. It again depends on the task of the linguistic component whether the lexical item has to provide further polysemous variants such as the intransitive reading of &quot;see&quot;.</Paragraph> </Section> </Section> <Section position="7" start_page="173" end_page="174" type="metho"> <SectionTitle> (3) Individuation </SectionTitle> <Paragraph position="0"> [Figure, only partially recoverable: conceptual definitions with spatio-temporal properties, e.g. a SITUATION concept conc2 in which the filler of the actor role visually perceives the filler of the theme role by using the filler of the instrument role, a concept in which the filler of the actor role realizes the filler of the proposition role, and conc4 with a visitor role, ...]</Paragraph> <Paragraph position="1"> This last factor in the chain of meaning components is established by the application of rules of inference. It maps concepts onto referents. The productive factor of individuation is referentiality, which extends the range of possible referents a concept can be individuated to. Referentiality here serves as a cover term for the phenomena described in chapter 2.4 together with cases of conceptual variation which qualify as 'ad-hoc anaphora' because they succeed in identifying a unique referent in a specific context but cannot be characterized as instances of general principles guiding a shift in conceptual interpretation10. Additional parameters of the discourse situation (the time and location of the utterance as well as a proposition r1) allow us to establish an individuation which maps conc34 onto a referent r2 with the following properties:</Paragraph> <Paragraph position="3"> The three components of word meaning can be considered intermediate steps of the interpretation process. They may be analyzed and described in isolation since their interaction results from the way in which the range of the preceding component fits the domain of the following. Against this background, the task of the interpretation process is to find a 'path' leading from an expression to an individual which, under consideration of all the available contextual factors, qualifies as a plausible candidate for the referential meaning of the expression.</Paragraph> <Section position="1" start_page="173" end_page="174" type="sub_section"> <SectionTitle> 4.2 How the semantic scope can be restricted </SectionTitle> <Paragraph position="0"> The crucial question now is how the diverse components interact in order to reduce the range of word meaning by the exclusion of implausible variants. 
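Read as a whole, the three components can be pictured as successive, non-deterministic mappings whose composition yields the candidate referents of an occurrence. The sketch below is a deliberately naive Python rendering with invented identifiers (the paper's cat.../conc... labels play the role of the item and concept names used here); the restrictive factors discussed in section 4.2 would prune each of the intermediate sets.

def categorization(expression):
    # expression -> candidate lexical items (morphological ambiguity)
    lexicon_index = {"saw": {"saw_noun", "saw_verb", "see_verb"}}
    return lexicon_index.get(expression, set())

def lexical_meaning(item):
    # lexical item -> candidate concepts (polysemy, change of semantic type)
    readings = {"see_verb": {"SEE-EVENT", "UNDERSTAND-EVENT"},
                "saw_verb": {"SAW-EVENT"},
                "saw_noun": {"SAW-TOOL"}}
    return readings.get(item, set())

def individuation(concept, episodic_knowledge):
    # concept -> referents from episodic knowledge that instantiate it
    return {ref for ref, typ in episodic_knowledge.items() if typ == concept}

def interpret(expression, episodic_knowledge):
    # functional composition of the three components
    candidates = set()
    for item in categorization(expression):
        for concept in lexical_meaning(item):
            candidates |= individuation(concept, episodic_knowledge)
    return candidates

episodic = {"r2": "SEE-EVENT"}        # one previously established individual
print(interpret("saw", episodic))     # {'r2'}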
Here we pick out three example groups of factors which in a typical situation may support the reductive factors of the meaning components and thus help to reduce the referential potential of a word. 10 Nunberg's ham-sandwich is an example of this kind of context-specific ad-hoc anaphora.</Paragraph> </Section> </Section> <Section position="8" start_page="174" end_page="176" type="metho"> <SectionTitle> (1) Word-specific factors </SectionTitle> <Paragraph position="0"> The first group are factors which result from structural relationships expressed by morphosyntactic attributes and rules of grammar. As we mentioned in chapter 2.3, these factors only affect variabilities which are introduced by the linguistic part of our knowledge base. Factors of this group thus may help to resolve cases of morphological ambiguity, polysemy or polyfunctionality, but they have no effect on variants that result from metonymy, change of semantic type or other instances of contextual relativity.</Paragraph> <Paragraph position="1"> Consider the following part of discourse: &quot;I tried to find a possibility to escape. Then I saw a hole in the fence.&quot; We'll give a sketch of an analysis of the meaning of &quot;saw&quot; in this example on the basis of the knowledge elements introduced in the previous section. The rules of grammar suppress the nominal reading of &quot;saw&quot; since the principles of X-syntax require the constituent &quot;a hole in the fence&quot; to be 'absorbed'. Morphological rules do not support the disambiguation process. On the contrary, their productive potential causes the introduction of the variant derived from the base form &quot;see&quot;. The variant cat56 is ruled out because of incompatibilities between its subcategorization frame and the syntactic environment of &quot;saw&quot; in our example.</Paragraph> <Paragraph position="2"> The intransitive polysemous variant fails for the same reasons as the nominal homonymous variant. The transitive reading of cat56 would force an optional prepositional argument to be headed by &quot;into&quot;11. Thus, the polysemous variant conc42 can be singled out purely on the basis of word-specific factors if the rules of grammar do not account for the adjunction of a locative PP with the head &quot;in&quot;. In case the grammar licenses the existence of a prepositional adjunct, our model of the domain would have to contribute the restrictive factor that the concept associated with &quot;the fence&quot; does not fit with conditions on 'sawing'-events. Since the reductive influence of word-specific factors ends with the selection of polysemous variants, we cannot expect a further restriction of the 'semantic scope' without additionally considering other types of factors.</Paragraph> <Paragraph position="3"> 11 In order to simplify the example, this alternative does not occur in the feature structure of cat56. (2) Selectional restrictions A different group of factors belongs to the semantic level12 of our model. Factors in this group are clear instances neither of linguistic regularities nor of non-linguistic ones. They are partially linguistic and partially non-linguistic in nature and therefore considered as complex factors derived by the integration of elements from both types of knowledge.</Paragraph> <Paragraph position="4"> They may help to reduce variabilities affecting categorization or lexical meaning. 
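The interplay of the word-specific factors of group (1) with the selectional restrictions of group (2), which the following paragraphs illustrate with the 'bathroom' examples, might be approximated as two successive filters over the candidate readings. The sketch below is again only suggestive: the reading names, concept labels and the subsumption fragment are all invented, not the paper's actual feature structures.

# Candidate readings of "saw" with invented subcategorization frames and
# invented selectional restrictions on the internal argument (object).
READINGS = [
    {"id": "saw_noun", "cat": "N", "subcat": [],     "object_must_be": None},
    {"id": "see_verb", "cat": "V", "subcat": ["NP"], "object_must_be": "CONCRETE-OBJECT"},
    {"id": "saw_verb", "cat": "V", "subcat": ["NP"], "object_must_be": "SAWABLE-OBJECT"},
]

# Invented fragment of the ontology's subsumption order.
IS_A = {"HOLE": "CONCRETE-OBJECT", "WOOD-PIECE": "SAWABLE-OBJECT",
        "SAWABLE-OBJECT": "CONCRETE-OBJECT"}

def subsumed_by(concept, super_concept):
    while concept is not None:
        if concept == super_concept:
            return True
        concept = IS_A.get(concept)
    return False

def filter_readings(required_cat, object_concept):
    # (1) word-specific factors: the syntactic environment fixes the category
    survivors = [r for r in READINGS if r["cat"] == required_cat]
    # (2) selectional restrictions: the object concept must fit the required role type
    return [r for r in survivors
            if r["object_must_be"] is None
            or subsumed_by(object_concept, r["object_must_be"])]

# "Then I saw a hole in the fence": verbal environment, object denotes a HOLE.
print([r["id"] for r in filter_readings("V", "HOLE")])        # ['see_verb']
# "I saw a piece of wood ...": both verbal readings survive the selectional check.
print([r["id"] for r in filter_readings("V", "WOOD-PIECE")])  # ['see_verb', 'saw_verb']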
Yet, like word-specific factors, selectional restrictions do not constrain contextual relativity.</Paragraph> <Paragraph position="5"> Consider the following part of discourse: &quot;I saw {a piece of wood | a cup of coffee} in the bathroom.&quot; The semantic specification for the internal argument of &quot;saw&quot; leads to a concept conc42 representing a class of entities which qualify as fillers of the corresponding role in the conceptual representation of the 'sawing'-event.</Paragraph> <Paragraph position="6"> [Figure, only partially recoverable: role restrictions, for example a human being, a concrete object, a set of tools.] The compatibility between the semantic specification of the internal argument of the two polysemous readings of &quot;saw&quot; and the type specification of a role belonging to the corresponding SEM value accounts for the existence of selectional restrictions. The conceptual representation of &quot;a piece of wood&quot; must be compatible with conc42 in order to establish the lexical meaning leading to the concept SAWING-EVENT. In the case of &quot;this hand-saw saws well&quot; the external argument would, because of requirements on fillers of conceptual roles, have to be mapped onto the instrument role of conc75. The situation is more tricky if we compare the instances of the external argument of cat23 in the following examples: (1) The policeman saw an accident.</Paragraph> <Paragraph position="7"> (2) *The ball saw an accident.</Paragraph> <Paragraph position="8"> (3) The automatic traffic control camera saw an accident.</Paragraph> <Paragraph position="9"> (4) ?The morning saw an accident.</Paragraph> <Paragraph position="10"> An interesting aspect of this phenomenon is that selectional restrictions may be cancelled by contextual factors or by means of rhetorical devices. The example sentences show how difficult it might be to identify an obligatory set of selectional restrictions. Comparing (1) with (2) suggests that being an instance of the concept PERSON is a reasonable choice. Example (3) however shows that the critical property is something like 'having an optical sensing mechanism capable of detecting objects'. Sentence (4) might imply a metaphorical interpretation in spite of its apparent semantic ill-formedness. This again yields an argument in favour of a domain-driven design strategy for the semantic level linking categories and concepts.</Paragraph> <Paragraph position="11"> 12 The semantic level is the 'interface' between linguistic and non-linguistic knowledge, represented by the SEM values in lexical items together with a set of rules which attune semantic specifications of argument structure to attributes of the corresponding conceptual definitions.</Paragraph> <Paragraph position="12"> (3) The set of possible referents The last group of factors exemplified here provides the only means available to reduce the semantic scope resulting from contextual relativity. As we saw in chapter 2.4, contextual relativity is a fundamental property all referential expressions have in common. Since the 'semantic scope' introduced by variabilities of this type cannot be subdivided into a set of alternative readings, neither lexical nor conceptual information helps to restrict the range of individuation.</Paragraph> <Paragraph position="13"> The only way out of this dilemma is to derive a set of possible referents from knowledge about the domain and from information occurring in the preceding part of discourse. 
The latter calls for an investigation of discourse properties on the basis of pragmatic devices such as the Gricean principles of conversation. Bridging phenomena13 as generalizations of anaphoric binding are promising candidates for an approach to the determination of possible referents. C. Sidner emphasizes: &quot;anaphor interpretation can be studied as a computational process that uses the already existing specification of a noun phrase to find the specification of an anaphor&quot; \[Sidner 83, p.269\]. The actual limits of a set of possible referents thus very much depend on the inferential capabilities of our system to reconstruct the conceptual relationships underlying text coherence. The notion of focus presented in the work of Sidner certainly plays a crucial role in the reduction of the computational complexity that a computation of all possible bindings would involve if realistic discourse situations were to be considered.</Paragraph> </Section> </Paper>