<?xml version="1.0" standalone="yes"?> <Paper uid="P98-2216"> <Title>The Computational Lexical Semantics of Syntagmatic Relations</Title> <Section position="4" start_page="0" end_page="1328" type="metho"> <SectionTitle> 2 Approaches to Syntagmatic Relations </SectionTitle> <Paragraph position="0"> Syntagmatic relations, also known as collocations, are used differently by lexicographers, linguists and statisticians to denote similar but not identical classes of expressions.</Paragraph> <Paragraph position="1"> The traditional approach to collocations has been lexicographic. Here dictionaries provide information about what is unpredictable or idiosyncratic. Benson (1989) synthesizes Hausmann's studies on collocations, calling expressions such as commit murder, compile a dictionary, inflict a wound, etc. &quot;fixed combinations, recurrent combinations&quot; or &quot;collocations&quot;. In Hausmann's terms (1979) a collocation is composed of two elements, a base (&quot;Basis&quot;) and a collocate (&quot;Kollokator&quot;); the base is semantically autonomous whereas the collocate cannot be semantically interpreted in isolation. In other words, the set of lexical collocates which can combine with a given base is not predictable, and therefore collocations must be listed in dictionaries.</Paragraph> <Paragraph position="2"> It is hard to say that there has been a real focus on collocations from a linguistic perspective. The lexicon has been broadly sacrificed by both English-speaking schools and continental European schools.</Paragraph> <Paragraph position="3"> The scientific agenda of the former has until recently been largely dominated by syntactic issues, whereas the latter was more concerned with pragmatic aspects of natural languages. The focus has been on grammatical collocations such as adapt to, aim at, look for. Lakoff (1970) distinguishes a class of expressions which cannot undergo certain operations, such as nominalization or causativization: the problem is hard; *the hardness of the problem; *the problem hardened. The restriction on the application of certain syntactic operations can help define collocations such as hard problem, for example. Mel'čuk's treatment of collocations will be detailed below.</Paragraph> <Paragraph position="4"> In recent years, there has been a resurgence of statistical approaches applied to the study of natural languages. Sinclair (1991) states that &quot;a word which occurs in close proximity to a word under investigation is called a collocate of it. ... Collocation is the occurrence of two or more words within a short space of each other in a text&quot;. The problem is that with such a definition of collocations, even when improved, 2 one identifies not only collocations but also free-combining pairs frequently appearing together, such as lawyer-client or doctor-hospital. However, researchers nowadays seem to agree that combining statistical with symbolic approaches leads to quantifiable improvements (Klavans and Resnik, 1996).</Paragraph> <Paragraph position="5"> The Meaning Text Theory Approach The Meaning Text Theory (MTT) is a generator-oriented lexical grammatical formalism.
Lexical knowledge is encoded in an entry of the Explanatory Combinatorial Dictionary (ECD), each entry being divided into three zones: the semantic zone (a semantic network representing the meaning of the entry in terms of more primitive words), the syntactic zone (the grammatical properties of the entry) and the lexical combinatorics zone (containing the values of the Lexical Functions (LFs) 3). LFs are central to the study of collocations: a lexical function F is a correspondence which associates a lexical item L, called the key word of F, with a set of lexical items F(L), the value of F (Mel'čuk, 1988). 4 We focus here on syntagmatic LFs describing co-occurrence relations such as pay attention, legitimate complaint, from a distance. 5 Heylen et al. (1993) have worked out some cases which provide a starting point for assigning LFs.</Paragraph> <Paragraph position="6"> They distinguish four types of syntagmatic LFs:</Paragraph> <Paragraph position="8"> The MTT approach is very interesting as it provides a model of production well suited for generation, with its different strata, and also a lot of lexical-semantic information. It seems nevertheless that, although the lexicographic approach of Mel'čuk and Zholkovsky has been applied, among other languages, to Russian, French, German and English, the collocational information is listed in a static way. We believe that one of the main drawbacks of the approach is the lack of any predictable calculi on the possible expressions which can collocate with each other semantically.</Paragraph> </Section> <Section position="5" start_page="1328" end_page="1331" type="metho"> <SectionTitle> 3 The Computational Lexical </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="1328" end_page="1330" type="sub_section"> <SectionTitle> Semantic Approach </SectionTitle> <Paragraph position="0"> In order to account for the continuum we find in natural languages, we argue for a continuum perspective, spanning the range from free-combining words to idioms, with semantic collocations and idiosyncrasies in between, as defined in (Viegas and Bouillon, 1994):
* free-combining words (the girl ate candies)
* semantic collocations (fast car; long book) 6
* idiosyncrasies (large coke; green jealousy)
* idioms (to kick the (proverbial) bucket)
Formally, we go from a purely compositional approach in &quot;free-combining words&quot; to a non-compositional approach in idioms. In between, a (semi-)compositional approach is still possible. (Viegas and Bouillon, 1994) showed that we can reduce the set of what are conventionally considered idiosyncrasies by differentiating &quot;true&quot; idiosyncrasies (difficult to derive or calculate) from expressions which have well-defined calculi, are compositional in nature, and have been called semantic collocations. In this paper, we further distinguish their idiosyncrasies into:
* restricted semantic co-occurrence, where the meaning of the co-occurrence is semi-compositional between the base and the collocate (strong coffee, pay attention, heavy smoker, ...)
* restricted lexical co-occurrence, where the meaning of the collocate is compositional but has a lexically idiosyncratic behavior (lecture ... student; rancid butter; sour milk).</Paragraph> <Paragraph position="2"> We provide below examples of restricted semantic co-occurrences in (1), and of restricted lexical co-occurrences in (2).</Paragraph> <Paragraph position="3"> Restricted semantic co-occurrence The semantics of the combination of the entries is semi-compositional. In other words, there is an entry in the lexicon for the base (the semantic collocate is encoded inside the base), whereas we cannot directly refer to the sense of the semantic collocate in the lexicon, as it is not part of its senses. We assign the co-occurrence a new semi-compositional sense, where the sense of the base is composed with a new sense for the collocate.</Paragraph> <Paragraph position="4"> In examples (1), the LSFs (LSFIntensity, LSFOper, ...) are equivalent (and some identical) to the LFs provided in the ECD. The notion of LSF is the same as that of LFs. However, LSFs and LFs differ in two ways: i) conceptually, LSFs are organized into an inheritance hierarchy; ii) formally, they are rules, and produce a new entry composed of two entries, the base with the collocate. As such, the new composed entry is ready for processing. These LSFs signal a compositional syntax and a semi-compositional semantics. For instance, in (1a), a heavy smoker is somebody who smokes a lot, and not a &quot;fat&quot; person. It has been shown that one cannot code in the lexicon all uses of heavy for heavy smoker, heavy drinker, etc. Therefore, our lexicon does not give heavy a sense &quot;a lot&quot;, or a sense &quot;strong&quot; to be composed with wine. It is well known that such co-occurrences are lexically marked; if we allowed a proliferation of senses in our lexicons, multiplying ambiguities in analysis and choices in generation, there would be no limit to what could be combined, and we could end up generating *heavy coffee with the sense &quot;strong&quot; for heavy.</Paragraph> <Paragraph position="5"> The left-hand side of the rule LSFIntensity specifies an &quot;Intensity-Attribute&quot; applied to an event which accepts aspectual features of duration. In (1a), the event is smoke. LSFIntensity also provides the syntax-semantics interface, allowing an Adj-Noun construction to be either predicative (the car is red) or attributive (the red car). We therefore need to restrict the co-occurrence to the attributive use only, as the predicative use is not allowed: the smoker is heavy has a literal or a figurative meaning, but not a collocational one.</Paragraph> <Paragraph position="6"> In (1b) again, there is no sense in the dictionary for pay which would mean concentrate. The rule LSFOper makes the verb a verbal operator. No further restriction is required.</Paragraph>
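To make the mechanics concrete, here is a minimal sketch, ours and not the paper's implementation, of an LSF as a rule composing a base entry with a collocate into a new, ready-to-process entry; the names (Entry, lsf_intensity) and the feature layout are illustrative assumptions.

```python
# Illustrative sketch only: an LSF as a rule composing a base entry with
# a collocate into a new semi-compositional entry, as described for (1a).
from dataclasses import dataclass, field

@dataclass
class Entry:
    key: str                                   # lemma, e.g. "smoker"
    semantics: dict                            # simplified semantic zone
    gram: dict = field(default_factory=dict)   # syntactic restrictions

def lsf_intensity(base: Entry, collocate: str) -> Entry:
    """LSFIntensity: an Intensity-Attribute applied to a durative event."""
    # left-hand side of the rule: the base must denote an event that
    # accepts aspectual features of duration (e.g. SMOKE)
    assert base.semantics.get("aspect") == "durative"
    return Entry(
        key=f"{collocate} {base.key}",
        semantics={**base.semantics, "attribute": "Intensity"},
        # restrict the Adj-Noun construction to attributive use only:
        # 'the heavy smoker' is licensed, '*the smoker is heavy' is not
        # (in the collocational reading)
        gram={"construction": "attributive-only"},
    )

smoker = Entry("smoker", {"event": "SMOKE", "aspect": "durative"})
heavy_smoker = lsf_intensity(smoker, "heavy")  # composed entry for (1a)
```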
<Paragraph position="7"> Restricted lexical co-occurrence The semantics of the combination of the entries is compositional. In other words, there are entries in the lexicon for both the base and the collocate, with the same senses as in the co-occurrence. Therefore, we can directly refer to the senses of the co-occurring words. What we are capturing here is a lexical idiosyncrasy; in other words, we specify that we should prefer this particular combination of words. This is useful for analysis, where it can help disambiguate a sense, and is most relevant for generation; it can be viewed as a preference among the paradigmatic family of the co-occurrence.</Paragraph> <Paragraph position="8"> [base: #0, collocate: [key: &quot;student&quot;, sense: n1, freq: [value: 9]]] ... In examples (2), the LSFSyn produces a new entry composed of two or more entries. As such, the new entry is ready for processing. LSFSyn signals a compositional syntax and a compositional semantics, and restricts the lexemes to be used in the composition. We can directly refer to the sense of the collocate, as it is part of the lexicon.</Paragraph> <Paragraph position="9"> In (2a) the entry for truth specifies one co-occurrence (plain truth), where the sense of plain here is adj2 (obvious), and not, say, adj3 (flat). The syntagmatic expression inherits all the zones of the entry for &quot;plain&quot;, sense adj2; we only code here the irregularities. For instance, &quot;plain&quot; can be used as &quot;plainer&quot;, &quot;plainest&quot; in its adj2 entry, but not as such within the lexical co-occurrence (*plainer truth, *plainest truth); we therefore must block these forms in the collocate, as expressed in (comp: no, superl: no). In other words, we will not generate &quot;plainer/plainest truth&quot;. Examples (2b) and (2c) illustrate complex entries, as there is no direct grammatical dependency between the base and the collocate. In (2b) for instance, we prefer to associate teacher in the context of a pupil rather than any other element belonging to the paradigmatic family of teacher, such as professor or instructor. Formally, there is no difference between the two types of co-occurrences. In both cases, we specify the base (which is the word described in the entry itself), the collocate, the frequency of the co-occurrence in some corpus, and the LSF which links the base with the collocate. Using the formalism of typed feature structures, both cases are of type Co-occurrence, as defined below: Co-occurrence = [base: Entry, collocate: Entry, freq: Frequency]</Paragraph> </Section>
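Read as typed feature structures, the definition above can be sketched as follows; this is a paraphrase in Python dataclasses, not the paper's formalism, with Frequency simplified to a number and the gram overrides of (2a) to a small dict.

```python
# Sketch of the Co-occurrence type using dataclasses in place of typed
# feature structures; field names follow the text, the rest is assumed.
from dataclasses import dataclass, field

@dataclass
class Collocate:
    key: str                                  # e.g. "plain"
    sense: str                                # e.g. "adj2" (obvious), not adj3 (flat)
    gram: dict = field(default_factory=dict)  # only the irregularities

@dataclass
class Cooccurrence:
    base: str             # the word described in the entry itself
    collocate: Collocate
    freq: float           # frequency of the pair in some corpus

# (2a): 'plain truth' inherits everything from plain/adj2 but blocks the
# comparative and superlative (*plainer truth, *plainest truth)
plain_truth = Cooccurrence(
    base="truth",
    collocate=Collocate("plain", "adj2", gram={"comp": False, "superl": False}),
    freq=9.0,
)
```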
<Section position="2" start_page="1330" end_page="1330" type="sub_section"> <SectionTitle> 3.1 Processing of Syntagmatic Relations </SectionTitle> <Paragraph position="0"> We utilize an efficient constraint-based control mechanism called Hunter-Gatherer (HG) (Beale, 1997).</Paragraph> <Paragraph position="1"> HG allows us to mark certain compositions as being dependent on each other and then forget about them. Thus, once we have two lexicon entries that we know go together, HG will ensure that they do. HG also gives preference to co-occurring compositions. In analysis, meaning representations constructed using co-occurrences are preferred over those that are not, and, in generation, realizations involving co-occurrences are preferred over equally correct, but non-cooccurring, realizations. 7 The real work in processing is making sure that we have the correct two entries to put together. In restricted semantic co-occurrences, the co-occurrence does not have the correct sense in the lexicon. For example, when the phrase heavy smoker is encountered, the lexicon entry for heavy would not contain the correct sense. (1a) could be used to create the correct entry. In (1a), the entry for smoker contains the key, or trigger, heavy. This signals the analyzer to produce another sense for heavy smoker. This sense will contain the same syntactic information present in the &quot;old&quot; heavy, except for any modifications listed in the &quot;gram&quot; section (see (1a)). The semantics of the new sense comes directly from the LSF. Generation works the same, except that the trigger is different: the input to generation will be a SMOKE event along with an Intensity-Attribute.</Paragraph> <Paragraph position="2"> (1a), which would be used to realize the SMOKE event, would trigger LSFIntensity, which has the Intensity-Attribute in its left-hand side, thus confirming the production of heavy.</Paragraph> <Paragraph position="3"> Restricted lexical co-occurrences are easier in the sense that the correct entry already exists in the lexicon. The analyzer/generator simply needs to detect the co-occurrence and add the constraint that the corresponding senses be used together. In examples like (2b), there is no direct grammatical or semantic relationship between the words that co-occur. Thus, the entire clause, sentence or even text may have to be searched for the co-occurrence. In practice, we limit such searches to the sentence level.</Paragraph> <Paragraph position="4"> 7 The selection of co-occurrences is part of the lexical process; in other words, if there are reasons not to choose a co-occurrence, because of the presence of modifiers or for stylistic reasons, the generator will not generate the co-occurrence.</Paragraph> </Section> <Section position="3" start_page="1330" end_page="1331" type="sub_section"> <SectionTitle> 3.2 Acquisition of Syntagmatic Relations </SectionTitle> <Paragraph position="0"> The acquisition of syntagmatic relations is knowledge intensive, as it requires human intervention. In order to minimize this cost we rely on conceptual tools such as lexical rules and on the LSF inheritance hierarchy.</Paragraph> <Paragraph position="1"> Lexical Rules in Acquisition The acquisition of restricted semantic co-occurrences can be minimized by detecting rules between different classes of co-occurrences (modulo the presence of derived forms in the lexicon with the same or subsumed semantics). Consider the following examples:
N <=> V + Adv: resentment <=> resent bitterly; smoker <=> smoke heavily; eater <=> eat *bigly
Adv <=> Adv + Adj-ed: strongly <=> strongly opposed; morally <=> morally obliged
We see that after having acquired, with human intervention, co-occurrences belonging to the A + N class (bitter resentment, heavy smoker, big eater), we can use lexical rules to derive the V + Adv class and also the Adv + Adj-ed class.</Paragraph> <Paragraph position="2"> Lexical rules are a useful conceptual tool for extending a dictionary. (Viegas et al., 1996) used derivational lexical rules to extend a Spanish lexicon. We apply their approach to the production of restricted semantic co-occurrences. Note that eat bigly will be produced but then rejected, as the form bigly does not exist in a dictionary. The rules overgenerate co-occurrences; this is less of a problem for analysis than for generation. To use these derived restricted co-occurrences in generation, the output of the lexical rule processor must be checked. This can be done in different ways: dictionary check, corpus check and ultimately human check.</Paragraph>
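As an illustration, ours rather than the paper's code, the A + N to V + Adv rule and its dictionary check might look as follows; the derivational maps and the toy dictionary are assumptions standing in for real morphology and a real lexicon.

```python
# Toy sketch of a derivational lexical rule: A + N co-occurrences
# (heavy smoker) yield candidate V + Adv co-occurrences (smoke heavily);
# overgenerated forms are rejected by a dictionary check.
NOUN_TO_VERB = {"resentment": "resent", "smoker": "smoke", "eater": "eat"}
ADJ_TO_ADV = {"bitter": "bitterly", "heavy": "heavily", "big": "bigly"}
DICTIONARY = {"resent", "smoke", "eat", "bitterly", "heavily"}  # no 'bigly'

def derive_v_adv(adj: str, noun: str):
    """Derive a V + Adv co-occurrence from an acquired A + N one."""
    verb, adv = NOUN_TO_VERB.get(noun), ADJ_TO_ADV.get(adj)
    if verb is None or adv is None:
        return None
    # the rule overgenerates: reject forms absent from the dictionary
    if verb not in DICTIONARY or adv not in DICTIONARY:
        return None  # e.g. 'eat *bigly' is produced, then rejected
    return (verb, adv)

assert derive_v_adv("heavy", "smoker") == ("smoke", "heavily")
assert derive_v_adv("big", "eater") is None  # *bigly fails the check
```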
<Paragraph position="3"> Other classes, such as the ones below, can be extracted using lexico-statistical tools, such as in (Smadja, 1993), and then checked by a human.</Paragraph> <Paragraph position="4"> V + N: pay attention, meet an obligation, commit an offence, ...</Paragraph> <Paragraph position="5"> N + N: dance marathon, marriage ceremony, object of derision, ...</Paragraph> <Paragraph position="6"> LSFs and Inheritance We take advantage of 1) the semantics encoded in the lexemes, and 2) an inheritance hierarchy of LSFs. We briefly illustrate this notion of an LSF inheritance hierarchy. For instance, the left-hand side of LSFChangeState specifies that it applies to foods (solid or liquid) which are human-processed, and produces the collocates rancid (English) and rancio (Spanish). It could therefore apply to milk, butter, or wine. The rule would end up producing rancid milk, rancid butter, or vino rancio (rancid wine), which is fine in Spanish. We therefore need to further distinguish LSFChangeState into LSFChangeStateSolid and LSFChangeStateLiquid. This restricts the application of the rule, by going down the hierarchy, so that it produces rancid butter. This enables us to factor out information common to several entries, and can be applied to both types of co-occurrences. We only have to code in the co-occurrence the information relevant to the combination; the rest is inherited from its entry in the dictionary.</Paragraph>
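A compact sketch of such a hierarchy, in illustrative Python with assumed feature names (the paper encodes this in its LSF formalism, not in code):

```python
# Sketch of the LSF inheritance hierarchy: subtypes narrow the rule's
# left-hand-side condition so collocates are not over-applied.
# Feature names ('food', 'state', 'human-processed') are assumptions.
class LSFChangeState:
    collocates = {"en": "rancid", "es": "rancio"}
    def applies_to(self, sem: dict) -> bool:
        # left-hand side: human-processed food, solid or liquid
        return bool(sem.get("food") and sem.get("human-processed"))

class LSFChangeStateSolid(LSFChangeState):
    def applies_to(self, sem: dict) -> bool:
        return super().applies_to(sem) and sem.get("state") == "solid"

class LSFChangeStateLiquid(LSFChangeState):
    def applies_to(self, sem: dict) -> bool:
        return super().applies_to(sem) and sem.get("state") == "liquid"

butter = {"food": True, "human-processed": True, "state": "solid"}
assert LSFChangeStateSolid().applies_to(butter)      # licenses 'rancid butter'
assert not LSFChangeStateLiquid().applies_to(butter)
```
</Section> </Section> </Paper>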