<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-3010">
  <Title>A Hybrid Relational Approach for WSD - First Results</Title>
  <Section position="4" start_page="0" end_page="55" type="metho">
    <SectionTitle>
2 Related work
</SectionTitle>
    <Paragraph position="0"> Many approaches have been proposed for WSD, but only a few are designed for specific applications, such as MT. Existing multilingual approaches can be classified as (a) knowledge-based approaches, which make use of linguistic knowledge manually codified or extracted from lexical resources (Pedersen, 1997; Dorr and Katsova, 1998); (b) corpus-based approaches, which make use of knowledge automatically acquired from text using machine learning algorithms (Lee, 2002; Vickrey et al., 2005); and (c) hybrid approaches, which employ techniques from the two other approaches (Zinovjeva, 2000).</Paragraph>
    <Paragraph position="1">  Hybrid approaches potentially explore the advantages of both other strategies, yielding accurate and comprehensive systems. However, they are quite rare, even in monolingual contexts (Stevenson and Wilks, 2001, e.g.), and they are not able to integrate and use knowledge coming from corpus and other resources during the learning process.</Paragraph>
    <Paragraph position="2"> In fact, current hybrid approaches usually employ knowledge sources in pre-processing steps, and then use machine learning algorithms to combine disambiguation evidence from those sources. This strategy is necessary due to the limitations of the formalism used to represent examples in the machine learning process: the propositional formalism, which structures data in attribute-value vectors. Even though it is known that great part of the knowledge regarding to languages is relational (e.g., syntactic or semantic relations among words in a sentence) (Mooney, 1997), the propositional formalism traditionally employed makes unfeasible the representation of substantial relational knowledge and the use of this knowledge during the learning process.</Paragraph>
    <Paragraph position="3"> According to the attribute-value representation, one attribute has to be created for every feature, and the same structure has to be used to characterize all the examples. In order to represent the syntactic relations between every pair of words in a sentence, e.g., it will be necessary to create at least one attribute for each possible relation (Figure 1). This would result in an enormous number of attributes, since the possibilities can be many in distinct sentences.</Paragraph>
    <Paragraph position="4"> Also, there could be more than one pair with the same relation.</Paragraph>
    <Paragraph position="5"> Sentence: John gave to Mary a big cake.</Paragraph>
    <Paragraph position="7"> Given that some types of information are not available for certain instances, many attributes will have null values. Consequently, the representation of the sample data set tends to become highly sparse. It is well-known that sparseness on data ensue serious problems to the machine learning process in general (Brown and Kros, 2003). Certainly, data will become sparser as more knowledge about the examples is considered, and the problem will be even more critical if relational knowledge is used.</Paragraph>
    <Paragraph position="8"> Therefore, at least three relevant problems arise from the use of a propositional representation in corpus-based and hybrid approaches: (a) the limitation on its expressiveness power, making it difficult to represent relational and other more complex knowledge; (b) the sparseness in data; and (c) the lack of integration of the evidences provided by examples and linguistic knowledge.</Paragraph>
  </Section>
  <Section position="5" start_page="55" end_page="56" type="metho">
    <SectionTitle>
3 A hybrid relational approach for WSD
</SectionTitle>
    <Paragraph position="0"> We propose a novel hybrid approach for WSD based on a relational representation of both examples and linguistic knowledge. This representation is considerably more expressive, avoids sparseness in data, and allows the use of these two types of evidence during the learning process.</Paragraph>
    <Section position="1" start_page="55" end_page="55" type="sub_section">
      <SectionTitle>
3.1 Sample data
</SectionTitle>
      <Paragraph position="0"> We address the disambiguation of 7 verbs selected according to the results of a corpus study (Specia, 2005). To build our sample corpus, we collected 200 English sentences containing each of the verbs from a corpus comprising fiction books. In a previous step, each sentence was automatically tagged with the translation of the verb, part-of-speech and lemmas of all words, and subject-object syntactic relations with respect to the verb (Specia et al., 2005). The set of verbs, their possible translations, and the accuracy of the most frequent translation are shown in Table 1.</Paragraph>
    </Section>
    <Section position="2" start_page="55" end_page="56" type="sub_section">
      <SectionTitle>
3.2 Inductive Logic Programming
</SectionTitle>
      <Paragraph position="0"> We utilize Inductive Logic Programming (ILP) (Muggleton, 1991) to explore relational machine learning. ILP employs techniques of both Machine Learning and Logic Programming to build first-order logic theories from examples and background knowledge, which are also represented by means of first-order logic clauses. It allows the efficient representation of substantial knowledge about the problem, and allows this knowledge to be used during the learning process. The general idea underlying ILP is: Given:  - a set of positive and negative examples E = E+ [?] E- a predicate p specifying the target relation to be learned  - knowledge K of a certain domain, described according to a language Lk, which specifies which other predicates qi can be part of the definition of p. The goal is: to induce a hypothesis (or theory) h for p, with relation to E and K, which covers most of the E+, without covering the E-, that is, K [?] h E+ and K [?] h E-.</Paragraph>
      <Paragraph position="1"> To implement our approach we chose Aleph (Srinivasan, 2000), an ILP system which provides a complete relational learning inference engine and various customization options. We used the following options, which correspond to the Progol mode (Muggleton, 1995): bottom-up search, non-incremental and non-interactive learning, and learning based only on positive examples. Fundamentally, the default inference engine induces a theory  iteratively by means of the following steps: 1. One instance is randomly selected to be generalized. null 2. A more specific clause (bottom clause) explaining the selected example is built. It consists of the representation of all knowledge about that example. null 3. A clause that is more generic than the bottom clause is searched, by means of search and generalization strategies (best first search, e.g.). 4. The best clause found is added to the theory  and the examples covered by such clause are removed from the sample set. If there are more instances in the sample set, return to step 1.</Paragraph>
    </Section>
    <Section position="3" start_page="56" end_page="56" type="sub_section">
      <SectionTitle>
3.3 Knowledge sources
</SectionTitle>
      <Paragraph position="0"> The choice, acquisition, and representation of syntactic, semantic, and pragmatic knowledge sources (KSs) were our main concerns at this stage. The general architecture of the system, showing our 7 groups of KSs, is illustrated in Figure 2.</Paragraph>
      <Paragraph position="1"> Several of our KSs have been traditionally employed in monolingual WSD (e.g., Agirre and Stevenson, 2006), while other are specific for MT.</Paragraph>
      <Paragraph position="2"> Some of them were extracted from our sample corpus (Section 3.1), while others were automatically extracted from lexical resources1. In what follows, we briefly describe, give the generic definition and examples of each KS, taking sentence (1), for the &amp;quot;to come&amp;quot;, as example.</Paragraph>
      <Paragraph position="3"> (1) &amp;quot;If there is such a thing as reincarnation, I would not mind coming back as a squirrel&amp;quot;.</Paragraph>
      <Paragraph position="4"> KS1: Bag-of-words - a list of +-5 words (lemmas) surrounding the verb for every sentence (sent_id).</Paragraph>
    </Section>
  </Section>
  <Section position="6" start_page="56" end_page="58" type="metho">
    <SectionTitle>
1 Michaelis® and Password® English-Portuguese Dictionaries, LDOCE (Procter, 1978), and WordNet (Miller, 1990).
</SectionTitle>
    <Paragraph position="1"> KS2: Part-of-speech (POS) tags of content words in a +-5 word window surrounding the verb.</Paragraph>
    <Paragraph position="2"> KS3: Subject and object syntactic relations with respect to the verb under consideration.</Paragraph>
    <Paragraph position="3"> KS4: Context words represented by 11 collocations with respect to the verb: 1st preposition to the right, 1st and 2nd words to the left and right, 1st noun, 1st adjective, and 1st verb to the left and right.</Paragraph>
    <Paragraph position="4"> KS5: Selectional restrictions of verbs and semantic features of their arguments, given by LDOCE. Verb restrictions are expressed by lists of semantic features required for their subject and object, while these arguments are represented with their features.</Paragraph>
    <Paragraph position="5"> The hierarchy for LDOCE feature types defined by Bruce and Guthrie (1992) is used to account for restrictions established by the verb for features that are more generic than the features describing the words in the subject / object roles in the sentence. Ontological relations extracted from WordNet (Miller, 1990) are also used: if the restrictions imposed by the verb are not part of the description of its arguments, synonyms or hypernyms of those arguments that meet the restrictions are considered. KS6: Idioms and phrasal verbs, indicating that the verb occurring in a given context could have a specific translation.</Paragraph>
    <Paragraph position="6"> bag(sent_id, list_of_words).</Paragraph>
    <Paragraph position="7"> bag(sent1,[mind, not, will, i, reincarnation, back, as, a, squirrel]) has_pos(sent_id, word_position, pos).</Paragraph>
    <Paragraph position="8"> has_pos(sent1, first_content_word_left, nn).</Paragraph>
    <Paragraph position="9"> has_pos(sent1, second_content_word_left, vbp).</Paragraph>
    <Paragraph position="10"> ...</Paragraph>
    <Paragraph position="11"> has_rel(sent_id, subject_word, object_word).</Paragraph>
    <Paragraph position="12"> has_rel(sent1, i, nil).</Paragraph>
    <Paragraph position="13"> rest(verb, subj_restrition, obj_ restriction ,translation) rest(come, [], nil, voltar).</Paragraph>
    <Paragraph position="14"> rest(come, [animal,human], nil, vir). ...</Paragraph>
    <Paragraph position="15"> feature(noun, sense_id, features).</Paragraph>
    <Paragraph position="16"> feature(reincarnation, 0_1, [abstract]).</Paragraph>
    <Paragraph position="17"> feature(squirrel, 0_0, [animal]).</Paragraph>
    <Paragraph position="18"> has_collocation(sent_id, collocation_type, collocation) has_collocation(sent1, word_right_1, back).</Paragraph>
    <Paragraph position="19"> has_collocation(sent1, word_left_1, mind). ...</Paragraph>
    <Paragraph position="20"> relation(word1, sense_id1, word2 ,sense_id2).</Paragraph>
    <Paragraph position="21"> hyper(reincarnation, 1, avatar, 1).</Paragraph>
    <Paragraph position="22"> synon(rebirth, 2, reincarnation, -1).</Paragraph>
    <Paragraph position="23">  KS7: A count of the overlapping words in dictionary definitions for the possible translations of the verb and the words surrounding it in the sentence, relative to the total number of words. The representation of all KSs for each example is independent of the other examples. Therefore, the number of features can be different for different sentences, without resulting in sparseness in data. In order to use the KSs, we created a set of rules for each KS. These rules are not dependent on particular words or instances. They can be very simple, as in the example shown below for bag-of-words, or more complex, e.g., for selectional restrictions. Therefore, KSs are represented by means of rules and facts (rules without conditions), which can be intensional, i.e., it can contain variables, making the representation more expressive.</Paragraph>
    <Paragraph position="24"> Besides the KSs, the other main input to the system is the set of examples. Since all knowledge about them is expressed by the KSs, the representation of examples is very simple, containing only the example identifier (of the sentence, in our case, such as, &amp;quot;sent1&amp;quot;), and the class of that example (in  exp(verbal_expression, translation) exp('come about', acontecer).</Paragraph>
    <Paragraph position="25"> exp('come about', chegar). ...</Paragraph>
    <Paragraph position="26"> highest_overlap(sent_id, translation, overlapping). highest_overlap(sent1, voltar, 0.222222).</Paragraph>
    <Paragraph position="27"> highest_overlap(sent2, chegar, 0.0857143).</Paragraph>
    <Paragraph position="28"> has_bag(Sent,Word) :bag(Sent,List), member(Word,List).</Paragraph>
    <Paragraph position="29">  our case, the translation of the verb in that sentence). null In Aleph's default induction mode, the order of the training examples plays an important role. One example is taken at a time, according to its order in the training set, and a rule can be produced based on that example. Since examples covered by a certain rule are removed from the training set, certain examples will not be used to produce rules. Induction methods employing different strategies in which the order is irrelevant will be exploited in future work.</Paragraph>
    <Paragraph position="30"> In order to produce a theory, Aleph also requires &amp;quot;mode definitions&amp;quot;, i.e., the specification of the predicates p and q (Section 3.2). For example, the first mode definition below states that the predicate p to be learned will consist of a clause sense(sent_id, translation), which can be instantiated only once (1). The other two definitions state the predicates q, has_colloc(sent_id, colloc_id, colloc), with at most 11 instantiations, and has_bag(sent_id, word), with at most 10 instantiations. That is, the predicates in the conditional piece of the rules in the theory can consist of up to 11 collocations and a bag of up to 10 words. One mode definition must be created for each KS.</Paragraph>
    <Paragraph position="31"> Based on the examples and background knowledge, the inference engine will produce a set of symbolic rules. Some of the rules induced for the verb &amp;quot;to come&amp;quot;, e.g., are illustrated in the box below. null The first rule checks if the first preposition to the right of the verb is &amp;quot;out&amp;quot;, assigning the translation &amp;quot;sair&amp;quot; if so. The second rule verifies if the subject-object arguments satisfy the verb restrictions, i.e, if the subject has the features &amp;quot;animal&amp;quot; or &amp;quot;human&amp;quot;, and the object has the feature &amp;quot;concrete&amp;quot;. Alternatively, it verifies if the sentence contains the phrasal verb &amp;quot;come at&amp;quot;. Rule 3 also tests the verb selectional restrictions and the first word to the right of the verb.</Paragraph>
  </Section>
class="xml-element"></Paper>