<?xml version="1.0" standalone="yes"?> <Paper uid="C02-1061"> <Title>Antonymy and Conceptual Vectors</Title> <Section position="7" start_page="0" end_page="0" type="evalu"> <SectionTitle> FICATION and ORDER. </SectionTitle>

<Paragraph position="0"> The function AntiC(Ci, Vcontext) returns, for a given concept Ci and the context defined by Vcontext, the complementary antonym vector in the list.</Paragraph>

<Section position="1" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 4.3 Construction of the antonym vector: the Anti function </SectionTitle>

<Paragraph position="0"> We define the relative antonymy function AntiR(A, C), which returns the opposite vector of A in the context C, and the absolute antonymy function AntiA(A) = AntiR(A, A). The usage of AntiA is delicate because the lexical item is considered as being its own context; we will see in 4.4.1 that this may cause real problems because of sense selection. Let us now focus on the construction of the antonym vector from two conceptual vectors: Vitem, the item we want to oppose, and Vc, the context (referent).5</Paragraph> </Section>

<Section position="2" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 4.3.2 Construction of the Antonym Vector </SectionTitle>

<Paragraph position="0"> The method is to focus on the salient notions in Vitem and Vc. If these notions can be opposed, then the antonym should have the inverse ideas in the same proportion. That leads us to define this function as a normalised sum of the complementary antonym vectors of the concepts, each weighted by a factor P.</Paragraph>

<Paragraph position="2"> We crafted the definition of the weight P after several experiments. We noticed that the function could not be symmetric (we cannot reasonably have AntiR(V('hot'), V('temperature')) = AntiR(V('temperature'), V('hot'))). That is why we introduce this power: to put more stress on the ideas present in the vector we want to oppose.</Paragraph>

<Paragraph position="3"> We also note that the more conceptual6 the vector is, the more important this power should be.</Paragraph>

<Paragraph position="4"> That is why the power is the variation coefficient7, which is a good clue for "conceptuality". Finally, we introduce the function max because an idea present in the item has to be opposed in the antonym even if it is not present in the referent. For example, if we want the antonym of 'cold' in the 'temperature' context, the weight of 'cold' has to be important even though it is not present in 'temperature'.</Paragraph>

5 ⊕ is the normalised sum V = A ⊕ B, with vi = (xi + yi) / ‖V‖.
7 The variation coefficient is the ratio σ/μ, with σ as the standard deviation and μ as the arithmetic mean.

</Section>
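The formula itself is not recoverable from this copy of the paper, so the following Python sketch only illustrates the description above: conceptual vectors as numpy arrays, an assumed anti_c(i, v_context) lookup standing for the AntiC function, the weight taken as the max of the item and referent components, and the power taken as the variation coefficient of the item vector. None of these choices should be read as the paper's verbatim definition.

```python
import numpy as np

def variation_coefficient(v: np.ndarray) -> float:
    """Footnote 7: the ratio of the standard deviation to the arithmetic
    mean; a high value is a clue that the vector is strongly 'conceptual'."""
    mean = float(np.mean(v))
    return float(np.std(v)) / mean if mean > 0.0 else 0.0

def anti_r(v_item: np.ndarray, v_c: np.ndarray, anti_c) -> np.ndarray:
    """Sketch of AntiR: oppose the salient notions of v_item in the context
    v_c.  anti_c(i, v_context) stands for the AntiC lookup above and is
    assumed to return the complementary antonym vector of concept i."""
    context = v_item + v_c                  # context handed to AntiC (assumption)
    power = variation_coefficient(v_item)   # asymmetric: stresses the vector we oppose
    result = np.zeros_like(v_item, dtype=float)
    for i, (xi, yi) in enumerate(zip(v_item, v_c)):
        # max(): an idea present in the item is opposed even when it is
        # absent from the referent ('cold' vs. 'temperature').
        weight = max(xi, yi) ** power
        if weight > 0.0:
            result += weight * anti_c(i, context)
    norm = np.linalg.norm(result)
    return result / norm if norm > 0.0 else result   # normalised sum (footnote 5)
```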
<Section position="3" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 4.4 Lexical Items and Vectors: Problem and Solutions </SectionTitle>

<Paragraph position="0"> The goal of the AntiLex functions is to return the antonym of a lexical item. They are defined with the Anti function, so we need tools which allow the passage between lexical items and vectors. This transition is difficult because of polysemy, i.e. how to choose the right relation between an item and a vector; in other words, how to choose the right meaning of the word.</Paragraph> </Section>

<Section position="4" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 4.4.1 Transition Lexical Items → Conceptual Vectors </SectionTitle>

<Paragraph position="0"> As said before, antonymy is relative to a context. In some cases, this context may not be sufficient to select a symmetry axis for antonymy.</Paragraph>

<Paragraph position="1"> To catch the intended meaning of the item and, if it differs from the context, the selected meaning of the referent, we use the strong contextualisation method. It computes, for a given item, a vector in which some meanings are favoured over others according to the context. In this way, the context vector is itself contextualised.</Paragraph>

<Paragraph position="2"> This contextualisation shows the problem caused by the absolute antonymy function AntiA. In this case, the method computes the vector of the word item in the context item. This is not a problem if the item has only one definition because, in this case, the strong contextualisation has no effect. Otherwise, the returned conceptual vector will stress the main idea it contains, which is not necessarily the appropriate one.</Paragraph> </Section>

<Section position="5" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 4.4.2 Transition Conceptual Vectors → Lexical Items </SectionTitle>

<Paragraph position="0"> This transition is easier: we just have to compute the neighbourhood of the antonym vector Vant to obtain the items which are in thematic antonymy with Vitem. With this method we have, for instance: it is not important to contextualise the concept LIFE, because we can consider that, for every context, the opposite vector is the same. In complementary antonymy, the closest item is DEATH. This result looks satisfactory. We can see that the distance between the antonymy vector and DEATH is not null; this is because our method is not, and cannot be, an exact method.</Paragraph>

<Paragraph position="1"> The goal of our function is to build the best (closest) antonym vector possible. The construction of the generative vectors is the second explanation: generative vectors are interdependent, as their construction is based on an ontology. To take this fact into account, we do not use boolean vectors, with which we would obtain exactly the same vector. The more polysemic the term is, the farther away the closest item is, as can be seen in the first two examples.</Paragraph>

<Paragraph position="2"> Even if the antonymy potential measure is correct, we cannot take the closest lexical item to Vanti as the antonym. We have to consider morphological features: simply speaking, if the antonym of a verb is wanted, the result will be better if a verb is retrieved.</Paragraph> </Section>

<Section position="6" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 4.5 Antonymy Evaluation Measure </SectionTitle>

<Paragraph position="0"> Besides computing an antonym vector, it seems relevant to assess whether two lexical items can be antonyms. To answer this question, we have created a measure of antonymy evaluation. Let A and B be two vectors; the question is precisely to know whether they can reasonably be antonyms in the context of C.</Paragraph>

<Paragraph position="1"> The antonymy measure MantiEval is the angle between the sum of A and B and the sum of AntiCR(A, C) and AntiCR(B, C). Thus, we define the evaluation measure as MantiEval(A, B) = DA(A ⊕ B, AntiCR(A, C) ⊕ AntiCR(B, C)), where DA is the angular distance.</Paragraph>

<Paragraph position="2"> The antonymy measure is a pseudo-distance. It verifies the properties of reflexivity, symmetry and triangular inequality only for the subset of items which do not accept antonyms; in this case, notwithstanding the noise level, the measure is equal to the angular distance. In the general case, it does not verify reflexivity.</Paragraph>
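A minimal sketch of this measure, assuming numpy vectors and a relative antonymy function anti_r such as the one sketched after 4.3.2 (names and signatures are illustrative, not the paper's own code):

```python
import numpy as np

def angular_distance(x: np.ndarray, y: np.ndarray) -> float:
    """D_A(x, y): the angle between two conceptual vectors.  Components
    being positive, it stays within [0, pi/2]."""
    cos = float(np.dot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def m_anti_eval(a: np.ndarray, b: np.ndarray, c: np.ndarray, anti_r) -> float:
    """Angle between the sum of A and B and the sum of their opposites in
    the context C; the smaller the value, the more antonymous A and B."""
    return angular_distance(a + b, anti_r(a, c) + anti_r(b, c))
```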
<Paragraph position="3"> The conceptual vector components are positive, and we have the property DistAnti ∈ [0, π/2]. The smaller the measure, the more 'antonymous' the two lexical items are. However, it would be a mistake to consider that two synonyms would be at a distance of about π/2: two lexical items at π/2 have not much in common.8 We would rather see here the illustration that two antonyms share some ideas, specifically those which are not opposable or those which are opposable with a strong activation. Only specific activated concepts would participate in the opposition. A distance of π/2 between two items should rather be interpreted as meaning that these two items do not share many ideas, a kind of anti-synonymy. This result confirms the fact that antonymy is not the exact inverse of synonymy but looks more like a 'negative synonymy' where the items remain quite related. To sum up, the antonym of w is not a word that does not share ideas with w, but a word that opposes some features of w.</Paragraph>

<Paragraph position="4"> In the following examples, the context has been omitted for clarity's sake. In these cases, the context is the sum of the vectors of the two items.</Paragraph>

<Paragraph position="5"> The above examples confirm what has been presented. The concepts EXISTENCE and NON-EXISTENCE are very strong antonyms in complementary antonymy. The effects of polysemy may explain why the lexical items 'existence' and 'non-existence' are less antonymous than their related concepts. In complementary antonymy, CAR is its own antonym. The antonymy measure between CAR and EXISTENCE is an example of our previous remark about vectors sharing few ideas: around π/2, this measure is close to the angular distance (we have DA(existence, car) = 1.464). We could consider using this function to look in a conceptual lexicon for the best antonyms; however, the computation cost (around a minute on a P4 at 1.3 GHz) would be prohibitive.</Paragraph>

8 This case is mostly theoretical, as there is no language where two lexical items are without any possible relation.

<SectionTitle> 5 Action on learning and method evaluation </SectionTitle>

<Paragraph position="0"> The function is now used in the learning process. We can use the evaluation measure to show the increase of coherence between terms:

MantiEvalC                        new     old
'existence', 'non-existence'      0.33    0.44
'existence', 'car'                1.1     1.06
'car', 'car'                      0.3     0.407

There is no change for concepts because they are not learned. On the contrary, the antonymy evaluation measure improves on items. The example shows that 'existence' and 'non-existence' have been largely modified: the two items are now stronger antonyms than before, and the vector basis is more coherent. Of course, we can test these results on the 71,000 lexical items which have been modified more or less directly by the antonymy function. We have run the test on about 10% of the concerned items and found an improvement of the angular distance through MantiEvalC of 0.1 radian.</Paragraph>
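A hypothetical harness for this test: only the 10% sampling and the before/after comparison through the measure come from the text; the function and parameter names below are invented for the sketch.

```python
import random

def mean_coherence_gain(pairs, old_vec, new_vec, measure, sample_ratio=0.1):
    """Average decrease (in radians) of the evaluation measure over a random
    sample of item pairs.  old_vec / new_vec map lexical items to their
    conceptual vectors before and after learning; measure(a, b) plays the
    role of MantiEvalC.  All names here are illustrative."""
    chosen = random.sample(pairs, max(1, int(len(pairs) * sample_ratio)))
    gains = [measure(old_vec[a], old_vec[b]) - measure(new_vec[a], new_vec[b])
             for a, b in chosen]
    return sum(gains) / len(gains)
```

</Section> </Section> </Paper>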