<?xml version="1.0" standalone="yes"?> <Paper uid="P06-1134"> <Title>Word Sense and Subjectivity</Title> <Section position="5" start_page="1065" end_page="1067" type="metho"> <SectionTitle> 3 Human Judgment of Word Sense Subjectivity </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="1065" end_page="1066" type="sub_section"> <Paragraph position="0"> To explore our hypothesis that subjectivity may be associated with word senses, we developed a manual annotation scheme for assigning subjectivity labels to WordNet senses,3 and performed an inter-annotator agreement study to assess its reliability. Senses are classified as S(ubjective), O(bjective), or B(oth). Classifying a sense as S means that, when the sense is used in a text or conversation, we expect it to express subjectivity; we also expect the phrase or sentence containing it to be subjective.</Paragraph> <Paragraph position="1"> We saw a number of subjective expressions in Section 2. A subset is repeated here, along with relevant WordNet senses. In the display of each sense, the first part shows the synset, gloss, and any examples. The second part (marked with =>) shows the immediate hypernym.</Paragraph> <Paragraph position="2"> His alarm grew.</Paragraph> <Paragraph position="3"> alarm, dismay, consternation - (fear resulting from the awareness of danger) => fear, fearfulness, fright - (an emotion experienced in anticipation of some specific pain or danger (usually accompanied by a desire to flee or fight)) He was boiling with anger.</Paragraph> <Paragraph position="4"> seethe, boil - (be in an agitated emotional state; "The customer was seething with anger") => be - (have the quality of being; (copula, used with an adjective or a predicate noun); "John is rich"; "This is not a good answer") What's the catch?
catch - (a hidden drawback; "it sounds good but what's the catch?") => drawback - (the quality of being a hindrance; "he pointed out all the drawbacks to my plan") That doctor is a quack.</Paragraph> <Paragraph position="5"> quack - (an untrained person who pretends to be a physician and who dispenses medical advice) => doctor, doc, physician, MD, Dr., medico Before specifying what we mean by an objective sense, we give examples.</Paragraph> <Paragraph position="6"> The alarm went off.</Paragraph> <Paragraph position="7"> alarm, warning device, alarm system - (a device that signals the occurrence of some undesirable event) => device - (an instrumentality invented for a particular purpose; "the device is small enough to wear on your wrist"; "a device intended to conserve water") The water boiled.</Paragraph> <Paragraph position="8"> boil - (come to the boiling point and change from a liquid to vapor; "Water boils at 100 degrees Celsius") => change state, turn - (undergo a transformation or a change of position or action; "We turned from Socialism to Capitalism"; "The people turned against the President when he stole the election") He sold his catch at the market.</Paragraph> <Paragraph position="9"> catch, haul - (the quantity that was caught; "the catch was only 10 fish") => indefinite quantity - (an estimated quantity) The duck's quack was loud and brief.</Paragraph> <Paragraph position="10"> quack - (the harsh sound of a duck) => sound - (the sudden occurrence of an audible event; "the sound awakened them") While we expect phrases or sentences containing subjective senses to be subjective, we do not necessarily expect phrases or sentences containing objective senses to be objective. Consider the following examples: Will someone shut that damn alarm off? Can't you even boil water? While these sentences contain objective senses of alarm and boil, the sentences are subjective nonetheless. Their subjectivity is due not to alarm and boil, however, but to punctuation, sentence forms, and other words in the sentence. Thus, classifying a sense as O means that, when the sense is used in a text or conversation, we do not expect it to express subjectivity and, if the phrase or sentence containing it is subjective, the subjectivity is due to something else.</Paragraph> <Paragraph position="11"> Finally, classifying a sense as B means it covers both subjective and objective usages, e.g.: absorb, suck, imbibe, soak up, sop up, suck up, draw, take in, take up - (take in, also metaphorically; "The sponge absorbs water well"; "She drew strength from the minister's words") Manual subjectivity judgments were added to a total of 354 senses (64 words). One annotator, Judge 1 (a co-author), tagged all of them. A second annotator (Judge 2, who is not a co-author) tagged a subset for an agreement study, presented next.</Paragraph> </Section> <Section position="2" start_page="1066" end_page="1067" type="sub_section"> <SectionTitle> 3.1 Agreement Study </SectionTitle> <Paragraph position="0"> For the agreement study, Judges 1 and 2 independently annotated 32 words (138 senses). 16 words have both S and O senses and 16 do not (according to Judge 1). Among the 16 that do not have both S and O senses, 8 have only S senses and 8 have only O senses. All of the subsets are balanced between nouns and verbs. Table 1 shows the contingency table for the two annotators' judgments on this data.
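To make the agreement statistics concrete, percent agreement and Kappa can be computed from such a contingency table as in the following minimal Python sketch (an illustration added here, not part of the original study; the cell counts are placeholders, not the actual Table 1 values):

def agreement_and_kappa(table):
    # table[t1][t2] = number of senses tagged t1 by Judge 1 and t2 by Judge 2.
    tags = sorted(table)
    total = sum(table[a][b] for a in tags for b in tags)
    observed = sum(table[t][t] for t in tags) / total
    # Chance agreement, from each judge's marginal tag distribution.
    p1 = {a: sum(table[a][b] for b in tags) / total for a in tags}
    p2 = {b: sum(table[a][b] for a in tags) / total for b in tags}
    expected = sum(p1[t] * p2[t] for t in tags)
    return observed, (observed - expected) / (1 - expected)

# Placeholder counts over the tags S, O, B, and U (138 senses in all):
table = {"S": {"S": 40, "O": 2, "B": 1, "U": 3},
         "O": {"S": 3, "O": 60, "B": 1, "U": 4},
         "B": {"S": 1, "O": 2, "B": 8, "U": 2},
         "U": {"S": 2, "O": 3, "B": 1, "U": 5}}
observed, kappa = agreement_and_kappa(table)
print("agreement = %.3f, kappa = %.3f" % (observed, kappa))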
In addition to S, O, and B, the annotation scheme also permits U(ncertain) tags. Overall agreement is 85.5%, with a Kappa (k) value of 0.74. For 12.3% of the senses, at least one annotator's tag is U. If we consider these cases to be borderline and exclude them from the study, percent agreement increases to 95% and k rises to 0.90. Thus, annotator agreement is especially high when both are certain.</Paragraph> <Paragraph position="1"> Considering only the 16-word subset with both S and O senses (according to Judge 1), k is 0.75, and for the 16-word subset for which Judge 1 gave only S or only O senses, k is 0.73. Thus, the two subsets are of comparable difficulty.</Paragraph> <Paragraph position="2"> The two annotators also independently annotated the 20 ambiguous nouns (117 senses) of the SENSEVAL-3 English lexical sample task used in Section 5. For this tagging task, U tags were not allowed, to create a definitive gold standard for the experiments. Even so, the k value for them is 0.71, which is not substantially lower. The distributions of Judge 1's tags for all 20 words can be found in Table 3 below.</Paragraph> <Paragraph position="3"> We conclude this section with examples of disagreements that illustrate sources of uncertainty. First, uncertainty arises when subjective senses are missing from the dictionary. The labels for the senses of noun assault are (O:O, O:O, O:O, O:UO), i.e., the first three senses were labeled O by both annotators, and for the fourth, the second annotator was not sure but was leaning toward O. For verb assault there is a subjective sense: attack, round, assail, lash out, snipe, assault (attack in speech or writing) "The editors of the left-leaning paper attacked the new House Speaker" However, there is no corresponding sense for noun assault. A missing sense may lead an annotator to try to see subjectivity in an objective sense. Second, uncertainty can arise in weighing hypernym against sense. It is fine for a synset to imply just S or O, while the hypernym implies both (the synset specializes the more general concept). However, consider the following, which was tagged (O:UB).</Paragraph> <Paragraph position="4"> attack - (a sudden occurrence of an uncontrollable condition; "an attack of diarrhea") => affliction - (a cause of great suffering and distress) While the sense is only about the condition, the hypernym highlights subjective reactions to the condition. One annotator judged only the sense (giving tag O), while the second considered the hypernym as well (giving tag UB).</Paragraph> </Section> </Section> <Section position="6" start_page="1067" end_page="1069" type="metho"> <SectionTitle> 4 Automatic Assessment of Word Sense Subjectivity </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="1067" end_page="1068" type="sub_section"> <Paragraph position="0"> Encouraged by the results of the agreement study, we devised a method targeting the automatic annotation of word senses for subjectivity.</Paragraph> <Paragraph position="1"> The main idea behind our method is that we can derive information about a word sense based on information drawn from words that are distributionally similar to the given word sense. This idea relates to the unsupervised word sense ranking algorithm described in (McCarthy et al., 2004).
Note, however, that (McCarthy et al., 2004) used the information about distributionally similar words to approximate corpus frequencies for word senses, whereas we target the estimation of a property of a given word sense (the "subjectivity").</Paragraph> <Paragraph position="2"> Starting with a given ambiguous word w, we first find the distributionally similar words using the method of (Lin, 1998) applied to the automatically parsed texts of the British National Corpus. Let DSW = dsw1, dsw2, ..., dswn be the list of top-ranked distributionally similar words, sorted in decreasing order of their similarity.</Paragraph> <Paragraph position="3"> Next, for each sense wsi of the word w, we determine the similarity with each of the words in the list DSW, using a WordNet-based measure of semantic similarity (wnss). Although a large number of such word-to-word similarity measures exist, we chose to use the (Jiang and Conrath, 1997) measure, since it was found both to be efficient and to provide the best results in previous experiments involving word sense ranking (McCarthy et al., 2004). (Note that, unlike the above measure of distributional similarity, which measures similarity between words rather than word senses, here we needed a similarity measure that also takes into account word senses as defined in a sense inventory such as WordNet.) For distributionally similar words that are themselves ambiguous, we use the sense that maximizes the similarity score. The similarity scores associated with each word dswj are normalized so that they add up to one across all possible senses of w, which results in a score described by the following formula:

sim(wsi, dswj) = wnss(wsi, dswj) / Σk wnss(wsk, dswj)

where the sum in the denominator ranges over all senses wsk of w.</Paragraph> <Paragraph position="7"> A selection process can also be applied so that a distributionally similar word belongs only to one sense. In this case, for a given sense wi we use only those distributionally similar words with which wi has the highest similarity score across all the senses of w. We refer to this case as similarity-selected, as opposed to similarity-all, which refers to the use of all distributionally similar words for all senses.</Paragraph> <Paragraph position="8"> Once we have a list of similar words associated with each sense wsi and the corresponding similarity scores sim(wsi, dswj), we use an annotated corpus to assign subjectivity scores to the senses.</Paragraph> <Paragraph position="9"> The corpus we use is the MPQA Opinion Corpus, which consists of over 10,000 sentences from the world press annotated for subjective expressions (all three types of subjective expressions described in Section 2).6</Paragraph> <Paragraph position="10"> Algorithm 1 is our method for calculating sense subjectivity scores:

Algorithm 1 Word Sense Subjectivity Score
Input: Word sense wi
Input: Distributionally similar words DSW = {dswj | j = 1..n}
1: subj(wi) = 0
2: totalsim = 0
3: for j = 1 to n do
4:   Instsj = all instances of dswj in the MPQA corpus
5:   for k in Instsj do
6:     if k is in a subj. expr. in MPQA corpus then
7:       subj(wi) += sim(wi, dswj)
8:     else if k is not in a subj. expr. in MPQA corpus then
9:       subj(wi) -= sim(wi, dswj)
10:    end if
11:    totalsim += sim(wi, dswj)
12:  end for
13: end for
14: subj(wi) = subj(wi) / totalsim

The subjectivity score is a value in the interval [-1,+1], with +1 corresponding to highly subjective and -1 corresponding to highly objective.
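For concreteness, Algorithm 1 and the normalized similarity above can be rendered in Python roughly as follows (a minimal sketch, not the authors' implementation; wnss and mpqa_instances are hypothetical stand-ins for the Jiang-Conrath similarity and an MPQA corpus lookup):

def normalized_sim(sense, dsw, all_senses, wnss):
    # Normalize wnss so the scores for dsw sum to one across all senses of w.
    denom = sum(wnss(s, dsw) for s in all_senses)
    return wnss(sense, dsw) / denom if denom else 0.0

def sense_subjectivity(sense, all_senses, dsw_list, wnss, mpqa_instances):
    # mpqa_instances(word) yields one boolean per corpus instance of word:
    # True if the instance falls inside a subjective expression.
    subj, totalsim = 0.0, 0.0
    for dsw in dsw_list:
        sim = normalized_sim(sense, dsw, all_senses, wnss)
        for in_subj_expr in mpqa_instances(dsw):
            subj += sim if in_subj_expr else -sim   # lines 6-10 of Algorithm 1
            totalsim += sim                         # line 11
    return subj / totalsim if totalsim else 0.0     # line 14; score in [-1,+1]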
The score is a sum of sim values: sim(wi, dswj) is added for each instance of dswj that occurs in a subjective expression, and subtracted for each instance that does not.</Paragraph> <Paragraph position="11"> Note that the annotations in the MPQA corpus are for subjective expressions in context. Thus, the data is somewhat noisy for our task because, as discussed in Section 3, objective senses may appear in subjective expressions. Nonetheless, we hypothesized that subjective senses tend to appear more often in subjective expressions than objective senses do, and we use the appearance of words in subjective expressions as evidence of sense subjectivity.</Paragraph> <Paragraph position="12"> (Wiebe, 2000) also makes use of an annotated corpus, but in a different approach: given a word w and a set of distributionally similar words DSW, that method assigns a subjectivity score to w equal to the conditional probability that any member of DSW is in a subjective expression. Moreover, the end task of that work was to annotate words, while our end task is the more difficult problem of annotating word senses for subjectivity.</Paragraph> </Section> <Section position="2" start_page="1068" end_page="1069" type="sub_section"> <SectionTitle> 4.1 Evaluation </SectionTitle> <Paragraph position="0"> The evaluation of the algorithm is performed against the gold standard of 64 words (354 word senses), using Judge 1's annotations, as described in Section 3.</Paragraph> <Paragraph position="1"> For each sense of each word in the set of 64 ambiguous words, we use Algorithm 1 to determine a subjectivity score. A subjectivity label is then assigned depending on the value of this score with respect to a pre-selected threshold. While a threshold of 0 seems like a sensible choice, we perform the evaluation for different thresholds ranging across the [-1,+1] interval, and correspondingly determine the precision of the algorithm at different points of recall. (Specifically, in the list of word senses ranked by their subjectivity score, we assign a subjectivity label to the top N word senses. The precision is then determined as the number of correct subjectivity label assignments out of all N assignments, while the recall is measured as the number of correct subjective senses out of all the subjective senses in the gold standard data set. By varying the value of N from 1 to the total number of senses in the corpus, we can derive precision and recall curves.) Note that the word senses for which none of the distributionally similar words are found in the MPQA corpus are not included in this evaluation (excluding 82 senses), since in this case a subjectivity score cannot be calculated. The evaluation is therefore performed on a total of 272 word senses.</Paragraph> <Paragraph position="3"> As a baseline, we use an "informed" random assignment of subjectivity labels, which randomly assigns S labels to word senses in the data set, such that the maximum number of S assignments equals the number of correct S labels in the gold standard data set. This baseline guarantees a maximum recall of 1 (which under true random conditions might not be achievable).
Correspondingly, given the controlled distribution of S labels across the data set in the baseline setting, the precision is equal for all eleven recall points, and is determined as the total number of correct subjective assignments divided by the size of the data set.8</Paragraph> <Paragraph position="4"> Results and parameter settings. There are two aspects of the sense subjectivity scoring algorithm that can influence the label assignment and, correspondingly, the evaluation. First, as indicated above, after calculating the semantic similarity of the distributionally similar words with each sense, we can either use all the distributionally similar words for the calculation of the subjectivity score of each sense (similarity-all), or we can use only those that lead to the highest similarity (similarity-selected). Interestingly, this aspect can drastically affect the algorithm accuracy. The setting where a distributionally similar word can belong to only one sense significantly improves the algorithm performance. Figure 1 plots the interpolated precision at eleven points of recall for similarity-all, similarity-selected, and the baseline. As shown in this figure, the precision-recall curves for our algorithm are clearly above the "informed" baseline, indicating the ability of our algorithm to automatically identify subjective word senses.</Paragraph> <Paragraph position="5"> Second, the number of distributionally similar words considered in the first stage of the algorithm can vary, and might therefore influence the output of the algorithm. We experiment with two different values, namely 100 and 160 top-ranked distributionally similar words. Table 2 shows the break-even points (the value at which precision and recall become equal) for the four different settings that were evaluated, with results that are almost double those of the informed baseline. As it turns out, for weaker versions of the algorithm (i.e., similarity-all), the size of the set of distributionally similar words can significantly impact the performance of the algorithm. However, for the already improved similarity-selected version, this parameter does not seem to have an influence, as similar results are obtained regardless of the number of distributionally similar words. This is in agreement with the finding of (McCarthy et al., 2004) that, in their word sense ranking method, a larger set of neighbors did not influence the algorithm accuracy.</Paragraph>
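The ranked evaluation described above can be sketched as follows (illustrative Python, not the authors' code; scores maps each sense to its subjectivity score, gold maps each sense to True if its gold-standard label is S, and at least one subjective sense is assumed):

def precision_recall_curve(scores, gold):
    ranked = sorted(scores, key=scores.get, reverse=True)
    total_subjective = sum(1 for s in ranked if gold[s])
    curve, correct = [], 0
    for n, sense in enumerate(ranked, start=1):
        correct += gold[sense]
        # Label the top n senses S; record (recall, precision) at this cutoff.
        curve.append((correct / total_subjective, correct / n))
    return curve

def break_even(curve):
    # Break-even point: the value where precision and recall become equal
    # (here, the cutoff where they are closest).
    recall, precision = min(curve, key=lambda rp: abs(rp[0] - rp[1]))
    return (recall + precision) / 2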
</Section> </Section> <Section position="7" start_page="1069" end_page="1070" type="metho"> <SectionTitle> 5 Automatic Subjectivity Annotations for Word Sense Disambiguation </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="1069" end_page="1070" type="sub_section"> <Paragraph position="0"> The final question we address is concerned with the potential impact of subjectivity on the quality of a word sense classifier. To answer this question, we augment an existing data-driven word sense disambiguation system with a feature reflecting the subjectivity of the examples where the ambiguous word occurs, and evaluate the performance of the new subjectivity-aware classifier as compared to the traditional context-based sense classifier.</Paragraph> <Paragraph position="1"> We use a word sense disambiguation system that integrates both local and topical features. Specifically, we use the current word and its part-of-speech, a local context of three words to the left and right of the ambiguous word, the parts-of-speech of the surrounding words, and a global context implemented through sense-specific keywords, determined as a list of at most five words occurring at least three times in the contexts defining a certain word sense. This feature set is similar to the one used by (Ng and Lee, 1996), as well as by a number of SENSEVAL systems. The parameters for sense-specific keyword selection were determined through cross-fold validation on the training set. The features are integrated in a Naive Bayes classifier, which was selected mainly for its performance in previous work showing that it can lead to a state-of-the-art disambiguation system given the features we consider (Lee and Ng, 2002).</Paragraph>
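As a rough sketch of how such a subjectivity feature can be added (using scikit-learn for illustration; extract_context_features and sentence_subjectivity are hypothetical helpers standing in for the feature extractor described above and the sentence-level subjectivity classifier introduced below):

from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

def featurize(instance, with_subjectivity=True):
    # Local and topical features: target word and POS, +/-3 word window,
    # surrounding POS tags, and sense-specific keywords.
    feats = extract_context_features(instance)
    if with_subjectivity:
        # One extra nominal feature: the S/O/B label assigned to the
        # sentence (plus two surrounding sentences) around the target.
        feats["subjectivity"] = sentence_subjectivity(instance)
    return feats

def train_wsd(train_instances, train_senses):
    vec = DictVectorizer()
    X = vec.fit_transform([featurize(i) for i in train_instances])
    return vec, MultinomialNB().fit(X, train_senses)

def predict_sense(vec, clf, instance):
    return clf.predict(vec.transform([featurize(instance)]))[0]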
<Paragraph position="3"> The experiments are performed on the set of ambiguous nouns from the SENSEVAL-3 English lexical sample evaluation (Mihalcea et al., 2004). We use the rule-based subjective sentence classifier of (Riloff and Wiebe, 2003) to assign an S, O, or B label to all the training and test examples pertaining to these ambiguous words. This subjectivity annotation tool targets sentences rather than words or paragraphs, and therefore the tool is fed with sentences. We also include a surrounding context of two additional sentences, because the classifier considers some contextual information.</Paragraph> <Paragraph position="4"> Our hypothesis motivating the use of a sentence-level subjectivity classifier is that instances of subjective senses are more likely to be in subjective sentences, and thus that sentence subjectivity is an informative feature for the disambiguation of words having both subjective and objective senses.</Paragraph> <Paragraph position="5"> For each ambiguous word, we perform two separate runs: one using the basic disambiguation system described earlier, and another using the subjectivity-aware system that includes the additional subjectivity feature. Table 3 shows the results obtained for these 20 nouns, including the word sense disambiguation accuracy for the two different systems, the most frequent sense baseline, and the subjectivity/objectivity split among the word senses (according to Judge 1). The words in the top half of the table are the ones that have both S and O senses, and those in the bottom are the ones that do not. If we were to use Judge 2's tags instead of Judge 1's, only one word would change: source would move from the top to the bottom of the table.</Paragraph> <Paragraph position="6"> [Table 3: Word sense disambiguation results with and without subjectivity information, for the set of ambiguous nouns in SENSEVAL-3.] For the words that have both S and O senses, the addition of the subjectivity feature alone can bring a significant error rate reduction of 4.3% (p < 0.05, paired t-test). Interestingly, no improvements are observed for the words with no subjective senses; on the contrary, the addition of the subjectivity feature results in a small degradation. Overall, for the entire set of ambiguous words, the error reduction is measured at 2.2% (significant at p < 0.1, paired t-test).</Paragraph> <Paragraph position="7"> In almost all cases, the words with both S and O senses show improvement, while the others show small degradation or no change. This suggests that if a subjectivity label is available for the words in a lexical resource (e.g., using Algorithm 1 from Section 4), such information can be used to decide whether to use a subjectivity-aware system, thereby improving disambiguation accuracy.</Paragraph> <Paragraph position="8"> One of the exceptions is disc, which saw a small benefit despite not having any subjective senses.</Paragraph> <Paragraph position="9"> As it happens, the first sense of disc is phonograph record.</Paragraph> <Paragraph position="10"> phonograph record, phonograph recording, record, disk, disc, platter - (sound recording consisting of a disc with continuous grooves; formerly used to reproduce music by rotating while a phonograph needle tracked in the grooves) The improvement can be explained by observing that many of the training and test sentences containing this sense are labeled subjective by the classifier, and indeed this sense frequently occurs in subjective sentences such as "This is anyway a stunning disc." Another exception is the noun plan, which did not benefit from the subjectivity feature, although it does have a subjective sense. This can perhaps be explained by the data set for this word, which seems to be particularly difficult, as the basic classifier itself could not improve over the most frequent sense baseline.</Paragraph> <Paragraph position="11"> The other word that did not benefit from the subjectivity feature is the noun source, whose only subjective sense did not appear in the sense-annotated data, leading to an "objective only" set of examples.</Paragraph> </Section> </Section> </Paper>