<?xml version="1.0" standalone="yes"?>
<Paper uid="P05-2008">
  <Title>Using Emoticons to reduce Dependency in Machine Learning Techniques for Sentiment Classification</Title>
  <Section position="4" start_page="43" end_page="45" type="metho">
    <SectionTitle>
2 Dependencies in Sentiment Classification
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="43" end_page="43" type="sub_section">
      <SectionTitle>
2.1 Experimental Setup
</SectionTitle>
      <Paragraph position="0"> In this section, we describe experiments we have carried out to determine the influence of domain, topic and time on machine learning based sentiment classification. The experiments use our own implementation of a Na&amp;quot;ive Bayes classifier and Joachim's (1999) SVMlight implementation of a Support Vector Machine classifier. The models were trained using unigram features, accounting for the presence of feature types in a document, rather than the frequency, as Pang et al. (2002) found that this is the most effective strategy for sentiment classification.</Paragraph>
      <Paragraph position="1"> When training and testing on the same set, the mean accuracy is determined using three-fold crossvalidation. In each case, we use a paired-sample t-test over the set of test documents to determine whether the results produced by one classifier are statistically significantly better than those from another, at a confidence interval of at least 95%.</Paragraph>
    </Section>
    <Section position="2" start_page="43" end_page="43" type="sub_section">
      <SectionTitle>
2.2 Topic Dependency
</SectionTitle>
      <Paragraph position="0"> Engstr&amp;quot;om (2004) demonstrated how machine-learning techniques for sentiment classification can be topic dependent. However, that study focused on a three-way classification (positive, negative and neutral). In this paper, for uniformity across different data sets, we focus on only positive and negative sentiment. This experiment also provides an opportunity to evaluate the Na&amp;quot;ive Bayes classifier as the previous work used SVMs.</Paragraph>
      <Paragraph position="1"> We use subsets of a Newswire dataset (kindly pro- null Accuracies, in percent. Best performance on a test set for each model is highlighted in bold.</Paragraph>
      <Paragraph position="2"> vided by Roy Lipski of Infonic Ltd.) that relate to the topics of Finance (FIN), Mergers and Aquisitions (M&amp;A) and a mixture of both topics (MIX). Each subset contains further subsets of articles of positive and negative sentiment (selected by independent trained annotators), each containing 100 stories. We trained a model on a dataset relating to one topic and tested that model using the other topics. Figure 1 shows the results of this experiment. The tendency seems to be that performance in a given topic is best if the training data is from the same topic. For example, the Finance-trained SVM classifier achieved an accuracy of 78.8% against articles from Finance, but only 72.7% when predicting the sentiment of articles from M&amp;A. However, statistical testing showed that the results are not significantly different when training on one topic and testing on another. It is interesting to note, though, that providing a dataset of mixed topics (the sub-corpus MIX) does not necessarily reduce topic dependency.</Paragraph>
      <Paragraph position="3"> Indeed, the performance of the classifiers suffers a great deal when training on mixed data (confidence interval 95%).</Paragraph>
    </Section>
    <Section position="3" start_page="43" end_page="43" type="sub_section">
      <SectionTitle>
2.3 Domain Dependency
</SectionTitle>
      <Paragraph position="0"> We conducted an experiment to compare the accuracy when training a classifier on one domain (newswire articles or movie reviews from the Polarity 1.0 dataset used by Pang et al. (2002)) and testing on the other domain. In Figure 2, we see a clear indication that models trained on one domain do not perform as well on another domain. All differences are significant at a confidence interval of 99.9%.</Paragraph>
    </Section>
    <Section position="4" start_page="43" end_page="44" type="sub_section">
      <SectionTitle>
2.4 Temporal Dependency
</SectionTitle>
      <Paragraph position="0"> To investigate the effect of time on sentiment classification, we constructed a new set of movie re- null Accuracies, in percent. Best performance on a test set for each model is highlighted in bold.</Paragraph>
      <Paragraph position="1"> views, following the same approach used by Pang et al. (2002) when they created the Polarity 1.0 dataset. The data source was the Internet Movie Review Database archive1 of movie reviews. The reviews were categorised as positive or negative using automatically extracted ratings. A review was ignored if it was not written in 2003 or 2004 (ensuring that the review was written after any in the Polarity 1.0 dataset). This procedure yielded a corpus of 716 negative and 2,669 positive reviews. To create the Polarity 20042 dataset we randomly selected 700 negative reviews and 700 positive reviews, matching the size and distribution of the Polarity 1.0 dataset. The next experiment evaluated the performance of the models first against movie reviews from the same time-period as the training set and then against reviews from the other time-period. Figure 3 shows the resulting accuracies.</Paragraph>
      <Paragraph position="2"> These results show that while the models perform well on reviews from the same time-period as the training set, they are not so effective on reviews from other time-periods (confidence interval 95%). It is also apparent that the Polarity 2004 dataset performs worse than the Polarity 1.0 dataset (confidence inter- null val 99.9%). A possible reason for this is that Polarity 2004 data is from a much smaller time-period than that represented by Polarity 1.0.</Paragraph>
      <Paragraph position="3"> 3 Sentiment Classification using  One way of overcoming the domain, topic and time problems we have demonstrated above would be to find a source of much larger and diverse amounts of general text, annotated for sentiment. Users of  observed in Usenet articles, in percent. For example, 2.435% of downloaded Usenet articles contained a wink emoticon. electronic methods of communication have developed visual cues that are associated with emotional states in an attempt to state the emotion that their text represents. These have become known as smileys or emoticons and are glyphs constructed using the characters available on a standard keyboard, representing a facial expression of emotion -- see Figure 4 for some examples. When the author of an electronic communication uses an emoticon, they are effectively marking up their own text with an emotional state. This marked-up text can be used to train a sentiment classifier if we assume that a smile indicates generally positive text and a frown indicates generally negative text.</Paragraph>
    </Section>
    <Section position="5" start_page="44" end_page="45" type="sub_section">
      <SectionTitle>
3.1 Emoticon Corpus Construction
</SectionTitle>
      <Paragraph position="0"> We collected a corpus of text marked-up with emoticons by downloading Usenet newsgroups and saving an article if it contained an emoticon listed in Figure 4. This process resulted in 766,730 articles being stored, from 10,682,455 messages in 49,759 newsgroups inspected. Figure 4 also lists the percentage of documents containing each emoticon type, as observed in the Usenet newsgroups.</Paragraph>
      <Paragraph position="1"> We automatically extracted the paragraph(s) containing the emoticon of interest (a smile or a frown) from each message and removed any superfluous formatting characters (such as those used to indicate article quotations in message threads). In order to prevent quoted text from being considered more than once, any paragraph that began with exactly the same thirty characters as a previously observed paragraph was disregarded. Finally, we used the classifier developed by Cavnar and Trenkle (1994) to filter  domains. Mean accuracies with standard deviation, in percent. out any paragraphs of non-English text. This process yielded a corpus of 13,000 article extracts containing frown emoticons. As investigating skew between positive and negative distributions is outside the scope of this work, we also extracted 13,000 article extracts containing smile emoticons. The dataset is referred to throughout this paper as Emoticons and contains 748,685 words.</Paragraph>
    </Section>
    <Section position="6" start_page="45" end_page="45" type="sub_section">
      <SectionTitle>
3.2 Emoticon-trained Sentiment Classification
</SectionTitle>
      <Paragraph position="0"> This section describes how the Emoticons corpus3 was optimised for use as sentiment classification training data. 2,000 articles containing smiles and 2,000 articles containing frowns were held-out as optimising test data. We took increasing amounts of articles from the remaining dataset (from 2,000 to 22,000 in increments of 1,000, an equal number being taken from the positive and negative sets) as optimising training data. For each set of training data we extracted a context of an increasing number of tokens (from 10 to 1,000 in increments of 10) both before and in a window4 around the smile or frown emoticon. The models were trained using this extracted context and tested on the held-out dataset.</Paragraph>
      <Paragraph position="1"> The optimisation process revealed that the best-performing settings for the Na&amp;quot;ive Bayes classifier was a window context of 130 tokens taken from the largest training set of 22,000 articles. Similarly, the best performance for the SVM classifier was found using a window context of 150 tokens taken from 3Note that in these experiments the emoticons are used as anchors from which context is extracted, but are removed from texts before they are used as training or test data.</Paragraph>
      <Paragraph position="2"> 4Context taken after an emoticon was also investigated, but was found to be inferior. This is because approximately two-thirds of article extracts end in an emoticon so when using aftercontext few features are extracted.</Paragraph>
      <Paragraph position="3">  The classifiers' performance in predicting the smiles and frowns of article extracts was verified using these optimised parameters and ten-fold crossvalidation. The mean accuracy of the Na&amp;quot;ive Bayes classifier was 61.5%, while the SVM classifier was 70.1%.</Paragraph>
      <Paragraph position="4"> Using these same classifiers to predict the sentiment of movie reviews in Polarity 1.0 resulted in accuracies of 59.1% (Na&amp;quot;ive Bayes) and 52.1% (SVM). We repeated the optimisation process using a held-out set of 100 positive and 100 negative reviews from the Polarity 1.0 dataset, as it is possible that this test needs different parameter settings. This revealed an optimum context of a window of 50 tokens taken from a training set of 21,000 articles for the Na&amp;quot;ive Bayes classifier. Interestingly, the optimum context for the SVM classifier appeared to be a window of only 20 tokens taken from a mere 2,000 training examples. This is clearly an anomaly, as these parameters resulted in an accuracy of 48.9% when testing against the reserved reviews of Polarity 1.0. We attribute this to the presence of noise, both in the training set and in the held-out set, and discuss this below (Section 4.2). The second-best parameters according to the optimisation process were a context of 510 tokens taken before an emoticon, from a training set of 20,000 examples.</Paragraph>
      <Paragraph position="5"> We used these optimised parameters to evaluate the sentiments of texts in the test sets used to evaluate dependency in Section 2. Figures 5, 6 and 7 show the final, optimised results across topics, domains and time-periods respectively. These tables report the average accuracies over three folds, with the standard deviation as a measure of error.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="45" end_page="47" type="metho">
    <SectionTitle>
4 Discussion
</SectionTitle>
    <Paragraph position="0"> The emoticon-trained classifiers perform well (up to 70% accuracy) when predicting the sentiment of article extracts from the Emoticons dataset, which is encouraging when one considers the high level of  noise that is likely to be present in the dataset. However, they perform only a little better than one would expect by chance when classifying movie reviews, and are not effective in predicting the sentiment of newswire articles. This is perhaps due to the nature of the datasets -- one would expect language to be informal in movie reviews, and even more so in Usenet articles. In contrast, language in newswire articles is far more formal. We might therefore infer a further type of dependence in sentiment classification, that of language-style dependency.</Paragraph>
    <Paragraph position="1"> Also, note that neither machine-learning model consistently out-performs the other. We speculate that this, and the generally mediocre performance of the classifiers, is due (at least) to two factors; poor coverage of the features found in the test domains and a high level of noise found in Usenet article extracts. We investigate these factors below.</Paragraph>
    <Section position="1" start_page="46" end_page="46" type="sub_section">
      <SectionTitle>
4.1 Coverage
</SectionTitle>
      <Paragraph position="0"> Figure 8 shows the coverage of the Emoticon-trained classifiers on the various test sets. In these experiments, we are interested in the coverage in terms of unique token types rather than the frequency of features, as this more closely reflects the training of the models (see Section 2.1). The mean coverage of the Polarity 1.0 dataset during three-fold cross-validation is also listed as an example of the coverage one would expect from a better-performing sentiment classifier. The Emoticon-trained classifier has much worse coverage in the test sets.</Paragraph>
      <Paragraph position="1"> We analysed the change in coverage of the Emoticon-trained classifiers on the Polarity 1.0 dataset. We found that the coverage continued to improve as more training data was provided; the coverage of unique token types was improving by about  held-out reviews from Polarity 1.0, varying training set size and window context size. The datapoints represent 2,200 experiments in total.</Paragraph>
      <Paragraph position="2"> cons dataset was exhausted.</Paragraph>
      <Paragraph position="3"> It appears possible that more training data will improve the performance of the Emoticon-trained classifiers by increasing the coverage. Potential sources for this include online bulletin boards, chat forums, and further newsgroup data from Usenet and Google Groups5. Future work will utilise these sources to collect more examples of emoticon use and analyse any improvement in coverage and accuracy.</Paragraph>
    </Section>
    <Section position="2" start_page="46" end_page="47" type="sub_section">
      <SectionTitle>
4.2 Noise in Usenet Article Extracts
</SectionTitle>
      <Paragraph position="0"> The article extracts collected in the Emoticons dataset may be noisy with respect to sentiment. The SVM classifier seems particularly affected by this noise. Figure 9 depicts the change in performance of the SVM classifier when varying the training set size and size of context extracted. There are significant spikes apparent for the training sizes of 2,000, 3,000 and 6,000 article extracts (as noted in Section 3.2), where the accuracy suddenly increases for the training set size, then quickly decreases for the next set size. This implies that the classifier is discovering features that are useful in classifying the held-out set, but the addition of more, noisy, texts soon makes the information redundant.</Paragraph>
      <Paragraph position="1"> Some examples of noise taken from the Emoticons dataset are: mixed sentiment, e.g.</Paragraph>
      <Paragraph position="2">  &amp;quot;Sorry about venting my frustration here but I just lost it. :-( Happy thanks giving everybody :-)&amp;quot;, sarcasm, e.g.</Paragraph>
      <Paragraph position="3"> &amp;quot;Thank you so much, that's really encouraging :-(&amp;quot;, and spelling mistakes, e.g.</Paragraph>
      <Paragraph position="4"> &amp;quot;The movies where for me a major desapointment :-(&amp;quot;.</Paragraph>
      <Paragraph position="5"> In future work we will investigate ways to remove noisy data from the Emoticons dataset.</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>