Paper Discussion: Choi & Cardie (2008)


Learning with Compositional Semantics as Structural Inference for Subsentential Sentiment Analysis

- Yejin Choi and Claire Cardie

Characteristics Summary
  Domain: MPQA corpus
  Sentiment Classes: Positive / Negative
  Aspect Detection Method: N/A
  Sentiment Analysis Method: Support Vector Machine (MIRA) with compositional inference
  Sentiment Lexicon: Wilson et al. (2005) + General Inquirer
Performance Summary
  Polarity Prediction Accuracy: 90.7%
Introduction

This research is mainly concerned with finding a structural way of dealing with sentiment interactions within a sentence. The paper gives the following examples to illustrate the need for an algorithm that can handle these interactions.

  1. [I did [not]¬ have any [doubt]- about it.]+
  2. [The report [eliminated]¬ my [doubt]-.]+
  3. [They could [not]¬ [eliminate]¬ my [doubt]-.]-

As you can see, the sentence sentiment is constructed based on the sentiment of its constituents and the way they are combined. This is called the principle of compositionality. Simply taking the majority vote among the constituents to get the sentence sentiment will not be of great help in situations like the ones above. This research aims to tackle this problem by first assessing the polarity of the small units, and then, using a relatively simple set of rules, combining the polarities into one sentence score.
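To see why a plain vote fails, consider a toy calculation on example 2 (the prior polarities and the negator list below are assumptions made for illustration, not taken from the paper):

    # Toy illustration (not the paper's algorithm): example 2, "The report
    # eliminated my doubt.", read first by majority vote and then compositionally.
    # The prior polarities and the negator list are assumed for this example.
    prior_polarity = {"eliminated": -1, "doubt": -1}  # both look negative in isolation
    content_negators = {"eliminated"}                 # "eliminated" flips its argument

    tokens = ["the", "report", "eliminated", "my", "doubt"]

    # Majority vote over prior polarities: two negative words -> "negative" (wrong).
    vote = sum(prior_polarity.get(t, 0) for t in tokens)
    print("vote:", "positive" if vote > 0 else "negative")

    # Compositional reading: the negator flips its negative argument -> "positive" (correct).
    doubt = prior_polarity["doubt"]
    composed = -doubt if "eliminated" in content_negators else doubt
    print("composed:", "positive" if composed > 0 else "negative")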

Method

Several simpler models, which serve as baselines in the evaluation, are presented first, but I'll restrict this overview to the main method the authors propose. It is based on a Support Vector Machine, but instead of estimating y (i.e., the polarity) directly from the set of features x, an intermediate set of variables z is introduced with z_i \in \{positive, negative, negator, none\}. Two simplifying assumptions apply here:

  1. Each intermediate decision variable z_i is determined independently of any other variable z_j.
  2. Each intermediate decision variable z_i depends only on the input x.

Once the intermediate variables are determined, a set of heuristic rules is applied to get the final polarity y of x. In other words: y = C(x, \textbf{z}), with C being the compositional inference rule set (see Figure 1). Because the corpus only annotates y, not the intermediate variables z, and C is a fixed rule set rather than a learned component, the SVM cannot be trained directly. Therefore, the SVM is trained (see Figure 2) against a soft gold standard on z, which can be created automatically (see Figure 3). The automatically created values can, however, be updated by the learner when needed.

Figure 1: The various compositional inference rules, for two flavors (CompoMC and CompoPR). [image: ChoiCardie-CompositionalInferenceRules]
Figure 2: The SVM update rules, for both the default SVM and the one with compositional inference. [image: ChoiCardie-SVMUpdateMethod]
Figure 3: Pseudocode for the creation of the soft gold standard. [image: ChoiCardie-SoftGoldenStandard]
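To make the pipeline concrete, here is a minimal sketch of how the per-token labels z and a compositional rule set C(x, z) might interact. The rule below (take the majority polarity among the polar tokens, then flip it once per negator) is a deliberate simplification, not the actual CompoMC or CompoPR rule set from Figure 1.

    from typing import List

    # Hypothetical per-token labels, mirroring z_i in {positive, negative, negator, none}.
    POS, NEG, NEGATOR, NONE = "positive", "negative", "negator", "none"

    def compositional_inference(z: List[str]) -> str:
        """Simplified stand-in for C(x, z): majority polarity among the polar
        tokens, flipped once for every negator in the phrase."""
        pos = sum(1 for zi in z if zi == POS)
        neg = sum(1 for zi in z if zi == NEG)
        polarity = 1 if pos >= neg else -1
        if sum(1 for zi in z if zi == NEGATOR) % 2 == 1:
            polarity = -polarity
        return POS if polarity > 0 else NEG

    # Example 3, "They could not eliminate my doubt.", with hand-assigned labels:
    # "not" and "eliminate" act as negators, "doubt" is negative.
    z = [NONE, NONE, NEGATOR, NEGATOR, NONE, NEG]
    print(compositional_inference(z))  # the two negators cancel -> negative

With the labels of example 2 (one negator, one negative word) the same rule returns positive, matching the annotation.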

The features for the SVM (see the sketch after this list) consist of

  • lexical features
    • word
    • lemma
    • stopword or not
  • dictionary features
    • word categories from General Inquirer dictionary for each non-stopword
    • whether it is a function-word negator
    • whether it is a content-word negator
    • whether it is a negator of any kind
    • its polarity according to Wilson et al.'s dictionary
    • its polarity according to the dictionary derived from the General Inquirer lexicon
    • conjunction of the above two features
  • vote feature (the number of content-word and function-word negators)
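As a rough illustration of the list above, here is a hypothetical per-token feature extractor; the lexicons and negator lists are tiny placeholders standing in for Wilson et al. (2005), the General Inquirer, and the paper's negator lists, and the General Inquirer word categories and the phrase-level vote feature are omitted for brevity.

    # Hypothetical per-token feature extractor for the word-level classifier.
    STOPWORDS = {"the", "a", "my", "did", "could", "it", "about"}
    FUNCTION_NEGATORS = {"not", "never", "no"}
    CONTENT_NEGATORS = {"eliminate", "prevent"}
    WILSON_POLARITY = {"doubt": "negative"}                        # stand-in lexicon
    GI_POLARITY = {"doubt": "negative", "eliminate": "negative"}   # stand-in lexicon

    def token_features(word: str, lemma: str) -> dict:
        is_func_neg = word in FUNCTION_NEGATORS
        is_cont_neg = lemma in CONTENT_NEGATORS
        wilson = WILSON_POLARITY.get(lemma, "none")
        gi = GI_POLARITY.get(lemma, "none")
        return {
            # lexical features
            "word": word,
            "lemma": lemma,
            "is_stopword": word in STOPWORDS,
            # dictionary features
            "function_negator": is_func_neg,
            "content_negator": is_cont_neg,
            "any_negator": is_func_neg or is_cont_neg,
            "wilson_polarity": wilson,
            "gi_polarity": gi,
            "wilson_and_gi": wilson + "&" + gi,  # conjunction of the two polarity features
        }

    print(token_features("eliminated", "eliminate"))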
Evaluation

The evaluation is performed on the MPQA corpus, which consists of 535 news items annotated with subjectivity information at the phrase level. This is also the reason this paper falls within the scope of aspect-level sentiment analysis: the phrase level is very fine-grained and will in most cases coincide with the aspect level. As detecting aspects or phrases is not the topic of this paper, the authors simply take the phrase boundaries of MPQA as given. Only the sentiment-bearing phrases with an intensity of 'medium' or higher are used. Performance is reported using tenfold cross-validation on 400 documents; the remaining 135 documents serve as a development set. The results are shown in the tables below:
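A small sketch of that setup, assuming the 400 cross-validation documents are split into ten folds at random (how the folds were actually drawn is not stated here):

    import random

    # Sketch of the evaluation split: 535 MPQA documents, 135 held out for
    # development, the remaining 400 used for tenfold cross-validation.
    # Document IDs are synthetic placeholders.
    docs = ["doc_%03d" % i for i in range(535)]
    random.seed(0)
    random.shuffle(docs)

    dev_set, cv_docs = docs[:135], docs[135:]
    folds = [cv_docs[i::10] for i in range(10)]   # ten folds of 40 documents each

    for held_out in folds:
        train_docs = [d for d in cv_docs if d not in held_out]
        # ... train the classifier on train_docs, then measure phrase-level
        # polarity accuracy on held_out; the reported number averages the folds.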

[image: ChoiCardie-Results]

The learning-based methods employ an SVM, while the heuristic-based approaches only use rules. VOTE is a simple majority vote among all sentiment words, while the four approaches to its right make increasing use of negation to improve the results. The two COMPO methods are the presented compositional inference rule sets. The proposed methods are the two columns on the far right.

The first table reports the results using the given phrase boundaries, while the second table relaxes these boundaries a bit to see whether additional context could benefit the prediction. Interestingly, this is not the case: context outside of the actual phrase is only detrimental to the prediction.

Discussion

This paper contributes to bridging the gap between machine learning and compositional semantics, and it does so by injecting the ideas of compositionality directly into the machine learning method. That is a very interesting thought, and it is worked out well, with plenty of comparison material. Future research could include learning the compositional inference rules from data, as manually crafted rules are usually too rigid to cope with all the complexities of language.
