Paper Discussion: Ding et al. (2008)

A Holistic Lexicon-Based Approach to Opinion Mining

- Xiaowen Ding, Bing Liu, and Philip S. Yu

Characteristics Summary

  Domain: Product reviews
  Sentiment Classes: Positive / Negative
  Aspect Detection Method: N/A
  Sentiment Analysis Method: Rule-based
  Sentiment Lexicon: Custom WordNet propagation algorithm

Performance Summary

  Opinion sentence extraction and orientation prediction:
  Precision: 0.91, Recall: 0.90, F-score: 0.90

This research focuses on the sentiment analysis of sentences in product reviews. It aims to provide a more sophisticated way of determining sentiment than the simple counting method presented in Hu & Liu (2004). One of the major contributions of the proposed method is a way to properly deal with context-dependent opinion words. For example, the word "long" can be positive when describing the battery, but negative when describing the time it takes to learn how to use a product. Another contribution is a formalization of the problem of aspect-level sentiment analysis. Because this system, confusingly also called Opinion Observer, is tested on the same dataset as Hu & Liu (2004) and Popescu & Etzioni (2005), the authors can compare it directly against both.

Problem Formalization

An object O is defined as an entity, which can be a product, person, event, organization, or topic. This object is associated with a pair (T, A), where T is a hierarchy or taxonomy of components (or parts), sub-components, etc., and A is a set of attributes of O. Together this forms a tree where the root node is the object itself, and every non-root node is a component of its parent node (so either a component of O or a sub-component). Every node in the tree is associated with its own set of attributes A. An opinion, therefore, can be about any node or attribute in this tree. To simplify this, the nodes in the tree and the sets of attributes associated with them are collectively referred to as the set of aspects. This makes the root of the tree a special aspect which denotes the product as a whole.
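This formalization can be sketched as a small tree structure. The class names and the camera hierarchy below are my own illustration, not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A component in the object's part hierarchy, with its own attributes."""
    name: str
    attributes: list = field(default_factory=list)
    children: list = field(default_factory=list)

def aspects(root):
    """Flatten the tree: every node name and every attribute is an aspect.
    The root comes first, acting as the special whole-product aspect."""
    result = [root.name] + list(root.attributes)
    for child in root.children:
        result += aspects(child)
    return result

# Hypothetical camera hierarchy for illustration.
camera = Node("camera", ["size", "weight"],
              [Node("lens", ["zoom"]), Node("battery", ["battery life"])])
```

Flattening this tree yields the aspect set `["camera", "size", "weight", "lens", "zoom", "battery", "battery life"]`, with "camera" playing the role of the whole-product aspect.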

  • Aspects can be either explicit or implicit. They are explicit when the aspect itself is mentioned in the sentence; otherwise the aspect is implied. In the latter case, the word(s) that point to the actual aspect are called aspect indicator(s).
  • The group of consecutive sentences that together express an opinion on an aspect is called the opinion passage. In this research it always consists of one or more whole sentences, but the definition could be extended to work with phrases instead.
  • Opinions can be either explicit or implicit. They are explicit when the opinion is directly expressed in the sentence (using opinion-bearing words), while the opinion is implied when, even though no sentiment words are used, the sentence still conveys a positive or negative message. The latter is usually the case when (un)desirable facts are presented: the sentence simply states a fact, but the reader interprets it as positive or negative because the fact is a deviation from what is expected. An example given in the paper is "The earphone broke in two days", which is a negative deviation from what is expected of an earphone. As can be seen, one needs additional knowledge to detect implied opinions.
  • The opinion holder is defined as the person or organization that holds the opinion. Usually this is the writer of the text, but in some domains, like news items, an opinion holder is explicitly mentioned together with the opinion: "The parliament thinks ...", "According to the president it is ...", etc.

For this research, it is assumed that the set of aspects, and the set of synonyms for each aspect, are known beforehand. A set of opinion words is generated from WordNet using a sentiment propagation technique: the list of adjectives from Hu & Liu (2004) is used and complemented with similar lists for verbs and nouns. Furthermore, a list of context-dependent opinion words is compiled, along with a list of over 1000 idioms that convey sentiment.
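The propagation idea can be illustrated with a toy graph standing in for WordNet's synonym and antonym relations. The graph, seed words, and function names below are invented for illustration; the paper works over actual WordNet:

```python
from collections import deque

# Toy relations standing in for WordNet; the paper bootstraps from a small
# seed list and propagates polarity along synonym/antonym links.
SYNONYMS = {"good": ["great", "nice"], "great": ["terrific"], "bad": ["poor"]}
ANTONYMS = {"good": ["bad"]}

def propagate(seeds):
    """Breadth-first polarity propagation: synonyms inherit the polarity,
    antonyms get the opposite one. `seeds` maps word -> +1 or -1."""
    lexicon = dict(seeds)
    queue = deque(seeds)
    while queue:
        word = queue.popleft()
        polarity = lexicon[word]
        for syn in SYNONYMS.get(word, []):
            if syn not in lexicon:
                lexicon[syn] = polarity
                queue.append(syn)
        for ant in ANTONYMS.get(word, []):
            if ant not in lexicon:
                lexicon[ant] = -polarity
                queue.append(ant)
    return lexicon
```

Starting from the single seed `{"good": +1}`, this assigns +1 to "great", "nice", and "terrific", and -1 to "bad" and "poor".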

Aggregating Sentiment per Aspect

For each sentence that contains one or more aspect words, the opinion words are identified first. For each aspect in the sentence, the orientation score is computed using the following formula.

    score(f) = Σ_{wi ∈ s} SO(wi) / d(wi, f)

In this formula, SO denotes the semantic orientation of word wi as either +1 or -1. This polarity score is corrected for the distance between the sentiment word wi and the aspect word f. Note that this implies that each sentiment word is used for each aspect in the sentence. The algorithm relies on the distance correction to more or less filter out opinion words that are not related to the aspect.
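A minimal sketch of this aggregation, assuming tokenized sentences; the function name and lexicon format are my own:

```python
def aspect_score(aspect_pos, sentence, lexicon):
    """Distance-weighted aggregation: each opinion word contributes its
    orientation SO(wi) in {+1, -1}, divided by its word distance to the
    aspect at index `aspect_pos`, so far-away opinion words are largely
    filtered out."""
    total = 0.0
    for i, token in enumerate(sentence):
        if token in lexicon:
            distance = max(abs(i - aspect_pos), 1)  # avoid division by zero
            total += lexicon[token] / distance
    return total

# "great" sits three words from the aspect "battery", so it contributes 1/3.
aspect_score(1, "the battery life is great".split(), {"great": 1})
```

Note that every opinion word in the sentence contributes to every aspect; only the distance weighting keeps unrelated words from dominating.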

Also important is a set of rules covering simple negation patterns in sentences, plus a rule that covers the use of "but"-clauses. The latter states that if an aspect is within a "but"-clause, only the sentiment within that clause is used; when no sentiment is found within the clause, the negation of the sentiment before the "but"-clause is used instead.
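A rough sketch of these two rules; the negation word list and the three-word window are my own simplifications, not the paper's exact patterns:

```python
NEGATIONS = {"not", "no", "never"}

def clause_sentiment(tokens, lexicon):
    """Sum opinion-word polarities in a clause, flipping the sign when a
    negation word appears within a small window before the opinion word."""
    total = 0
    for i, tok in enumerate(tokens):
        if tok in lexicon:
            pol = lexicon[tok]
            if any(t in NEGATIONS for t in tokens[max(0, i - 3):i]):
                pol = -pol
            total += pol
    return total

def but_rule(tokens, lexicon):
    """Aspect inside a 'but'-clause: use the sentiment of that clause;
    if it carries none, negate the sentiment found before the 'but'."""
    if "but" not in tokens:
        return clause_sentiment(tokens, lexicon)
    cut = tokens.index("but")
    after = clause_sentiment(tokens[cut + 1:], lexicon)
    return after if after != 0 else -clause_sentiment(tokens[:cut], lexicon)
```

A real implementation would use proper clause segmentation and negation scoping rather than a flat token split on "but".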

Handling Context Dependent Opinions

To deal with context dependent opinions, not only local but also global information is needed. This is reflected in the use of three rules.

  1. Intra-sentence conjunction rule: When a context dependent opinion word wc is used for some aspect in conjunction with a known opinion word wk, we know that wc has the same orientation as wk. Example from paper: "This camera takes great pictures and has a long battery life". Since "great" is a known positive word, "long" for "battery life" is also positive.
  2. Pseudo intra-sentence conjunction rule: When, instead of an explicit conjunction, a similar sentence construction is used, the same rule can be applied. Example from paper: "The camera has a long battery life, which is great." This rule probably relies on the detection of "but"-clauses, since those would otherwise render it incorrect.
  3. Inter-sentence conjunction rule: This rule is used when the previous two cannot provide an orientation prediction. Following the intuition that people keep writing with the same sentiment orientation unless polarity-reversing words are used, it basically states that the current sentence has the same orientation as the previous sentence. Or, if that sentence has no polarity either, the polarity can be copied from the next sentence instead.
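These rules could be approximated as follows. The fallback ordering matches the description above, but the conjunction detection is heavily simplified: it borrows the polarity of any known opinion word in the sentence, whereas a real implementation would check the actual syntactic construction and handle "but"-reversals:

```python
def infer_orientation(tokens, target, known, neighbor_polarity=0):
    """Infer the polarity of context-dependent word `target` in a sentence.
    Rules 1 and 2: borrow the polarity of a known opinion word appearing in
    the same sentence (explicit or pseudo conjunction). Rule 3: otherwise
    fall back to the polarity of a neighbouring sentence."""
    if target not in tokens:
        return 0
    for tok in tokens:
        if tok in known:
            return known[tok]      # intra-sentence (pseudo-)conjunction
    return neighbor_polarity        # inter-sentence conjunction rule
```

On the paper's example "This camera takes great pictures and has a long battery life", the known positive word "great" gives "long" a positive orientation; in "The battery life is long" on its own, the polarity of the surrounding sentences decides.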

Furthermore, words with "too" in front of them are always regarded as negative, and once the orientation of a sentiment word is known for a particular aspect, the orientations of that word's synonyms and antonyms for that aspect are known as well. The paper provides detailed pseudocode for the complete procedure as a reference.



The evaluation is performed on the same dataset as the previous works. In the first table, the results are broken down by product. Note that the first five products are from the original dataset, while the last three were newly added. The numbers in the second table, which compares the three systems (FBS, OPINE, and Opinion Observer), are therefore computed over only the first five products.

The first table also shows the influence of the two major contributions of this research. The first column is the complete system; the second column shows the results for the system without the rules covering context-dependent opinion words; and the third column shows the results without the more sophisticated formula for aggregating polarity scores within a sentence. The final column is the original Hu & Liu (2004) system, with the exception that it now also computes sentiment for implicit features (this was not the case in the original system, as implicit features were not detected).




This research has shown some interesting solutions for aggregating sentiment word polarity scores within sentences, and a way to deal with context-specific sentiment words. Although the research also features a way to compute sentiment scores for implicit features, this is slightly less useful, since detecting those features in the first place is the most challenging part. Curiously enough, a previous system was also called Opinion Observer (the second author of this paper is the first author of that paper), but that system specifically focused on finding aspects and did not perform any sentiment analysis; this system does exactly the opposite.

The evaluation scores are impressive, with an F-score of over 90%. Given the detailed pseudocode, one would expect that reproducing this system is very much feasible, and this is indeed roughly the case. The only thing missing is the list of idioms annotated for sentiment. The paper states: "We annotated more than 1000 idioms. Although this task is time consuming, it is only a one-time effort and the annotated idioms can be used by the community". A good idea, but I could not find this list anywhere. If you do, please let me know of course.
