21.3 Creating Affect Lexicons by Human Labeling

How was EmoLex annotated?

The emotions were labelled in two steps. In the first step, annotators answer a multiple-choice synonym question, which primes them toward the intended sense of the target word. In the second step, they rate how strongly the word is associated with each of the 8 emotions. Outlier ratings are removed, and each word's emotion label is chosen by majority vote among the annotators.

How was NRC VAD lexicon annotated?

The words are annotated using best-worst scaling: annotators are given N items and asked to select which item is the best and which is the worst along the dimension being rated. The score for each item is the proportion of times it was chosen as best minus the proportion of times it was chosen as worst. Agreement between annotations is evaluated using split-half reliability: split the annotations into two halves, score each half separately, and compute the correlation between the two sets of scores.
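The best-minus-worst scoring rule can be sketched in a few lines. Here each trial is a hypothetical tuple of the items shown to an annotator plus the annotator's best and worst picks; the data is invented for illustration:

```python
from collections import Counter

def best_worst_scores(trials):
    """Score items from best-worst scaling trials.

    Each trial is a tuple (items, best, worst): the N items shown to the
    annotator, the one chosen as best, and the one chosen as worst.
    score = (% times chosen best) - (% times chosen worst).
    """
    appearances = Counter()
    best = Counter()
    worst = Counter()
    for items, b, w in trials:
        appearances.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / appearances[item]
            for item in appearances}

# Toy trials: 4-tuples of words rated for (say) valence
trials = [
    (("joy", "table", "grief", "love"), "joy", "grief"),
    (("joy", "grief", "lamp", "love"), "love", "grief"),
    (("table", "lamp", "joy", "grief"), "joy", "grief"),
]
scores = best_worst_scores(trials)
```

With these trials, "grief" is picked as worst every time it appears and so gets a score of -1.0, while "joy" (best in 2 of its 3 appearances) scores 2/3.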

21.4 Semi-supervised Induction of Affect Lexicons

What are the two families of seed-based semi-supervised lexicon induction algorithms?
  1. Axis-based: the semantic axis method

  2. Graph-based: label propagation

What is the semantic axis method?
  1. Choose seed words by hand. Different domains may call for different seed sets

  2. Compute an embedding for each pole word, fine-tuning the embeddings if dealing with a specific domain of text

  3. Create an embedding that represents each pole (positive and negative). This is done by taking the centroid (mean) of the embeddings of that pole's seed words

  4. Compute the semantic axis as the positive pole vector minus the negative pole vector. The semantic axis vector points in the direction of positive sentiment

  5. Score each word by the cosine similarity between its embedding and the semantic axis. A high cosine similarity means the word is more aligned with the positive pole
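The steps above can be sketched with NumPy. The 2-d embeddings below are made up purely for illustration (real word vectors would have hundreds of dimensions):

```python
import numpy as np

def semantic_axis_scores(embeddings, pos_seeds, neg_seeds):
    """Score each word along a semantic axis built from seed words.

    embeddings: dict mapping word -> np.ndarray vector.
    The axis is the positive-pole centroid minus the negative-pole
    centroid; each word is scored by cosine similarity to that axis.
    """
    pos_pole = np.mean([embeddings[w] for w in pos_seeds], axis=0)
    neg_pole = np.mean([embeddings[w] for w in neg_seeds], axis=0)
    axis = pos_pole - neg_pole          # points toward positive sentiment

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    return {w: cosine(v, axis) for w, v in embeddings.items()}

# Toy 2-d embeddings where the first dimension loosely tracks sentiment
emb = {
    "good":   np.array([1.0, 0.2]),
    "great":  np.array([0.9, 0.1]),
    "bad":    np.array([-1.0, 0.3]),
    "awful":  np.array([-0.8, 0.2]),
    "decent": np.array([0.4, 0.5]),
}
scores = semantic_axis_scores(emb, ["good", "great"], ["bad", "awful"])
```

Words nearer the positive pole get cosine scores above zero, and words nearer the negative pole fall below zero.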

What is the label propagation method?

The label propagation method propagates sentiment labels over a graph of words. The SentProp algorithm has five steps:

  1. Build a weighted graph by connecting each word to its k nearest neighbours. Edge weights are the cosine similarity between the word embeddings

  2. Choose positive and negative seed words

  3. Propagate polarities from the seed words. Run random walks that start from the seed set and move from node to node with probability proportional to the edge weight. A word's polarity score is proportional to the probability that the random walk lands on that word

  4. Compute the word scores by combining both the positive and negative label scores

  5. Assign a confidence level to each score using bootstrap sampling
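Steps 1-4 can be sketched as a random walk with restarts on a small word graph. The 4-node graph, the restart probability `beta`, and the normalisation in `sentprop` are all illustrative choices, not the exact SentProp settings:

```python
import numpy as np

def random_walk_scores(T, seed_idx, beta=0.9, iters=100):
    """Visit probabilities of a random walk over the word graph that
    restarts at the seed set with probability (1 - beta).
    T is the row-stochastic transition matrix of the graph."""
    n = T.shape[0]
    s = np.zeros(n)
    s[list(seed_idx)] = 1.0 / len(seed_idx)   # restart distribution
    p = s.copy()
    for _ in range(iters):
        p = beta * (T.T @ p) + (1 - beta) * s
    return p

def sentprop(T, pos_seeds, neg_seeds):
    """Combine positive- and negative-seed walk scores into a single
    polarity score in [0, 1] per word (step 4)."""
    p_pos = random_walk_scores(T, pos_seeds)
    p_neg = random_walk_scores(T, neg_seeds)
    return p_pos / (p_pos + p_neg)

# Toy 4-word graph: words 0 and 1 are tightly linked (positive side),
# words 2 and 3 likewise (negative side), with weak cross edges
W = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
T = W / W.sum(axis=1, keepdims=True)      # row-normalise edge weights
polarity = sentprop(T, pos_seeds=[0], neg_seeds=[2])
```

Word 1, a close neighbour of the positive seed, ends up with a polarity above 0.5, while word 3 ends up below it.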

What is the core component of semi-supervised lexicon induction algorithms?

The core component that drives the performance of semi-supervised algorithms is the similarity metric. Both of the examples above use cosine similarity between embeddings. Alternative measures of similarity involve syntactic cues: two adjectives are considered similar if they are frequently conjoined with “and” and rarely with “but”. Another cue uses morphological negation: adjectives that share a root but differ by a negative morpheme tend to have opposite polarity (example: adequate vs inadequate).
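The “and”/“but” coordination cue can be sketched with simple regex matching over a toy corpus (a real system would count these patterns over a large parsed corpus; the corpus and adjective list here are invented):

```python
import re
from collections import Counter

def coordination_cues(corpus, adjectives):
    """Count how often adjective pairs are linked by "and" vs "but".
    Pairs joined mostly by "and" suggest same polarity; pairs joined
    by "but" suggest opposite polarity."""
    adj = "|".join(map(re.escape, adjectives))
    counts = Counter()
    for conj in ("and", "but"):
        pattern = re.compile(rf"\b({adj}) {conj} ({adj})\b")
        for a, b in pattern.findall(corpus):
            counts[(a, b, conj)] += 1
    return counts

corpus = ("the food was fair and tasty . the room was clean but noisy . "
          "service was fair and clean .")
cues = coordination_cues(corpus, ["fair", "tasty", "clean", "noisy"])
```

From these counts, “fair”/“tasty” and “fair”/“clean” (joined by “and”) would be grouped as same-polarity, while “clean”/“noisy” (joined by “but”) would be treated as opposite.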

How to use WordNet to find similar polarity words?

WordNet is a thesaurus that records synonyms and antonyms. We can use WordNet to look up the synonyms and antonyms of the seed words: a word's synonyms are highly likely to share its polarity, and its antonyms to have the opposite polarity. This method allows us to expand our lexicon.
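This expansion loop can be sketched with a toy thesaurus standing in for WordNet (in practice you would query WordNet itself, e.g. through `nltk.corpus.wordnet`; the dictionaries below are invented):

```python
def expand_lexicon(pos_seeds, neg_seeds, synonyms, antonyms, rounds=2):
    """Grow polarity sets from seeds using a thesaurus: a synonym keeps
    the polarity of the word it expands, an antonym flips it.  Here the
    thesaurus is a toy dict; in practice WordNet plays that role."""
    pos, neg = set(pos_seeds), set(neg_seeds)
    for _ in range(rounds):
        # Synonyms of positives and antonyms of negatives are positive
        new_pos = set().union(*(synonyms.get(w, set()) for w in pos))
        new_pos |= set().union(*(antonyms.get(w, set()) for w in neg))
        # Synonyms of negatives and antonyms of positives are negative
        new_neg = set().union(*(synonyms.get(w, set()) for w in neg))
        new_neg |= set().union(*(antonyms.get(w, set()) for w in pos))
        pos |= new_pos - neg
        neg |= new_neg - pos
    return pos, neg

# Toy thesaurus entries
synonyms = {"good": {"fine", "decent"}, "bad": {"poor"}, "fine": {"okay"}}
antonyms = {"good": {"bad"}, "fine": {"poor"}}
pos, neg = expand_lexicon({"good"}, {"bad"}, synonyms, antonyms)
```

Starting from just "good" and "bad", two rounds pull in "fine", "decent", and "okay" as positive and "poor" as negative.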


