23.3 Centering and Entity-Based Coherence

What is Centering Theory?

Centering Theory holds that at any given point in a discourse, one of the entities in the discourse model is centered, meaning it is the main focus of the discourse. It further claims that discourses in which adjacent sentences maintain the same salient entity are more coherent than those that shift frequently between salient entities.

What are the backward-looking and forward-looking centers?

Centering Theory maintains two representations for each utterance. The backward-looking center encodes the currently salient entity, based on the utterances that have already been interpreted, whereas the forward-looking centers are the set of entities mentioned in the utterance that could become salient in the following utterance. The forward-looking centers are ranked by discourse salience and grammatical role (e.g. subjects outrank objects).
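The ranking of forward-looking centers can be sketched as a simple sort by grammatical role. The mention tuples and the three-level role ranking below are illustrative assumptions, not the output of a real parser:

```python
# Sketch: ranking the forward-looking centers Cf of an utterance.
# Subjects outrank objects, which outrank other mentions.
ROLE_RANK = {"subject": 0, "object": 1, "other": 2}

def rank_cf(mentions):
    """Order entity mentions by grammatical salience to get Cf(U).

    mentions: list of (entity, grammatical_role) pairs for one utterance.
    Returns entities from most to least salient; the first element is
    the preferred center Cp(U).
    """
    ordered = sorted(mentions, key=lambda m: ROLE_RANK.get(m[1], 2))
    return [entity for entity, _ in ordered]

# "John gave the book to Mary." -> John (subject) is ranked first.
cf = rank_cf([("book", "object"), ("John", "subject"), ("Mary", "other")])
print(cf)  # ['John', 'book', 'Mary']
```

The highest-ranked element of this list is exactly the "preferred center" used by the transition rules discussed next.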

What are the four intersentential relationships between pairs of utterances?
  1. Continue

  2. Retain

  3. Smooth-shift

  4. Rough-shift

These transitions are defined by the centering algorithm, which depends on the current backward-looking center, the forward-looking centers, and the preferred center (the highest-ranked forward-looking center). The algorithm has two rules. Rule 1: if any forward-looking center of the previous utterance is realised as a pronoun in the current utterance, then the backward-looking center must also be realised as a pronoun. Rule 2: transitions are preferred in the order Continue > Retain > Smooth-Shift > Rough-Shift.

Rule 1 reflects a common way of marking discourse salience (pronominalisation), whereas Rule 2 captures the intuition that adjacent sentences that keep talking about the same entity are more coherent. The transition table lets us determine whether the discourse has continued talking about the same entity or has shifted to other entities.
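The four transitions can be computed from the backward-looking centers of the previous and current utterances plus the current preferred center. A minimal sketch of that classification (entity names are illustrative):

```python
# Sketch of the centering transition classification.
# cb_prev: backward-looking center of the previous utterance (None if undefined)
# cb_curr: backward-looking center of the current utterance
# cp_curr: preferred center (top-ranked forward-looking center) of the current utterance

def transition(cb_prev, cb_curr, cp_curr):
    if cb_prev is None or cb_curr == cb_prev:
        # Center is maintained (or newly established).
        return "Continue" if cb_curr == cp_curr else "Retain"
    # Center has shifted to a different entity.
    return "Smooth-Shift" if cb_curr == cp_curr else "Rough-Shift"

# "John saw Mary. He waved." -- the center stays John and remains preferred:
print(transition("John", "John", "John"))  # Continue
```

A discourse dominated by Continue transitions is judged more coherent than one full of Rough-Shifts, in line with Rule 2's preference ordering.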

What is an entity grid?

It is an alternative method for capturing entity-based coherence: machine learning is used to learn the patterns of entity mention that indicate whether a discourse is coherent. An entity grid represents the distribution of entity mentions across sentences, where the rows represent the sentences and the columns represent the discourse entities. Each cell records whether the entity appears in that sentence and with what grammatical role: subject (S), object (O), neither (X), or absent (-).

To build the entity grid, we need NER and coreference resolution to identify the entities, as well as parsing or POS tagging to obtain grammatical roles. Coherence in an entity grid is measured by patterns of local entity transitions and the probabilities of those patterns. These transition probabilities can be used as features for ML models trained to produce coherence scores.
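Turning a grid into features amounts to counting short vertical transitions in each entity's column and normalising. The small grid below is a made-up example, not taken from any corpus:

```python
from collections import Counter
from itertools import product

# Hypothetical entity grid: keys are entities (columns), values are the
# role symbols per sentence (rows): S=subject, O=object, X=other, -=absent.
grid = {
    "Microsoft": ["S", "O", "S"],
    "market":    ["-", "X", "-"],
    "product":   ["O", "-", "O"],
}

def transition_features(grid, n=2):
    """Probability of each length-n column transition, e.g. ('S', 'O')."""
    counts = Counter()
    for column in grid.values():
        for i in range(len(column) - n + 1):
            counts[tuple(column[i:i + n])] += 1
    total = sum(counts.values())
    # Fixed feature ordering over all possible transitions, for ML input.
    return {t: counts[t] / total for t in product("SOX-", repeat=n)}

feats = transition_features(grid)
print(feats[("S", "O")])  # fraction of S->O transitions in the grid
```

The resulting fixed-length probability vector is what gets fed to a classifier or ranker trained to score coherence.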

How can we evaluate entity-based and neural coherence models?
  1. Human evaluation (expensive, but the gold standard)

  2. Self-supervision with natural texts – pair each natural discourse with a pseudo-document created by permuting its sentence order. A successful coherence algorithm should prefer the original natural discourse

There are three common self-supervised tasks:

  1. Sentence order discrimination – the model is correct if it ranks the original document as more coherent than the permuted one

  2. Sentence insertion – remove one sentence from an original document of n sentences, then create n candidate documents by inserting that sentence back at each possible position. The task is to decide which of the n candidates has the original ordering

  3. Sentence order reconstruction – randomise the sentence ordering and train the model to reconstruct the original document
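Generating the training pairs for sentence order discrimination is straightforward. A minimal sketch, where the sentences and the (absent) coherence model `score()` are purely illustrative:

```python
import random

# Sketch: self-supervised pairing for sentence order discrimination.
# A coherence model should score the original ordering above a permutation.

def make_pair(sentences, seed=0):
    """Return (original, pseudo_document) where the pseudo-document is a
    random permutation of the same sentences. Assumes the sentences are
    distinct, so a differing permutation exists."""
    rng = random.Random(seed)
    shuffled = sentences[:]
    while shuffled == sentences:  # ensure the permutation actually differs
        rng.shuffle(shuffled)
    return sentences, shuffled

original, pseudo = make_pair([
    "John went to his favorite music store.",
    "He had frequented the store for many years.",
    "He was excited that he could finally buy a piano.",
])
# A successful model m should satisfy: m.score(original) > m.score(pseudo)
```

Because the pseudo-documents come for free from any natural text, no human coherence labels are needed for this style of training.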
