Latent feature models are the most common approach to the link prediction task, and they currently drive state-of-the-art (SOTA) performance. Most latent feature models are evaluated on benchmark datasets derived from public knowledge bases, where facts take the form of (Subject, Relation, Object) (SRO) triples. However, most large knowledge bases do not store plain SRO triples, nor extensions with additional properties (for dynamic graphs). This means that latent feature models cannot fully utilise the available information. Currently, no framework exists that supports both standard and dynamic models.

How does it work?

Most latent feature models follow a general architecture:

  1. Represent entities in low dimensional vector space

  2. Use relations as translations in the vector space

  3. Use a scoring function to compute a score that captures the closeness between the transformed embeddings. The function can be based on distance or similarity
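
The three steps above can be sketched with a TransE-style distance scoring function. This is a minimal sketch using NumPy; the embedding values are toy numbers for illustration, not trained parameters:

```python
import numpy as np

def transe_score(head, relation, tail):
    """TransE-style score: negative L2 distance between (head + relation)
    and tail. Scores closer to 0 mean the triple is more plausible."""
    return -np.linalg.norm(head + relation - tail)

# Toy 3-dimensional embeddings (illustrative, not trained)
head = np.array([0.1, 0.3, 0.5])
relation = np.array([0.2, 0.1, -0.1])
tail = np.array([0.3, 0.4, 0.4])

# head + relation lands exactly on tail here, so the score is 0
print(transe_score(head, relation, tail))
```

A distance-based scoring function like this treats the relation as a translation vector, which is exactly step 2 above; similarity-based models replace the norm with a dot product.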

Because most latent feature models share this general architecture, we can distinguish between them by their scoring functions. Latent feature models are grouped into three categories based on the form of that scoring function:

  1. Matrix factorisation

  2. Geometric

  3. Deep learning

Examples of latent feature models

  1. RESCAL

  2. DistMult

  3. TuckER

  4. ConvE
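
To make the scoring-function distinction concrete, here is a minimal sketch of DistMult's score: a trilinear dot product, equivalent to a bilinear form with a diagonal relation matrix. The embeddings below are toy values for illustration:

```python
import numpy as np

def distmult_score(head, relation, tail):
    """DistMult score: sum of the element-wise product h * r * t.
    Equivalent to h^T diag(r) t, a bilinear form with a diagonal
    relation matrix."""
    return float(np.sum(head * relation * tail))

h = np.array([1.0, 0.0, 2.0])
r = np.array([0.5, 1.0, 0.5])
t = np.array([2.0, 3.0, 1.0])

print(distmult_score(h, r, t))  # 1*0.5*2 + 0*1*3 + 2*0.5*1 = 2.0
```

Note that this score is symmetric in head and tail, which is DistMult's well-known limitation with asymmetric relations; models like RESCAL (full relation matrix) and TuckER (shared core tensor) generalise it.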

Why is negative sampling important in latent feature models, and what are the common sampling strategies?

A knowledge base contains only positive examples, so we need a method that generates negative examples. This negative sampling is extremely important for training a latent feature model that generalises well to unseen data!

Here’s a list of sampling strategies:

  1. 1-to-N sampling – take a positive example triple and corrupt it by randomly switching out the head or tail entity. How best to choose the replacement entity is an active topic of research

  2. N-to-All sampling – given a tuple of (head entity, relation), pair it with every entity in your entity set and compute the loss, using the triples in the fact set as the true labels
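
The 1-to-N corruption strategy above can be sketched as follows. This is a minimal sketch with uniform random replacement; the function and variable names are my own, and the resampling loop is one simple way to filter out false negatives (corrupted triples that are actually true facts):

```python
import random

def corrupt_triple(triple, entities, facts, rng=random):
    """Generate a negative example by replacing the head or tail of a
    positive triple with a random entity, resampling whenever the
    corrupted triple is already a known fact (a false negative)."""
    head, relation, tail = triple
    while True:
        replacement = rng.choice(entities)
        if rng.random() < 0.5:
            candidate = (replacement, relation, tail)  # corrupt the head
        else:
            candidate = (head, relation, replacement)  # corrupt the tail
        if candidate not in facts:
            return candidate

# Toy fact set and entity set for illustration
facts = {("Paris", "capital_of", "France"),
         ("Berlin", "capital_of", "Germany")}
entities = ["Paris", "France", "Berlin", "Germany"]

negative = corrupt_triple(("Paris", "capital_of", "France"), entities, facts)
print(negative)  # e.g. ("Paris", "capital_of", "Berlin")
```

More sophisticated variants replace the uniform `rng.choice` with type-constrained or adversarial sampling, which is the research direction the list item alludes to.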

Ryan

Data Scientist