I am very new to knowledge graphs and find the area really fascinating! One thing I came across was the family of translation models. I decided to dive deeper and learn more about them, and found a good Medium article that summarises the translation models for knowledge graph embeddings.

What are knowledge graph embeddings?

Knowledge graph embeddings are like word embeddings, except they represent the entities and relations of semantic triples (head, relation, tail) as vectors, allowing us to capture entities and relations that are semantically close to each other.
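To make this concrete, here is a minimal sketch of how a knowledge graph's triples and embeddings might be stored (the entity names, relation names, and dimension are made up for illustration):

```python
import numpy as np

# A knowledge graph is a set of (head, relation, tail) triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
]

dim = 50  # embedding dimension (hypothetical)
rng = np.random.default_rng(0)

# Each entity and each relation gets its own learnable vector.
entities = {e: rng.normal(size=dim) for t in triples for e in (t[0], t[2])}
relations = {t[1]: rng.normal(size=dim) for t in triples}
```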

What are some of the applications of knowledge graphs?
  1. Link prediction

  2. Recommender systems

Link prediction is the task of predicting an entity that has a specific relation with another given entity; for example, given the head h and relation r, predicting the tail t that completes the triple (h, r, ?).

What is TransE?

TransE stands for Translating Embeddings for Modelling Multi-relational Data. The idea is to make the sum of the head and relation vectors as close as possible to the tail vector: h + r ≈ t. We can use the L1 or L2 norm to measure how close they are. However, this model only handles one-to-one relations well, so it's not suited for one-to-many, many-to-one, or many-to-many relationships.
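As a rough sketch (not the authors' reference code), the TransE score fits in a few lines of numpy. Lower scores mean more plausible triples, which is exactly what link prediction ranks on:

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE: score = ||h + r - t|| under the L1 (norm=1) or L2 (norm=2) norm.
    Lower scores indicate more plausible triples."""
    return np.linalg.norm(h + r - t, ord=norm)

# Toy usage: rank candidate tails for (h, r, ?) -- the link prediction setup.
rng = np.random.default_rng(0)
dim = 50
h, r = rng.normal(size=dim), rng.normal(size=dim)
candidates = {name: rng.normal(size=dim) for name in ["t1", "t2", "t3"]}
ranked = sorted(candidates, key=lambda name: transe_score(h, r, candidates[name]))
print(ranked)  # candidate tails from most to least plausible
```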

What is TransH?

This is where TransH comes in. TransH stands for Knowledge Graph Embedding by Translating on Hyperplanes, and it can handle one-to-many / many-to-one / many-to-many relationships without increasing the model complexity or training cost. Here, each relation has two vectors: the norm vector, which defines a relation-specific hyperplane, and the translation vector, which lies on that hyperplane. The figure below showcases the difference between TransE and TransH.

We first project the head and tail vectors onto the relation's hyperplane to get new head and tail vectors. On this hyperplane, we train a relation translation vector such that the sum of the projected head vector and the translation vector is close to the projected tail vector. We essentially decompose each entity vector into two components and only use the in-hyperplane component for the translation, so that two different entities (say, the many tails of a one-to-many relation) are not forced to have identical embeddings!
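Here is a minimal numpy sketch of that projection step (variable names are my own): the head and tail are projected onto the relation's hyperplane using the unit norm vector w, and the translation is scored there:

```python
import numpy as np

def project_to_hyperplane(v, w):
    """Project v onto the hyperplane with unit normal w: v_perp = v - (w . v) w."""
    return v - np.dot(w, v) * w

def transh_score(h, r, t, w, norm=2):
    """TransH: translate on the relation-specific hyperplane defined by w."""
    h_perp = project_to_hyperplane(h, w)
    t_perp = project_to_hyperplane(t, w)
    return np.linalg.norm(h_perp + r - t_perp, ord=norm)

# Toy usage.
rng = np.random.default_rng(0)
dim = 50
h, t, r = rng.normal(size=dim), rng.normal(size=dim), rng.normal(size=dim)
w = rng.normal(size=dim)
w /= np.linalg.norm(w)  # the norm vector is constrained to unit length
print(transh_score(h, r, t, w))
```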

What is TransR?

TransR stands for Learning Entity and Relation Embeddings for Knowledge Graph Completion. Both TransE and TransH assume entities and relations live in the same semantic space. However, each entity has many aspects, and different relations pay attention to different aspects. TransR therefore models entities and relations in two distinct kinds of space, an entity space and multiple relation-specific spaces, and performs the translation in the corresponding relation space. The figure below illustrates TransR.

For each triple (h, r, t), the head and tail entities in the entity space are projected into the r-relation space as hr and tr using the relation-specific matrix Mr, and hr + r should be approximately tr. This relation-specific projection pulls head and tail entities that actually hold the relation r close to each other, and pushes entities that do not hold the relation far apart.
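A minimal sketch of that scoring step (shapes and names are illustrative): entities live in a d-dimensional space, each relation in a k-dimensional one, and the k x d matrix Mr moves entities between them:

```python
import numpy as np

def transr_score(h, r, t, M_r, norm=2):
    """TransR: project entities from entity space (dim d) into the
    relation space (dim k) with the relation-specific matrix M_r (k x d),
    then translate by r there."""
    h_r = M_r @ h
    t_r = M_r @ t
    return np.linalg.norm(h_r + r - t_r, ord=norm)

# Toy shapes: d-dimensional entities, k-dimensional relation space.
rng = np.random.default_rng(0)
d, k = 50, 30
h, t = rng.normal(size=d), rng.normal(size=d)
r, M_r = rng.normal(size=k), rng.normal(size=(k, d))
print(transr_score(h, r, t, M_r))
```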

The weaknesses of TransR are that:

  1. The head and tail entities use the same transformation matrix to project themselves into the relation space, but head and tail entities are usually different kinds of entities, so the transformations should be different

  2. The projection should depend on both the entity and the relation, but the projection matrix is determined by the relation alone

  3. TransR has far more parameters than TransE and TransH, making it harder to apply to large-scale KGs (the rough parameter count below makes this concrete)
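Weakness 3 is easy to see with a back-of-the-envelope count. Assuming a hypothetical KG with a million entities, a thousand relations, and 100-dimensional embeddings, the per-relation d x k matrix dominates TransR's budget:

```python
n_e, n_r = 1_000_000, 1_000   # hypothetical KG size
d = k = 100                   # embedding dimensions

params = {
    "TransE": n_e * d + n_r * d,            # one vector per entity/relation
    "TransH": n_e * d + n_r * 2 * d,        # relation: norm + translation vector
    "TransR": n_e * d + n_r * (k + d * k),  # relation: vector + d x k matrix
    "TransD": n_e * 2 * d + n_r * 2 * k,    # two vectors each, no stored matrix
}
for name, p in params.items():
    print(f"{name}: {p:,} parameters")
```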

What is TransD?

TransD stands for Knowledge Graph Embedding via Dynamic Mapping Matrix. It is an improved version of TransR. In TransD, each entity and relation has two vectors:

  1. A vector that represents the meaning of the entity or relation

  2. A projection vector used to construct mapping matrices

This means we generate a specific mapping matrix for each entity-relation pair on the fly from the entity's and the relation's projection vectors (Mrh = rp hpᵀ + I), instead of storing a full matrix per relation.
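A minimal sketch of that construction, following the paper's formula M = rp hpᵀ + I but with my own variable names:

```python
import numpy as np

def mapping_matrix(e_p, r_p):
    """TransD dynamic mapping matrix: M = r_p e_p^T + I, built on the fly
    from the entity and relation projection vectors
    (shapes: e_p is d, r_p is k, so M is k x d)."""
    k, d = r_p.shape[0], e_p.shape[0]
    return np.outer(r_p, e_p) + np.eye(k, d)

def transd_score(h, h_p, t, t_p, r, r_p, norm=2):
    """Project h and t with their own entity-and-relation-specific matrices,
    then score with the usual translation ||h_perp + r - t_perp||."""
    h_perp = mapping_matrix(h_p, r_p) @ h
    t_perp = mapping_matrix(t_p, r_p) @ t
    return np.linalg.norm(h_perp + r - t_perp, ord=norm)

# Toy usage: d-dimensional entities, k-dimensional relation space.
rng = np.random.default_rng(0)
d, k = 50, 30
h, h_p, t, t_p = (rng.normal(size=d) for _ in range(4))
r, r_p = rng.normal(size=k), rng.normal(size=k)
print(transd_score(h, h_p, t, t_p, r, r_p))
```

Because the matrix is an outer product plus the identity, it never has to be stored per relation, which is how TransD avoids TransR's parameter blow-up.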

Ryan

Data Scientist