As discussed in our previous post, there are four aspects of knowledge representation learning (KRL):

Representation space – how entities and relations are represented

Scoring function – measures plausibility of factual triples

Encoding models – representing and learning relational interactions

Auxiliary information – additional information incorporated into the embedding methods
Representation space
The question here is how entities and relations are represented. Current research focuses on four representation methods, listed below.

Pointwise space

Complex vector space

Gaussian space

Manifold space
Pointwise space
This is the most common method. TransE represents entities and relations in the same vector space, and embeddings follow the translational principle head entity + relation ≈ tail entity. TransR extends TransE by embedding entities and relations in separate vector spaces; entities are projected into the relation space using a relation-specific projection matrix.
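The translational principle can be sketched in a few lines of NumPy. This is a minimal illustration of the scoring idea, not a full implementation (no training loop, and the function names are our own):

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility of a triple (h, r, t): -||h + r - t||.
    A smaller distance means a more plausible fact, so we negate it."""
    return -np.linalg.norm(h + r - t, ord=norm)

def transr_score(h, r, t, M_r, norm=1):
    """TransR: project the entities into the relation space with the
    relation-specific matrix M_r, then apply the same translation."""
    return -np.linalg.norm(M_r @ h + r - M_r @ t, ord=norm)

# A triple that satisfies h + r = t scores 0 (the best possible value):
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
print(transe_score(h, r, t))
```

In training, these scores are typically used in a margin-based ranking loss that pushes true triples above corrupted ones.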
Complex vector space
Here, entities and relations are represented in a complex space rather than a real-valued one. For example, the head entity has a real part and an imaginary part: h = Re(h) + i·Im(h). A Hermitian dot product is then used to compose the relation, the head, and the conjugate of the tail.
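A ComplEx-style score computes the real part of that Hermitian (trilinear) product. A minimal sketch, with illustrative function names of our own:

```python
import numpy as np

def complex_score(h, r, t):
    """ComplEx-style score: Re(<r, h, conj(t)>), i.e. the real part of
    the Hermitian product of relation, head, and the tail's conjugate.
    The conjugate makes the score asymmetric in h and t, so the model
    can represent non-symmetric relations."""
    return float(np.real(np.sum(r * h * np.conj(t))))

# One-dimensional example: (1+1j) * 1 * conj(1+1j) = 2
print(complex_score(np.array([1 + 1j]), np.array([1 + 0j]), np.array([1 + 1j])))
```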
Gaussian space
KG2E (knowledge graph to embeddings) uses Gaussian distributions to deal with the uncertainty of entities and relations: the mean vector indicates the position of an entity or relation, and the covariance matrix models its uncertainty.
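One way to score a triple under Gaussian embeddings is to compare the distribution of the entity difference h − t with the relation's distribution, e.g. via KL divergence. The sketch below assumes diagonal covariances for simplicity; the function names and the diagonal restriction are our own simplifications:

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL divergence KL(N(mu0, diag(var0)) || N(mu1, diag(var1)))."""
    k = len(mu0)
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - k
                  + np.sum(np.log(var1) - np.log(var0)))

def gaussian_triple_score(h_mu, h_var, t_mu, t_var, r_mu, r_var):
    """KG2E-style score: the entity difference h - t is itself Gaussian
    with mean h_mu - t_mu and variance h_var + t_var; a plausible triple
    makes this distribution close to the relation's distribution."""
    return -kl_diag_gaussians(h_mu - t_mu, h_var + t_var, r_mu, r_var)
```

When the entity-difference distribution matches the relation distribution exactly, the KL term is zero and the score attains its maximum.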
Manifold space
A manifold is a topological space, which can be defined set-theoretically as a set of points, each with a neighbourhood. Manifold-based models embed entities on such a structure (for example, a sphere or hyperplane) rather than at a single point.
Scoring function
There are two types of scoring functions used to measure the plausibility of a fact:

Distance-based – measures the distance between entities, typically using additive translation with relations

Similarity-based – uses semantic matching with a multiplicative formulation to match the head entity to the tail entity in the representation space
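The contrast between the two families can be seen side by side: an additive, distance-based score (TransE-style) versus a multiplicative, similarity-based score (DistMult-style). A minimal sketch with our own function names:

```python
import numpy as np

def distance_score(h, r, t):
    """Distance-based (TransE-style): additive translation.
    Plausibility is the negated distance between h + r and t."""
    return -np.linalg.norm(h + r - t)

def similarity_score(h, r, t):
    """Similarity-based (DistMult-style): multiplicative semantic
    matching via an element-wise trilinear product."""
    return float(np.sum(h * r * t))

h = np.array([1.0, 2.0])
r = np.array([1.0, 1.0])
t = np.array([1.0, 2.0])
print(similarity_score(h, r, t))  # 1*1*1 + 2*1*2 = 5.0
```

Note that the DistMult-style product is symmetric in h and t, which is one motivation for complex-valued models such as the Hermitian product described above.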