A latent factor model for highly multi-relational data

Proposed a latent factor model for multi-relational datasets with possibly thousands of relations. The paper also applies the method to an NLP task to showcase its scalability and its ability to capture semantic representations. Relational data poses several challenges. First, relation types are imbalanced: some are far more common than others, so they contribute many more data points. Second, the data is typically incomplete and noisy. Third, the datasets are extremely large. The proposed model is probabilistic, so it can explicitly model the uncertainty in the data.
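As I understand the model, each relation's bilinear operator is built from a small set of rank-one matrices shared across all relations, which is what lets it scale to thousands of relation types. A minimal numpy sketch of that scoring function, with all variable names (`Y`, `U`, `V`, `A`) my own illustrative choices rather than the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

p, d = 4, 3                       # entity embedding dim, number of shared rank-one factors
Y = rng.normal(size=(10, p))      # entity embeddings (10 toy entities)
U = rng.normal(size=(d, p))       # shared left factors u_j
V = rng.normal(size=(d, p))       # shared right factors v_j
A = rng.normal(size=(5, d))       # per-relation mixture weights alpha_r (5 toy relations)

def score(s, r, o):
    # R_r = sum_j alpha_r[j] * outer(u_j, v_j): the relation operator is a
    # mixture of rank-one matrices shared across all relations.
    R_r = np.einsum("j,jp,jq->pq", A[r], U, V)
    # Bilinear score: y_s^T R_r y_o
    return Y[s] @ R_r @ Y[o]
```

Because only the `d`-dimensional mixture weights are relation-specific, rare relations can still borrow statistical strength from the shared factors.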

TuckER: Tensor factorization for knowledge graph completion

Proposed TuckER, a linear model for link prediction based on the Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER is fully expressive, comes with bounds on the embedding dimensionalities sufficient for full expressiveness, and subsumes several previously introduced linear models, which can be viewed as special cases of it. TuckER is also scalable, as its number of parameters grows linearly with the number of entities and relations.
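The TuckER score is the Tucker product of a shared core tensor with the subject, relation, and object embeddings. A hedged numpy sketch with toy dimensions (the array names `E`, `R`, `W` are my own, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

d_e, d_r = 4, 3                        # entity / relation embedding dims (toy sizes)
E = rng.normal(size=(10, d_e))         # entity embeddings
R = rng.normal(size=(5, d_r))          # relation embeddings
W = rng.normal(size=(d_e, d_r, d_e))   # core tensor shared by all triples

def tucker_score(s, r, o):
    # phi(e_s, w_r, e_o) = W x_1 e_s x_2 w_r x_3 e_o
    # (mode-n products of the core tensor with the three embedding vectors)
    return np.einsum("i,j,k,ijk->", E[s], R[r], E[o], W)
```

The parameter count is `n_e * d_e + n_r * d_r` plus the fixed-size core `d_e * d_r * d_e`, hence the linear growth in entities and relations noted above.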

Future Work

  1. Explore how to incorporate background knowledge on different relation properties into the model

LowFER: Low-rank bilinear pooling for link prediction

Proposed LowFER, a factorised bilinear pooling model that fuses entity and relation embeddings more effectively, yielding a more efficient and constraint-free model. The model is fully expressive, with bounds on the embedding dimensionality and the factorisation rank. It generalises the Tucker-decomposition-based TuckER model, providing a unified framework for linear models in knowledge graph completion (KGC).
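As I read it, the low-rank bilinear pooling works by projecting the subject and relation embeddings, fusing them with an element-wise product, and sum-pooling over rank-k chunks before scoring against the object. A minimal numpy sketch under that reading, with the names `U`, `V`, `k` chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

d_e, d_r, k = 4, 3, 2                  # embedding dims and factorisation rank k
E = rng.normal(size=(10, d_e))         # entity embeddings
R = rng.normal(size=(5, d_r))          # relation embeddings
U = rng.normal(size=(d_e, k * d_e))    # low-rank projection for entities
V = rng.normal(size=(d_r, k * d_e))    # low-rank projection for relations

def lowfer_score(s, r, o):
    # Fuse subject and relation with an element-wise (Hadamard) product
    # of their low-rank projections ...
    fused = (E[s] @ U) * (R[r] @ V)            # shape (k * d_e,)
    # ... then sum-pool each consecutive chunk of k values back to d_e
    pooled = fused.reshape(d_e, k).sum(axis=1)
    # and score against the object embedding.
    return pooled @ E[o]
```

The rank `k` trades parameters against expressiveness; the TuckER core tensor is recovered as a special case of this factorisation, which is the generalisation claimed above.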

Future Work

  1. Harder relations are still difficult to capture

  2. Studying the trade-off between parameter sharing and constraints is important: parameter sharing tends to perform better from a modelling perspective but remains limited in learning difficult relations


