Factorizing YAGO: scalable machine learning for linked data
Proposed a tensor factorisation method for relational learning on Linked Open Data (LOD). Ontological knowledge can be incorporated into the factorisation to improve learning results. The method factorised the YAGO 2 ontology and globally predicted statements for this large knowledge base using a single dual-core desktop computer. This was possible thanks to a novel approach that exploits the sparsity of LOD data during factorisation. The paper also extended RESCAL.
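A minimal sketch of the RESCAL-style model the paper builds on, using toy sizes and random data (none of this is the paper's actual implementation): each relation k is a frontal slice X_k of a 3-way tensor, factorised as A R_k A^T with shared entity embeddings A. Real implementations keep the slices sparse for scalability; this sketch uses small dense arrays for clarity.

```python
import numpy as np

# Hypothetical toy problem: 4 entities, 2 relations, latent rank 2.
n, r = 4, 2
rng = np.random.default_rng(0)

# One 0/1 adjacency slice X_k per relation (mostly zeros, mirroring LOD sparsity).
X = [(rng.random((n, n)) > 0.7).astype(float) for _ in range(2)]

A = rng.standard_normal((n, r))    # shared entity embeddings
Ap = np.linalg.pinv(A)
# Least-squares fit of each relation's core matrix R_k given fixed A
# (one half of an alternating-least-squares step).
R = [Ap @ Xk @ Ap.T for Xk in X]

def score(i, k, j):
    """Predicted truth value of the triple (entity_i, relation_k, entity_j)."""
    return float(A[i] @ R[k] @ A[j])
```

Because A is shared across all slices, evidence from one relation informs predictions in every other relation, which is what enables the global statement prediction described above.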
Future Work

Reduce the effective sparsity of the tensor representation by adding typed relations to the factorisation

Use efficient methods to find good parameter values, as cross-validation on large-scale data is costly; scalable Bayesian methods could potentially help with this
Reasoning with neural tensor networks for knowledge base completion
Introduced the Neural Tensor Network (NTN) for reasoning over relationships between two entities. The contributions are as follows:

Proposed the NTN for modelling relational data; it also generalises several previous neural network models

Introduced a new way to represent entities: each entity is represented as the average of its word vectors, capturing the sharing of words between entities

Showed that the models improve when word vectors are initialised with vectors learned from a large unsupervised corpus
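A toy sketch of the two ideas above (all dimensions, vectors, and parameters here are made up and randomly initialised, not the paper's trained model): an entity vector is the mean of its word vectors, so multi-word entities that share a word share part of their representation, and the NTN scores an entity pair with bilinear tensor slices plus a standard feed-forward term.

```python
import numpy as np

d, k = 4, 2  # word-vector dimension and number of tensor slices (toy sizes)
rng = np.random.default_rng(0)

# Toy word vectors; e.g. "african elephant" and "asian elephant" would both
# reuse the "elephant" vector, which is the point of the averaging scheme.
words = {w: rng.standard_normal(d) for w in ["african", "asian", "elephant"]}

def entity_vec(name):
    """Represent an entity as the average of its word vectors."""
    return np.mean([words[w] for w in name.split()], axis=0)

# Randomly initialised NTN parameters for a single relation.
W = rng.standard_normal((k, d, d))   # bilinear tensor slices
V = rng.standard_normal((k, 2 * d))  # standard-layer weights
b = rng.standard_normal(k)           # bias
u = rng.standard_normal(k)           # output weights

def ntn_score(e1, e2):
    """g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    return float(u @ np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b))
```

In the paper, pretrained word vectors (the unsupervised-initialisation point above) replace the random `words` entries, and the parameters are learned per relation.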
Future Work

Scale the number of tensor slices based on the training data available for each relation

Extend the paper’s ideas to reason over free text