Embedding with Auxiliary Information

Multi-modal embeddings incorporate external information, such as textual descriptions, type constraints, and relational paths, alongside the knowledge graph to build a more effective knowledge representation. Three common kinds of auxiliary information are:

  1. Textual description

  2. Type information

  3. Visual information

Textual Description

Each entity in the knowledge graph may come with textual information, such as a name or description. The challenge is to embed this unstructured text and the structured knowledge into the same space. Research on alignment models tackles this by aligning the entity space and the word space using entity names and Wikipedia anchors.
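As a toy illustration of the alignment idea, the sketch below (plain NumPy, with made-up entity and word vectors standing in for learned embeddings) minimises the distance between an entity's structural embedding and the mean of its description's word vectors, pulling both into one shared space. This is a simplified stand-in for the alignment models mentioned above, not any specific published method:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical word and entity vectors (random stand-ins for learned ones).
word_vecs = {w: rng.normal(size=dim) for w in ["capital", "france", "city"]}
entity_vecs = {"Paris": rng.normal(size=dim)}

def describe(words):
    """Encode a textual description as the mean of its word vectors."""
    return np.mean([word_vecs[w] for w in words], axis=0)

def alignment_loss(entity, words):
    """Squared distance between an entity's structural embedding and its
    description embedding; minimising it places both in one space."""
    diff = entity_vecs[entity] - describe(words)
    return float(np.dot(diff, diff))

# One gradient step pulls the entity vector toward its description.
lr = 0.1
before = alignment_loss("Paris", ["capital", "france"])
grad = 2 * (entity_vecs["Paris"] - describe(["capital", "france"]))
entity_vecs["Paris"] -= lr * grad
after = alignment_loss("Paris", ["capital", "france"])
assert after < before
```

In a real alignment model both the word vectors and the entity vectors would be trained jointly, with this alignment term added to the structural triple-scoring objective.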

Type Information

Entities are organised into hierarchical classes or types, and relations can likewise carry semantic types. For example, SSE incorporates the semantic categories of entities into their embeddings so that entities belonging to the same category lie close to each other in the semantic space.
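The intuition behind SSE can be sketched as a smoothness penalty over a handful of hypothetical entities: pairs that share a semantic category are pulled together by gradient steps on the penalty. This is a simplified Laplacian-style term for illustration, not SSE's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# Toy entities with hypothetical categories.
embeddings = {e: rng.normal(size=dim) for e in ["Paris", "Berlin", "Seine"]}
entity_type = {"Paris": "city", "Berlin": "city", "Seine": "river"}

def type_smoothness_penalty():
    """Sum of squared distances over every pair of entities sharing a
    category; small values mean same-type entities sit close together."""
    penalty = 0.0
    names = list(embeddings)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if entity_type[a] == entity_type[b]:
                diff = embeddings[a] - embeddings[b]
                penalty += float(np.dot(diff, diff))
    return penalty

# Gradient steps on the penalty pull the two cities toward each other,
# leaving the river untouched.
lr = 0.1
start = type_smoothness_penalty()
for _ in range(20):
    grad = 2 * (embeddings["Paris"] - embeddings["Berlin"])
    embeddings["Paris"] -= lr * grad
    embeddings["Berlin"] += lr * grad
assert type_smoothness_penalty() < start
```

In practice this penalty would be added to the main embedding loss, so the model trades off triple plausibility against semantic smoothness.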

Visual Information

Images related to entities can be used to enrich knowledge representation learning.
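One common way to use images, sketched below with a random vector standing in for a CNN feature, is to learn a linear projection from image-feature space into the entity space so that the projected image embedding matches the entity's structural embedding. The names and dimensions here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
img_dim, ent_dim = 16, 8

# Stand-in for a CNN feature of an entity's image, and the entity's
# structural embedding (both random for this sketch).
image_feature = rng.normal(size=img_dim)
entity_vec = rng.normal(size=ent_dim)

# Learnable projection from image space into entity space.
W = rng.normal(size=(ent_dim, img_dim)) * 0.01

def image_embedding():
    return W @ image_feature

def fusion_loss():
    """Squared distance between projected image and entity embedding."""
    diff = image_embedding() - entity_vec
    return float(np.dot(diff, diff))

# Gradient descent on W: dL/dW = 2 (W x - e) x^T.
lr = 0.01
before = fusion_loss()
for _ in range(50):
    grad = np.outer(2 * (image_embedding() - entity_vec), image_feature)
    W -= lr * grad
assert fusion_loss() < before
```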

Check out Wang et al. for a more detailed review of the different kinds of auxiliary information! Below are the different KRL models!

Overall, to develop a novel KRL model, you need to answer four questions:

  1. Which representation space to choose?

  2. How to measure the plausibility of triples in that space?

  3. Which encoding model to use for the relational interactions?

  4. Should we utilise auxiliary information? If so, how?
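As a minimal end-to-end illustration of answering these questions, here is a TransE-style sketch on toy data (plain NumPy; the entities and relation are hypothetical): real-valued vectors as the representation space, negative translational distance as the plausibility measure, and no auxiliary information:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 8

# (1) Representation space: real-valued vectors in R^d.
ent = {e: rng.normal(size=dim) for e in ["Paris", "France"]}
rel = {"capital_of": rng.normal(size=dim)}

# (2) Plausibility of a triple (h, r, t): negative distance -||h + r - t||,
#     so higher scores mean more plausible triples.
def score(h, r, t):
    return -float(np.linalg.norm(ent[h] + rel[r] - ent[t]))

# (3) Encoding model: translational distance (h + r should approach t),
#     trained here by plain gradient descent on one observed triple.
lr = 0.05
for _ in range(200):
    grad = ent["Paris"] + rel["capital_of"] - ent["France"]
    ent["Paris"] -= lr * grad
    rel["capital_of"] -= lr * grad
    ent["France"] += lr * grad

# (4) Auxiliary information: none in this sketch; the text, type, and
#     visual terms above could each be added to this objective.
assert score("Paris", "capital_of", "France") > -1e-3
```

A real TransE implementation would additionally sample corrupted triples for a margin-based ranking loss and normalise entity vectors, which this sketch omits for brevity.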


