What are the different types of reasoning?

One type of reasoning is straightforward: you are given an input and you predict or classify it to generate the desired output. Another, more complicated, type of reasoning requires multiple steps. For example, answering a user’s query may require first retrieving relevant documents, then selecting the top documents, and finally identifying the answer spans within the selected documents.
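
As a concrete illustration, here is a minimal sketch of such a multi-step pipeline. The scoring and span-extraction functions are toy placeholders (word overlap and capitalized-token heuristics), not a real retrieval system:

```python
# Toy multi-step reasoning pipeline: retrieve -> select -> extract.
documents = [
    "Microsoft is headquartered in Seattle.",
    "Bill Gates co-founded Microsoft.",
]

def retrieve(query, docs, top_k=1):
    """Step 1 + 2: score documents by word overlap with the query, keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def extract_answer(query, doc):
    """Step 3 (toy span extraction): capitalized tokens not already in the query."""
    q_words = set(query.lower().split())
    return [w.strip(".") for w in doc.split()
            if w[0].isupper() and w.lower().strip(".") not in q_words]

query = "Where is Microsoft headquartered?"
top_docs = retrieve(query, documents)
print(extract_answer(query, top_docs[0]))  # ['Seattle']
```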

What is a knowledge base?

A knowledge base is a graph whose nodes are entities and whose edges are the relations between those entities. It is built from multiple information sources.
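
For instance, a minimal in-memory sketch stores the graph as (head, relation, tail) triples, with entities as nodes and relations as labeled edges:

```python
# A knowledge base as a set of (head entity, relation, tail entity) triples.
triples = {
    ("Bill", "spouse_of", "Melinda"),
    ("Bill", "ceo_of", "Microsoft"),
    ("Microsoft", "located_in", "Seattle"),
}

# Nodes are the entities; edges are the labeled relations between them.
entities = {h for h, _, _ in triples} | {t for _, _, t in triples}

def relations_of(entity):
    """All outgoing edges from a given entity node (order may vary)."""
    return [(r, t) for h, r, t in triples if h == entity]

print(entities)
print(relations_of("Bill"))  # e.g. [('spouse_of', 'Melinda'), ('ceo_of', 'Microsoft')]
```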

What is an “open schema” knowledge base?

Many knowledge bases group entities into a predefined schema or set of categories. However, these schemas or categories are sometimes too fine-grained or too generic. The same can be said of the relations between entities, since a pair of entities may have more than one relation. An open schema knowledge base is the idea of building a knowledge base whose schema or categories remain flexible while preserving the entity-relation structure.

How do you build a knowledge base?

  1. Find the entity mentions within the source text
  2. Use entity resolution and entity linking to link all the entity mentions together
  3. Use your predefined schema to categorize your entity mentions into different entity types and your entity pairs into different relation types (sketched below)
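
A minimal end-to-end sketch of these three steps, using a toy dictionary in place of real mention-detection, entity-resolution, and typing models (the Wikidata-style IDs are illustrative only):

```python
# Toy knowledge-base construction: mention finding, linking, and typing.
text = "Bill Gates founded Microsoft. Gates later left Microsoft."

# Steps 1 + 2: find surface mentions and map them to canonical entity IDs,
# so that "Bill Gates" and "Gates" resolve to the same entity.
mention_to_entity = {"Bill Gates": "Q5284", "Gates": "Q5284", "Microsoft": "Q2283"}
found = [(m, eid) for m, eid in mention_to_entity.items() if m in text]

# Step 3: assign entity types from a predefined schema.
entity_types = {"Q5284": "PERSON", "Q2283": "ORGANIZATION"}
typed = [(mention, eid, entity_types[eid]) for mention, eid in found]

print(typed)
# [('Bill Gates', 'Q5284', 'PERSON'), ('Gates', 'Q5284', 'PERSON'),
#  ('Microsoft', 'Q2283', 'ORGANIZATION')]
```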

What is multi-hop reasoning?

Querying the knowledge base for a new, unseen relation between two entities requires the knowledge base to perform multi-step reasoning. For example, to infer the unseen relation between Melinda and Seattle, you can walk a multi-step path: we know that Melinda is the spouse of Bill, Bill is the CEO of Microsoft, and Microsoft is located in Seattle, so the relation between Melinda and Seattle can be predicted as “lives in”.
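
A minimal sketch of this symbolic, path-walking approach over toy triples; the inference rule mapping the path to “lives_in” is hard-coded for illustration:

```python
# Multi-hop reasoning by walking relation paths between entities.
triples = {
    ("Melinda", "spouse_of", "Bill"),
    ("Bill", "ceo_of", "Microsoft"),
    ("Microsoft", "located_in", "Seattle"),
}

def paths(start, end, hops):
    """Enumerate relation paths of exactly `hops` steps from start to end."""
    frontier = [(start, [])]
    for _ in range(hops):
        frontier = [(t, path + [r]) for node, path in frontier
                    for h, r, t in triples if h == node]
    return [path for node, path in frontier if node == end]

# A simple rule: this particular 3-hop path implies "lives_in".
rules = {("spouse_of", "ceo_of", "located_in"): "lives_in"}

for path in paths("Melinda", "Seattle", hops=3):
    print(path, "->", rules.get(tuple(path), "unknown"))
# ['spouse_of', 'ceo_of', 'located_in'] -> lives_in
```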

There are many methods for performing multi-hop reasoning. There is the symbolic method described above, and there is the embedding method, where we feed nearby relations and entities into a deep neural network and output a relation vector between the target entities.
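
As a concrete instance of the embedding approach, here is a TransE-style sketch (one common embedding method, not necessarily the exact architecture meant above), where a relation acts as a translation vector between entity embeddings. The embeddings here are random; with trained embeddings, the true relation would score lowest:

```python
import numpy as np

# TransE-style scoring: relation r holds between h and t when
# embedding(h) + embedding(r) is close to embedding(t).
rng = np.random.default_rng(0)
dim = 8
entity_emb = {e: rng.normal(size=dim) for e in ["Melinda", "Seattle"]}
relation_emb = {r: rng.normal(size=dim) for r in ["lives_in", "ceo_of"]}

def score(head, relation, tail):
    """Lower distance means the triple is more plausible."""
    return np.linalg.norm(entity_emb[head] + relation_emb[relation] - entity_emb[tail])

for r in relation_emb:
    print(r, score("Melinda", r, "Seattle"))
```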

What’s the problem with word embeddings / vector representation?

They represent each word as a single point, which can be misleading. For example, the words “rabbit” and “mammal” are each represented by a single point/vector, yet “mammal” is the broader term. A point representation doesn’t naturally capture a region or category. In addition, the typical operation between vectors is the dot product, which is symmetric and therefore cannot model asymmetric relations such as “a rabbit is a mammal.”
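
A quick numeric illustration of the symmetry problem (the vectors are toy values, purely for demonstration):

```python
import numpy as np

rabbit = np.array([0.9, 0.1, 0.3])
mammal = np.array([0.8, 0.2, 0.4])

# The dot product is symmetric, so it scores "rabbit is-a mammal" and
# "mammal is-a rabbit" identically; it cannot encode the direction of entailment.
print(rabbit @ mammal == mammal @ rabbit)  # True
```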

How do we alleviate the problems defined in the previous question?

Instead of representing words with a point/vector, one option is a Gaussian representation, where the word “mammal” gets a wide region that encapsulates the different animals belonging to that category and the word “rabbit” gets a smaller region. However, Gaussian representations aren’t closed under intersection: the intersection of two Gaussians is not itself a Gaussian.

That leads us to the cone representation, where each word is a point that is extended into a cone-shaped region. However, cone representations struggle with disjointness: as your vocabulary grows, words that shouldn’t intersect eventually end up intersecting each other.

What is a box representation?

The Gaussian and cone representations both have their weaknesses. The box representation aims to provide all four desired properties: regions, asymmetry, disjointness, and closure under intersection. It associates each concept with an n-dimensional box.
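
A minimal sketch of the idea, representing each concept as an axis-aligned box (a per-dimension min/max pair). It shows containment, which is asymmetric, and that intersecting two boxes yields another box, i.e. closure under intersection:

```python
import numpy as np

# Each concept is an axis-aligned box: a (min, max) pair per dimension.
mammal = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))  # broad concept: large box
rabbit = (np.array([0.2, 0.3]), np.array([0.4, 0.6]))  # narrow concept: small box inside

def contains(outer, inner):
    """Asymmetric: mammal contains rabbit, but not vice versa."""
    return bool(np.all(outer[0] <= inner[0]) and np.all(inner[1] <= outer[1]))

def intersect(a, b):
    """The intersection of two boxes is itself a box (closure under intersection)."""
    lo, hi = np.maximum(a[0], b[0]), np.minimum(a[1], b[1])
    return (lo, hi) if np.all(lo <= hi) else None  # None means the boxes are disjoint

print(contains(mammal, rabbit))   # True
print(contains(rabbit, mammal))   # False
print(intersect(mammal, rabbit))  # the rabbit box itself
```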
