Cooperative denoising for distantly supervised relation extraction

Proposed CORD, a COopeRative Denoising framework that consists of two base networks: one leveraging the text corpus (Corpus-Net) and one leveraging the knowledge graph (KG-Net). Both are modelled as GRU networks that predict relations from the word sequence and the entity sequence, respectively. The key insight is that the base networks learn complementary information by training on different sources, so cooperative learning can benefit from complementary expressions of the same relational fact. The contributions are:

  1. Explored the feasibility of distantly supervised relation extraction using different sources of information cooperatively

  2. Devised a bidirectional knowledge distillation mechanism in which the two base networks enhance each other

  3. Designed an adaptive imitation-rate schedule and a dynamic ensemble strategy to guide the training procedure and to handle instances with varying noise (see the sketch after this list)
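A minimal PyTorch sketch of the cooperative setup is below, assuming one GRU encoder per source and a shared relation label space. The names (`GRUBaseNet`, `cord_step`, `dynamic_ensemble`) and the confidence-weighted ensemble are illustrative stand-ins rather than the paper's exact formulation; in particular, CORD adapts the imitation rate during training, which is abbreviated here to a scalar argument.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUBaseNet(nn.Module):
    """One base network: a GRU encoder over a token sequence plus a relation
    classifier. Corpus-Net would consume word sequences; KG-Net entity sequences."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_relations):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_relations)

    def forward(self, token_ids):
        _, h_n = self.gru(self.embed(token_ids))  # h_n: (1, batch, hidden)
        return self.classifier(h_n.squeeze(0))    # relation logits

def cord_step(corpus_net, kg_net, word_seq, entity_seq, labels, imitation_rate):
    """One cooperative training step: each network fits the distant labels and
    softly imitates the other's prediction (bidirectional knowledge distillation)."""
    logits_c = corpus_net(word_seq)
    logits_k = kg_net(entity_seq)
    # supervised loss against the (possibly noisy) distant labels
    ce = F.cross_entropy(logits_c, labels) + F.cross_entropy(logits_k, labels)
    # bidirectional distillation: KL between the two predictive distributions;
    # detaching the target keeps each network from chasing a moving gradient
    kl_c = F.kl_div(F.log_softmax(logits_c, dim=-1),
                    F.softmax(logits_k, dim=-1).detach(), reduction="batchmean")
    kl_k = F.kl_div(F.log_softmax(logits_k, dim=-1),
                    F.softmax(logits_c, dim=-1).detach(), reduction="batchmean")
    return ce + imitation_rate * (kl_c + kl_k)

def dynamic_ensemble(logits_c, logits_k):
    """Inference-time ensemble: weight each network by its own prediction
    confidence so the more certain network dominates on a given instance."""
    p_c, p_k = F.softmax(logits_c, -1), F.softmax(logits_k, -1)
    w_c = p_c.max(-1, keepdim=True).values
    w_k = p_k.max(-1, keepdim=True).values
    return (w_c * p_c + w_k * p_k) / (w_c + w_k)
```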

Hybrid attention-based prototypical networks for noisy few-shot relation classification

Proposed hybrid attention-based prototypical networks for noisy few-shot relation classification. The method proceeds in two steps:

  1. Use a neural encoder to embed all instances in the support set and compute a prototype (feature vector) for each relation from these instance embeddings

  2. Measure the distance between the query instance embedding and each relation prototype, and classify the entity pair mentioned in the query with the nearest relation

The model is designed to alleviate the noisy-data and sparse-feature problems commonly found in relation extraction datasets.
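The "hybrid" attention combines instance-level attention (down-weighting noisy support instances when forming prototypes) with feature-level attention (emphasising discriminative embedding dimensions in the distance). A minimal sketch of that idea is below; it assumes precomputed support and query embeddings, and the inverse-variance feature weighting is a stand-in for the small network the paper learns, so treat it as an approximation rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def prototypes_with_instance_attention(support, query):
    """support: (N_relations, K_shots, D) support embeddings from some encoder;
    query: (D,) query embedding. Instance-level attention scores each support
    instance by similarity to the query, so noisy instances that do not
    resemble the query contribute less to the prototype."""
    scores = torch.einsum("nkd,d->nk", support, query)  # (N, K)
    alpha = F.softmax(scores, dim=-1).unsqueeze(-1)     # (N, K, 1)
    return (alpha * support).sum(dim=1)                 # (N, D) prototypes

def feature_level_distance(prototypes, query, support):
    """Feature-level attention: dimensions on which a relation's support
    instances agree (low variance) are treated as more discriminative and
    weighted more heavily in the distance (inverse-variance heuristic)."""
    weight = 1.0 / (support.var(dim=1, unbiased=False) + 1e-6)  # (N, D)
    return (weight * (prototypes - query) ** 2).sum(dim=-1)    # (N,) distances

def classify(support, query):
    """Pick the relation whose attended prototype is nearest to the query."""
    protos = prototypes_with_instance_attention(support, query)
    return feature_level_distance(protos, query, support).argmin()
```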

Future Work

  1. Combine the hybrid attention schemes with other few-shot learning models and adopt more neural encoders to better generalise the model

One-shot relational learning for knowledge graphs

The first work to formulate link prediction for long-tail relations as a few-shot relational learning problem. Proposed a one-shot learning framework for relational data that learns a matching metric from both the learned knowledge embeddings and one-hop graph structures. The paper also presented two new datasets for one-shot knowledge graph completion.
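A rough sketch of the matching idea, assuming pretrained entity and relation embeddings: each entity is encoded from its own embedding plus an aggregate of its one-hop (relation, neighbour) edges, and a candidate query pair is scored against the single reference pair of the new relation. `NeighborEncoder` and the cosine-similarity matcher are illustrative; the paper's matching network uses a recurrent matching step rather than plain cosine similarity.

```python
import torch
import torch.nn as nn

class NeighborEncoder(nn.Module):
    """Encode an entity from its embedding plus an aggregate of its one-hop
    (relation, neighbour) pairs, so the matching metric sees local graph
    structure and not just the embedding."""
    def __init__(self, num_entities, num_relations, dim):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, entity, neighbors):
        # neighbors: (M, 2) one-hop edges as (relation_id, neighbor_entity_id)
        rel_emb = self.rel(neighbors[:, 0])
        nbr_emb = self.ent(neighbors[:, 1])
        agg = torch.tanh(self.proj(torch.cat([rel_emb, nbr_emb], dim=-1))).mean(0)
        return self.ent(entity) + agg  # structure-enhanced entity embedding

def match_score(encoder, ref_pair, ref_nbrs, query_pair, query_nbrs):
    """Score a candidate (head, tail) query pair against the one reference
    pair of a new relation by similarity of their pair representations."""
    def pair_repr(pair, nbrs):
        head = encoder(pair[0], nbrs[0])
        tail = encoder(pair[1], nbrs[1])
        return torch.cat([head, tail], dim=-1)
    return torch.cosine_similarity(pair_repr(ref_pair, ref_nbrs),
                                   pair_repr(query_pair, query_nbrs), dim=0)
```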

Future Work

  1. Consider incorporating external text data to enhance the model and make better use of multiple training examples in few-shot learning
