Fake news! Fake news is becoming an increasingly important problem as NLP technologies continue to advance. I thought it would be a good idea to start exploring the space and learn more about it 🙂 As usual, you start with Google.

What is fake news?

Fake news is news that is factually wrong, misrepresents the facts, and/or spreads virally. The challenge is that fake news mimics the style and patterns of real news, making it hard to distinguish from the genuine article. Neural fake news is fake news that has been generated by a neural network.

How to detect neural fake news?

There are three main methods:

  1. Fact-Checking

  2. Statistical Analysis using GLTR (HarvardNLP)

  3. ML models

Describe fact checking.

This is the common-sense step. When dealing with news, consider the source, check for biases and supporting sources, and ask whether trustworthy sites recommend the story. This simple method is quite effective, but it doesn't hold up well against machine-generated text.

Describe GLTR.

GLTR stands for Giant Language model Test Room. Below is a figure of the UI. GLTR's main idea is to detect generated text using the same (or a similar) model as the one that generated the fake news in the first place. The rationale is that a language model generates words by sampling from the probability distribution it has learnt from its training data. So if the words in a news article follow a distribution similar to the language model's, we can suspect the article is machine-generated. The typical flow is as follows:

  1. You feed text into the GLTR

  2. GLTR will take this input and analyse what a language model would have predicted for each position of the input

  3. Compare the input with what the language model predicted. We look at the top 10 ranked words at each time step: if the input words mostly fall within these top predictions, the text is likely machine-generated; otherwise, we can classify it as human-written

As shown in the figure above, GLTR displays three histograms. The first (left) shows how many words of each category appear in the text. The second (middle) shows the ratio between the probability of the top predicted word and the probability of the word that actually appears. The third (right) shows the distribution over the entropies of the predictions. The first two histograms reveal whether the words in the input are sampled from the top of the distribution (as they tend to be when the input is machine-generated), while the last tells you whether the model finds the context well-known or rare.
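The per-token rank check at the heart of GLTR can be sketched in a few lines. This is a toy illustration only: the "language model" here is a hand-made bigram dictionary I invented for the example, whereas the real tool queries GPT-2 or a similar model for its predictions at every position.

```python
# Illustrative sketch of GLTR's per-token rank check, using a toy
# bigram "model" (a hand-made dict, NOT a real language model).
# Hypothetical data: for each previous word, a ranked list of likely
# next words, most probable first.
TOY_MODEL = {
    "<s>": ["the", "a", "breaking"],
    "the": ["president", "government", "cat"],
    "president": ["said", "announced", "slept"],
}

def token_ranks(tokens, model, prev="<s>"):
    """Rank of each observed token under the model's predictions
    (0 = model's top guess; None = not among the model's candidates)."""
    ranks = []
    for tok in tokens:
        candidates = model.get(prev, [])
        ranks.append(candidates.index(tok) if tok in candidates else None)
        prev = tok
    return ranks

def top_k_fraction(ranks, k=10):
    """Fraction of tokens inside the model's top-k predictions.
    A fraction close to 1.0 suggests machine-generated text."""
    hits = sum(1 for r in ranks if r is not None and r < k)
    return hits / len(ranks)

# A sequence that always picks the model's top prediction looks "machine-like":
ranks = token_ranks(["the", "president", "said"], TOY_MODEL)
print(ranks)                  # [0, 0, 0]
print(top_k_fraction(ranks))  # 1.0
```

Human-written text tends to wander outside the model's top candidates, so its `top_k_fraction` is noticeably lower, which is exactly what GLTR's first two histograms visualise.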

Describe GPT-2 Detector model.

Here we treat fake news detection as a binary classification problem: has the input text been machine-generated or not? The GPT-2 detector model is a RoBERTa model fine-tuned for this classification task. It achieves very high accuracy and is extremely fast; however, it tends to work well only on text generated by GPT-2.

Describe Grover.

Grover is a tool from the Allen Institute for AI and the University of Washington. It can identify fake machine-generated text produced by many different language models (unlike the GPT-2 detector model). The idea behind Grover is that for a model to detect fake news really well, it must itself be good at generating fake news. The setup for Grover is as follows:

  1. Two models work in tandem to detect generated text

  2. The adversarial model is used to generate fake news

  3. The verifier model is used to classify whether a given text is real or fake. Its training data is heavily imbalanced towards real news, imitating the real-life situation where fake news is comparatively rare

Grover has the same architecture as GPT-2 and is trained on the RealNews dataset.
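The imbalanced training setup for the verifier can be sketched as follows. The article texts and the 98:2 real-to-fake ratio are made up for illustration; the point is only that the verifier sees far more real news than fake, as described above.

```python
# Sketch of building a verifier training set that is heavily imbalanced
# towards real news. Articles and the fake_fraction value are invented
# for illustration, not taken from the Grover paper.
import random

def make_training_set(real_articles, fake_articles,
                      fake_fraction=0.02, size=100, seed=0):
    """Sample `size` labeled examples, roughly `fake_fraction` of them fake."""
    rng = random.Random(seed)
    n_fake = int(size * fake_fraction)
    n_real = size - n_fake
    examples = (
        [(rng.choice(real_articles), "real") for _ in range(n_real)] +
        [(rng.choice(fake_articles), "fake") for _ in range(n_fake)]
    )
    rng.shuffle(examples)  # mix real and fake examples together
    return examples

real = ["local election results certified", "city opens new library"]
fake = ["aliens endorse candidate, sources say"]
train_set = make_training_set(real, fake)
labels = [label for _, label in train_set]
print(labels.count("real"), labels.count("fake"))  # 98 2
```

Training on such a skewed split forces the verifier to cope with the base rates it would face in the wild, where genuine articles vastly outnumber generated ones.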

Ryan

Data Scientist