BERT Fake News Classifier

Here, I repeat the fake news classification task, this time using Hugging Face and PyTorch to fine-tune BERT. Note that the code was run on Google Colab.

We have collected three datasets: one is used for training and validation, and the other two are used for evaluation.

The training process:

  1. Load the data
  2. Simple data analysis
  3. Fill all missing values with empty strings
  4. Concatenate the title and the body of the news article
  5. Train/validation split the training data
  6. Initialise the BERT tokeniser and encode the text into input IDs and attention masks
  7. Batch the training and validation data using DataLoader
  8. Fine-tune BERT on the training set and evaluate it on the validation set
  9. Evaluate the final model on the two other news datasets

The main output of this pipeline is a trained BERT fake news classifier!

Mount the drive

In [ ]:
from google.colab import drive
drive.mount('/content/gdrive')

Import Dependencies + Read Data

In [ ]:
import os
import re
from tqdm import tqdm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

%matplotlib inline
In [81]:
root_dir = "/content/gdrive/My Drive/"
In [38]:
train_df = pd.read_csv(root_dir + 'fake-news/train.csv')
test_df = pd.read_csv(root_dir + 'fake-news/test.csv')
In [39]:
train_df.head()
Out[39]:
id title author text label
0 0 House Dem Aide: We Didn’t Even See Comey’s Let… Darrell Lucus House Dem Aide: We Didn’t Even See Comey’s Let… 1
1 1 FLYNN: Hillary Clinton, Big Woman on Campus – … Daniel J. Flynn Ever get the feeling your life circles the rou… 0
2 2 Why the Truth Might Get You Fired Consortiumnews.com Why the Truth Might Get You Fired October 29, … 1
3 3 15 Civilians Killed In Single US Airstrike Hav… Jessica Purkiss Videos 15 Civilians Killed In Single US Airstr… 1
4 4 Iranian woman jailed for fictional unpublished… Howard Portnoy Print \nAn Iranian woman has been sentenced to… 1
In [40]:
train_df['label'].value_counts()
Out[40]:
1    10413
0    10387
Name: label, dtype: int64
In [47]:
len(train_df)
Out[47]:
20800

Data Processing + Train Val Split

In [41]:
# data prep
train_df = train_df.fillna(' ')
test_df = test_df.fillna(' ')
In [42]:
train_df['combined'] = train_df['title'] + ' ' + train_df['text']
In [43]:
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(train_df['combined'], train_df['label'], test_size = 0.2, random_state = 123)
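As a quick sanity check (my addition, not in the original notebook), the split sizes and label balance can be inspected; the commented-out stratified variant is an assumed alternative, not what the notebook ran:

# Hypothetical sanity check on the 80/20 split
print(len(X_train), len(X_val))      # 16640 and 4160 rows
print(y_train.mean(), y_val.mean())  # share of label 1 in each split

# The labels are nearly balanced, so a plain split is fine here; if exact
# class proportions mattered, stratify could be passed to train_test_split:
# train_test_split(train_df['combined'], train_df['label'],
#                  test_size = 0.2, random_state = 123,
#                  stratify = train_df['label'])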

Set up PyTorch + Install Dependencies

In [44]:
import torch

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
In [45]:
torch.cuda.get_device_name()
Out[45]:
'Tesla K80'
In [14]:
# !pip install transformers

Initialise BERT Tokeniser and Text Processing

In [46]:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case = True)
In [49]:
def bert_preprocessing(data):

  # Initialise empty arrays
  input_ids = []
  attention_masks = []

  # Encode_plus with above processing
  for sent in data:
    encoded_sent = tokenizer.encode_plus(
        text = sent,
        add_special_tokens = True,
        max_length = MAX_LEN,
        pad_to_max_length = True,    # older API; newer transformers versions use padding = 'max_length'
        return_attention_mask = True,
        truncation = True
    )

    input_ids.append(encoded_sent.get('input_ids'))
    attention_masks.append(encoded_sent.get('attention_mask'))
  
  # Convert list to tensors
  input_ids = torch.tensor(input_ids)
  attention_masks = torch.tensor(attention_masks)

  return input_ids, attention_masks
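To make the encoding concrete, here is a small illustrative check on a made-up sentence (the sentence and the max_length of 12 are my own, chosen for display purposes):

demo = tokenizer.encode_plus(
    text = 'Breaking news: this headline is fake',
    add_special_tokens = True,
    max_length = 12,
    pad_to_max_length = True,
    return_attention_mask = True,
    truncation = True
)
print(demo['input_ids'])       # token IDs, zero-padded to length 12
print(demo['attention_mask'])  # 1 for real tokens, 0 for padding
print(tokenizer.convert_ids_to_tokens(demo['input_ids']))
# ['[CLS]', 'breaking', 'news', ':', 'this', 'headline', 'is', 'fake', '[SEP]', '[PAD]', '[PAD]', '[PAD]']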
In [51]:
np.mean([len(x.split(' ')) for x in train_df['combined']])  # average word count per article
Out[51]:
785.3571634615384
In [53]:
# Articles average ~785 words, so MAX_LEN = 100 truncates most of them;
# this trades context for Colab-friendly memory and training time
MAX_LEN = 100

train_inputs, train_masks = bert_preprocessing(X_train)
val_inputs, val_masks = bert_preprocessing(X_val)

Batching Training and Validation Data Using DataLoader

In [54]:
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler

train_labels = torch.tensor(y_train.values)
val_labels = torch.tensor(y_val.values)

batch_size = 32
In [55]:
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler = train_sampler, batch_size = batch_size)
In [56]:
val_data = TensorDataset(val_inputs, val_masks, val_labels)
val_sampler = SequentialSampler(val_data)  # no need to shuffle the validation set
val_dataloader = DataLoader(val_data, sampler = val_sampler, batch_size = batch_size)
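The excerpt stops at the DataLoaders, so here is a rough sketch of what the fine-tuning step could look like with these batches. The model head, learning rate, epoch count, and gradient clipping below are illustrative assumptions, not settings confirmed by the notebook:

from transformers import BertForSequenceClassification
from torch.optim import AdamW

# Binary classification head on top of pre-trained BERT (illustrative settings)
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 2)
model.to(device)

optimizer = AdamW(model.parameters(), lr = 2e-5)  # assumed learning rate
epochs = 2                                        # assumed epoch count

for epoch in range(epochs):
    # Training pass
    model.train()
    total_loss = 0
    for batch in tqdm(train_dataloader):
        input_ids, masks, labels = (t.to(device) for t in batch)
        model.zero_grad()
        outputs = model(input_ids, attention_mask = masks, labels = labels)
        loss = outputs[0]
        total_loss += loss.item()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # stabilise training
        optimizer.step()
    print(f'Epoch {epoch + 1} train loss: {total_loss / len(train_dataloader):.4f}')

    # Validation pass
    model.eval()
    correct = 0
    with torch.no_grad():
        for batch in val_dataloader:
            input_ids, masks, labels = (t.to(device) for t in batch)
            logits = model(input_ids, attention_mask = masks)[0]
            correct += (torch.argmax(logits, dim = 1) == labels).sum().item()
    print(f'Epoch {epoch + 1} val accuracy: {correct / len(val_data):.4f}')

From there, the best checkpoint would be evaluated on the two held-out news datasets, as outlined at the top of the post.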