What are Graph Neural Networks (GNNs)?
GNNs were first proposed in [33]. A GNN is a type of deep learning method that captures the dependencies in a graph using the relations between its nodes. Graph analysis focuses on tasks such as node classification, link prediction, and clustering. A GNN can retain a state that represents information from a node's neighbourhood up to a defined depth.
For a GNN, the input is each node represented by its features and related nodes, and the output is a state embedding that contains information about the neighbourhood of each node. This state embedding can be used for many tasks, one of which is predicting the node label. The learning algorithm is based on gradient descent. There are a few limitations of GNNs:

They update the hidden states of nodes iteratively until a fixed point is reached, which is inefficient, and the fixed-point assumption prevents us from building a multi-layer GNN

GNNs use the same parameters in all iterations, in contrast to traditional NNs, which use different parameters in different layers

GNNs fail to capture edge features such as the type of relation. Depending on the sequence of edges (relations) along a path, these should be modelled differently and used as additional features

It is unsuitable to use fixed points if we focus on the representation of nodes instead of graphs, because fixed points are less informative for distinguishing between nodes
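The iterative fixed-point update described above can be sketched in a few lines. This is a minimal illustration, not the original formulation: the toy 3-node graph, the feature dimensions, and the single shared weight matrix W are all made-up choices, with W scaled down so the update is a contraction and the iteration actually converges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-node star graph (hypothetical example data).
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # adjacency matrix
X = rng.normal(size=(3, 4))              # node features
W = rng.normal(size=(8, 4)) * 0.1        # one shared weight matrix, reused
                                         # at every iteration (scaled small
                                         # so the map is a contraction)

H = np.zeros((3, 4))                     # initial hidden states
for _ in range(200):
    msg = A @ H                          # sum of neighbour states
    # Each node's new state depends on its own features and its
    # neighbours' current states.
    H_new = np.tanh(np.concatenate([X, msg], axis=1) @ W)
    if np.linalg.norm(H_new - H) < 1e-8: # stop at the fixed point
        H = H_new
        break
    H = H_new
```

Note how the loop reuses the same W every step and only terminates once the states stop changing; this is exactly the inefficiency and the shared-parameter restriction listed above.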
There are many variants of GNNs:

Graph Convolutional Network (GCN)

Graph Attention Network (GAT)

Graph Spatial-Temporal Network

Gated Graph Neural Network (GGNN)

Graph Autoencoders
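To make the first variant concrete, here is a sketch of a single GCN propagation step, H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W), where adding the identity gives each node a self-loop and D is the resulting degree matrix. The graph, features, and weights below are hypothetical toy values.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN propagation step with symmetric normalisation:
    # H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)    # ReLU activation

# Toy path graph 0-1-2 (hypothetical example).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)                # one-hot input features
W = np.ones((3, 2)) * 0.5    # toy weight matrix
out = gcn_layer(A, H, W)     # (3, 2): a 2-dim embedding per node
```

Stacking several such layers lets information propagate further than immediate neighbours, without the fixed-point iteration of the original GNN.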
Why GNNs?
There are various reasons for GNNs:

Inspiration from CNNs. The main features of CNNs are local connections, shared weights, and multi-layer architectures. All these features are useful for tackling graph problems too: graphs have a locally connected structure, shared weights reduce computational cost, and multiple layers can capture hierarchical patterns. This led to GNNs, as CNNs can only operate on regular 2D (image) and 1D (text) data

Graph embeddings. As the name suggests, graph embeddings are used to build representations for nodes, edges, and subgraphs. DeepWalk is considered the first graph embedding method; it applies the Skip-Gram model to random walks. Other embedding methods include node2vec, LINE, and TADW

GNNs are capable of propagation and learning based on the graph structure and the dependencies (edges) between entities (nodes)

Graphs have no natural ordering of nodes, so using CNNs or RNNs would force the node features into a specific order. GNNs do not impose such an order

GNNs closely mimic the reasoning process of the human brain and offer high interpretability
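The DeepWalk idea mentioned above, generating truncated random walks whose node sequences are then fed to a Skip-Gram model as if they were sentences, can be sketched as follows. The function name, parameters, and the adjacency-list graph are illustrative, not DeepWalk's actual API.

```python
import random

def random_walks(adj, walk_len, walks_per_node, seed=0):
    # Generate truncated random walks over an adjacency-list graph.
    # Each walk is a node sequence that would later be treated as a
    # "sentence" for Skip-Gram training.
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:          # dead end: stop this walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy star graph as an adjacency list (hypothetical example).
adj = {0: [1, 2], 1: [0], 2: [0]}
walks = random_walks(adj, walk_len=5, walks_per_node=2)
```

node2vec follows the same recipe but biases the walk's next-step choice to interpolate between breadth-first and depth-first exploration.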
There are two problems with the graph embedding methods above:

No parameters are shared between nodes, meaning scalability is an issue as the number of parameters grows with the number of nodes

Direct graph embeddings do not generalise, meaning they cannot handle dynamic graphs or be applied to new graphs
Applications of GNNs
There are many applications of GNNs, such as the analysis of social networks and knowledge graphs.