This article is one of two Distill publications about graph neural networks.
Take a look at
A Gentle Introduction to Graph Neural Networks
for a companion view on many things graph and neural network related.
Many systems and interactions – social networks, molecules, organizations, citations, physical models, transactions – can be represented quite naturally as graphs.
How can we reason about and make predictions within these systems?
One idea is to look at tools that have worked well in other domains: neural networks have shown immense predictive power in a variety of learning tasks.
However, neural networks have been traditionally used to operate on fixed-size and/or regular-structured inputs (such as sentences, images and video).
This makes them unable to elegantly process graph-structured data.
Graph neural networks (GNNs) are a family of neural networks that can operate naturally on graph-structured data.
By extracting and utilizing features from the underlying graph,
GNNs can make more informed predictions about entities in these interactions,
as compared to models that consider individual entities in isolation.
GNNs are not the only tools available to model graph-structured data:
graph kernels
and random-walk methods
were some of the most popular ones.
Today, however, GNNs have largely replaced these techniques
because of their inherent flexibility to model the underlying systems
better.
In this article, we will illustrate
the challenges of computing over graphs,
describe the origin and design of graph neural networks,
and explore the most popular GNN variants in recent times.
Particularly, we will see that many of these variants
are composed of similar building blocks.
First, let’s discuss some of the complications that graphs come with.
The Challenges of Computation on Graphs
Lack of Consistent Structure
Graphs are extremely flexible mathematical models; but this means they lack consistent structure across instances.
Consider the task of predicting whether a given chemical molecule is toxic.
Looking at a few examples, the following issues quickly become apparent:
- Molecules may have different numbers of atoms.
- The atoms in a molecule may be of different types.
- Each of these atoms may have a different number of connections.
- These connections can have different strengths.
Representing graphs in a format that can be computed over is non-trivial,
and the final representation chosen often depends significantly on the actual problem.
Node-Order Equivariance
Extending the point above: graphs often have no inherent ordering present amongst the nodes.
Compare this to images, where every pixel is uniquely determined by its absolute position within the image!
But what do we do when the nodes have no inherent order?
Above:
The same graph labelled in two different ways. The letters indicate the ordering of the nodes.
As a result, we would like our algorithms to be node-order equivariant:
they should not depend on the ordering of the nodes of the graph.
If we permute the nodes in some way, the resulting representations of
the nodes as computed by our algorithms should also be permuted in the same way.
Scalability
Graphs can be really large! Think about social networks like Facebook and Twitter, which have over a billion users.
Operating on data this large is not easy.
Luckily, most naturally occurring graphs are ‘sparse’:
they tend to have their number of edges linear in their number of vertices.
We will see that this allows the use of clever methods
to efficiently compute representations of nodes within the graph.
Further, the methods that we look at here will have significantly fewer parameters
in comparison to the size of the graphs they operate on.
Problem Setting and Notation
There are many useful problems that can be formulated over graphs:
- Node Classification: Classifying individual nodes.
- Graph Classification: Classifying entire graphs.
- Node Clustering: Grouping together similar nodes based on connectivity.
- Link Prediction: Predicting missing links.
- Influence Maximization: Identifying influential nodes.
This list is not exhaustive!
A common precursor in solving many of these problems is node representation learning:
learning to map individual nodes to fixed-size real-valued vectors (called ‘representations’ or ‘embeddings’).
In Learning GNN Parameters, we will see how the learnt embeddings can be used for these tasks.
Different GNN variants are distinguished by the way these representations are computed.
Generally, however, GNNs compute node representations in an iterative process.
We will use the notation $h_v^{(k)}$ to indicate the representation of node $v$ after the $k^{\text{th}}$ iteration.
Each iteration can be thought of as the equivalent of a ‘layer’ in standard neural networks.
We will define a graph $G$ as a set of nodes $V$, with a set of edges $E$ connecting them.
Nodes can have individual features as part of the input: we will denote by $x_v$ the individual feature for node $v \in V$.
For example, the ‘node features’ for a pixel in a color image
would be the red, green and blue channel (RGB) values at that pixel.
For ease of exposition, we will assume $G$ is undirected, and all nodes are of the same type.
Many of the same ideas we will see here
apply to other kinds of graphs:
we will discuss this later in Different Kinds of Graphs.
Sometimes we will need to denote a graph property by a matrix $M$,
where each row $M_v$ represents a property corresponding to a particular vertex $v$.
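To make the notation concrete, here is a minimal sketch (in Python with NumPy, on a hypothetical toy graph) of how a graph and its node features might be stored for the computations that follow:

```python
import numpy as np

# A toy undirected graph on n = 4 nodes, under an arbitrary node ordering.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Adjacency matrix A: A[u, v] = 1 iff nodes u and v are connected.
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0      # undirected, so A is symmetric

# Node features stacked row-wise into a matrix X:
# row v of X is the feature x_v for node v (one-dimensional here).
X = np.array([[0.5], [1.0], [-0.2], [0.3]])
```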
Extending Convolutions to Graphs
Convolutional Neural Networks have proven to be quite powerful in extracting features from images.
However, images themselves can be seen as graphs with a very regular grid-like structure,
where the individual pixels are nodes, and the RGB channel values at each pixel are the node features.
A natural idea, then, is to consider generalizing convolutions to arbitrary graphs. Recall, however, the challenges
listed out in the previous section: in particular, ordinary convolutions are not node-order invariant, because
they depend on the absolute positions of pixels.
It is initially unclear how to generalize convolutions over grids to convolutions over general graphs,
where the neighbourhood structure differs from node to node.
The curious reader may wonder if performing some sort of padding and ordering
could be done to ensure the consistency of neighbourhood structure across nodes.
This has been attempted with some success
but the techniques we will look at here are more general and powerful.
Neighbours participating in the convolution at the center pixel are highlighted in gray.
Hover over a node to see its immediate neighbourhood highlighted on the left.
The structure of this neighbourhood changes from node to node.
We begin by introducing the idea of constructing polynomial filters over node neighbourhoods,
much like how CNNs compute localized filters over neighbouring pixels.
Then, we will see how more recent approaches extend on this idea with more powerful mechanisms.
Finally, we will discuss alternative methods
that can use ‘global’ graph-level information for computing node representations.
Polynomial Filters on Graphs
The Graph Laplacian
Given a graph $G$, let us fix an arbitrary ordering of the $n$ nodes of $G$.
We denote the adjacency matrix of $G$ by $A$. We can then construct the diagonal degree matrix $D$ of $G$ as:
$$D_v = \sum_u A_{vu},$$
where $A_{vu}$ denotes the entry in the row corresponding to $v$ and the column corresponding to $u$
in the matrix $A$. We will use this notation throughout this section.
Then, the graph Laplacian $L$ is the square $n \times n$ matrix defined as:
$$L = D - A.$$
The Laplacian $L$ depends only on the structure of the graph $G$, not on any node features.
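As a small sketch of this definition in NumPy (the helper name `graph_laplacian` is our own), the degree matrix and Laplacian follow directly from the adjacency matrix:

```python
import numpy as np

def graph_laplacian(A: np.ndarray) -> np.ndarray:
    """Graph Laplacian L = D - A, where D is the diagonal degree matrix."""
    D = np.diag(A.sum(axis=1))   # D_v = sum_u A_vu
    return D - A

# Example: a 4-cycle with one chord; L depends only on the structure.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = graph_laplacian(A)
```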
The graph Laplacian gets its name from being the discrete analog of the
Laplacian operator
from calculus.
Although it encodes precisely the same information as the adjacency matrix
(in the sense that given either of the matrices $A$ or $L$, you can construct the other),
the graph Laplacian has many interesting properties of its own.
The graph Laplacian shows up in many mathematical problems involving graphs:
random walks,
spectral clustering
and
diffusion, to name a few.
We will see some of these properties in a later section,
and also point readers to
this tutorial
for greater insight into the graph Laplacian.
Polynomials of the Laplacian
Now that we have understood what the graph Laplacian is,
we can build polynomials of the form:
$$p_w(L) = w_0 I_n + w_1 L + w_2 L^2 + \ldots + w_d L^d = \sum_{i=0}^{d} w_i L^i.$$
Each polynomial of this form can alternately be represented by
its vector of coefficients $w = [w_0, \ldots, w_d]$.
Note that for every $w$, $p_w(L)$ is an $n \times n$ matrix, just like $L$.
These polynomials can be thought of as the equivalent of ‘filters’ in CNNs,
and the coefficients $w$ as the weights of the ‘filters’.
For ease of exposition, we will focus on the case where nodes have one-dimensional features:
each of the $x_v$ for $v \in V$ is just a real number.
The same ideas hold when each of the $x_v$ are higher-dimensional vectors, as well.
Using the previously chosen ordering of the nodes,
we can stack all of the node features
to get a vector $x \in \mathbb{R}^n$.
Once we have constructed the feature vector $x$,
we can define its convolution with a polynomial filter $p_w$ as:
$$x' = p_w(L)\, x.$$
To understand how the coefficients affect the convolution,
let us begin by considering the ‘simplest’ polynomial:
when $w_0 = 1$ and all of the other coefficients are $0$.
In this case, $x'$ is just $x$:
$$x' = p_w(L)\, x = w_0 I_n\, x = x.$$
Now, if we increase the degree, and consider the case where
instead $w_1 = 1$ and all of the other coefficients are $0$.
Then, $x'$ is just $Lx$, and so:
$$x'_v = (Lx)_v = \sum_{u \in G} L_{vu}\, x_u = \sum_{u \in G} (D_{vu} - A_{vu})\, x_u = D_v\, x_v - \sum_{u \in \mathcal{N}(v)} x_u.$$
We see that the features at each node $v$ are combined
with the features of its immediate neighbours $u \in \mathcal{N}(v)$.
For readers familiar with
Laplacian filtering of images,
this is the exact same idea. When $x$ is an image,
$x' = Lx$ is exactly the result of applying a ‘Laplacian filter’ to $x$.
At this point, a natural question to ask is:
How does the degree $d$ of the polynomial influence the behaviour of the convolution?
Indeed, it is not too hard to show that:
$$\operatorname{dist}_G(v, u) > i \quad \Longrightarrow \quad L^i_{vu} = 0.$$
This implies, when we convolve $x$ with $p_w(L)$ of degree $d$ to get $x'$:
$$x'_v = (p_w(L)\, x)_v = \sum_{i=0}^{d} w_i \sum_{u \in G} L^i_{vu}\, x_u = \sum_{i=0}^{d} w_i \sum_{\substack{u \in G \\ \operatorname{dist}_G(v, u) \leq i}} L^i_{vu}\, x_u.$$
Effectively, the convolution at node $v$ occurs only with nodes $u$ which are not more than $d$ hops away.
Thus, these polynomial filters are localized. The degree of the localization is governed completely by $d$.
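The following sketch implements these polynomial filters in NumPy (reusing the `graph_laplacian` helper above; the function names here are our own):

```python
import numpy as np

def poly_filter(L: np.ndarray, w: np.ndarray) -> np.ndarray:
    """p_w(L) = w_0 I + w_1 L + ... + w_d L^d."""
    result = np.zeros_like(L)
    L_power = np.eye(L.shape[0])      # L^0
    for w_i in w:
        result += w_i * L_power
        L_power = L_power @ L         # next power of L
    return result

def poly_conv(L: np.ndarray, w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Convolution x' = p_w(L) x.
    w = [1, 0, ...] gives x' = x;  w = [0, 1, 0, ...] gives x' = Lx."""
    return poly_filter(L, w) @ x
```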
To help you understand these ‘polynomial-based’ convolutions better, we have created the visualization below.
Vary the polynomial coefficients and the input grid to see how the result of the convolution changes.
The grid under the arrow shows the equivalent convolutional kernel applied at the highlighted pixel in $x$ to get
the resulting pixel in $x'$.
The kernel corresponds to the row of $p_w(L)$ for the highlighted pixel.
Note that even after adjusting for position,
this kernel is different for different pixels, depending on their position within the grid.
Hover over a pixel in the input grid (left, representing $x$)
to highlight it and see the equivalent convolutional kernel
for that pixel under the arrow.
The result of the convolution $x'$ is shown on the right:
note that different convolutional kernels are applied at different pixels,
depending on their location.
Click on the input grid to toggle pixel values between $0$ (white) and $1$ (blue).
To randomize the input grid, press ‘Randomize Grid’. To reset all pixels to $0$, press ‘Reset Grid’.
Use the sliders at the bottom to change the coefficients $w_i$.
To reset all coefficients to $0$, press ‘Reset Coefficients.’
ChebNet
ChebNet refines this idea of polynomial filters by looking at polynomial filters of the form:
$$p_w(L) = \sum_{i=1}^{d} w_i\, T_i(\tilde{L}),$$
where $T_i$ is the degree-$i$
Chebyshev polynomial of the first kind and
$\tilde{L}$ is the normalized Laplacian defined using the largest eigenvalue of $L$:
$$\tilde{L} = \frac{2L}{\lambda_{\max}(L)} - I_n.$$
We discuss the eigenvalues of the Laplacian in more detail in a later section.
What is the motivation behind these choices?
- $L$ is actually positive semi-definite: all of its eigenvalues are not lesser than $0$.
If $\lambda_{\max}(L) > 1$, the entries in the powers of $L$ rapidly increase in size.
$\tilde{L}$ is effectively a scaled-down version of $L$, with eigenvalues guaranteed to be in the range $[-1, 1]$.
This prevents the entries of powers of $\tilde{L}$ from blowing up.
Indeed, in the visualization above, we restrict the higher-order coefficients
when the unnormalized Laplacian $L$ is selected, but allow larger values when the normalized Laplacian $\tilde{L}$ is selected,
in order to show the result on the same color scale.
- The Chebyshev polynomials have certain interesting properties that make interpolation more numerically stable.
We won’t talk about this in more depth here,
but will advise interested readers to consult the references for a definitive treatment.
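As a rough sketch of these definitions (in NumPy, using the Chebyshev recurrence $T_0 = I$, $T_1 = \tilde{L}$, $T_i = 2\tilde{L}T_{i-1} - T_{i-2}$; the function names are our own, and we include the constant term for completeness):

```python
import numpy as np

def normalized_laplacian(L: np.ndarray) -> np.ndarray:
    """L_tilde = 2 L / lambda_max(L) - I, with eigenvalues in [-1, 1]."""
    lam_max = np.linalg.eigvalsh(L).max()          # L is symmetric and PSD
    return 2.0 * L / lam_max - np.eye(L.shape[0])

def chebnet_filter(L: np.ndarray, w: np.ndarray) -> np.ndarray:
    """sum_i w_i T_i(L_tilde), via the Chebyshev recurrence
    T_0 = I, T_1 = L_tilde, T_i = 2 L_tilde T_{i-1} - T_{i-2}."""
    L_t = normalized_laplacian(L)
    T_prev, T_curr = np.eye(L_t.shape[0]), L_t
    result = w[0] * T_prev
    for w_i in w[1:]:
        result += w_i * T_curr
        T_prev, T_curr = T_curr, 2.0 * L_t @ T_curr - T_prev
    return result
```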
Polynomial Filters are Node-Order Equivariant
The polynomial filters we considered here are actually independent of the ordering of the nodes.
This is particularly easy to see when the degree of the polynomial is $1$:
each node’s feature is then aggregated with the sum of its neighbours’ features.
Clearly, this sum does not depend on the order of the neighbours.
A similar proof follows for higher-degree polynomials:
the entries in the powers of $L$ are equivariant to the ordering of the nodes.
As above, let’s assume an arbitrary node-order over the $n$ nodes of our graph.
Any other node-order can be thought of as a permutation of this original node-order.
We can represent any permutation by a
permutation matrix $P$.
$P$ will always be an orthogonal $0$-$1$ matrix:
$$P P^T = P^T P = I_n.$$
Then, we call a function $f$ node-order equivariant iff for all permutations $P$:
$$f(Px) = P\, f(x).$$
When switching to the new node-order using the permutation $P$,
the quantities below transform in the following way:
$$x \to Px, \qquad L \to P L P^T, \qquad L^i \to P L^i P^T,$$
and so, for the case of polynomial filters where $f(x) = p_w(L)\, x$, we can see that:
$$f(Px) = \sum_{i=0}^{d} w_i\, (P L^i P^T)(Px) = P \sum_{i=0}^{d} w_i\, L^i x = P\, f(x),$$
as claimed.
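As a quick numerical check of this claim, the following sketch (reusing the `graph_laplacian` and `poly_conv` helpers sketched earlier, on a randomly generated toy graph) verifies that permuting the nodes before convolving gives the same result as permuting afterwards:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = (rng.random((n, n)) < 0.4).astype(float)   # random undirected toy graph
A = np.triu(A, 1); A = A + A.T
x = rng.normal(size=n)                          # node features
w = np.array([0.5, -1.0, 0.25])                 # a degree-2 polynomial filter

P = np.eye(n)[rng.permutation(n)]               # random permutation matrix
L = graph_laplacian(A)
L_perm = graph_laplacian(P @ A @ P.T)           # Laplacian under the new node order

lhs = poly_conv(L_perm, w, P @ x)               # f(P x) on the permuted graph
rhs = P @ poly_conv(L, w, x)                    # P f(x)
assert np.allclose(lhs, rhs)                    # node-order equivariance
```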
Embedding Computation
We now describe how we can build a graph neural network
by stacking ChebNet (or any polynomial filter) layers
one after the other with non-linearities,
much like a standard CNN.
In particular, if we have $K$ different polynomial filter layers,
the $k^{\text{th}}$ of which has its own learnable weights $w^{(k)}$,
we would perform the following computation, starting from the original node features $h^{(0)} = x$:
$$h^{(k)} = \sigma\bigl(p_{w^{(k)}}(L)\, h^{(k-1)}\bigr) \quad \text{for } k = 1, \ldots, K,$$
where $\sigma$ is a non-linearity applied elementwise.
Note that these networks
reuse the same filter weights across different nodes,
exactly mimicking weight-sharing in Convolutional Neural Networks (CNNs)
which reuse weights for convolutional filters across a grid.
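A minimal sketch of such a stack of layers, assuming NumPy, one-dimensional node features, and the `poly_filter` helper from before (the weights here would be learned by gradient descent in practice):

```python
import numpy as np

def poly_gnn_forward(L, x, layer_weights, sigma=np.tanh):
    """Stacked polynomial filter layers sharing the graph Laplacian L.

    layer_weights: list of coefficient vectors, one w^(k) per layer.
    The same filter weights are reused across all nodes,
    mirroring weight-sharing in CNNs.
    """
    h = x
    for w_k in layer_weights:
        h = sigma(poly_filter(L, w_k) @ h)   # h^(k) = sigma(p_{w^(k)}(L) h^(k-1))
    return h
```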
Modern Graph Neural Networks
ChebNet was a breakthrough in learning localized filters over graphs,
and it motivated many to think of graph convolutions from a different perspective.
Let us return to the result of convolving $x$ by the polynomial kernel $p_w(L) = L$,
focussing on a particular vertex $v$:
$$(Lx)_v = \sum_{u \in G} L_{vu}\, x_u = \sum_{u \in G} (D_{vu} - A_{vu})\, x_u = D_v\, x_v - \sum_{u \in \mathcal{N}(v)} x_u.$$
As we noted before, this is a $1$-hop localized convolution.
But more importantly, we can think of this convolution as arising from two steps:
- Aggregating over immediate neighbour features $x_u$.
- Combining with the node’s own feature $x_v$.
Key Idea:
What if we consider different kinds of ‘aggregation’ and ‘combination’ steps,
beyond what is possible using polynomial filters?
By ensuring that the aggregation is node-order equivariant,
the overall convolution becomes node-order equivariant.
These convolutions can be thought of as ‘message-passing’ between adjacent nodes:
after each step, every node receives some ‘information’ from its neighbours.
By iteratively repeating the $1$-hop localized convolutions $K$ times (i.e., repeatedly ‘passing messages’),
the receptive field of the convolution effectively includes all nodes up to $K$ hops away.
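To make the ‘aggregate, then combine’ template concrete, here is a minimal sketch of one round of message passing in NumPy; the mean aggregation and the hypothetical weight matrices `W` and `B` are illustrative choices (close in spirit to the GCN variant listed below), not any single architecture’s exact update:

```python
import numpy as np

def message_passing_step(A, H, W, B, sigma=np.tanh):
    """One round of message passing.

    A: (n, n) adjacency matrix.      H: (n, d_in) current node embeddings.
    W, B: (d_in, d_out) learnable weight matrices (illustrative names).
    """
    degrees = A.sum(axis=1, keepdims=True).clip(min=1.0)
    aggregated = (A @ H) / degrees       # aggregate: mean over immediate neighbours
    combined = aggregated @ W + H @ B    # combine with the node's own embedding
    return sigma(combined)               # elementwise non-linearity
```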
Embedding Computation
Message-passing forms the backbone of many GNN architectures today.
We describe the most popular ones in depth below:
- Graph Convolutional Networks (GCN)
- Graph Attention Networks (GAT)
- Graph Sample and Aggregate (GraphSAGE)
- Graph Isomorphism Network (GIN)
Thoughts
An interesting point is to assess different aggregation functions: are some better and others worse?
The analysis accompanying the Graph Isomorphism Network demonstrates that aggregation functions can indeed be compared on how well they uniquely preserve node neighbourhood features;
we recommend the interested reader take a look at the detailed theoretical analysis there.
Here, we’ve talked about GNNs where the computation only occurs at the nodes.
More recent GNN models
such as Message-Passing Neural Networks
and Graph Networks
perform computation over the edges as well;
they compute edge embeddings together with node embeddings.
This is an even more general framework –
but the same ‘message passing’ ideas from this section apply.
Interactive Graph Neural Networks
Below is an interactive visualization of these GNN models on small graphs.
For clarity, the node features are just real numbers here, shown inside the squares next to each node,
but the same equations hold when the node features are vectors.
Use the sliders on the left to change the weights for the current iteration, and watch how the update equation changes.
In practice, each iteration above is generally thought of as a single ‘neural network layer’.
This is the convention followed by many popular graph neural network libraries
(for example, PyTorch Geometric and StellarGraph),
allowing one to compose different types of graph convolutions in the same model.
From Local to Global Convolutions
The methods we’ve seen so far perform ‘local’ convolutions:
every node’s feature is updated using a function of its local neighbours’ features.
While performing enough steps of message-passing will eventually ensure that
information from all nodes in the graph is passed,
one may wonder if there are more direct ways to perform ‘global’ convolutions.
The answer is yes; we will now describe an approach that was first put forward
in the context of neural networks
well before any of the GNN models we looked at above.
Spectral Convolutions
As before, we will focus on the case where nodes have one-dimensional features.
After choosing an arbitrary node-order, we can stack all of the node features to get a
‘feature vector’ $x \in \mathbb{R}^n$.
Key Idea:
Given a feature vector $x$,
the Laplacian $L$ allows us to quantify how smooth $x$ is, with respect to $G$.
How?
After normalizing $x$ such that $\sum_i x_i^2 = 1$,
if we look at the following quantity involving $L$:
$$R_L(x) = \frac{x^T L x}{x^T x} = \sum_{(i, j) \in E} (x_i - x_j)^2,$$
(this quantity $R_L(x)$ is formally called the Rayleigh quotient,)
we immediately see that feature vectors $x$ that assign similar values to
adjacent nodes in $G$ (hence, are smooth) would have smaller values of $R_L(x)$.
$L$ is a real, symmetric matrix, which means it has all real eigenvalues $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$.
(An eigenvalue $\lambda$ of a matrix $M$ is a value
satisfying the equation $M u = \lambda u$ for a certain non-zero vector $u$, called an eigenvector.
For a friendly introduction to eigenvectors,
please see this tutorial.)
Further, the corresponding eigenvectors $u_1, \ldots, u_n$ can be taken to be orthonormal:
$$u_i^T u_j = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$
It turns out that these eigenvectors of $L$ are successively less smooth, as $R_L$ indicates:
$$R_L(u_1) \leq R_L(u_2) \leq \ldots \leq R_L(u_n), \qquad \text{since } R_L(u_i) = \lambda_i.$$
The set of eigenvalues of $L$ are called its ‘spectrum’, hence the name!
We denote the ‘spectral’ decomposition of $L$ as:
$$L = U \Lambda U^T,$$
where $\Lambda$ is the diagonal matrix of sorted eigenvalues,
and $U$ denotes the matrix of the eigenvectors (sorted corresponding to increasing eigenvalues):
$$\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n), \qquad U = [\, u_1, \ldots, u_n \,].$$
The orthonormality condition between eigenvectors gives us that $U^T U = I_n$, the identity matrix.
As these $n$ eigenvectors form a basis for $\mathbb{R}^n$,
any feature vector $x$ can be represented as a linear combination of these eigenvectors:
$$x = \sum_{i = 1}^{n} \hat{x}_i\, u_i = U \hat{x},$$
where $\hat{x}$ is the vector of coefficients $[\hat{x}_1, \ldots, \hat{x}_n]$.
We call $\hat{x}$ the spectral representation of the feature vector $x$.
The orthonormality condition allows us to state:
$$x = U \hat{x} \quad \Longleftrightarrow \quad U^T x = \hat{x}.$$
This pair of equations allows us to interconvert
between the ‘natural’ representation $x$ and the ‘spectral’ representation $\hat{x}$
for any vector $x \in \mathbb{R}^n$.
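As a sketch of this interconversion in NumPy: `numpy.linalg.eigh` returns the eigenvalues of the symmetric matrix $L$ in ascending order, together with orthonormal eigenvectors, which is exactly the basis $U$ described above.

```python
import numpy as np

def spectral_basis(L: np.ndarray):
    """Eigenvalues (ascending) and orthonormal eigenvectors of L,
    so that L = U diag(lam) U^T."""
    lam, U = np.linalg.eigh(L)
    return lam, U

def to_spectral(U: np.ndarray, x: np.ndarray) -> np.ndarray:
    return U.T @ x          # x_hat = U^T x

def to_natural(U: np.ndarray, x_hat: np.ndarray) -> np.ndarray:
    return U @ x_hat        # x = U x_hat
```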
Spectral Representations of Natural Images
As discussed before, we can consider any image as a grid graph, where each pixel is a node,
connected by edges to adjacent pixels.
Thus, a pixel can have a different number of neighbours, depending on its location within the image grid.
Each pixel gets a value as part of the image. If the image is grayscale, each value will be a single
real number indicating how dark the pixel is. If the image is colored, each value will be a $3$-dimensional
vector, indicating the values for the red, green and blue (RGB) channels.
This construction allows us to compute the graph Laplacian $L$ and the eigenvector matrix $U$.
Given an image, we can then investigate what its spectral representation looks like.
To shed some light on what the spectral representation actually encodes,
we perform the following experiment over each channel of the image independently:
- We first collect all pixel values across a channel into a feature vector $x$.
- Then, we obtain its spectral representation $\hat{x}$.
- We truncate this to the first $m$ components to get $\hat{x}_m$.
By truncation, we mean zeroing out all of the remaining $n - m$ components of $\hat{x}$.
This truncation is equivalent to using only the first $m$ eigenvectors to compute the spectral representation.
- Then, we convert this truncated representation back to the natural basis to get $x_m$.
Finally, we stack the resulting channels back together to get back an image.
We can now see how the resulting image changes with choices of $m$.
Note that when $m = n$, the resulting image is identical to the original image,
as we can reconstruct each channel exactly.
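A sketch of this truncation for a single channel, assuming the eigenvector matrix `U` of the grid graph’s Laplacian and the flattened channel `x` are already available (for instance, via the `spectral_basis` helper above):

```python
import numpy as np

def truncate_spectrum(U: np.ndarray, x: np.ndarray, m: int) -> np.ndarray:
    """Keep only the first m spectral components of x and reconstruct."""
    x_hat = U.T @ x        # spectral representation
    x_hat[m:] = 0.0        # zero out the remaining n - m components
    return U @ x_hat       # convert back to the natural basis

# With m = n, the channel is reconstructed exactly; smaller m keeps
# only the smoother, low-frequency components, blurring the image.
```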
Each of these images has been taken from the ImageNet
dataset and downsampled to a small, fixed resolution.
As an image with $n$ pixels defines a grid graph with $n$ nodes, there are $n$ Laplacian eigenvectors for each image.
Use the slider at the bottom to change the number of spectral components $m$ to keep, noting how
the images get progressively blurrier as the number of components decreases.
As $m$ decreases, we see that the output image gets blurrier.
If we decrease $m$ all the way to $1$, the output image is entirely the same color throughout.
We see that we do not need to keep all $n$ components;
we can retain a lot of the information in the image with significantly fewer components.
We can relate this to the Fourier decomposition of images:
the more eigenvectors we use, the higher frequencies we can represent on the grid.
To complement the visualization above,
we additionally visualize the first few eigenvectors on a smaller grid below.
We change the coefficients of the first few eigenvectors
in the spectral representation $\hat{x}$
and see how $x$ itself changes on the image (left).
Note how the first eigenvectors are much ‘smoother’ than the later ones,
and how many patterns we can make with only a few eigenvectors.
These visualizations should convince you that the first eigenvectors are indeed smooth,
and that the smoothness correspondingly decreases as we consider later eigenvectors.
For any image $x$, we can think of
the initial entries of the spectral representation $\hat{x}$
as capturing ‘global’ image-wide trends, which are the low-frequency components,
while the later entries capture ‘local’ details, which are the high-frequency components.
Embedding Computation
We now have the background to understand spectral convolutions
and how they can be used to compute embeddings/feature representations of nodes.
As before, the model we describe below has $K$ layers:
each layer $k$ has learnable parameters $\hat{w}^{(k)}$,
called the ‘filter weights’.
These weights will be convolved with the spectral representations of the node features.
As a result, the number of weights needed in each layer is equal to $m$, the number of
eigenvectors used to compute the spectral representations.
We had shown in the previous section that we can take $m \ll n$
and still not lose out on significant amounts of information.
Thus, convolution in the spectral domain enables the use of significantly fewer parameters
than just direct convolution in the natural domain.
Further, by virtue of the smoothness of the Laplacian eigenvectors across the graph,
using spectral representations automatically enforces an inductive bias for
neighbouring nodes to get similar representations.
Assuming one-dimensional node features for now,
the output of each layer $k$ is a vector of node representations $h^{(k)}$,
where each node’s representation corresponds to a row
of the vector.
We fix an ordering of the nodes in $G$. This gives us the adjacency matrix $A$ and the graph Laplacian $L$,
allowing us to compute the matrix $U_m$ of the first $m$ eigenvectors.
Finally, we can describe the computation that the layers perform, one after the other, starting with $h^{(0)} = x$:
$$\hat{h}^{(k-1)} = U_m^T\, h^{(k-1)} \quad \text{(convert to the spectral representation)},$$
$$\hat{g}^{(k)} = \hat{w}^{(k)} \odot \hat{h}^{(k-1)} \quad \text{(convolve with the filter weights)},$$
$$g^{(k)} = U_m\, \hat{g}^{(k)} \quad \text{(convert back to the natural representation)},$$
$$h^{(k)} = \sigma\bigl(g^{(k)}\bigr) \quad \text{(apply the non-linearity } \sigma\text{)}.$$
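A minimal sketch of one such layer in NumPy, where `U_m` holds the first $m$ Laplacian eigenvectors as columns, `w_hat` are the learnable filter weights for this layer, and `sigma` is the elementwise non-linearity:

```python
import numpy as np

def spectral_layer(U_m, h, w_hat, sigma=np.tanh):
    """One spectral convolution layer for one-dimensional node features.

    U_m:   (n, m) matrix of the first m Laplacian eigenvectors.
    h:     (n,)   node representations from the previous layer.
    w_hat: (m,)   learnable filter weights, one per eigenvector.
    """
    h_hat = U_m.T @ h      # convert to the spectral representation
    g_hat = w_hat * h_hat  # convolve: elementwise product with the filter weights
    g = U_m @ g_hat        # convert back to the natural representation
    return sigma(g)        # apply the non-linearity
```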
The method above generalizes easily to the case where each $h_v^{(k)}$ is a higher-dimensional vector, as well:
see the references for details.
With the insights from the previous section, we see that convolution in the spectral-domain of graphs
can be thought of as the generalization of convolution in the frequency-domain of images.
Spectral Convolutions are Node-Order Equivariant
We can show spectral convolutions are node-order equivariant using a similar approach
as for Laplacian polynomial filters.
Details for the Interested Reader
As in our proof before,
let’s fix an arbitrary node-order.
Then, any other node-order can be represented by a
permutation of this original node-order.
We can associate this permutation with its permutation matrix $P$.
Under this new node-order,
the quantities below transform in the following way:
$$x \to Px, \qquad A \to P A P^T, \qquad L \to P L P^T, \qquad U_m \to P U_m,$$
which implies that, in the embedding computation:
$$\hat{h} \to (P U_m)^T (P h) = U_m^T P^T P\, h = U_m^T h = \hat{h}, \qquad \hat{g} \to \hat{w} \odot \hat{h} = \hat{g}, \qquad g \to (P U_m)\, \hat{g} = P g.$$
Hence, as $\sigma$ is applied elementwise:
$$\sigma(P g) = P\, \sigma(g),$$
so the resulting node representations are simply the original ones permuted by $P$, as required.
Further, we see that the spectral quantities $\hat{h}$ and $\hat{g}$
are unchanged by permutations of the nodes.
Formally, they are what we would call node-order invariant.
The theory of spectral convolutions is mathematically well-grounded;
however, there are some key disadvantages that we must talk about:
- We need to compute the eigenvector matrix $U_m$ from $L$. For large graphs, this becomes quite infeasible.
- Even if we can compute $U_m$, global convolutions themselves are inefficient to compute,
because of the repeated multiplications with $U_m$ and $U_m^T$.
- The learned filters are specific to the input graphs,
as they are represented in terms
of the spectral decomposition of the input graph Laplacian $L$.
This means they do not transfer well to new graphs
which have significantly different structure (and hence, significantly
different eigenvalues).
While spectral convolutions have largely been superseded by
‘local’ convolutions for the reasons discussed above,
there is still much merit to understanding the ideas behind them.
Indeed, a recently proposed GNN model called Directional Graph Networks
actually uses the Laplacian eigenvectors
and their mathematical properties
extensively.
Global Propagation via Graph Embeddings
A simpler way to incorporate graph-level information
is to compute embeddings of the entire graph by pooling node
(and possibly edge) embeddings,
and then using the graph embedding to update node embeddings,
following an iterative scheme similar to what we have looked at here.
This is an approach used by Graph Networks.
We will briefly discuss how graph-level embeddings
can be constructed in Pooling.
However, such approaches tend to ignore the underlying
topology of the graph that spectral convolutions can capture.
Learning GNN Parameters
All of the embedding computations we’ve described here, whether spectral or spatial, are completely differentiable.
This allows GNNs to be trained in an end-to-end fashion, just like a standard neural network,
once a suitable loss function is defined:
- Node Classification: By minimizing any of the standard losses for classification tasks,
such as categorical cross-entropy when multiple classes are present:
$$\mathcal{L}(y_v, \hat{y}_v) = -\sum_{c} y_{vc} \log \hat{y}_{vc},$$
where $\hat{y}_{vc}$ is the predicted probability that node $v$ is in class $c$.
GNNs adapt well to the semi-supervised setting, which is when only some nodes in the graph are labelled.
In this setting, one way to define a loss $\mathcal{L}_G$ over an input graph $G$ is:
$$\mathcal{L}_G = \frac{\sum_{v \in \operatorname{Lab}(G)} \mathcal{L}(y_v, \hat{y}_v)}{|\operatorname{Lab}(G)|},$$
where we only compute losses over the set of labelled nodes $\operatorname{Lab}(G)$.
- Graph Classification: By aggregating node representations,
one can construct a vector representation of the entire graph.
This graph representation can be used for any graph-level task, even beyond classification.
See Pooling for how representations of graphs can be constructed.
- Link Prediction: By sampling pairs of adjacent and non-adjacent nodes,
and using these vector pairs as inputs to predict the presence/absence of an edge.
For a concrete example, by minimizing the following ‘logistic regression’-like loss:
$$\mathcal{L}(y_{uv}, e_{uv}) = -y_{uv} \log\bigl(\sigma(e_{uv})\bigr) - (1 - y_{uv}) \log\bigl(1 - \sigma(e_{uv})\bigr),$$
where $\sigma$ is the sigmoid function here, $e_{uv}$ is the score predicted for the pair of nodes $(u, v)$ from their embeddings,
and $y_{uv} = 1$ iff there is an edge between nodes $u$ and $v$, being $0$ otherwise.
- Node Clustering: By simply clustering the learned node representations.
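As an illustration of the semi-supervised node classification loss above, here is a small sketch in NumPy; the array names `probs` and `labels` are our own, with unlabelled nodes marked by `-1`:

```python
import numpy as np

def semi_supervised_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean categorical cross-entropy over labelled nodes only.

    probs:  (n, C) predicted class probabilities for each node.
    labels: (n,)   integer class labels; -1 marks unlabelled nodes.
    """
    labelled = labels >= 0                        # the set Lab(G)
    picked = probs[labelled, labels[labelled]]    # predicted prob. of the true class
    return float(-np.log(picked + 1e-12).mean())  # averaged over labelled nodes
```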
The broad success of pre-training for natural language processing models
such as ELMo
has sparked interest in similar techniques for GNNs.
The key idea in these pre-training approaches is to train GNNs to predict
local (e.g. node degrees, clustering coefficient, masked node attributes)
and/or global graph properties (e.g. pairwise distances, masked global attributes).
Another self-supervised technique is to enforce that neighbouring nodes get similar embeddings,
mimicking random-walk approaches such as node2vec, by minimizing a loss of the form:
$$\mathcal{L}_G = -\sum_{v} \sum_{u \in N_R(v)} \log \frac{\exp\bigl(z_v^T z_u\bigr)}{\sum_{u'} \exp\bigl(z_{u'}^T z_v\bigr)},$$
where $N_R(v)$ is a multi-set of nodes visited when random walks are started from $v$, and the $z_v$ are the learned node embeddings.
For large graphs, where computing the sum over all nodes may be computationally expensive,
techniques such as Noise Contrastive Estimation are especially useful.
Conclusion and Further Reading
While we have looked at many techniques and ideas in this article,
the field of Graph Neural Networks is extremely vast.
We have been forced to restrict our discussion to a small subset of the entire literature,
while still communicating the key ideas and design principles behind GNNs.
We recommend the interested reader take a look at the comprehensive surveys of the field for a broader overview.
We end with pointers and references for additional concepts readers might be interested in:
GNNs in Practice
It turns out that accommodating the different structures of graphs is often hard to do efficiently,
but we can still represent many GNN update equations
as sparse matrix-vector products (since the adjacency matrix is generally sparse for most real-world graph datasets).
For example, the GCN variant discussed here can be represented as:
$$H^{(k)} = \sigma\bigl(D^{-1} A\, H^{(k-1)} W^{(k)} + H^{(k-1)} B^{(k)}\bigr),$$
where row $v$ of $H^{(k)}$ is the embedding $h_v^{(k)}$, $D$ is the degree matrix, and $W^{(k)}$ and $B^{(k)}$ are the layer’s learnable weight matrices.
Restructuring the update equations in this way allows for efficient vectorized implementations of GNNs on accelerators
such as GPUs.
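As a sketch of such a vectorized update using SciPy sparse matrices (with hypothetical weight matrices `W` and `B`), the neighbourhood aggregation then costs time proportional to the number of edges rather than $n^2$:

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer_sparse(A, H, W, B, sigma=np.tanh):
    """One GCN-style update, roughly H^(k) = sigma(D^-1 A H W + H B).

    A: sparse (n, n) adjacency matrix.   H: dense (n, d_in) embeddings.
    W, B: dense (d_in, d_out) learnable weight matrices.
    """
    degrees = np.asarray(A.sum(axis=1)).ravel().clip(min=1.0)
    D_inv = sp.diags(1.0 / degrees)
    aggregated = D_inv @ (A @ H)          # sparse matrix products
    return sigma(aggregated @ W + H @ B)
```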
Regularization techniques for standard neural networks,
such as Dropout,
can be applied in a straightforward manner to the parameters
(for example, by zeroing out entire rows of $W^{(k)}$ above).
However, there are also graph-specific techniques, such as DropEdge,
which removes edges at random from the graph,
and which also boosts the performance of many GNN models.
Different Kinds of Graphs
Here, we have focused on undirected graphs, to avoid going into too many unnecessary details.
However, there are some simple variants of spatial convolutions for:
- Directed graphs: Aggregate across in-neighbourhood and/or out-neighbourhood features.
- Temporal graphs: Aggregate across previous and/or future node features.
- Heterogeneous graphs: Learn different aggregation functions for each node/edge type.
There do exist more sophisticated techniques that can take advantage of the different structures of these graphs;
see the references for examples.
Pooling
This article discusses how GNNs compute useful representations of nodes.
But what if we wanted to compute representations of graphs for graph-level tasks (for example, predicting the toxicity of a molecule)?
A simple solution is to just aggregate the final node embeddings and pass them through another neural network $\operatorname{PREDICT}_G$:
$$h_G = \operatorname{PREDICT}_G\Bigl(\operatorname{AGG}_{v \in G}\bigl(h_v\bigr)\Bigr).$$
However, there do exist more powerful techniques for ‘pooling’ together node representations:
- SortPool: Sort vertices of the graph to get a fixed-size node-order invariant representation of the graph, and then apply any standard neural network architecture.
- DiffPool: Learn to cluster vertices, build a coarser graph over clusters instead of nodes, then apply a GNN over the coarser graph. Repeat until only one cluster is left.
- SAGPool: Apply a GNN to learn node scores, then keep only the nodes with the top scores, throwing away the rest. Repeat until only one node is left.
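As a sketch of the simple aggregation-based readout above (the two-layer network here is a hypothetical stand-in for $\operatorname{PREDICT}_G$):

```python
import numpy as np

def graph_readout(H: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """Pool final node embeddings H (n, d) into a single graph-level prediction."""
    pooled = H.sum(axis=0)                  # aggregate over all nodes
    hidden = np.maximum(W1 @ pooled, 0.0)   # first dense layer with ReLU
    return W2 @ hidden                      # graph-level output
```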
Supplementary Material
Reproducing Experiments
The experiments from
Spectral Representations of Natural Images
can be reproduced using the following
Colab notebook:
Spectral Representations of Natural Images.
Recreating Visualizations
To aid in the creation of future interactive articles,
we have created ObservableHQ
notebooks for each of the interactive visualizations here: