{{Machine learning|Artificial neural network}}
 
A '''graph neural network''' ('''GNN''') is a class of [[artificial neural network]]s for processing data that can be represented as [[Graph (abstract data type)|graphs]].<ref name="scarselli2008" /><ref name="sanchez2021" /><ref name="daigavane2021" />
 
[[File:GNN building blocks.png|thumb|Basic building blocks of a Graph Neural Network (GNN). <math>(1)</math> Permutation equivariant layer. <math>(2)</math> Local pooling layer. <math>(3)</math> Global pooling (or readout) layer. Colors indicate [[Feature (machine learning)|features]].]]
 
In the more general subject of "geometric [[deep learning]]", existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.<ref name=bronstein2021 /> [[Convolutional neural network]]s, in the context of [[computer vision]], can be seen as GNNs applied to graphs structured as grids of [[pixel]]s. [[Transformer (machine learning model)|Transformers]], in the context of [[natural language processing]], can be seen as GNNs applied to [[complete graph]]s whose nodes are [[words]] in a [[Sentence (linguistics)|sentence]].
 
The key design element of GNNs is the use of ''pairwise message passing'', such that graph nodes iteratively update their representations by exchanging information with their neighbors. Since their inception, several different GNN architectures have been proposed,<ref name="scarselli2008" /><ref name="kipf2016" /><ref name="hamilton2017" /><ref name="velickovic2018" /> which implement different flavors of message passing.<ref name=bronstein2021 /> {{As of|2022}}, whether it is possible to define GNN architectures "going beyond" message passing, or whether every GNN can be built on message passing over suitably defined graphs, is an open research question.<ref name=velickovic2022 />
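A single message-passing step can be sketched as follows. This is an illustrative NumPy example, not code from any particular GNN library: it assumes sum aggregation over immediate neighbours, two hypothetical weight matrices (<code>W_self</code>, <code>W_neigh</code>), and a ReLU nonlinearity, which together form one common flavor of message passing.

```python
import numpy as np

def message_passing_layer(A, H, W_self, W_neigh):
    """One message-passing step (illustrative sketch):
    each node sums the feature vectors of its immediate neighbours,
    then combines that aggregate with its own features via learned
    weight matrices, followed by a ReLU nonlinearity."""
    messages = A @ H                     # sum of neighbour features per node
    H_new = H @ W_self + messages @ W_neigh
    return np.maximum(H_new, 0.0)        # elementwise ReLU

# Toy graph: 3 nodes in a path 0-1-2, with 2-dimensional node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix
H = np.eye(3, 2)                         # initial node features
rng = np.random.default_rng(0)
W_self = rng.normal(size=(2, 2))
W_neigh = rng.normal(size=(2, 2))

H_out = message_passing_layer(A, H, W_self, W_neigh)
print(H_out.shape)  # (3, 2): one updated feature vector per node
```

Because the aggregation is a sum over neighbours and the nonlinearity is elementwise, relabeling the nodes simply relabels the output rows, which is exactly the permutation-equivariance property discussed below.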
* <em>Permutation equivariant</em>: a permutation equivariant layer [[Map (mathematics)|maps]] a representation of a graph into an updated representation of the same graph. In the literature, permutation equivariant layers are implemented via pairwise message passing between graph nodes.<ref name=bronstein2021 /><ref name=velickovic2022 /> Intuitively, in a message passing layer, nodes ''update'' their representations by ''aggregating'' the ''messages'' received from their immediate neighbours. As such, each message passing layer increases the receptive field of the GNN by one hop.
 
* <em>Local pooling</em>: a local pooling layer coarsens the graph via downsampling. Local pooling is used to increase the receptive field of a GNN, in a similar fashion to pooling layers in [[Convolutional neural network]]s. Examples include [[Nearest neighbor graph|k-nearest neighbours pooling]], top-k pooling,<ref name=gao2019 /> and self-attention pooling.<ref name=lee2019 />
 
* <em>Global pooling</em>: a global pooling layer, also known as a ''readout'' layer, provides a fixed-size representation of the whole graph. The global pooling layer must be permutation invariant, such that permutations in the ordering of graph nodes and edges do not alter the final output.<ref name=lui2022 /> Examples include the element-wise sum, mean, or maximum.
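The permutation invariance of a readout layer can be checked directly. A minimal sketch, assuming a sum readout (one of the element-wise examples above); the function name <code>global_sum_pool</code> is illustrative:

```python
import numpy as np

def global_sum_pool(H):
    """Sum readout: collapses per-node features into one fixed-size
    graph-level vector, regardless of the number or ordering of nodes."""
    return H.sum(axis=0)

H = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])    # 3 nodes, 2 features each
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])     # a permutation of the node ordering

# Reordering the nodes leaves the readout unchanged.
print(global_sum_pool(H))      # [ 9. 12.]
print(global_sum_pool(P @ H))  # [ 9. 12.]
```

The same check applies to mean and maximum readouts, since any function that aggregates over the node axis symmetrically is unaffected by row permutations.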