Graph neural network

* <em>Permutation equivariant</em>: a permutation equivariant layer [[Map (mathematics)|maps]] a representation of a graph into an updated representation of the same graph. In the literature, permutation equivariant layers are implemented via pairwise message passing between graph nodes.<ref name=bronstein2021 /><ref name=velickovic2022 /> Intuitively, in a message passing layer, nodes <em>update</em> their representations by <em>aggregating</em> the <em>messages</em> received from their immediate neighbours, as in the first sketch after this list. As such, each message passing layer increases the receptive field of the GNN by one hop.
 
* <em>Local pooling</em>: a local pooling layer coarsens the graph via downsampling. Local pooling is used to increase the receptive field of a GNN, in a similar fashion to pooling layers in [[convolutional neural network]]s. Examples include [[Nearest neighbor graph|k-nearest neighbours pooling]], top-k pooling (sketched below),<ref name=gao2019 /> and self-attention pooling.<ref name=lee2019 />
 
* <em>Global pooling</em>: a global pooling layer, also known as a ''readout'' layer, provides a fixed-size representation of the whole graph. The global pooling layer must be permutation invariant, such that permutations in the ordering of graph nodes and edges do not alter the final output.<ref name=lui2022 /> Examples include element-wise sum, mean, or maximum (see the readout sketch below).
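The following is a minimal sketch of a single message passing layer, written with [[NumPy]]. The weight matrices <code>W_self</code> and <code>W_neigh</code>, the mean aggregator, and the <code>tanh</code> nonlinearity are illustrative assumptions rather than the definition of any particular published architecture.

<syntaxhighlight lang="python">
import numpy as np

def message_passing_layer(X, A, W_self, W_neigh):
    """One permutation-equivariant message passing step.

    X        -- node features, shape (n_nodes, d_in)
    A        -- binary adjacency matrix, shape (n_nodes, n_nodes)
    W_self   -- weights for a node's own features, shape (d_in, d_out)
    W_neigh  -- weights for the aggregated messages, shape (d_in, d_out)
    """
    # Each node receives the mean of its immediate neighbours' features.
    degree = A.sum(axis=1, keepdims=True).clip(min=1)  # guard against isolated nodes
    messages = (A @ X) / degree
    # Update: combine a node's own features with the aggregated messages.
    return np.tanh(X @ W_self + messages @ W_neigh)
</syntaxhighlight>

Because the aggregation is a mean over neighbours, relabelling the nodes (permuting the rows of <code>X</code> and, consistently, the rows and columns of <code>A</code>) permutes the output rows in exactly the same way, which is the equivariance property described above.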
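A simplified sketch of local pooling, loosely following the top-k (gPool) approach cited above: each node is scored by a learnable projection vector <code>p</code>, only the k best-scoring nodes survive, and a <code>tanh</code> gate lets gradients reach <code>p</code> during training. Details such as the gating function vary between implementations.

<syntaxhighlight lang="python">
import numpy as np

def top_k_pooling(X, A, p, k):
    """Coarsen a graph by keeping the k highest-scoring nodes.

    X -- node features, shape (n_nodes, d)
    A -- adjacency matrix, shape (n_nodes, n_nodes)
    p -- learnable projection vector, shape (d,)
    k -- number of nodes to keep
    """
    # Score every node by projecting its features onto p.
    scores = X @ p / np.linalg.norm(p)
    # Keep the indices of the k best-scoring nodes.
    idx = np.argsort(scores)[-k:]
    # Gate the surviving features so gradients flow back to p.
    X_pool = X[idx] * np.tanh(scores[idx])[:, None]
    # Restrict the adjacency matrix to the surviving nodes.
    A_pool = A[np.ix_(idx, idx)]
    return X_pool, A_pool
</syntaxhighlight>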
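Global pooling reduces the variable-size node feature matrix to a single fixed-size vector. Element-wise sum, mean, and maximum are all permutation invariant, as the readout sketch below illustrates.

<syntaxhighlight lang="python">
import numpy as np

def readout(X, mode="mean"):
    """Permutation-invariant readout: collapse the node feature matrix
    X, shape (n_nodes, d), into one graph representation, shape (d,)."""
    if mode == "sum":
        return X.sum(axis=0)
    if mode == "mean":
        return X.mean(axis=0)
    if mode == "max":
        return X.max(axis=0)
    raise ValueError(f"unknown readout mode: {mode}")

# Reordering the nodes leaves the readout unchanged.
X = np.random.rand(5, 8)
perm = np.random.permutation(5)
assert np.allclose(readout(X), readout(X[perm]))
</syntaxhighlight>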