Graph nodes in an MPNN update their representations by aggregating information from their immediate neighbours. As such, stacking <math>n</math> MPNN layers means that one node is able to communicate with nodes that are at most <math>n</math> "hops" away. In principle, to ensure that every node receives information from every other node, one would need to stack a number of MPNN layers equal to the graph [[Distance (graph theory) | diameter]]. However, stacking many MPNN layers may cause issues such as oversmoothing<ref name=chen2021 /> and oversquashing<ref name=alon2021 />. Oversmoothing refers to the issue of node representations becoming indistinguishable. Oversquashing refers to the bottleneck created by squeezing exponentially-growing long-range dependencies into fixed-size representations.
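The relationship between the number of stacked layers and the receptive field of a node can be illustrated with a small sketch (not part of the original article; the graph and function names are chosen for illustration). With self-loops, node <math>u</math> has received information from node <math>v</math> after <math>n</math> rounds of aggregation exactly when <math>((A + I)^n)_{uv} \neq 0</math>:

```python
import numpy as np

# Path graph 0-1-2-3 (diameter 3), given as an adjacency matrix.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])

def receptive_field(A, n):
    """Boolean matrix: entry [u, v] is True iff node u has received
    information from node v after n rounds of neighbour aggregation
    (self-loops included via A + I)."""
    A_hat = A + np.eye(A.shape[0], dtype=int)
    return np.linalg.matrix_power(A_hat, n) > 0

# One layer: node 0 only sees itself and its neighbour, node 1.
print(receptive_field(A, 1)[0])  # [ True  True False False]
# Three layers (= the graph diameter): node 0 sees every node.
print(receptive_field(A, 3)[0])  # [ True  True  True  True]
```

This is why, in principle, the required depth grows with the diameter: for the path graph above, fewer than three layers leave some pairs of nodes unable to exchange information.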
Other "flavours" of MPNN have been developed in the literature<ref name=bronstein2021 />, such as graph convolutional networks and graph attention networks.
=== Graph convolutional network ===
A limitation of GCNs is that they do not allow multidimensional edge features <math>\mathbf{e}_{uv}</math><ref name=kipf2016 />. It is however possible to associate scalar weights <math>w_{uv}</math> to each edge by imposing <math>A_{uv} = w_{uv}</math>, i.e., by setting each nonzero entry in the adjacency matrix equal to the weight of the corresponding edge.
=== Graph attention network ===
The graph attention network (GAT) was introduced by [[Petar Veličković]] et al. in 2018<ref name=velickovic2018 />.