Graph neural network


A graph neural network (GNN) is a class of neural networks for processing data represented by graph data structures.[1] They were popularized by their use in supervised learning on properties of various molecules.[2]

Since their inception, several variants of the simple message passing neural network (MPNN) framework have been proposed.[3][4][5][6] These models optimize GNNs for use on larger graphs and apply them to domains such as social networks, citation networks, and online communities.[7]
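The core operation shared by MPNN-style models is a layer in which each node aggregates its neighbors' feature vectors and passes the result through a learned transformation. A minimal sketch of one such layer, using sum aggregation over an adjacency matrix (the toy graph, one-hot features, and fixed weights here are illustrative assumptions, not taken from any of the cited models):

```python
import numpy as np

def message_passing_layer(adjacency, features, weight):
    """One round of sum-aggregation message passing.

    adjacency : (n, n) 0/1 adjacency matrix
    features  : (n, d) node feature matrix
    weight    : (d, d_out) projection (learned in practice; fixed here)
    """
    n = adjacency.shape[0]
    # Each node sums its neighbors' features; adding the identity
    # matrix gives self-loops so a node also keeps its own features.
    aggregated = (adjacency + np.eye(n)) @ features
    # Project the aggregated messages and apply a ReLU nonlinearity.
    return np.maximum(aggregated @ weight, 0.0)

# Toy graph: a path 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3)          # one-hot node features
W = np.ones((3, 2))    # illustrative weights
H = message_passing_layer(A, X, W)
```

After one layer, the middle node of the path receives messages from both endpoints and so ends up with a larger activation than either endpoint, illustrating how node representations come to encode local graph structure.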

It has been mathematically proven that message-passing GNNs are at most as powerful at distinguishing non-isomorphic graphs as the Weisfeiler-Lehman graph isomorphism test,[8] and that suitably constructed GNN models can match the discriminative power of this test.[9] There is now growing interest in uniting GNNs with other so-called "geometric deep learning" models[10] to better understand how and why these models work.
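The Weisfeiler-Lehman test referenced above is color refinement: every node starts with the same color, and in each round a node's new color is a hash of its old color together with the multiset of its neighbors' colors; this mirrors the aggregate-and-update step of a GNN layer. A minimal sketch (the function name and the toy path graph are illustrative assumptions):

```python
def wl_refine(adjacency_list, colors, rounds=3):
    """1-WL color refinement on an undirected graph.

    adjacency_list : neighbor lists, indexed by node
    colors         : initial integer color per node
    """
    for _ in range(rounds):
        # Signature = own color + sorted multiset of neighbor colors.
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adjacency_list[v])))
            for v in range(len(adjacency_list))
        ]
        # Canonically relabel distinct signatures with small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        colors = [palette[sig] for sig in signatures]
    return colors

# Path graph 0 - 1 - 2: the two endpoints receive the same color,
# while the degree-2 middle node is distinguished from them.
path = [[1], [0, 2], [1]]
final_colors = wl_refine(path, [0, 0, 0])
```

Two graphs that receive identical color histograms under this refinement cannot be distinguished by a message-passing GNN either, which is the sense in which the test upper-bounds GNN expressive power.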

  1. ^ Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele (2009). "The Graph Neural Network Model". IEEE Transactions on Neural Networks. 20 (1): 61–80. doi:10.1109/TNN.2008.2005605. ISSN 1941-0093.
  2. ^ Gilmer, Justin; Schoenholz, Samuel S.; Riley, Patrick F.; Vinyals, Oriol; Dahl, George E. (2017-07-17). "Neural Message Passing for Quantum Chemistry". International Conference on Machine Learning. PMLR: 1263–1272.
  3. ^ Kipf, Thomas N; Welling, Max (2016). "Semi-supervised classification with graph convolutional networks". International Conference on Learning Representations. 5 – via arXiv.
  4. ^ Defferrard, Michaël; Bresson, Xavier; Vandergheynst, Pierre (2017-02-05). "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering". Neural Information Processing Systems. 30.
  5. ^ Hamilton, William; Ying, Rex; Leskovec, Jure (2017). "Inductive Representation Learning on Large Graphs" (PDF). Neural Information Processing Systems. 31 – via Stanford.
  6. ^ Veličković, Petar; Cucurull, Guillem; Casanova, Arantxa; Romero, Adriana; Liò, Pietro; Bengio, Yoshua (2018-02-04). "Graph Attention Networks". International Conference on Learning Representations. 6.
  7. ^ "Stanford Large Network Dataset Collection". snap.stanford.edu. Retrieved 2021-07-05.
  8. ^ Douglas, B. L. (2011-01-27). "The Weisfeiler-Lehman Method and Graph Isomorphism Testing". arXiv:1101.5211 [math].
  9. ^ Xu, Keyulu; Hu, Weihua; Leskovec, Jure; Jegelka, Stefanie (2019-02-22). "How Powerful are Graph Neural Networks?". International Conference on Learning Representations. 7.
  10. ^ Bronstein, Michael M.; Bruna, Joan; LeCun, Yann; Szlam, Arthur; Vandergheynst, Pierre (2017). "Geometric Deep Learning: Going beyond Euclidean data". IEEE Signal Processing Magazine. 34 (4): 18–42. doi:10.1109/MSP.2017.2693418. ISSN 1053-5888.