Graph neural network

A graph neural network (GNN) is a type of neural network for processing data represented as graph data structures.[1] GNNs were popularized by their use in supervised learning of molecular properties.[2]

Since their inception, several variants of the basic message passing neural network (MPNN) framework have been proposed.[3][4][5][6] These variants adapt GNNs to larger graphs and to domains such as social networks, citation networks, and online communities.[7]
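The core message-passing idea can be illustrated with a minimal sketch: each layer lets every node aggregate the features of its neighbors and pass the result through a learned transform. The function name, the sum aggregator, and the toy weights below are illustrative assumptions, not the formulation of any specific model from the references.

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One illustrative round of neighborhood aggregation.

    A: (n, n) adjacency matrix of the graph
    H: (n, d) node feature matrix
    W: (d, d_out) weight matrix (learned in a real model)
    """
    # Each node sums the features of its neighbors, then applies
    # a shared linear transform followed by a ReLU nonlinearity.
    messages = A @ H                      # aggregate neighbor features
    return np.maximum(0.0, messages @ W)  # update step

# Toy graph: a path 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)          # one-hot initial node features
W = np.ones((3, 2))    # dummy weights for illustration
H1 = message_passing_layer(A, H, W)
print(H1)  # node 1, with two neighbors, aggregates more signal
```

Stacking several such layers lets information propagate across multi-hop neighborhoods; the variants cited above differ mainly in how the aggregation and update steps are defined.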

It has been shown that message-passing GNNs are closely related to the Weisfeiler–Lehman (WL) graph isomorphism test:[8] a standard message-passing GNN is at most as powerful as this test at distinguishing non-isomorphic graphs.[9] There is growing interest in unifying GNNs with other so-called "geometric deep learning" models[10] to better understand how and why these models work.
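The WL test itself is a simple iterative color-refinement procedure, and its structural similarity to message passing is easy to see in code: each round, a node's new "color" is derived from its own color and the multiset of its neighbors' colors, just as a GNN layer aggregates neighbor features. The sketch below is a hand-rolled illustration under that analogy, not a reference implementation.

```python
from collections import Counter

def wl_color_histogram(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman color refinement (sketch).

    adj: dict mapping each node to a list of its neighbors.
    Returns a histogram of final node colors; two graphs whose
    histograms differ are certainly non-isomorphic.
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbor colors),
        # compressed to an integer label via hashing.
        colors = {v: hash((colors[v],
                           tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())

# A triangle and a 3-node path have the same node count but
# different degree structure, so one refinement round separates them.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_color_histogram(triangle) != wl_color_histogram(path))
```

The test is one-sided: differing histograms prove non-isomorphism, but identical histograms are inconclusive, which is exactly the limitation inherited by message-passing GNNs.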

References

  1. ^ Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele (2009). "The Graph Neural Network Model". IEEE Transactions on Neural Networks. 20 (1): 61–80. doi:10.1109/TNN.2008.2005605. ISSN 1941-0093. PMID 19068426. S2CID 206756462.
  2. ^ Gilmer, Justin; Schoenholz, Samuel S.; Riley, Patrick F.; Vinyals, Oriol; Dahl, George E. (2017-07-17). "Neural Message Passing for Quantum Chemistry". International Conference on Machine Learning. PMLR: 1263–1272. arXiv:1704.01212.
  3. ^ Kipf, Thomas N.; Welling, Max (2017). "Semi-Supervised Classification with Graph Convolutional Networks". International Conference on Learning Representations. 5. arXiv:1609.02907.
  4. ^ Defferrard, Michaël; Bresson, Xavier; Vandergheynst, Pierre (2017-02-05). "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering". Neural Information Processing Systems. 30. arXiv:1606.09375.
  5. ^ Hamilton, William; Ying, Rex; Leskovec, Jure (2017). "Inductive Representation Learning on Large Graphs" (PDF). Neural Information Processing Systems. 31. arXiv:1706.02216 – via Stanford.
  6. ^ Veličković, Petar; Cucurull, Guillem; Casanova, Arantxa; Romero, Adriana; Liò, Pietro; Bengio, Yoshua (2018-02-04). "Graph Attention Networks". International Conference on Learning Representations. 6. arXiv:1710.10903.
  7. ^ "Stanford Large Network Dataset Collection". snap.stanford.edu. Retrieved 2021-07-05.
  8. ^ Douglas, B. L. (2011-01-27). "The Weisfeiler–Lehman Method and Graph Isomorphism Testing". arXiv:1101.5211 [math.CO].
  9. ^ Xu, Keyulu; Hu, Weihua; Leskovec, Jure; Jegelka, Stefanie (2019-02-22). "How Powerful are Graph Neural Networks?". International Conference on Learning Representations. 7. arXiv:1810.00826.
  10. ^ Bronstein, Michael M.; Bruna, Joan; LeCun, Yann; Szlam, Arthur; Vandergheynst, Pierre (2017). "Geometric Deep Learning: Going beyond Euclidean data". IEEE Signal Processing Magazine. 34 (4): 18–42. arXiv:1611.08097. Bibcode:2017ISPM...34...18B. doi:10.1109/MSP.2017.2693418. ISSN 1053-5888. S2CID 15195762.