A '''graph neural network (GNN)''' is a class of [[neural network]]s for processing data best represented by [[Graph (abstract data type)|graph data structures]].<ref>{{Cite journal|last1=Scarselli|first1=Franco|last2=Gori|first2=Marco|last3=Tsoi|first3=Ah Chung|last4=Hagenbuchner|first4=Markus|last5=Monfardini|first5=Gabriele|date=2009|title=The Graph Neural Network Model|url=https://ieeexplore.ieee.org/document/4700287|journal=IEEE Transactions on Neural Networks|volume=20|issue=1|pages=61–80|doi=10.1109/TNN.2008.2005605|pmid=19068426|s2cid=206756462|issn=1941-0093}}</ref><ref>{{Cite journal|last1=Sanchez-Lengeling|first1=Benjamin|last2=Reif|first2=Emily|last3=Pearce|first3=Adam|last4=Wiltschko|first4=Alex|date=2021-09-02|title=A Gentle Introduction to Graph Neural Networks|url=https://distill.pub/2021/gnn-intro|journal=Distill|language=en|volume=6|issue=9|pages=e33|doi=10.23915/distill.00033|issn=2476-0757|doi-access=free}}</ref><ref>{{Cite journal|last1=Daigavane|first1=Ameya|last2=Ravindran|first2=Balaraman|last3=Aggarwal|first3=Gaurav|date=2021-09-02|title=Understanding Convolutions on Graphs|url=https://distill.pub/2021/understanding-gnns|journal=Distill|language=en|volume=6|issue=9|pages=e32|doi=10.23915/distill.00032|issn=2476-0757}}</ref> They were popularized by their use in [[supervised learning]] on properties of various molecules.<ref>{{Cite journal|last1=Gilmer|first1=Justin|last2=Schoenholz|first2=Samuel S.|last3=Riley|first3=Patrick F.|last4=Vinyals|first4=Oriol|last5=Dahl|first5=George E.|date=2017-07-17|title=Neural Message Passing for Quantum Chemistry|url=http://proceedings.mlr.press/v70/gilmer17a.html|journal=International Conference on Machine Learning|language=en|publisher=PMLR|pages=1263–1272|arxiv=1704.01212}}</ref>
Since their inception, several variants of the message passing neural network (MPNN) framework have been proposed.<ref>{{Cite journal|last1=Kipf|first1=Thomas N.|last2=Welling|first2=Max|date=2016|title=Semi-supervised classification with graph convolutional networks|journal=International Conference on Learning Representations|volume=5|arxiv=1609.02907}}</ref><ref>{{Cite journal|last1=Defferrard|first1=Michaël|last2=Bresson|first2=Xavier|last3=Vandergheynst|first3=Pierre|date=2017-02-05|title=Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering|arxiv=1606.09375|journal=Neural Information Processing Systems|volume=30}}</ref><ref>{{Cite journal|last1=Hamilton|first1=William|last2=Ying|first2=Rex|last3=Leskovec|first3=Jure|date=2017|title=Inductive Representation Learning on Large Graphs|url=https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf|journal=Neural Information Processing Systems|volume=31|arxiv=1706.02216|via=Stanford}}</ref><ref>{{Cite journal|last1=Veličković|first1=Petar|last2=Cucurull|first2=Guillem|last3=Casanova|first3=Arantxa|last4=Romero|first4=Adriana|last5=Liò|first5=Pietro|last6=Bengio|first6=Yoshua|date=2018-02-04|title=Graph Attention Networks|arxiv=1710.10903|journal=International Conference on Learning Representations|volume=6}}</ref> These variants adapt GNNs to larger graphs and apply them to domains such as [[social network]]s, [[Citation graph|citation networks]], and online communities.<ref>{{Cite web|title=Stanford Large Network Dataset Collection|url=https://snap.stanford.edu/data/|access-date=2021-07-05|website=snap.stanford.edu}}</ref> GNNs have also been applied with some success to [[NP-hard]] combinatorial optimization problems, as well as to [[automated planning]] and [[path-planning]], where problem instances have an inherent graph structure.
<ref>{{cite journal |last1=Li |first1=Zhuwen |last2=Chen |first2=Qifeng |last3=Koltun |first3=Vladlen |title=Combinatorial optimization with graph convolutional networks and guided tree search |journal=Neural Information Processing Systems |date=2018 |volume=31 |pages=537–546}}</ref>
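The common core of MPNN variants is a layer in which every node aggregates its neighbours' feature vectors and combines the result with its own features through learned weights. A minimal sketch of one such layer in NumPy is shown below; the function name, weight names, and sum aggregation are illustrative choices, not taken from any particular library.

```python
import numpy as np

def message_passing_layer(x, adj, w_self, w_neigh):
    """One message-passing step over an undirected graph.

    x:       (n, d_in) node feature matrix
    adj:     (n, n) binary adjacency matrix
    w_self:  (d_in, d_out) weights for a node's own features
    w_neigh: (d_in, d_out) weights for aggregated neighbour messages
    """
    messages = adj @ x                   # sum of neighbour features per node
    h = x @ w_self + messages @ w_neigh  # combine self and neighbour terms
    return np.maximum(h, 0.0)            # ReLU nonlinearity

# Tiny example: a path graph 0 - 1 - 2 with 2-dimensional node features.
x = np.eye(3, 2)
adj = np.array([[0.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])
rng = np.random.default_rng(0)
h = message_passing_layer(x, adj,
                          rng.normal(size=(2, 4)),
                          rng.normal(size=(2, 4)))
print(h.shape)  # (3, 4)
```

Stacking k such layers lets information propagate k hops along the graph, which is what allows these models to capture neighbourhood structure.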
GNNs are closely related to the [[Boris Weisfeiler|Weisfeiler]]–Lehman graph isomorphism test:<ref>{{cite arXiv|last=Douglas|first=B. L.|date=2011-01-27|title=The Weisfeiler–Lehman Method and Graph Isomorphism Testing|class=math.CO|eprint=1101.5211}}</ref> the discriminative power of any message-passing GNN is at most that of this test, and suitably designed GNNs can match it.<ref>{{Cite journal|last1=Xu|first1=Keyulu|last2=Hu|first2=Weihua|last3=Leskovec|first3=Jure|last4=Jegelka|first4=Stefanie|date=2019-02-22|title=How Powerful are Graph Neural Networks?|arxiv=1810.00826|journal=International Conference on Learning Representations|volume=7}}</ref> Researchers are attempting to unite GNNs with other "geometric deep learning models"<ref>{{Cite journal|last1=Bronstein|first1=Michael M.|last2=Bruna|first2=Joan|last3=LeCun|first3=Yann|last4=Szlam|first4=Arthur|last5=Vandergheynst|first5=Pierre|date=2017|title=Geometric Deep Learning: Going beyond Euclidean data|url=https://ieeexplore.ieee.org/document/7974879|journal=IEEE Signal Processing Magazine|volume=34|issue=4|pages=18–42|doi=10.1109/MSP.2017.2693418|issn=1053-5888|arxiv=1611.08097|bibcode=2017ISPM...34...18B|s2cid=15195762}}</ref> to better understand how and why these models work.
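The (one-dimensional) Weisfeiler–Lehman test itself works by iterative colour refinement: each node's colour is repeatedly replaced by a new colour derived from its current colour and the multiset of its neighbours' colours, and two graphs with differing final colour histograms cannot be isomorphic. A hedged sketch of this refinement, with illustrative names:

```python
from collections import Counter

def wl_refine(adj_list, colors, rounds=3):
    """Iterative colour refinement as in the 1-WL isomorphism test.

    adj_list: neighbour lists, one per node
    colors:   initial integer colour per node
    Each round, a node's new colour is determined by its current colour
    plus the sorted multiset of its neighbours' colours.
    """
    for _ in range(rounds):
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj_list[v])))
            for v in range(len(adj_list))
        ]
        # Relabel distinct signatures with small integers.
        table = {}
        colors = [table.setdefault(sig, len(table)) for sig in signatures]
    return colors

# A triangle and a 3-node path start with identical colours but are
# separated by refinement: the triangle stays uniform, the path does not.
triangle = [[1, 2], [0, 2], [0, 1]]
path = [[1], [0, 2], [1]]
print(Counter(wl_refine(triangle, [0, 0, 0])))  # one colour class
print(Counter(wl_refine(path, [0, 0, 0])))      # two colour classes
```

Message-passing GNN layers mirror this neighbour-aggregation step with continuous features, which is why their ability to distinguish non-isomorphic graphs is bounded by the test.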