{{Short description|Class of artificial neural networks}}
{{Machine learning|Artificial neural network}}
{{Use dmy dates|date=July 2025}}
 
'''Graph neural networks''' ('''GNN''') are specialized [[artificial neural network]]s that are designed for tasks whose inputs are [[Graph (abstract data type)|graphs]].<ref name="wucuipeizhao2022" /><ref name="scarselli2009" /><ref name="micheli2009" /><ref name="sanchez2021" /><ref name="daigavane2021" />
 
One prominent example is molecular drug design.<ref>{{Cite journal |last1=Stokes |first1=Jonathan M. |last2=Yang |first2=Kevin |last3=Swanson |first3=Kyle |last4=Jin |first4=Wengong |last5=Cubillos-Ruiz |first5=Andres |last6=Donghia |first6=Nina M. |last7=MacNair |first7=Craig R. |last8=French |first8=Shawn |last9=Carfrae |first9=Lindsey A. |last10=Bloom-Ackermann |first10=Zohar |last11=Tran |first11=Victoria M. |last12=Chiappino-Pepe |first12=Anush |last13=Badran |first13=Ahmed H. |last14=Andrews |first14=Ian W. |last15=Chory |first15=Emma J. |date=20 February 2020 |title=A Deep Learning Approach to Antibiotic Discovery |journal=Cell |volume=180 |issue=4 |pages=688–702.e13 |doi=10.1016/j.cell.2020.01.021 |issn=1097-4172 |pmc=8349178 |pmid=32084340}}</ref><ref>{{cite arXiv |last1=Yang |first1=Kevin |title=Analyzing Learned Molecular Representations for Property Prediction |date=20 November 2019 |eprint=1904.01561 |last2=Swanson |first2=Kyle |last3=Jin |first3=Wengong |last4=Coley |first4=Connor |last5=Eiden |first5=Philipp |last6=Gao |first6=Hua |last7=Guzman-Perez |first7=Angel |last8=Hopper |first8=Timothy |last9=Kelley |first9=Brian |class=cs.LG}}</ref><ref>{{Cite journal |last=Marchant |first=Jo |date=20 February 2020 |title=Powerful antibiotics discovered using AI |url=https://www.nature.com/articles/d41586-020-00018-3 |journal=Nature |language=en |doi=10.1038/d41586-020-00018-3 |pmid=33603175 |url-access=subscription}}</ref> Each input sample is a graph representation of a molecule, where atoms form the nodes and chemical bonds between atoms form the edges. In addition to the graph representation, the input also includes known chemical properties for each of the atoms. Dataset samples may thus differ in size, reflecting the varying numbers of atoms in molecules and the varying numbers of bonds between them. The task is to predict the efficacy of a given molecule for a specific medical application, like eliminating [[Escherichia coli|''E. coli'']] bacteria.
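As an illustrative sketch (not drawn from the cited sources), such a molecular graph can be encoded for a GNN library such as PyTorch Geometric by listing per-atom feature vectors and bond endpoints; the one-hot featurization below is hypothetical:

<syntaxhighlight lang="python">
import torch
from torch_geometric.data import Data

# Hypothetical featurization of formaldehyde (CH2O): one row per atom.
# Columns: [is_C, is_O, is_H] -- a real model would use richer chemical features.
x = torch.tensor([[1., 0., 0.],   # C
                  [0., 1., 0.],   # O
                  [0., 0., 1.],   # H
                  [0., 0., 1.]])  # H

# Bonds as directed edge pairs (both directions, for message passing):
# C=O, C-H, C-H
edge_index = torch.tensor([[0, 1, 0, 2, 0, 3],
                           [1, 0, 2, 0, 3, 0]])

# A graph-level target, e.g. a binary efficacy label for the whole molecule.
mol = Data(x=x, edge_index=edge_index, y=torch.tensor([1.]))
print(mol)  # Data(x=[4, 3], edge_index=[2, 6], y=[1])
</syntaxhighlight>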
 
The key design element of GNNs is the use of ''pairwise message passing'', such that graph nodes iteratively update their representations by exchanging information with their neighbors. Several GNN architectures have been proposed,<ref name="scarselli2009" /><ref name="micheli2009" /><ref name="kipf2016" /><ref name="hamilton2017" /><ref name="velickovic2018" /> which implement different flavors of message passing,<ref name="bronstein2021" /><ref name="hajij2022" /> beginning with recursive<ref name="scarselli2009" /> and convolutional constructive<ref name="micheli2009" /> approaches. {{As of|2022}}, it is an open question whether it is possible to define GNN architectures "going beyond" message passing, or whether every GNN can be built on message passing over suitably defined graphs.<ref name="velickovic2022" />
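Schematically, one round of message passing computes, for every edge, a message from the features of its two endpoints and aggregates the incoming messages at each node. A minimal sketch of such a layer, written against PyTorch Geometric's <code>MessagePassing</code> base class (the layer name and dimensions are illustrative, not from a cited source):

<syntaxhighlight lang="python">
import torch
from torch_geometric.nn import MessagePassing

class MeanPassLayer(MessagePassing):
    """One round of pairwise message passing with mean aggregation."""
    def __init__(self, in_dim, out_dim):
        super().__init__(aggr='mean')  # how incoming messages are combined
        self.lin = torch.nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim]; edge_index: [2, num_edges]
        # propagate() calls message() per edge, then aggregates per node.
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):
        # x_i: receiver features, x_j: sender features -- a pairwise message.
        return self.lin(torch.cat([x_i, x_j], dim=-1))

# Usage: h = MeanPassLayer(16, 32)(x, edge_index)
</syntaxhighlight>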
In the more general subject of "geometric [[deep learning]]", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.<ref name=bronstein2021 /> A [[convolutional neural network]] layer, in the context of [[computer vision]], can be considered a GNN applied to graphs whose nodes are [[pixel]]s and only adjacent pixels are connected by edges in the graph. A [[Transformer (machine learning model)|transformer]] layer, in [[natural language processing]], can be considered a GNN applied to [[complete graph]]s whose nodes are [[words]] or tokens in a passage of [[natural language]] text.
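This correspondence can be made concrete by constructing the pixel graph explicitly. The helper below (an illustrative sketch in plain PyTorch) builds the 4-neighbour edge list of an image; a GNN layer on this graph has the same locality pattern as a small convolution:

<syntaxhighlight lang="python">
import torch

def grid_edges(h, w):
    """Edge list of the 4-neighbour pixel graph of an h x w image."""
    idx = torch.arange(h * w).reshape(h, w)
    # Horizontal and vertical neighbour pairs.
    right = torch.stack([idx[:, :-1].flatten(), idx[:, 1:].flatten()])
    down = torch.stack([idx[:-1, :].flatten(), idx[1:, :].flatten()])
    e = torch.cat([right, down], dim=1)
    return torch.cat([e, e.flip(0)], dim=1)  # include both directions

edge_index = grid_edges(28, 28)
# A transformer layer would instead correspond to the complete graph on all tokens.
</syntaxhighlight>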
 
Relevant application domains for GNNs include [[Natural Language Processing|natural language processing]],<ref name="wuchen2023" /> [[social networks]],<ref name="ying2018" /> [[Citation graph|citation networks]],<ref name="stanforddata" /> [[molecular biology]],<ref>{{cite journal |last1=Zhang |first1=Weihang |last2=Cui |first2=Yang |last3=Liu |first3=Bowen |last4=Loza |first4=Martin |last5=Park |first5=Sung-Joon |last6=Nakai |first6=Kenta |date=5 April 2024 |title=HyGAnno: Hybrid graph neural network-based cell type annotation for single-cell ATAC sequencing data |url=https://academic.oup.com/bib/article/25/3/bbae152/7641197 |journal=Briefings in Bioinformatics |volume=25 |issue=3 |pages=bbae152 |doi=10.1093/bib/bbae152 |pmid=38581422 |pmc=10998639}}</ref> chemistry,<ref name="gilmer2017" /><ref>{{Cite journal |last1=Coley |first1=Connor W. |last2=Jin |first2=Wengong |last3=Rogers |first3=Luke |last4=Jamison |first4=Timothy F. |last5=Jaakkola |first5=Tommi S. |last6=Green |first6=William H. |last7=Barzilay |first7=Regina |last8=Jensen |first8=Klavs F. |date=2 January 2019 |title=A graph-convolutional neural network model for the prediction of chemical reactivity |journal=Chemical Science |language=en |volume=10 |issue=2 |pages=370–377 |doi=10.1039/C8SC04228D |pmid=30746086 |pmc=6335848 |issn=2041-6539 |doi-access=free}}</ref> [[physics]]<ref name=qasim2019 /> and [[NP-hard]] [[combinatorial optimization]] problems.<ref name="li2018" />
 
[[Open source]] [[Library (computing)|libraries]] implementing GNNs include PyTorch Geometric<ref name=fey2019 /> ([[PyTorch]]), TensorFlow GNN<ref name=tfgnn2022 /> ([[TensorFlow]]), Deep Graph Library<ref>{{Cite web |title=Deep Graph Library (DGL) |url=https://www.dgl.ai/ |access-date=12 September 2024}}</ref> (framework agnostic), jraph<ref name=jraph2022/> ([[Google JAX]]), and GraphNeuralNetworks.jl<ref name=Lucibello2021GNN/>/GeometricFlux.jl<ref>{{Citation |title=FluxML/GeometricFlux.jl |date=31 January 2024 |url=https://github.com/FluxML/GeometricFlux.jl |access-date=3 February 2024 |publisher=FluxML}}</ref> ([[Julia (programming language)|Julia]], [[Flux (machine-learning framework)|Flux]]).
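As a usage illustration, a minimal two-layer graph convolutional network in PyTorch Geometric (layer sizes are arbitrary) can be written as:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two stacked graph convolution layers for node classification."""
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # per-node class scores
</syntaxhighlight>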
 
== Architecture ==
A GCN can be seen as a special case of a GAT where attention coefficients are not learnable, but fixed and equal to the edge weights <math>w_{uv}</math>.
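The contrast is visible in the two layers' interfaces in PyTorch Geometric: a GAT layer learns its attention coefficients, while a GCN layer accepts fixed per-edge weights <math>w_{uv}</math>. An illustrative sketch with random data:

<syntaxhighlight lang="python">
import torch
from torch_geometric.nn import GATConv, GCNConv

x = torch.randn(4, 16)                           # 4 nodes, 16 features
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
w_uv = torch.rand(edge_index.size(1))            # one fixed weight per edge

gat = GATConv(16, 8)   # attention coefficients are learned parameters
gcn = GCNConv(16, 8)   # coefficients fixed by the (weighted) adjacency

out_gat = gat(x, edge_index)
out_gcn = gcn(x, edge_index, edge_weight=w_uv)
</syntaxhighlight>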
 
=== Gated graph sequence neural network ===
The gated graph sequence neural network (GGS-NN) was introduced by [[Yujia Li]] et al. in 2015.<ref name=li2016 /> The GGS-NN extends the GNN formulation by Scarselli et al.<ref name=scarselli2009 /> to output sequences. The message passing framework is implemented as an update rule to a [[gated recurrent unit]] (GRU) cell.
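PyTorch Geometric's <code>GatedGraphConv</code> implements this GRU-based node update; the sketch below (with arbitrary sizes and random data, for illustration only) runs five propagation steps:

<syntaxhighlight lang="python">
import torch
from torch_geometric.nn import GatedGraphConv

# At each of num_layers steps, aggregated neighbour messages feed a GRU
# cell that updates the node state, as in the GG(S)-NN formulation.
layer = GatedGraphConv(out_channels=32, num_layers=5)

x = torch.randn(10, 32)                     # 10 nodes, 32 features
edge_index = torch.randint(0, 10, (2, 40))  # random edges for illustration
h = layer(x, edge_index)                    # updated node states: [10, 32]
</syntaxhighlight>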
 
 
== Local pooling layers ==
Local pooling layers coarsen the graph via downsampling. Several learnable local pooling strategies have been proposed.<ref name=lui2022 /> In each case, the input is a graph represented by a matrix <math>\mathbf{X}</math> of node features and a graph adjacency matrix <math>\mathbf{A}</math>; the output is a new matrix <math>\mathbf{X}'</math> of node features and a new graph adjacency matrix <math>\mathbf{A}'</math>.
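This interface can be illustrated with top-k pooling (described in the next subsection), available in PyTorch Geometric as <code>TopKPooling</code>; sizes and data in the sketch are arbitrary:

<syntaxhighlight lang="python">
import torch
from torch_geometric.nn import TopKPooling

pool = TopKPooling(in_channels=32, ratio=0.5)  # keep the top 50% of nodes
x = torch.randn(10, 32)                        # node features X
edge_index = torch.randint(0, 10, (2, 40))     # adjacency A as an edge list

# Returns coarsened features X' and adjacency A' (plus bookkeeping outputs:
# kept-node indices `perm` and their projection scores).
x2, edge_index2, _, _, perm, score = pool(x, edge_index)
</syntaxhighlight>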
 
=== Top-k pooling ===
 
== Heterophilic graph learning ==
The [[homophily]] principle, i.e., that nodes with the same labels or similar attributes are more likely to be connected, has commonly been believed to be the main reason for the superiority of graph neural networks (GNNs) over traditional neural networks (NNs) on graph-structured data, especially on node-level tasks.<ref name=":0">{{cite arXiv |eprint=2407.09618 |last1=Luan |first1=Sitao |last2=Hua |first2=Chenqing |last3=Lu |first3=Qincheng |last4=Ma |first4=Liheng |last5=Wu |first5=Lirong |last6=Wang |first6=Xinyu |last7=Xu |first7=Minkai |last8=Chang |first8=Xiao-Wen |last9=Precup |first9=Doina |last10=Ying |first10=Rex |last11=Li |first11=Stan Z. |last12=Tang |first12=Jian |last13=Wolf |first13=Guy |last14=Jegelka |first14=Stefanie |title=The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges |date=2024 |class=cs.LG}}</ref> However, recent work has identified a non-trivial set of datasets where GNNs' performance compared to NNs' is not satisfactory.<ref>{{Cite book |last1=Luan |first1=Sitao |last2=Hua |first2=Chenqing |last3=Lu |first3=Qincheng |last4=Zhu |first4=Jiaqi |last5=Chang |first5=Xiao-Wen |last6=Precup |first6=Doina |chapter=When Do We Need Graph Neural Networks for Node Classification? |date=2024 |editor-last=Cherifi |editor-first=Hocine |editor2-last=Rocha |editor2-first=Luis M. |editor3-last=Cherifi |editor3-first=Chantal |editor4-last=Donduran |editor4-first=Murat |title=Complex Networks & Their Applications XII |chapter-url=https://link.springer.com/chapter/10.1007/978-3-031-53468-3_4 |series=Studies in Computational Intelligence |volume=1141 |language=en |___location=Cham |publisher=Springer Nature Switzerland |pages=37–48 |doi=10.1007/978-3-031-53468-3_4 |isbn=978-3-031-53467-6}}</ref> [[Heterophily]], i.e., low homophily, has been considered the main cause of this empirical observation.<ref name=":1">{{Cite journal |last1=Luan |first1=Sitao |last2=Hua |first2=Chenqing |last3=Lu |first3=Qincheng |last4=Zhu |first4=Jiaqi |last5=Zhao |first5=Mingde |last6=Zhang |first6=Shuyuan |last7=Chang |first7=Xiao-Wen |last8=Precup |first8=Doina |date=6 December 2022 |title=Revisiting Heterophily For Graph Neural Networks |url=https://proceedings.neurips.cc/paper_files/paper/2022/hash/092359ce5cf60a80e882378944bf1be4-Abstract-Conference.html |journal=Advances in Neural Information Processing Systems |language=en |volume=35 |pages=1362–1375 |arxiv=2210.07606}}</ref> Researchers have begun to revisit and re-evaluate most existing graph models in the heterophily scenario across various kinds of graphs, e.g., [[Heterogeneous network|heterogeneous graphs]], [[Temporal network|temporal graphs]] and [[hypergraph]]s. Moreover, numerous graph-related applications have been found to be closely related to the heterophily problem, e.g., [[Fraud detection|graph fraud/anomaly detection]], [[Adversarial attack|graph adversarial attacks and robustness]], privacy, [[federated learning]] and [[Point cloud|point cloud segmentation]], [[cluster analysis|graph clustering]], [[recommender system]]s, [[generative model]]s, [[link prediction]], [[Graph isomorphism|graph classification]] and [[Graph coloring|coloring]]. In the past few years, considerable effort has been devoted to studying and addressing the heterophily issue in graph learning.<ref name=":0" /><ref name=":1" /><ref>{{Cite journal |last1=Luan |first1=Sitao |last2=Hua |first2=Chenqing |last3=Xu |first3=Minkai |last4=Lu |first4=Qincheng |last5=Zhu |first5=Jiaqi |last6=Chang |first6=Xiao-Wen |last7=Fu |first7=Jie |last8=Leskovec |first8=Jure |last9=Precup |first9=Doina |date=15 December 2023 |title=When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability |url=https://proceedings.neurips.cc/paper_files/paper/2023/hash/5ba11de4c74548071899cf41dec078bf-Abstract-Conference.html |journal=Advances in Neural Information Processing Systems |language=en |volume=36 |pages=28748–28760}}</ref>
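The degree of homophily of a labelled graph is commonly summarized by the ''edge homophily ratio'': the fraction of edges whose endpoints share a label, with values near 1 indicating homophily and low values indicating heterophily. A sketch of this computation (function name illustrative):

<syntaxhighlight lang="python">
import torch

def edge_homophily(edge_index, y):
    """Fraction of edges joining nodes with the same label.

    edge_index: [2, num_edges] tensor of endpoints; y: per-node labels."""
    src, dst = edge_index
    return (y[src] == y[dst]).float().mean().item()
</syntaxhighlight>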
 
== Applications ==
=== Cyber security ===
{{See also|Intrusion detection system}}
When viewed as a graph, a network of computers can be analyzed with GNNs for anomaly detection. Anomalies within provenance graphs often correlate to malicious activity within the network. GNNs have been used to identify these anomalies on individual nodes<ref>{{Cite journal |last1=Wang |first1=Su |last2=Wang |first2=Zhiliang |last3=Zhou |first3=Tao |last4=Sun |first4=Hongbin |last5=Yin |first5=Xia |last6=Han |first6=Dongqi |last7=Zhang |first7=Han |last8=Shi |first8=Xingang |last9=Yang |first9=Jiahai |date=2022 |title=Threatrace: Detecting and Tracing Host-Based Threats in Node Level Through Provenance Graph Learning |url=https://ieeexplore.ieee.org/document/9899459 |journal=IEEE Transactions on Information Forensics and Security |volume=17 |pages=3972–3987 |doi=10.1109/TIFS.2022.3208815 |issn=1556-6021 |arxiv=2111.04333 |bibcode=2022ITIF...17.3972W |s2cid=243847506}}</ref> and within paths<ref>{{Cite journal |last1=Wang |first1=Qi |last2=Hassan |first2=Wajih Ul |last3=Li |first3=Ding |last4=Jee |first4=Kangkook |last5=Yu |first5=Xiao |date=2020 |title=You Are What You Do: Hunting Stealthy Malware via Data Provenance Analysis |journal=Network and Distributed Systems Security Symposium |doi=10.14722/ndss.2020.24167 |isbn=978-1-891562-61-7 |s2cid=211267791 |doi-access=free}}</ref> to detect malicious processes, or on the edge level<ref>{{Cite journal |last1=King |first1=Isaiah J. |last2=Huang |first2=H. Howie |date=2022 |title=Euler: Detecting Network Lateral Movement via Scalable Temporal Link Prediction |url=https://www.ndss-symposium.org/wp-content/uploads/2022-107A-paper.pdf |journal=Proceedings of the 29th Network and Distributed Systems Security Symposium |doi=10.14722/ndss.2022.24107 |s2cid=248221601}}</ref> to detect [[Network Lateral Movement|lateral movement]].
 
=== Water distribution networks ===
{{See also|Water distribution system}}
 
Water distribution systems can be modelled as graphs, making them a straightforward application domain for GNNs. GNNs have been applied to water demand forecasting,<ref>{{cite journal |url=https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2022WR032299 |title=Graph Convolutional Recurrent Neural Networks for Water Demand Forecasting |last=Zanfei |first=Ariele |display-authors=etal |date=2022 |journal=Water Resources Research |volume=58 |issue=7 |article-number=e2022WR032299 |publisher=AGU |doi=10.1029/2022WR032299 |bibcode=2022WRR....5832299Z |access-date=11 June 2024}}</ref> interconnecting district measuring areas to improve forecasting capacity. Another application in water distribution modelling is the development of metamodels.<ref>{{cite journal |url=https://www.sciencedirect.com/science/article/abs/pii/S0043135423007005 |title=Shall we always use hydraulic models? A graph neural network metamodel for water system calibration and uncertainty assessment |last=Zanfei |first=Ariele |journal=Water Research |display-authors=etal |date=2023 |volume=242 |article-number=120264 |doi=10.1016/j.watres.2023.120264 |pmid=37393807 |bibcode=2023WatRe.24220264Z |access-date=11 June 2024 |url-access=subscription}}</ref>
 
=== Computer vision ===
{{See also|Computer vision}}
 
To represent an image as a graph structure, the image is first divided into multiple patches, each of which is treated as a node in the graph. Edges are then formed by connecting each node to its nearest neighbors based on spatial or feature similarity. This graph-based representation enables the application of graph learning models to visual tasks. The relational structure helps to enhance feature extraction and improve performance on image understanding.<ref>{{cite arXiv |eprint=2206.00272 |last1=Han |first1=Kai |last2=Wang |first2=Yunhe |last3=Guo |first3=Jianyuan |last4=Tang |first4=Yehui |last5=Wu |first5=Enhua |title=Vision GNN: An Image is Worth Graph of Nodes |date=2022 |class=cs.CV }}</ref>
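A sketch of this construction follows (plain PyTorch; the patch size, neighbour count, and use of raw pixel values as patch features are illustrative simplifications — Vision GNN-style models compute similarity in a learned feature space):

<syntaxhighlight lang="python">
import torch

def patch_knn_graph(img, patch=4, k=8):
    """Split an image into patches and link each patch to its k nearest
    neighbours in (flattened-pixel) feature space."""
    c, h, w = img.shape
    # One feature row of length c*patch*patch per non-overlapping patch.
    feats = (img.unfold(1, patch, patch).unfold(2, patch, patch)
                .permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch))
    d = torch.cdist(feats, feats)            # pairwise patch distances
    d.fill_diagonal_(float('inf'))           # exclude self-loops
    nbrs = d.topk(k, largest=False).indices  # k nearest patches per patch
    src = torch.arange(feats.size(0)).repeat_interleave(k)
    return feats, torch.stack([src, nbrs.flatten()])

# Usage: node_feats, edge_index = patch_knn_graph(torch.randn(3, 32, 32))
</syntaxhighlight>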
 
=== Text and NLP ===
{{See also|Natural language processing}}
 
Graph-based representation of text helps to capture deeper semantic relationships between words. Many studies have used graph networks to enhance performance in various text processing tasks such as text classification, question answering, Neural Machine Translation (NMT), event extraction, fact verification, etc.<ref>{{Cite journal |last1=Zhou |first1=Jie |last2=Cui |first2=Ganqu |last3=Hu |first3=Shengding |last4=Zhang |first4=Zhengyan |last5=Yang |first5=Cheng |last6=Liu |first6=Zhiyuan |last7=Wang |first7=Lifeng |last8=Li |first8=Changcheng |last9=Sun |first9=Maosong |date=1 January 2020 |title=Graph neural networks: A review of methods and applications |journal=AI Open |volume=1 |pages=57–81 |doi=10.1016/j.aiopen.2021.01.001 |issn=2666-6510|doi-access=free }}</ref>
 
==References==
Line 161 ⟶ 172:
|url=https://www.nowpublishers.com/article/Details/MAL-096|journal=Foundations and Trends in Machine Learning|volume=16|issue=2|pages=119–328|doi=10.1561/2200000096|arxiv=2106.06090}}</ref>
<ref name="wucuipeizhao2022">{{Cite journal|last1=Wu|first1=Lingfei|last2=Cui|first2=Peng|last3=Pei |first3=Jian|last4=Zhao|first4=Liang|date=2022|title=Graph Neural Networks: Foundations, Frontiers, and Applications|url=https://graph-neural-networks.github.io/|journal=Springer Singapore|pages=725|url-access=<!--WP:URLACCESS-->}}</ref>
<ref name="scarselli2009">{{Cite journal|last1=Scarselli|first1=Franco|last2=Gori|first2=Marco|last3=Tsoi |first3=Ah Chung|last4=Hagenbuchner|first4=Markus|last5=Monfardini|first5=Gabriele|date=2009|title=The Graph Neural Network Model|url=https://ieeexplore.ieee.org/document/4700287|journal=IEEE Transactions on Neural Networks|volume=20|issue=1|pages=61–80|doi=10.1109/TNN.2008.2005605 |pmid=19068426|bibcode=2009ITNN...20...61S |s2cid=206756462|issn=1941-0093}}</ref>
<ref name="micheli2009">{{Cite journal|last1=Micheli|first1=Alessio|title=Neural Network for Graphs: A Contextual Constructive Approach|url=https://ieeexplore.ieee.org/document/4700287|journal=IEEE Transactions on Neural Networks|year=2009 |volume=20|issue=3|pages=498–511|doi=10.1109/TNN.2008.2010350 |pmid=19193509|bibcode=2009ITNN...20..498M |s2cid=17486263|issn=1045-9227}}</ref>
<ref name="sanchez2021">{{Cite journal|last1=Sanchez-Lengeling|first1=Benjamin|last2=Reif|first2=Emily |last3=Pearce|first3=Adam|last4=Wiltschko|first4=Alex|date=2 September 2021-09-02|title=A Gentle Introduction to Graph Neural Networks|url=https://distill.pub/2021/gnn-intro|journal=Distill|volume=6|issue=9|pages=e33 |doi=10.23915/distill.00033|issn=2476-0757|doi-access=free}}</ref>
<ref name="daigavane2021">{{Cite journal|last1=Daigavane|first1=Ameya|last2=Ravindran|first2=Balaraman |last3=Aggarwal|first3=Gaurav|date=2 September 2021-09-02|title=Understanding Convolutions on Graphs |url=https://distill.pub/2021/understanding-gnns|journal=Distill|volume=6|issue=9|pages=e32 |doi=10.23915/distill.00032|s2cid=239678898|issn=2476-0757|doi-access=free}}</ref>
<ref name="gilmer2017">{{Cite journal|last1=Gilmer|first1=Justin|last2=Schoenholz|first2=Samuel S. |last3=Riley|first3=Patrick F.|last4=Vinyals|first4=Oriol|last5=Dahl|first5=George E.|date=2017-07-17 July 2017|title=Neural Message Passing for Quantum Chemistry|url=http://proceedings.mlr.press/v70/gilmer17a.html |journal=Proceedings of Machine Learning Research|language=en|pages=1263–1272|arxiv=1704.01212}}</ref>
<ref name="kipf2016">{{Cite journal|last1=Kipf|first1=Thomas N|last2=Welling|first2=Max|date=2016 |title=Semi-supervised classification with graph convolutional networks|journal=IEEE Transactions on Neural Networks |url=https://ieeexplore.ieee.org/document/4700287 |volume=5|issue=1|pages=61–80 |doi=10.1109/TNN.2008.2005605|pmid=19068426|arxiv=1609.02907|bibcode=2009ITNN...20...61S |s2cid=206756462}}</ref>
<ref name="hamilton2017">{{Cite journal|last1=Hamilton|first1=William|last2=Ying|first2=Rex |last3=Leskovec|first3=Jure|date=2017|title=Inductive Representation Learning on Large Graphs|url=https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf|journal=Neural Information Processing Systems|volume=31|arxiv=1706.02216|via=Stanford}}</ref>
<ref name="velickovic2018">{{Cite arXiv|last1=Veličković|first1=Petar|last2=Cucurull|first2=Guillem |last3=Casanova|first3=Arantxa|last4=Romero|first4=Adriana|last5=Liò|first5=Pietro|last6=Bengio |first6=Yoshua|date=4 February 2018-02-04 |title=Graph Attention Networks|eprint=1710.10903 |class=stat.ML}}</ref>
<ref name=stanforddata>{{Cite web|title=Stanford Large Network Dataset Collection |url=https://snap.stanford.edu/data/|access-date=5 July 2021-07-05|website=snap.stanford.edu}}</ref>
<ref name="li2018">{{cite journalbook |last1=Li |first1=Zhuwen |last2=Chen |first2=Qifeng |last3=Koltun |first3=Vladlen |title=CombinatorialNeural optimizationInformation withProcessing graph|chapter=Text convolutionalSimplification networkswith andSelf-Attention-Based guided treePointer-Generator searchNetworks |journalseries=NeuralLecture InformationNotes Processingin SystemsComputer Science |date=2018 |volume=31 |pages=537–546 |doi=10.1007/978-3-030-04221-9_48 |arxiv=1810.10659 |isbn=978-3-030-04220-2 }}</ref>
<ref name="bronstein2021">{{cite arXiv |last1=Bronstein |first1=Michael M. |last2=Bruna |first2=Joan |last3=Cohen |first3=Taco |last4=Veličković |first4=Petar |title=Geometric Deep Learning: Grids, Groups, Graphs Geodesics and Gauges |date=May 4, May 2021 |class=cs.LG |eprint=2104.13478}}</ref>
<ref name=douglas2011>{{cite arXiv|last=Douglas|first=B. L.|date=27 January 2011|title=The Weisfeiler–Lehman Method and Graph Isomorphism Testing|class=math.CO|eprint=1101.5211}}</ref>
<ref name=xu2019>{{Cite arXiv|last1=Xu|first1=Keyulu|last2=Hu|first2=Weihua|last3=Leskovec|first3=Jure |last4=Jegelka|first4=Stefanie|author4-link=Stefanie Jegelka|date=22 February 2019|title=How Powerful are Graph Neural Networks? |eprint=1810.00826 |class=cs.LG}}</ref>
<ref name=velickovic2022>{{cite arXiv |last1=Veličković |first1=Petar |title=Message passing all the way up |year=2022 |class=cs.LG |eprint=2202.11097}}</ref>
<ref name=qasim2019>{{cite journal |last1=Qasim |first1=Shah Rukh |last2=Kieseler |first2=Jan |last3=Iiyama |first3=Yutaro |last4=Pierini |first4=Maurizio |title=Learning representations of irregular particle-detector geometry with distance-weighted graph networks |journal=The European Physical Journal C |date=2019 |volume=79 |issue=7 |page=608 |doi=10.1140/epjc/s10052-019-7113-9|s2cid=88518244 |doi-access=free |arxiv=1902.07987 |bibcode=2019EPJC...79..608Q }}</ref>
<ref name=grady2011discrete>{{cite book |last1=Grady |first1=Leo |last2=Polimeni |first2=Jonathan |title=Discrete Calculus: Applied Analysis on Graphs for Computational Science |url=http://leogrady.net/wp-content/uploads/2017/01/grady2010discrete.pdf |date=2011 |publisher=Springer }}</ref>
<ref name=xu2018>{{cite arXiv |last1=Xu |first1=Keyulu |last2=Li |first2=Chengtao |last3=Tian |first3=Yonglong |last4=Sonobe |first4=Tomohiro |last5=Kawarabayashi |first5=Ken-ichi |last6=Jegelka |first6=Stefanie|author6-link=Stefanie Jegelka |title=Representation Learning on Graphs with Jumping Knowledge Networks |date=2018 |class=cs.LG |eprint=1806.03536}}</ref>
<ref name=Lucibello2021GNN>{{cite web |last=Lucibello |first=Carlo |title=GraphNeuralNetworks.jl |website=[[GitHub]] |url=https://github.com/CarloLucibello/GraphNeuralNetworks.jl |year=2021 |access-date=21 September 2023}}</ref>
}}
 
[[Category:Artificial neural networks]]
[[Category:Graph algorithms]]
[[Category:2009 in artificial intelligence]]