Graph neural network

 
* <math>V</math>, a [[Set (mathematics)|set]] of ''vertices'' (also called ''nodes'' or ''points'');
* <math>E</math>, a [[Set (mathematics)|set]] of ''edges'' (either directed or undirected, also called ''links'' or ''lines'');
* <math>A</math>, the given graph's [[adjacency matrix]];
* <math>D</math>, the given graph's [[degree matrix]].
 
If the input data already has a graph structure, then this step is complete. Otherwise, the data must first be examined and reorganized into a graph according to the task's requirements, without destroying the data's properties (so that the model does not face the [[Garbage in, garbage out|"garbage in, garbage out"]] problem).
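As an illustrative sketch (not from the article), the step above can be shown by reorganizing raw relational data, here a hypothetical edge list, into the quantities defined earlier: the vertex count <math>|V|</math>, the edge set <math>E</math>, the adjacency matrix <math>A</math>, and the degree matrix <math>D</math>:

```python
import numpy as np

# Hypothetical raw data: an undirected edge list over 4 vertices.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # E, the set of edges
n = 4                                     # |V|, the number of vertices

# A: adjacency matrix (symmetric, since the edges are undirected)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# D: diagonal degree matrix, D[i, i] = degree of vertex i
D = np.diag(A.sum(axis=1))

print(D.diagonal())  # degree of each vertex: [2. 2. 3. 1.]
```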
* Sampling module: when a graph is too large, sampling modules are needed to keep computation tractable.
* Pooling module: when higher-level information (sub-graph-level or graph-level) is needed for the task, pooling modules are needed to aggregate low-level information and provide hierarchical propagation.
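A minimal sketch of a sampling module, assuming uniform neighbor sampling in the style of GraphSAGE (the function name and adjacency-list format are illustrative, not from the article):

```python
import random

def sample_neighbors(adj_list, node, k, seed=None):
    """Uniformly sample at most k neighbors of a node.

    adj_list: dict mapping each node to a list of its neighbors.
    Capping the neighborhood size keeps per-node computation bounded
    even when some nodes have very high degree.
    """
    rng = random.Random(seed)
    neighbors = adj_list[node]
    if len(neighbors) <= k:
        return list(neighbors)
    return rng.sample(neighbors, k)

# A small star graph: node 0 is connected to nodes 1-4.
adj_list = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(sample_neighbors(adj_list, 0, 2, seed=0))  # 2 of node 0's neighbors
```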
 
== Architecture ==
 
=== Convolution based methods ===
The main idea of these methods is to generalize [[Convolutional neural network|standard CNNs]] or [[Attention (machine learning)|attentional methods]] to graph-structured data: define [[receptive fields]] with respect to given nodes and propagate information within them. Based on the operation ___domain, these methods can be further divided into two groups:
 
==== Spectral approaches ====
For spectral approaches, a graph signal <math>x</math> is first transformed to the spectral ___domain by the [[graph Fourier transform]] <math>\mathcal{F}</math>, where the convolution operation can be conducted. The result is then transformed back to the original (spatial) ___domain by the [[Graph Fourier Transform|inverse graph Fourier transform]] <math>\mathcal{F}^{-1}</math>. <math>\mathcal{F}</math> and <math>\mathcal{F}^{-1}</math> are defined as:
 
* <math>\mathcal{F}(x) = U^{T}x</math>
* <math>\mathcal{F}^{-1}(x) = Ux</math>,
 
where <math>U</math> is the matrix of eigenvectors of the [[Graph Laplacian|symmetric normalized graph Laplacian]] <math>L^\text{sym}</math>. <math>L^\text{sym}</math> is defined as:
 
<math>L^\text{sym} := D^{-\frac{1}{2}} L D^{-\frac{1}{2}} = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}} = U\Lambda U^{T}</math> .
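The spectral pipeline above can be sketched numerically: build <math>L^\text{sym}</math> for a small graph, eigendecompose it to obtain <math>U</math>, and check that the inverse transform recovers the original signal. The graph and signal values here are illustrative assumptions, not from the article.

```python
import numpy as np

# Adjacency matrix of a 4-node path graph: 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# L_sym = I - D^{-1/2} A D^{-1/2}, as defined above.
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L_sym = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# L_sym is symmetric, so eigh gives an orthonormal eigenvector matrix U
# and real eigenvalues (the diagonal of Lambda in L_sym = U Lambda U^T).
eigvals, U = np.linalg.eigh(L_sym)

x = np.array([1.0, -2.0, 0.5, 3.0])  # a graph signal
x_hat = U.T @ x                      # F(x)      = U^T x
x_back = U @ x_hat                   # F^{-1}(.) = U x_hat

print(np.allclose(x_back, x))        # True: the transform is invertible
```

Because <math>U</math> is orthogonal, <math>\mathcal{F}^{-1}(\mathcal{F}(x)) = UU^{T}x = x</math>, which the final check confirms.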
 
 
== References ==