Graphs are constructed by kNN. 3D-GCN designs deformable 3D kernels, where each kernel has one center kernel point <math>k_C\in \mathbb{R}^3</math> and several support points <math>k_1, k_2, ..., k_S\in \mathbb{R}^3</math>. Given a data point <math>p_n\in \mathbb{R}^3</math> and its neighbor points <math>p_1, p_2, ..., p_M \in \mathbb{R}^3</math>, the convolution takes the direction vector from the center point to each neighbor, <math>p_m - p_n</math>, and the direction vector from the center kernel point to each support, <math>k_s - k_C = k_s</math> (since <math>k_C</math> is set to <math>(0,0,0)</math>), computes their [[cosine similarity]], and then maps this similarity to feature space with learnable parameters <math>w</math>. Since the convolution is computed from cosine similarities rather than exact coordinates, 3D-GCN captures a 3D object's geometry rather than its ___location, and is fully '''shift and scale invariant.''' Similar to KC-Net, 3D-GCN also designs a graph max-pooling that explores multi-resolution information while preserving the largest activation.
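The direction-vector-based convolution above can be sketched as follows. This is a minimal single-kernel illustration, not the paper's implementation: the function name, the per-support weighting, and the max-over-neighbors aggregation are assumptions made for clarity.

```python
import numpy as np

def gcn3d_conv(p_n, neighbors, supports, w, w_center):
    """Hedged sketch of one 3D-GCN kernel response (names hypothetical).

    p_n:       (3,) center data point
    neighbors: (M, 3) neighbor points found by kNN
    supports:  (S, 3) kernel support points (the center kernel point k_C
               is fixed at the origin, so k_s - k_C = k_s)
    w:         (S,) learnable weight per support point
    w_center:  scalar learnable weight for the center kernel point
    """
    # Direction vectors from the center point to each neighbor, normalized
    d = neighbors - p_n                                             # (M, 3)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    k = supports / np.linalg.norm(supports, axis=1, keepdims=True)  # (S, 3)
    # Cosine similarity between every (neighbor, support) direction pair
    sim = d @ k.T                                                   # (M, S)
    # For each support, keep the best-matching neighbor, then weight and sum
    return w_center + np.sum(sim.max(axis=0) * w)
```

Because only normalized direction vectors enter the computation, translating or uniformly scaling the whole neighborhood leaves the response unchanged, which is the shift and scale invariance claimed above.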
=== Recurrent based methods<ref>{{Cite journal|last=Li|first=Yujia|last2=Tarlow|first2=Daniel|last3=Brockschmidt|first3=Marc|last4=Zemel|first4=Richard|date=2017-09-22|title=Gated Graph Sequence Neural Networks|url=http://arxiv.org/abs/1511.05493|journal=arXiv:1511.05493 [cs, stat]}}</ref><ref>{{Cite journal|last=Tai|first=Kai Sheng|last2=Socher|first2=Richard|last3=Manning|first3=Christopher D.|title=Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks|url=https://aclanthology.org/P15-1150|journal=|language=en-us|pages=1556–1566|doi=10.3115/v1/P15-1150}}</ref><ref>{{Cite journal|last=Zayats|first=Victoria|last2=Ostendorf|first2=Mari|date=2018|title=Conversation Modeling on Reddit Using a Graph-Structured LSTM|url=https://aclanthology.org/Q18-1009|journal=Transactions of the Association for Computational Linguistics|language=en-us|volume=6|pages=121–132|doi=10.1162/tacl_a_00009}}</ref> ===
Different from convolution based methods, which learn different weights at each layer, recurrent based methods tend to use the same shared module and recursively update node or edge information, borrowing [[Recurrent neural network|RNN]] mechanisms such as the GRU and LSTM.
== References ==