To represent an image as a graph structure, the image is first divided into multiple patches, each of which is treated as a node in the graph. Edges are then formed by connecting each node to its nearest neighbors based on spatial or feature similarity. This graph-based representation enables the application of graph learning models to visual tasks, where the relational structure helps to enhance feature extraction and improve performance on image understanding tasks.<ref>{{Citation |last=Han |first=Kai |title=Vision GNN: An Image is Worth Graph of Nodes |date=2022-11-04 |url=http://arxiv.org/abs/2206.00272 |access-date=2025-06-03 |publisher=arXiv |doi=10.48550/arXiv.2206.00272 |id=arXiv:2206.00272 |last2=Wang |first2=Yunhe |last3=Guo |first3=Jianyuan |last4=Tang |first4=Yehui |last5=Wu |first5=Enhua}}</ref>
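The following Python code is an illustrative sketch of this construction (it is not taken from the cited paper): it splits an image into fixed-size patches, uses the flattened pixel values of each patch as node features, and connects each node to its ''k'' most similar patches by Euclidean distance. The function name <code>image_to_graph</code> and the parameter values are hypothetical.

<syntaxhighlight lang="python">
# Illustrative sketch: k-nearest-neighbour graph over image patches,
# with raw pixel values as node features (models typically use learned embeddings).
import numpy as np

def image_to_graph(image: np.ndarray, patch_size: int = 16, k: int = 8):
    """Split an (H, W, C) image into patches and connect each patch
    to its k most similar patches by Euclidean distance."""
    h, w, c = image.shape
    patches = []
    for i in range(0, h - h % patch_size, patch_size):
        for j in range(0, w - w % patch_size, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size].reshape(-1))
    features = np.stack(patches)                # one feature row per patch (node)
    # pairwise Euclidean distances between patch features
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)             # exclude self-loops
    edges = []
    for node, row in enumerate(dists):
        for neighbour in np.argsort(row)[:k]:   # k nearest neighbours of this node
            edges.append((node, int(neighbour)))
    return features, edges

# Example: a 64x64 RGB image yields a 16-node graph with 8 edges per node.
feats, edge_list = image_to_graph(np.random.rand(64, 64, 3))
</syntaxhighlight>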
=== Text and NLP ===
{{See also|Natural language processing}}
Graph-based representations of text help to capture deeper semantic relationships between words. Many studies have used graph neural networks to improve performance on text processing tasks such as text classification, question answering, neural machine translation (NMT), event extraction, and fact verification.<ref>{{Cite journal |last=Zhou |first=Jie |last2=Cui |first2=Ganqu |last3=Hu |first3=Shengding |last4=Zhang |first4=Zhengyan |last5=Yang |first5=Cheng |last6=Liu |first6=Zhiyuan |last7=Wang |first7=Lifeng |last8=Li |first8=Changcheng |last9=Sun |first9=Maosong |date=2020-01-01 |title=Graph neural networks: A review of methods and applications |url=https://www.sciencedirect.com/science/article/pii/S2666651021000012 |journal=AI Open |volume=1 |pages=57–81 |doi=10.1016/j.aiopen.2021.01.001 |issn=2666-6510}}</ref>
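As a minimal, hypothetical sketch of one common text-graph construction (a word co-occurrence graph, which is only one of several options and is not prescribed by the cited survey), the following Python code treats unique words as nodes and adds a weighted edge between words that appear together within a sliding window. The helper <code>text_to_graph</code> and its parameters are illustrative.

<syntaxhighlight lang="python">
# Illustrative sketch: word co-occurrence graph for a short text.
from collections import defaultdict

def text_to_graph(tokens: list[str], window: int = 3):
    """Return nodes (unique words) and undirected co-occurrence edges,
    weighted by how often two words appear within the same window."""
    nodes = sorted(set(tokens))
    edges = defaultdict(int)
    for i, word in enumerate(tokens):
        for other in tokens[i + 1:i + window]:
            if other != word:
                edges[tuple(sorted((word, other)))] += 1  # edge weight = co-occurrence count
    return nodes, dict(edges)

nodes, edges = text_to_graph("graph networks help capture semantic relations between words".split())
</syntaxhighlight>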
==References==