Node2vec
'''node2vec''' is an algorithm for learning low-dimensional vector representations of the nodes of a [[Graph theory|graph]]. It generates these representations from [[random walk]]s through the graph, each starting at a target node, and is useful for a variety of [[machine learning]] applications. ''node2vec'' follows the intuition that random walks through a graph can be treated like sentences in a corpus: each node is treated as an individual word, and each walk as a sentence. By feeding these "sentences" into a [[N-gram|skip-gram]] or [[Bag-of-words model|continuous bag of words]] model, traditional word-embedding and data-mining techniques for documents can be applied to graphs. The algorithm generalizes prior work based on rigid notions of network neighborhoods, and its authors argue that the added flexibility in exploring neighborhoods is the key to learning richer representations of nodes in graphs.<ref>{{cite book|last1=Grover|first1=Aditya|last2=Leskovec|first2=Jure|title=Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining |chapter=Node2vec |year=2016|volume=2016|pages=855–864|doi=10.1145/2939672.2939754|pmid=27853626|pmc=5108654|arxiv=1607.00653|bibcode=2016arXiv160700653G|isbn=9781450342322 }}</ref>
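The walk-generation step described above can be sketched in a few lines of Python. This is a minimal illustration, not the reference implementation: the graph, node names, and parameter values below are invented for the example. It implements the second-order biased walk from the node2vec paper, where the return parameter <code>p</code> and in-out parameter <code>q</code> reweight transitions depending on the distance of each candidate neighbor from the previously visited node; the resulting walks are the "sentences" that would then be fed to a skip-gram model.

```python
import random

# Hypothetical toy graph as an adjacency list (illustrative only).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

def node2vec_walk(graph, start, length, p=1.0, q=1.0, rng=random):
    """One biased random walk: a 'sentence' whose 'words' are node names."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = graph[cur]
        if not neighbors:
            break
        if len(walk) == 1:
            # First step: no previous node, so choose uniformly.
            walk.append(rng.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for nbr in neighbors:
            if nbr == prev:
                weights.append(1.0 / p)   # step back to the previous node
            elif nbr in graph[prev]:
                weights.append(1.0)       # neighbor of prev: stay "close" (BFS-like)
            else:
                weights.append(1.0 / q)   # move outward (DFS-like)
        walk.append(rng.choices(neighbors, weights=weights)[0])
    return walk

def generate_corpus(graph, walks_per_node=10, walk_length=5, p=1.0, q=1.0, seed=0):
    """Collect several walks per start node, forming a corpus of 'sentences'."""
    rng = random.Random(seed)
    return [
        node2vec_walk(graph, node, walk_length, p, q, rng)
        for node in graph
        for _ in range(walks_per_node)
    ]

corpus = generate_corpus(graph)
# Each walk in `corpus` can now be passed to a skip-gram trainer
# exactly as a sentence of words would be.
```

Small <code>q</code> biases the walk outward toward unexplored nodes (DFS-like neighborhoods), while small <code>p</code> keeps it near the start node (BFS-like neighborhoods); this is the flexibility the algorithm's authors highlight.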
A comparative study of unsupervised network representation learning found ''node2vec'' to perform competitively on node classification tasks.<ref>{{cite journal|arxiv=1903.07902|doi=10.1109/tkde.2019.2951398|title=A Comparative Study for Unsupervised Network Representation Learning|year=2020|last1=Khosla|first1=Megha|last2=Setty|first2=Vinay|last3=Anand|first3=Avishek|journal=IEEE Transactions on Knowledge and Data Engineering|page=1|s2cid=207870054}}</ref>