'''node2vec''' is an algorithm to generate vector representations of nodes on a [[Graph theory|graph]]. The ''node2vec'' framework learns low-dimensional representations for nodes in a graph by simulating [[random walk]]s starting from each node. The learned representations are useful for a variety of [[machine learning]] applications; besides reducing the engineering effort, they lead to greater predictive power.<ref>{{Cite web|url=https://snap.stanford.edu/node2vec/|title=node2vec: Scalable Feature Learning for Networks}}</ref> ''node2vec'' follows the intuition that random walks through a graph can be treated like sentences in a corpus: each node is treated as an individual word, and each random walk as a sentence. By feeding these "sentences" into a [[N-gram|skip-gram]] or [[Bag-of-words model|continuous bag of words]] model, the same techniques used to learn word embeddings from documents can be applied to the walks. The algorithm generalizes prior work that relied on rigid notions of network neighborhoods, arguing that added flexibility in exploring neighborhoods is the key to learning richer representations of nodes in graphs.<ref>{{Cite journal|last=Grover|first=Aditya|last2=Leskovec|first2=Jure|date=2016|title=node2vec: Scalable Feature Learning for Networks}}</ref>
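A minimal sketch of this idea, assuming the <code>networkx</code> and <code>gensim</code> Python libraries, generates unbiased random walks over a small graph and trains a skip-gram model on them. It omits the biased walk controlled by node2vec's return and in-out parameters (''p'' and ''q''), so it illustrates the general walk-as-sentence approach rather than the reference implementation.

<syntaxhighlight lang="python">
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, walks_per_node=10, walk_length=20):
    """Generate uniform random walks; each walk becomes one 'sentence'."""
    walks = []
    for _ in range(walks_per_node):
        for start in graph.nodes():
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            # word2vec expects string tokens, so node labels are converted
            walks.append([str(node) for node in walk])
    return walks

graph = nx.karate_club_graph()
walks = random_walks(graph)
# sg=1 selects the skip-gram model; each walk is treated as a sentence
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0)
vector = model.wv["0"]  # 64-dimensional embedding of node 0
</syntaxhighlight>

In the full algorithm, the walks are biased by the two parameters mentioned above, which interpolate between breadth-first and depth-first exploration of a node's neighborhood; this is the flexibility that distinguishes ''node2vec'' from earlier methods with fixed neighborhood definitions.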
==References==
{{Reflist}}