{{About|machine learning|the graph-theoretical notion|Glossary of graph theory}}
 
In [[structure mining]], a '''graph kernel''' is a [[Positive-definite kernel|kernel function]] that computes an [[inner product space|inner product]] on [[Graph (abstract data type)|graphs]].<ref name="Vishwanathan">{{cite journal|title=Graph kernels|author1=S.V. N. Vishwanathan|author2=Nicol N. Schraudolph|author3=Risi Kondor|author4=Karsten M. Borgwardt|author4-link=Karsten Borgwardt|journal=[[Journal of Machine Learning Research]]|volume=11|pages=1201–1242|year=2010|url=http://jmlr.csail.mit.edu/papers/volume11/vishwanathan10a/vishwanathan10a.pdf}}</ref>
Graph kernels can be intuitively understood as functions measuring the similarity of pairs of graphs. They allow [[Kernel trick|kernelized]] learning algorithms such as [[support vector machine]]s to work directly on graphs, without having to do [[feature extraction]] to transform them to fixed-length, real-valued [[feature vector]]s. They find applications in [[bioinformatics]], in [[chemoinformatics]] (as a type of [[molecule kernel]]s<ref name="Ralaivola2005">{{cite journal |author1=L. Ralaivola |author2=S. J. Swamidass |author3=H. Saigo |author4=P. Baldi |title=Graph kernels for chemical informatics |journal=Neural Networks |year=2005 |volume=18 |issue=8 |pages=1093–1110 |doi=10.1016/j.neunet.2005.07.009|pmid=16157471 }}</ref>), and in [[social network analysis]].<ref name="Vishwanathan"/>
 
An example of a kernel between graphs is the '''random walk kernel''',<ref name="Gaertner"/><ref name="Kashima"/> which conceptually performs [[random walk]]s on two graphs simultaneously, then counts the number of [[Path (graph theory)|path]]s that were produced by ''both'' walks. This is equivalent to doing random walks on the [[Tensor product of graphs|direct product]] of the pair of graphs, and from this, a kernel can be derived that can be efficiently computed.<ref name="Vishwanathan"/>
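The direct-product construction above admits a closed form for one common variant, the ''geometric'' random walk kernel: weighting length-<math>k</math> simultaneous walks by <math>\lambda^k</math> and summing gives <math>\mathbf{1}^\top (I - \lambda A_\times)^{-1} \mathbf{1}</math>, where <math>A_\times</math> is the adjacency matrix of the direct product graph. A minimal sketch, assuming NumPy, dense adjacency matrices, and <math>\lambda</math> below the reciprocal of the spectral radius of <math>A_\times</math> (function names are illustrative, not from any particular library):

```python
import numpy as np

def direct_product_adjacency(A1, A2):
    # Adjacency matrix of the direct (tensor) product graph: nodes are
    # pairs (u, v); (u, v) ~ (u', v') iff u ~ u' in G1 and v ~ v' in G2.
    # This is exactly the Kronecker product of the adjacency matrices.
    return np.kron(A1, A2)

def geometric_random_walk_kernel(A1, A2, lam=0.1):
    """Sum over k of lam**k times the number of length-k simultaneous
    walks, computed in closed form as 1^T (I - lam * Ax)^{-1} 1.
    Converges only if lam < 1 / spectral_radius(Ax)."""
    Ax = direct_product_adjacency(A1, A2)
    n = Ax.shape[0]
    # Solve (I - lam * Ax) x = 1 instead of forming the inverse explicitly.
    x = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
    return float(np.ones(n) @ x)
```

For example, comparing a triangle with a 3-node path only requires their 3×3 adjacency matrices; the kernel is symmetric in its two graph arguments, as any kernel must be.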
 
Another example is the '''Weisfeiler-Leman graph kernel''',<ref>Shervashidze, Nino, et al. "Weisfeiler-Lehman Graph Kernels." Journal of Machine Learning Research 12.9 (2011).</ref> which runs multiple rounds of the [[Weisfeiler-Leman algorithm]] and then computes the similarity of two graphs as the inner product of their histogram vectors. Each histogram vector counts, for every iteration, how many times each color occurs in the graph.
Note that the Weisfeiler-Leman kernel in theory has infinite dimension, since the number of possible colors assigned by the Weisfeiler-Leman algorithm is infinite. Restricting the computation to the colors that occur in both graphs keeps it feasible.
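The color-refinement and histogram steps can be sketched as follows, assuming graphs given as adjacency lists with hashable initial node labels (the function names and the tuple-based color encoding are illustrative choices, not a canonical implementation):

```python
from collections import Counter

def wl_histograms(adj, labels, iterations=3):
    """Run `iterations` rounds of Weisfeiler-Leman color refinement and
    return a histogram counting every color seen in every round."""
    hist = Counter(labels)          # round-0 colors are the node labels
    colors = list(labels)
    for _ in range(iterations):
        new_colors = []
        for v, neighbors in enumerate(adj):
            # A node's refined color encodes its own color together with
            # the sorted multiset of its neighbors' colors.
            new_colors.append((colors[v],
                               tuple(sorted(colors[u] for u in neighbors))))
        colors = new_colors
        hist.update(colors)
    return hist

def wl_kernel(adj1, labels1, adj2, labels2, iterations=3):
    """Inner product of the two color histograms. Only colors occurring
    in both graphs contribute, so the sum is finite even though the
    space of possible colors is infinite."""
    h1 = wl_histograms(adj1, labels1, iterations)
    h2 = wl_histograms(adj2, labels2, iterations)
    return sum(h1[c] * h2[c] for c in h1.keys() & h2.keys())
```

In this sketch a graph's self-similarity is at least as large as its similarity to a structurally different graph of the same size, since refinement quickly produces colors unique to each graph.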
 
[[Category:Graph algorithms]]
[[Category:Kernel methods for machine learning]]
 
{{comp-sci-stub}}
 
{{machine-learning-stub}}