Hyper basis function network

{{Orphan|date=December 2014}}
In [[machine learning]], a '''hyper basis function network''', or '''HyperBF network''', is a generalization of the [[Radial basis function network|radial basis function (RBF) network]] concept, in which a [[Mahalanobis distance|Mahalanobis]]-like distance is used in place of the Euclidean distance measure. Hyper basis function networks were first introduced by Poggio and Girosi in the 1990 paper "Networks for Approximation and Learning".<ref name="PoggioGirosi1990">T. Poggio and F. Girosi (1990). "Networks for Approximation and Learning". ''Proceedings of the IEEE'' '''78''' (9): 1481–1497.</ref><ref name="Mahdi">R.N. Mahdi, E.C. Rouchka (2011). [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5733426 "Reduced HyperBF Networks: Regularization by Explicit Complexity Reduction and Scaled Rprop-Based Training"]. ''IEEE Transactions on Neural Networks'' '''22''': 673–686.</ref>
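The only change relative to a standard RBF unit is in the distance computation. The following is a minimal sketch (an illustration, not code from the cited papers; function and variable names are chosen here for clarity), assuming the common formulation in which each hidden unit <math>j</math> has a center <math>\mu_j</math>, a positive-definite matrix <math>R_j</math>, and an output weight <math>a_j</math>, so that <math>y(x) = \textstyle\sum_j a_j \exp\left(-(x-\mu_j)^T R_j (x-\mu_j)\right)</math>:

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of a HyperBF forward pass (illustrative, assumed
# formulation; not code from the cited papers). Each unit j has a center
# mu_j, a positive-definite matrix R_j defining a Mahalanobis-like
# squared distance, and an output weight a_j.

def hyperbf_forward(x, centers, R_matrices, weights):
    """Evaluate y(x) = sum_j a_j * exp(-(x - mu_j)^T R_j (x - mu_j))."""
    y = 0.0
    for mu_j, R_j, a_j in zip(centers, R_matrices, weights):
        d = x - mu_j                 # offset from the j-th center
        dist_sq = d @ R_j @ d        # Mahalanobis-like squared distance
        y += a_j * np.exp(-dist_sq)  # Gaussian radial activation
    return y

# Example: two basis functions in a 2-D input space.
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
R_matrices = [np.eye(2), np.diag([2.0, 0.5])]  # R_j = I gives a plain RBF unit
weights = [1.0, -0.5]
print(hyperbf_forward(np.array([0.5, 0.5]), centers, R_matrices, weights))
</syntaxhighlight>

Setting every <math>R_j</math> to the identity matrix reduces the Mahalanobis-like distance to the Euclidean distance, recovering an ordinary RBF network; this is the sense in which HyperBF networks are a strict generalization.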
 
==Network Architecture==
In the iterative updates of the network parameters, <math>\omega</math> determines the rate of convergence.
 
Overall, training HyperBF networks can be computationally challenging. Moreover, the large number of free parameters in a HyperBF network can lead to overfitting and poor generalization. However, HyperBF networks have the important advantage that a small number of neurons is enough to learn complex functions.<ref name="Mahdi"/>
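As a concrete illustration of why training is costly, the following minimal sketch (an assumed plain gradient-descent formulation with learning rate <math>\omega</math>, not the scaled-Rprop-based method of the cited paper; all names are illustrative) performs one step on the squared error, updating only the output weights and centers. Updating the matrices <math>R_j</math> as well would add a full matrix of free parameters per unit, which is the source of the high degree of freedom noted above:

<syntaxhighlight lang="python">
import numpy as np

def train_step(X, t, centers, R_matrices, weights, omega=0.01):
    """One gradient-descent step on H = sum_i (t_i - y(x_i))^2.

    Updates the output weights a_j and centers mu_j; the matrices R_j
    are held fixed here for brevity.
    """
    n_units = len(weights)
    grad_a = np.zeros(n_units)
    grad_mu = [np.zeros_like(mu) for mu in centers]
    for x_i, t_i in zip(X, t):
        # Unit activations phi_j(x_i) = exp(-(x_i - mu_j)^T R_j (x_i - mu_j)).
        phis = np.array([np.exp(-(x_i - mu) @ R @ (x_i - mu))
                         for mu, R in zip(centers, R_matrices)])
        err = t_i - phis @ np.array(weights)  # residual t_i - y(x_i)
        for j in range(n_units):
            grad_a[j] += -2.0 * err * phis[j]
            # d/dmu_j of -(x - mu_j)^T R_j (x - mu_j) is (R_j + R_j^T)(x - mu_j).
            grad_mu[j] += (-2.0 * err * weights[j] * phis[j]
                           * (R_matrices[j] + R_matrices[j].T) @ (x_i - centers[j]))
    for j in range(n_units):
        weights[j] -= omega * grad_a[j]    # descend on the output weights
        centers[j] -= omega * grad_mu[j]   # descend on the unit centers
    return weights, centers

# Toy usage: fit 20 random 2-D points with a two-unit network.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
t = np.sin(X[:, 0])
centers = [np.zeros(2), np.ones(2)]
R_matrices = [np.eye(2), np.eye(2)]
weights = [0.1, 0.1]
for _ in range(100):
    weights, centers = train_step(X, t, centers, R_matrices, weights, omega=0.01)
</syntaxhighlight>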
 
==References==
{{Reflist}}