Radial basis function network: Difference between revisions

[[Image:Radial funktion network.svg|thumb|250px|right|Figure 1: Architecture of a radial basis function network. An input vector <math>x</math> is used as input to all radial basis functions, each with different parameters. The output of the network is a linear combination of the outputs from radial basis functions.]]
 
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer.<ref>{{cite journal | last1 = Franco | first1 = D. | last2 = Steiner | first2 = M. | year = 2018 | title = New Strategies for Initialization and Training of Radial Basis Function Neural Networks | url=https://drive.google.com/file/d/1JYWcm1tzXd2vPJlz92xrpqcNx76i_UnS/view?usp=sharing | journal = IEEE Latin America Transactions | volume = 15 | issue = 6 | pages = 1182–1188 | doi = 10.1109/TLA.2017.7932707 }}</ref> The input can be modeled as a vector of real numbers <math>\mathbf{x} \in \mathbb{R}^n</math>. The output of the network is then a scalar function of the input vector, <math> \varphi : \mathbb{R}^n \to \mathbb{R} </math>, and is given by
 
:<math>\varphi(\mathbf{x}) = \sum_{i=1}^N a_i \rho(||\mathbf{x}-\mathbf{c}_i||)</math>
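This sum can be sketched in a few lines of NumPy. The sketch below assumes a Gaussian basis function <math>\rho(r) = \exp(-\beta r^2)</math> (one common choice; the section's formula leaves <math>\rho</math> generic), with hypothetical names <code>centers</code> for the <math>\mathbf{c}_i</math> and <code>weights</code> for the <math>a_i</math>:

```python
import numpy as np

def rbf_network(x, centers, weights, beta=1.0):
    """Evaluate phi(x) = sum_i a_i * rho(||x - c_i||).

    centers: (N, n) array of the N center vectors c_i
    weights: (N,) array of the linear coefficients a_i
    beta:    width parameter of the assumed Gaussian basis
    """
    # Euclidean distance ||x - c_i|| to each center, shape (N,)
    r = np.linalg.norm(centers - x, axis=1)
    # Gaussian basis rho(r) = exp(-beta * r^2), then the weighted sum
    return weights @ np.exp(-beta * r**2)
```

For example, a center that coincides with the input contributes its weight exactly (since <math>\rho(0) = 1</math>), while more distant centers contribute less.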