Probabilistic neural network

A '''probabilistic neural network (PNN)'''<ref name="pnn-book">{{cite book |last1=Mohebali |first1=Behshad |last2=Tahmassebi |first2=Amirhessam |last3=Meyer-Baese |first3=Anke |last4=Gandomi |first4=Amir H. |title=Probabilistic neural networks: a brief overview of theory, implementation, and application |date=2020 |publisher=Elsevier |pages=347–367 |doi=10.1016/B978-0-12-816514-0.00014-X |s2cid=208119250 }}</ref> is a [[feedforward neural network]] widely used in classification and pattern recognition problems. In the PNN algorithm, the parent probability density function (PDF) of each class is approximated non-parametrically by a [[Kernel density estimation|Parzen window]]. Using the estimated PDF of each class, the class probability of a new input is computed, and Bayes' rule is employed to assign the input to the class with the highest posterior probability. This minimizes the probability of misclassification.<ref>{{Cite journal|url=https://www.researchgate.net/publication/312519997|title=Competitive probabilistic neural network|year=2017|doi=10.3233/ICA-170540|last1=Zeinali|first1=Yasha|last2=Story|first2=Brett A.|journal=Integrated Computer-Aided Engineering|volume=24|issue=2|pages=105–118}}</ref> This type of [[artificial neural network]] (ANN) was derived from the [[Bayesian network]]<ref>{{cite web |url=http://herselfsai.com/2007/03/probabilistic-neural-networks.html |title=Probabilistic Neural Networks |access-date=2012-03-22 |url-status=dead |archive-url=https://web.archive.org/web/20101218121158/http://herselfsai.com/2007/03/probabilistic-neural-networks.html |archive-date=2010-12-18 }}</ref> and a statistical algorithm called [[Kernel Fisher discriminant analysis]].<ref>{{cite web |url=http://www.psi.toronto.edu/~vincent/research/presentations/PNN.pdf |title=Archived copy |access-date=2012-03-22 |url-status=dead |archive-url=https://web.archive.org/web/20120131053940/http://www.psi.toronto.edu/~vincent/research/presentations/PNN.pdf |archive-date=2012-01-31 }}</ref> It was introduced by D. F. Specht in 1966.<ref>{{Cite journal|last=Specht|first=D. F.|date=1967-06-01|title=Generation of Polynomial Discriminant Functions for Pattern Recognition|journal=IEEE Transactions on Electronic Computers|volume=EC-16|issue=3|pages=308–319|doi=10.1109/PGEC.1967.264667|issn=0367-7508}}</ref><ref name=Specht1990>{{Cite journal | last1 = Specht | first1 = D. F. | doi = 10.1016/0893-6080(90)90049-Q | title = Probabilistic neural networks | journal = Neural Networks | volume = 3 | pages = 109–118 | year = 1990 }}</ref> In a PNN, the operations are organized into a multilayered feedforward network with four layers:
* Input layer
* Pattern layer
 
===Pattern layer===
This layer contains one neuron for each case in the training data set. It stores the values of the predictor variables for the case along with the target value. A hidden neuron computes the [[Euclidean distance]] of the test case from the neuron's center point and then applies the [[radial basis function]] kernel using the sigma values.
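The pattern-layer computation described above can be sketched as follows. This is a minimal illustration, not the reference implementation: it assumes a Gaussian radial basis function with a single shared sigma, and the names `pattern_layer`, `x`, and `centers` are chosen for this example.

```python
import numpy as np

def pattern_layer(x, centers, sigma):
    """Activations of the pattern layer for one test case.

    x       : query vector, shape (d,)
    centers : stored training cases, shape (n, d); one row per pattern neuron
    sigma   : smoothing parameter of the Parzen window (assumed shared here)
    """
    # Euclidean distance of the test case from each neuron's center point
    dists = np.linalg.norm(centers - x, axis=1)
    # Gaussian RBF kernel applied to those distances using the sigma value
    return np.exp(-dists**2 / (2.0 * sigma**2))
```

Each activation lies in (0, 1] and peaks at 1 when the test case coincides with a stored training case; the summation layer would then pool these activations per class.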
 
===Summation layer===