Probabilistic neural network

A Probabilistic Neural Network (PNN) is a feedforward neural network derived from the Bayesian network[1] and a statistical algorithm called Kernel Fisher discriminant analysis[2]. It was introduced by D. F. Specht in the early 1990s. In a PNN, the operations are organized into a multilayered feedforward network with four layers:

  • Input layer
  • Hidden layer
  • Pattern layer/Summation layer
  • Output layer

Layers of a PNN

Input layer

Each neuron in the input layer represents a predictor variable. For a categorical variable with N categories, N-1 neurons are used. The input layer standardizes the range of the values by subtracting the median and dividing by the interquartile range. The input neurons then feed the values to each of the neurons in the hidden layer.
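As a rough illustration, this robust standardization step might look like the following (a minimal NumPy sketch; the function name standardize is illustrative, not from the original article):

```python
import numpy as np

def standardize(X):
    # Robust standardization, column by column: subtract the median
    # and divide by the interquartile range (IQR), as the input layer does.
    median = np.median(X, axis=0)
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    return (X - median) / (q75 - q25)
```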

Hidden layer

This layer contains one neuron for each case in the training data set. It stores the values of the predictor variables for the case along with the target value. A hidden neuron computes the Euclidean distance of the test case from the neuron's center point and then applies the RBF kernel function using the sigma values.
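A minimal sketch of this step, assuming a Gaussian RBF kernel (the usual choice in Specht's formulation) and a single shared sigma; the function name and default value are illustrative:

```python
import numpy as np

def hidden_layer(x, centers, sigma=1.0):
    # Each hidden neuron is centered on one training case. Its activation
    # is the RBF kernel applied to the Euclidean distance between the
    # test case x and the neuron's center point.
    dists = np.linalg.norm(centers - x, axis=1)   # Euclidean distances
    return np.exp(-dists**2 / (2.0 * sigma**2))   # Gaussian RBF
```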

Pattern layer

For PNN networks, there is one pattern neuron for each category of the target variable. The actual target category of each training case is stored with its hidden neuron; the weighted value coming out of a hidden neuron is fed only to the pattern neuron that corresponds to that hidden neuron's category. The pattern neurons add the values for the class they represent.

PNNs are often used in classification problems[3]. When an input is presented, the first layer computes the distance from the input vector to each of the training input vectors, producing a vector whose elements indicate how close the input is to each training input. The second layer sums the contributions for each class of inputs and produces its net output as a vector of probabilities. Finally, a compete transfer function on the output of the second layer picks the maximum of these probabilities, and produces a 1 for that class and a 0 for the other classes.
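Putting the layers together, a full classification pass might look like the following (a minimal sketch, assuming a Gaussian kernel with one shared sigma and equal class priors; the function name and the toy data are made up for illustration):

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=1.0):
    # Minimal PNN forward pass for a single test case x.
    # X_train: (n_cases, n_features); y_train: (n_cases,) integer labels.

    # Hidden layer: one neuron per training case, Gaussian RBF activation.
    dists = np.linalg.norm(X_train - x, axis=1)
    activations = np.exp(-dists**2 / (2.0 * sigma**2))

    # Pattern/summation layer: add the activations for each class.
    classes = np.unique(y_train)
    votes = np.array([activations[y_train == c].sum() for c in classes])

    # Normalize the per-class sums into probability scores.
    probs = votes / votes.sum()

    # Compete transfer function: 1 for the winning class, 0 for the others.
    one_hot = np.zeros_like(probs)
    one_hot[np.argmax(probs)] = 1.0
    return classes[np.argmax(probs)], probs, one_hot

# Toy usage (illustrative data):
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
label, probs, one_hot = pnn_classify(np.array([0.95, 1.0]), X_train, y_train, sigma=0.5)
print(label, probs, one_hot)   # expected winner: class 1
```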

Output layer

The output layer compares the weighted votes for each target category accumulated in the pattern layer and uses the largest vote to predict the target category.

Advantages

There are several advantages and disadvantages to using a PNN instead of a multilayer perceptron[4]:

  • PNNs are much faster to train than multilayer perceptron networks.
  • PNNs can be more accurate than multilayer perceptron networks.
  • PNNs are relatively insensitive to outliers.
  • PNNs generate accurate predicted target probability scores.
  • PNNs approach Bayes optimal classification.

Disadvantages

  • PNNs are slower than multilayer perceptron networks at classifying new cases.
  • PNNs require more memory space to store the model.

References