Probabilistic neural network


A probabilistic neural network (PNN) is a feedforward neural network derived from the Bayesian network[1] and a statistical algorithm called kernel Fisher discriminant analysis[2]. It was introduced by D. F. Specht in the early 1990s. In a PNN, the operations are organized into a multilayered feedforward network with four layers:

  • Input layer
  • Pattern layer
  • Summation layer
  • Output layer

PNNs are often used in classification problems[3]. When an input is presented, the first layer computes the distance from the input vector to each training input vector, producing a vector whose elements indicate how close the input is to each training input. The second layer sums these contributions for each class of inputs and produces its net output as a vector of probabilities. Finally, a compete transfer function on the output of the second layer picks the maximum of these probabilities, and produces a 1 for that class and a 0 for the other classes. A rough code sketch of this procedure is given below.
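The sketch below is a minimal illustration of the procedure just described, not Specht's original formulation: it assumes a Gaussian (Parzen window) kernel with a single smoothing parameter sigma, and the function and variable names are illustrative only.

    # Minimal PNN classification sketch (assumed Gaussian kernel, illustrative names)
    import numpy as np

    def pnn_classify(x, train_X, train_y, sigma=0.5):
        # Pattern layer: one Gaussian kernel per training example, evaluated on
        # the squared Euclidean distance between x and that example.
        sq_dist = np.sum((train_X - x) ** 2, axis=1)
        kernel = np.exp(-sq_dist / (2.0 * sigma ** 2))

        # Summation layer: average the kernel outputs within each class to get
        # an (unnormalized) class-conditional density estimate.
        classes = np.unique(train_y)
        scores = np.array([kernel[train_y == c].mean() for c in classes])

        # Output (compete) layer: normalize the scores and pick the class with
        # the largest value, i.e. 1 for the winning class and 0 for the others.
        probs = scores / scores.sum()
        return classes[np.argmax(probs)], probs

    # Example usage with a toy two-class data set.
    X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
    y = np.array([0, 0, 1, 1])
    print(pnn_classify(np.array([0.2, 0.1]), X, y, sigma=0.3))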

Advantages

There are several advantages and disadvantages to using a PNN instead of a multilayer perceptron.[4]

  • PNNs are much faster to train than multilayer perceptron networks.
  • PNNs are often more accurate than multilayer perceptron networks.
  • PNNs are relatively insensitive to outliers.
  • PNNs generate accurate predicted target probability scores.
  • PNNs approach Bayes optimal classification.

Disadvantages

  • PNNs are slower than multilayer perceptron networks at classifying new cases.
  • PNNs require more memory space to store the model.

References