Neural modeling fields: Difference between revisions

The structure of the expression above follows standard principles of probability theory: a summation is taken over alternatives, m, and various pieces of evidence, n, are multiplied. This expression is not necessarily a probability, but it has a probabilistic structure. If learning is successful, it approximates a probabilistic description and leads to near-optimal Bayesian decisions. The name “conditional partial similarity” for l('''X'''(n)|m) (or simply l(n|m)) follows probabilistic terminology. If learning is successful, l(n|m) becomes a conditional probability density function, a probabilistic measure that the signal in neuron n originated from object m. Then L is the total likelihood of observing the signals {'''X'''(n)} coming from the objects described by the concept-models {'''M<sub>m</sub>'''}. The coefficients r(m), called priors in probability theory, contain preliminary biases or expectations: expected objects m have relatively high r(m) values. Their true values are usually unknown and should be learned, like the other parameters '''S<sub>m</sub>'''.
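The structure described above (a sum over alternatives m weighted by priors r(m), followed by a product over evidence n) can be sketched numerically. In the sketch below, the Gaussian form chosen for l(n|m), the fixed width sigma, and all function and variable names are illustrative assumptions, not part of the formalism itself:

```python
import numpy as np

def total_similarity(X, M_pred, r, sigma=1.0):
    """Total similarity L = prod_n sum_m r(m) * l(X(n)|m).

    X      : observed signals X(n), shape (N,)
    M_pred : model predictions M_m(S_m, n), shape (M, N)
    r      : priors r(m), shape (M,)
    A Gaussian density is assumed for the conditional partial
    similarity l(n|m); any other density could be substituted.
    """
    # l[n, m]: partial similarity between signal X(n) and model m's prediction
    l = np.exp(-0.5 * ((X[:, None] - M_pred.T) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # sum over alternatives m, weighted by priors r(m)
    per_signal = l @ r          # shape (N,)
    # product over pieces of evidence n
    return np.prod(per_signal)
```

When a model predicts a signal exactly and there is a single model with r = 1, each factor reduces to the Gaussian peak value 1/sqrt(2*pi); mismatched predictions shrink the corresponding factor and hence the total similarity.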
 
We note that in probability theory, a product of probabilities usually assumes that the pieces of evidence are independent. The expression for L contains a product over n, but it does not assume independence among the various signals '''X'''(n). There is a dependence among signals due to [[concept-models]]: each model '''M<sub>m</sub>'''('''S<sub>m</sub>''',n) predicts expected signal values in many neurons n.
 
During the learning process, [[concept-models]] are constantly modified. In this review we consider the case in which the functional forms of the [[models (concept-models)|models]], '''M<sub>m</sub>'''('''S<sub>m</sub>''',n), are all fixed and learning-adaptation involves only the model parameters, '''S<sub>m</sub>'''.
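Adaptation of the parameters '''S<sub>m</sub>''' alone, with the functional forms held fixed, can be sketched as an iterative re-estimation driven by association weights f(m|n) = r(m) l(n|m) / Σ<sub>m'</sub> r(m') l(n|m'). The sketch below assumes the simplest possible fixed functional form, M<sub>m</sub>(S<sub>m</sub>, n) = S<sub>m</sub> (each model predicts a constant signal), and a Gaussian l(n|m); both choices, and all names, are illustrative assumptions:

```python
import numpy as np

def adapt_parameters(X, S, r, sigma=1.0, steps=50):
    """Adapt model parameters S_m while functional forms stay fixed.

    X : observed signals X(n), shape (N,)
    S : initial parameters S_m, shape (M,); here S_m is also the
        model's (constant) predicted signal, M_m(S_m, n) = S_m.
    r : priors r(m), shape (M,)
    """
    for _ in range(steps):
        # Gaussian conditional partial similarities l(n|m), shape (N, M)
        l = np.exp(-0.5 * ((X[:, None] - S[None, :]) / sigma) ** 2)
        # association weights f(m|n): normalized over alternatives m
        w = r * l
        f = w / w.sum(axis=1, keepdims=True)
        # re-estimate each S_m from the signals it is associated with
        S = (f * X[:, None]).sum(axis=0) / f.sum(axis=0)
    return S
```

With two well-separated clusters of signals, each parameter converges to the mean of the signals its model becomes associated with; richer fixed functional forms (e.g., parametric curves over n) would replace the re-estimation line with an update of their own parameters only.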
 
==References==