Neural modeling fields

 
:<big>L({'''X'''},{'''M'''}) = &prod;<sub>n=1..N</sub> l('''X'''(n)).</big>
 
 
This expression contains a product of partial similarities, l('''X'''(n)), over all [[bottom-up signals (bottom-up neural signals)|bottom-up signals]]; it therefore forces the mind to account for every signal (if even one term in the product is zero, the product is zero, the similarity is low, and the [[knowledge instinct]] is not satisfied); this reflects the first principle. Second, before perception occurs, the [[mind]] does not know which object gave rise to a signal from a particular retinal neuron. Therefore a partial similarity measure is constructed so that it treats each model as an alternative (a sum over [[concept-models]]) for each input neuron signal. Its constituent elements are conditional partial similarities between signal '''X'''(n) and model '''M<sub>m</sub>''', l('''X'''(n)|m). This measure is “conditional” on object m being present (Perlovsky 2001); therefore, when these quantities are combined into the overall similarity measure, L, each is multiplied by r(m), a probabilistic measure that object m is actually present. Combining these elements with the two principles noted above, the similarity measure is constructed as follows:
 
 
:<big> L({'''X'''},{'''M'''}) = &prod;<sub>n=1..N</sub> &sum;<sub>m=1..M</sub> r(m) l('''X'''(n) | m). </big>
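As a concrete illustration (not part of the original NMF formulation), the similarity measure above can be sketched in Python. The Gaussian form chosen for the conditional partial similarities l('''X'''(n)|m) is an assumption for this example; NMF admits other parametric model families.

```python
import numpy as np

def conditional_similarity(x, model_mean, sigma=1.0):
    # Illustrative Gaussian conditional partial similarity l(X(n)|m),
    # centered at the model's predicted value with width sigma.
    return np.exp(-0.5 * ((x - model_mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def total_similarity(signals, model_means, r):
    # L({X},{M}) = prod over n of [ sum over m of r(m) * l(X(n)|m) ]
    L = 1.0
    for x in signals:
        L *= sum(r_m * conditional_similarity(x, mu)
                 for r_m, mu in zip(r, model_means))
    return L

signals = [0.1, 1.9, 2.1]   # bottom-up signals X(n) (made-up values)
model_means = [0.0, 2.0]    # model parameters (here, one mean per model)
r = [0.5, 0.5]              # priors r(m)
print(total_similarity(signals, model_means, r))
```

Because L is a product over all signals, a single signal unaccounted for by every model (all l(X(n)|m) near zero) drives L toward zero, matching the first principle described above.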
 
 
The structure of the expression above follows standard principles of probability theory: a summation is taken over alternatives, m, and various pieces of evidence, n, are multiplied. This expression is not necessarily a probability, but it has a probabilistic structure. If learning is successful, it approximates a probabilistic description and leads to near-optimal Bayesian decisions. The name “conditional partial similarity” for l('''X'''(n)|m) (or simply l(n|m)) follows probabilistic terminology. If learning is successful, l(n|m) becomes a conditional probability density function, a probabilistic measure that the signal in neuron n originated from object m. Then L is the total likelihood of observing signals {'''X'''(n)} coming from objects described by concept-models {'''M<sub>m</sub>'''}. The coefficients r(m), called priors in probability theory, contain preliminary biases or expectations: expected objects m have relatively high r(m) values. Their true values are usually unknown and must be learned, like the other parameters '''S<sub>m</sub>'''.
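When l(n|m) approximates a conditional density, Bayes' rule gives the posterior probability that signal n originated from object m, which is the quantity underlying the near-optimal Bayesian decisions mentioned above. A minimal sketch, with made-up similarity values:

```python
import numpy as np

def posterior(l_n, r):
    # Bayes' rule: P(m|n) = r(m) * l(n|m) / sum over m' of r(m') * l(n|m')
    weighted = np.array(r) * np.array(l_n)
    return weighted / weighted.sum()

# Hypothetical conditional similarities of one signal under two models,
# with uniform priors r(m).
p = posterior([0.8, 0.1], [0.5, 0.5])
print(p)  # posterior probabilities over the two models, summing to 1
```

The denominator is exactly one factor of the product defining L, so computing L and computing per-signal posteriors share the same intermediate sums.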
 
==References==