Neural modeling fields
 
 
Top-down, or priming, signals to these neurons are sent by concept-models, '''M'''<sub>m</sub>('''S'''<sub>m</sub>,&nbsp;n), enumerated by index m,

:<math> \vec M_m(\vec S_m, n), \quad m = 1, 2, \ldots, M,</math>

where M is the number of models. Each model is characterized by its parameters, '''S'''<sub>m</sub>; in the neuronal structure of the brain they are encoded by the strength of synaptic connections; mathematically, they are given by a set of numbers,

:<math> \vec S_m = \{ S_m^a \}, \quad a = 1, \ldots, A.</math>
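As a minimal sketch (not from the original article) of how the concept-models '''M'''<sub>m</sub>('''S'''<sub>m</sub>,&nbsp;n) and their parameter sets '''S'''<sub>m</sub> might be represented computationally, the following Python fragment uses an arbitrary, purely illustrative functional form; the number of parameters, the feature basis, and the linear readout are all assumptions of this sketch.

<syntaxhighlight lang="python">
import numpy as np

A = 3   # number of parameters per model S_m^a, a = 1..A (illustrative)
M = 4   # number of concept-models (illustrative)

rng = np.random.default_rng(0)

# S[m] holds the parameter set S_m = {S_m^a}; in the brain these would
# correspond to strengths of synaptic connections.
S = rng.normal(size=(M, A))

def model_signal(S_m, n):
    """Top-down (priming) signal M_m(S_m, n) predicted for input neuron n.

    The functional form below is a placeholder: the article does not fix
    one, so a simple linear readout of hypothetical neuron features is used.
    """
    basis = np.array([1.0, np.sin(n), np.cos(n)])  # hypothetical features of neuron n
    return S_m @ basis
</syntaxhighlight>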
Therefore, the similarity measure is constructed so that it accounts for all bottom-up signals, '''X'''(''n''),
 
:<math> L( \{ \vec X(n) \}, \{ \vec M_m( \vec S_m, n) \} ) = \prod_{n=1}^N l( \vec X(n) ).</math> &nbsp;&nbsp;&nbsp; (1)
 
This expression contains a product of partial similarities, l('''X'''(n)), over all bottom-up signals; it therefore forces the NMF system to account for every signal (if even one term in the product is zero, the product is zero, the similarity is low, and the knowledge instinct is not satisfied); this is a reflection of the first principle. Second, before perception occurs, the mind does not know which object gave rise to a signal from a particular retinal neuron. Therefore a partial similarity measure is constructed so that it treats each model as an alternative (a sum over concept-models) for each input neuron signal. Its constituent elements are conditional partial similarities between signal '''X'''(n) and model '''M'''<sub>m</sub>, l('''X'''(n)|m). This measure is “conditional” on object m being present; therefore, when combining these quantities into the overall similarity measure, L, they are multiplied by r(m), which represents a probabilistic measure of object m actually being present. Combining these elements with the two principles noted above, a similarity measure is constructed as follows:
 
:<math> L( \{ \vec X(n) \}, \{ \vec M_m( \vec S_m, n) \} ) = \prod_{n=1}^N \sum_{m=1}^M r(m)\, l( \vec X(n) \mid m ).</math> &nbsp;&nbsp;&nbsp; (2)
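The following Python fragment is a numerical sketch of equation (2): the product over signals n of the sum over models m of r(m)·l('''X'''(n)|m). The Gaussian form of the conditional partial similarity, the uniform r(m), the toy data, and the simplification that each model output is constant in n are all assumptions of this sketch, not part of the original formulation.

<syntaxhighlight lang="python">
import numpy as np

def conditional_similarity(x_n, m_out, sigma=1.0):
    """l(X(n) | m): similarity of signal X(n) to model output (Gaussian, by assumption)."""
    d = x_n - m_out
    return np.exp(-0.5 * np.dot(d, d) / sigma**2) / (np.sqrt(2 * np.pi) * sigma) ** d.size

def total_similarity(X, M_out, r):
    """Similarity L({X(n)}, {M_m(S_m, n)}) as in equation (2).

    X     : array of shape (N, D) -- bottom-up signals X(n)
    M_out : array of shape (M, D) -- model outputs (taken constant in n for simplicity)
    r     : array of shape (M,)   -- probabilistic weights r(m)
    """
    L = 1.0
    for x_n in X:                                  # product over all bottom-up signals n
        partial = sum(r[m] * conditional_similarity(x_n, M_out[m])
                      for m in range(len(r)))      # sum over alternative models m
        L *= partial
    return L

# Hypothetical data: N = 5 signals of dimension D = 2, M = 3 models.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 2))
M_out = rng.normal(size=(3, 2))
r = np.full(3, 1.0 / 3)                            # uniform priors, just for the example

print(total_similarity(X, M_out, r))
</syntaxhighlight>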
 
The structure of the expression above follows standard principles of probability theory: a summation is taken over alternatives, m, and various pieces of evidence, n, are multiplied. This expression is not necessarily a probability, but it has a probabilistic structure. If learning is successful, it approximates a probabilistic description and leads to near-optimal Bayesian decisions. The name “conditional partial similarity” for l('''X'''(n)|m) (or simply l(n|m)) follows probabilistic terminology. If learning is successful, l(n|m) becomes a conditional probability density function, a probabilistic measure that the signal in neuron n originated from object m. Then L is the total likelihood of observing the signals {'''X'''(n)} coming from the objects described by the concept-models {'''M'''<sub>m</sub>}. The coefficients r(m), called priors in probability theory, contain preliminary biases or expectations: expected objects m have relatively high r(m) values. Their true values are usually unknown and should be learned, like the other parameters '''S'''<sub>m</sub>.
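As an illustrative special case (an assumption made here, not part of the original text): if the conditional partial similarities are taken to be Gaussian densities centred on the model outputs and the r(m) are normalized to sum to one, equation (2) reduces to the likelihood of a finite mixture model,

:<math> L = \prod_{n=1}^N \sum_{m=1}^M r(m)\, \mathcal{N}\!\left( \vec X(n) \mid \vec M_m(\vec S_m, n), \sigma_m^2 \right),</math>

so that maximizing L over the parameters '''S'''<sub>m</sub> and r(m) parallels maximum-likelihood estimation in mixture models.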