:<big>L({'''X'''},{'''M'''}) = ∏<sub>n=1..N</sub> l('''X'''(n)).</big>
This expression contains a product of partial similarities, l('''X'''(n)), over all [[bottom-up signals (bottom-up neural signals)|bottom-up signals]]; it therefore forces the mind to account for every signal (if even one term in the product is zero, the product is zero, the similarity is low, and the [[knowledge instinct]] is not satisfied). This reflects the first principle. Second, before perception occurs, the [[mind]] does not know which object gave rise to a signal from a particular retinal neuron. A partial similarity measure is therefore constructed so that it treats each model as an alternative (a sum over [[concept-models]]) for each input neuron signal. Its constituent elements are conditional partial similarities between signal '''X'''(n) and model '''M<sub>m</sub>''', l('''X'''(n)|m). This measure is "conditional" on object m being present (Perlovsky 2001); therefore, when these quantities are combined into the overall similarity measure L, each is multiplied by r(m), a probabilistic measure of object m actually being present. Combining these elements with the two principles noted above, the similarity measure is constructed as follows:
:<big> L({'''X'''},{'''M'''}) = ∏<sub>n=1..N</sub> ∑<sub>m=1..M</sub> r(m) l('''X'''(n) | m). </big>
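The structure of this similarity measure can be sketched numerically. The following is a minimal illustration, not part of Perlovsky's formulation: it assumes Gaussian conditional partial similarities l('''X'''(n)|m), scalar signals, and illustrative function names; in the theory the concept-models '''M'''<sub>m</sub> may be arbitrary parametric models.

```python
import math

def conditional_similarity(x, model_mean, sigma=1.0):
    """l(X(n)|m): partial similarity of signal x to model m.

    Assumed Gaussian here purely for illustration; the theory allows
    arbitrary parametric concept-models.
    """
    return math.exp(-((x - model_mean) ** 2) / (2.0 * sigma ** 2)) / (
        sigma * math.sqrt(2.0 * math.pi)
    )

def total_similarity(signals, model_means, r):
    """L({X},{M}) = prod over n of [ sum over m of r(m) * l(X(n)|m) ].

    The product over signals enforces the first principle: if any
    signal matches no model (its sum is zero), L collapses to zero.
    The sum over models treats each model as an alternative
    explanation for each signal, weighted by its prior r(m).
    """
    L = 1.0
    for x in signals:
        L *= sum(
            r[m] * conditional_similarity(x, mu)
            for m, mu in enumerate(model_means)
        )
    return L
```

Note the factorization: the product runs over bottom-up signals n, while the sum over models m sits inside each factor, so every signal must be accounted for by at least one model for L to remain nonzero.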
==References==