Neural modeling fields
First, assign arbitrary initial values to the unknown parameters, {'''S'''<sub>m</sub>}. Then, compute the association variables f(m|n),
 
:<big>f(m|n) = r(m) l('''X'''(n)|m) / &sum;<sub>m'=1..M</sub> r(m') l('''X'''(n)|m').</big>&nbsp;&nbsp;&nbsp; (3)
 
 
The equation for f(m|n) resembles the Bayes formula for a posteriori probabilities; if, as a result of learning, the l(n|m) become conditional likelihoods, then the f(m|n) become Bayesian probabilities for signal n originating from object m. The dynamic logic of the Modeling Fields (MF) is defined as follows:
 
 
:<big>df(m|n)/dt = f(m|n) &sum;<sub>m'=1..M</sub> {[&delta;<sub>mm'</sub> - f(m'|n)] • [&part;ln l(n|m')/&part;'''M'''<sub>m'</sub>] &part;'''M'''<sub>m'</sub>/&part;'''S'''<sub>m'</sub>} d'''S'''<sub>m'</sub>/dt,</big> &nbsp;&nbsp;&nbsp; (4)
 
 
:<big>d'''S'''<sub>m</sub>/dt = &sum;<sub>n=1..N</sub> f(m|n)[&part;ln l(n|m)/&part;'''M'''<sub>m</sub>]&part;'''M'''<sub>m</sub>/&part;'''S'''<sub>m</sub>,</big> &nbsp;&nbsp;&nbsp; (5)
 
 
The following theorem was proved:
 
''Theorem''. Equations (3), (4), and (5) define a convergent dynamic NMF system with stationary states defined by max<sub>{'''S'''<sub>m</sub>}</sub>L.
 
It follows that the stationary states of an MF system are the maximum similarity states satisfying the knowledge instinct. When partial similarities are specified as probability density functions (pdf), or likelihoods, the stationary values of the parameters {'''S'''<sub>m</sub>} are asymptotically unbiased and efficient estimates of these parameters<ref>Cramer, H. (1946). Mathematical Methods of Statistics, Princeton University Press, Princeton NJ.</ref>. The computational complexity of dynamic logic is linear in N.
 
In practice, when solving the equations through successive iterations, f(m|n) can be recomputed at every iteration step using (3), as opposed to using the incremental formula (4).
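The iteration described above can be sketched numerically. The following Python fragment is a minimal illustration, not the article's reference implementation: it assumes spherical Gaussian conditional likelihoods l('''X'''(n)|m) with the means playing the role of the parameters '''S'''<sub>m</sub>, fixed priors r(m), and a fixed variance; all variable names (X, means, r, sigma2) are illustrative. At each step, f(m|n) is recomputed from (3), and the means are set to the stationary point of the f-weighted update (5) for this Gaussian model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))        # N = 100 signals X(n), two-dimensional
M = 3                                # number of models
means = rng.normal(size=(M, 2))      # parameters S_m (here: Gaussian means)
r = np.full(M, 1.0 / M)              # priors r(m), held fixed for simplicity
sigma2 = 1.0                         # fixed variance (a simplifying assumption)

for _ in range(50):
    # Conditional likelihoods l(X(n)|m) for a spherical Gaussian model
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    l = np.exp(-0.5 * d2 / sigma2) / (2 * np.pi * sigma2)

    # Association variables f(m|n), recomputed from equation (3)
    f = r * l
    f /= f.sum(axis=1, keepdims=True)

    # Parameter update: the stationary point of the f-weighted
    # log-likelihood gradient (5) for Gaussian means
    means = (f.T @ X) / f.sum(axis=0)[:, None]

print(means.shape)  # -> (3, 2): one mean vector per model
```

With Gaussian likelihoods and fixed priors, each pass is equivalent to an expectation-maximization step, and the similarity L is non-decreasing across iterations, consistent with the convergence theorem above.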
 
The proof of the above theorem includes a proof that the similarity L increases at each iteration. This has a psychological interpretation: the knowledge instinct is satisfied at each step, the corresponding emotion is positive, and, in this sense, NMF dynamic logic emotionally enjoys learning.