A psychological interpretation of maximizing the similarity L is the [[Perlovsky|knowledge instinct]], a drive to learn and to maximize knowledge.
==Dynamic logic: mathematical description==
The learning process consists in estimating the model parameters S and in associating signals with concepts by maximizing the similarity L. Note that all possible combinations of signals and models are accounted for in the expression for L. This can be seen by expanding the sum in L and multiplying all the terms; the result is M<sup>N</sup> items, one for every possible association of the N signals with the M models. This is the source of combinatorial complexity. NMF solves this problem by using [[dynamic logic]].
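The scale of this combinatorial explosion, compared with the per-iteration cost of dynamic logic, can be illustrated with a short sketch (an illustrative example, not from the article):

```python
# Illustration: expanding the product-of-sums form of L yields M**N terms,
# one per way of assigning each of N signals to one of M models, whereas a
# dynamic-logic iteration evaluates only N*M signal-model pairs.

def expanded_terms(n_signals: int, n_models: int) -> int:
    """Number of terms in the fully expanded expression for L."""
    return n_models ** n_signals

def dynamic_logic_pairs(n_signals: int, n_models: int) -> int:
    """Signal-model pairs evaluated in one dynamic-logic iteration."""
    return n_signals * n_models

print(expanded_terms(100, 10))       # 10**100 combinations
print(dynamic_logic_pairs(100, 10))  # 1000 pairs
```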
Dynamic logic maximizes similarity L, associates bottom-up signals X with top-down signals M, and finds values of parameters S<sub>m</sub>, without combinatorial complexity, as follows.
First, assign arbitrary initial values to the unknown parameters {S<sub>m</sub>}. Then compute the association variables f(m|n):
f(m|n) = r(m) l(X(n)|m) / ∑<sub>m'=1..M</sub> r(m') l(X(n)|m').
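For a single signal this computation is just a normalized product of prior and conditional similarity. A minimal sketch, assuming Gaussian partial similarities l(X(n)|m) with unit variance (an illustrative choice, not prescribed by the article):

```python
import math

def association_weights(x, means, priors, sigma=1.0):
    """f(m|n) for one signal x: r(m) * l(x|m), normalized over all models m.
    Gaussian likelihoods and the parameter names are assumptions for
    illustration."""
    likes = [r * math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
             for r, mu in zip(priors, means)]
    total = sum(likes)
    return [w / total for w in likes]

f = association_weights(0.9, means=[0.0, 1.0], priors=[0.5, 0.5])
# the weights sum to 1; the model with mean 1.0 gets the larger weight
```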
The equation for f(m|n) looks like the Bayes formula for a posteriori probabilities: if, as a result of learning, the l(n|m) become conditional likelihoods, then the f(m|n) become Bayesian probabilities for signal n originating from object m. The dynamic logic of the Modeling Fields (MF) is defined as follows:
df(m|n)/dt = f(m|n) ∑<sub>m'=1..M</sub> {[δ<sub>mm'</sub> − f(m'|n)] [∂ln l(n|m')/∂M<sub>m'</sub>] ∂M<sub>m'</sub>/∂S<sub>m'</sub> dS<sub>m'</sub>/dt},
dS<sub>m</sub>/dt = ∑<sub>n=1..N</sub> f(m|n) [∂ln l(n|m)/∂M<sub>m</sub>] ∂M<sub>m</sub>/∂S<sub>m</sub>.
The following theorem was proved:
''Theorem''. The previous equations define a convergent dynamic NMF system whose stationary states are the maxima of L over the parameters {S<sub>m</sub>}.
It follows that the stationary states of an MF system are the maximum-similarity states satisfying the knowledge instinct. When the partial similarities are specified as probability density functions (pdf), or likelihoods, the stationary values of the parameters {S<sub>m</sub>} are asymptotically unbiased and efficient estimates of these parameters<ref>Cramer, H. (1946). Mathematical Methods of Statistics, Princeton University Press, Princeton NJ.</ref>. The computational complexity of dynamic logic is linear in N.
In practice, when solving the equations by successive iterations, f(m|n) can simply be recomputed at every iteration step from the formula above, rather than updated incrementally through its differential equation.
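This successive-iteration scheme can be sketched in a few lines. The sketch below assumes Gaussian partial similarities with a fixed variance and equal priors r(m), and it estimates only the model means; at each step it recomputes f(m|n) and then moves each S<sub>m</sub> to the stationary point of dS<sub>m</sub>/dt, which for this model is the f-weighted mean of the signals. All function names and parameter choices are illustrative, not from the article.

```python
import math
import random

def fit_means(xs, means, sigma=1.0, iters=50):
    """Successive-iteration sketch of dynamic logic for Gaussian models.
    Each pass recomputes the association variables f(m|n), then updates
    each parameter S_m (here, a mean) to the f-weighted mean of the data,
    the stationary point of dS_m/dt for this model class."""
    n_models = len(means)
    for _ in range(iters):
        # association variables f(m|n) for every signal n
        F = []
        for x in xs:
            likes = [math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
                     for mu in means]
            s = sum(likes)
            F.append([l / s for l in likes])
        # parameter update: weighted means of the signals
        means = [sum(F[n][m] * xs[n] for n in range(len(xs))) /
                 sum(F[n][m] for n in range(len(xs)))
                 for m in range(n_models)]
    return means

# synthetic signals from two well-separated sources
random.seed(0)
xs = ([random.gauss(-2, 0.5) for _ in range(200)] +
      [random.gauss(3, 0.5) for _ in range(200)])
centers = sorted(fit_means(xs, means=[-0.5, 0.5]))
# the estimated means land near the true source means, -2 and 3
```

Note that the similarity L (not shown in the sketch) increases across these iterations, in line with the theorem above.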
The proof of the above theorem includes a proof that the similarity L increases at each iteration. The psychological interpretation is that the knowledge instinct is satisfied at each step: the corresponding emotion is positive, and NMF dynamic logic emotionally enjoys learning.
==Example of Dynamic Logic Operations==