'''Neural modeling field''' (NMF) is a mathematical framework for [[machine learning]] which combines ideas from [[neural networks]], [[fuzzy logic]], and [[model based recognition]].<ref>[http://www.oup.com/us/catalog/he/subject/Engineering/ElectricalandComputerEngineering/ComputerEngineering/NeuralNetworks/?view=usa&ci=9780195111620] Perlovsky, L.I. (2001). Neural Networks and Intellect: Using Model-Based Concepts. New York: Oxford University Press.</ref><ref>Perlovsky, L.I. (2006). Toward Physics of the Mind: Concepts, Emotions, Consciousness, and Symbols. Phys. Life Rev. 3(1), pp. 22-55.</ref> The framework was developed by [[Leonid Perlovsky]] at the [[AFRL]]. NMF is interpreted as a mathematical description of the mind's mechanisms, including [[concepts]], [[emotions]], [[instincts]], [[imagination]], [[thinking]], and [[understanding]]. NMF is a multi-level, hetero-hierarchical system. At each level in NMF there are concept-models encapsulating the knowledge; they generate so-called top-down signals, which interact with input, bottom-up signals. These interactions are governed by dynamic equations that drive concept-model learning, adaptation, and the formation of new concept-models for better correspondence to the bottom-up signals.
Learning is an essential part of perception and cognition, and in NMF theory it is driven by dynamics that increase a similarity measure between the sets of models and signals, L({'''X'''},{'''M'''}). The similarity measure is a function of model parameters and of the associations between the input, bottom-up signals and the top-down, concept-model signals. In constructing a mathematical description of the similarity measure, it is important to acknowledge two principles:
:''First'', the visual field content is unknown before perception occurs. Therefore, the similarity measure is constructed so that it accounts for all bottom-up signals, X(n).
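A minimal numerical sketch of such a similarity measure, under the common assumption that each concept-model m contributes a Gaussian conditional likelihood l(X(n)|m) weighted by a prior r(m), so that L = &prod;<sub>n</sub> &sum;<sub>m</sub> r(m) l(X(n)|m). The function name and the one-dimensional Gaussian choice are illustrative, not part of the NMF definition:

```python
import numpy as np

def log_similarity(X, means, sigmas, priors):
    """Total log-similarity log L({X},{M}) of the form
    L = prod_n sum_m r(m) l(X(n)|m), with 1-D Gaussian concept-models.
    (Illustrative choice of l; the framework allows other model densities.)"""
    # l(X(n)|m): Gaussian likelihood of bottom-up signal n under model m
    l = np.exp(-0.5 * ((X[:, None] - means[None, :]) / sigmas) ** 2) \
        / (np.sqrt(2 * np.pi) * sigmas)
    # every bottom-up signal X(n) contributes through a sum over all models
    return np.sum(np.log(l @ priors))

X = np.array([0.1, 0.2, 1.9, 2.1])
print(log_similarity(X, means=np.array([0.0, 2.0]),
                     sigmas=np.array([0.5, 0.5]),
                     priors=np.array([0.5, 0.5])))
```

Because every signal X(n) enters the product, no subset of the data has to be pre-selected, which is the point of the first principle above.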
==Learning in NMF using Dynamic logic algorithm==
The learning process consists of estimating the model parameters '''S''' and associating subsets of signals with concepts by maximizing the similarity L.
The maximization of similarity L is done as follows. First, the unknown parameters {'''S'''<sub>m</sub>} are randomly initialized. Then the association variables f(m|n) are computed according to equation (3).
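The loop of random initialization, association, and parameter re-estimation can be sketched as follows. This is an assumed, much-simplified instance: the concept-models are 1-D Gaussians whose only parameter S<sub>m</sub> is the mean, the association variables are the normalized likelihoods f(m|n), and the model fuzziness (sigma) is annealed from high to low over iterations, as dynamic logic prescribes; the function and variable names are illustrative:

```python
import numpy as np

def dynamic_logic(X, n_models, n_iters=50, seed=0):
    """Sketch of the dynamic-logic loop: random parameter initialization,
    association variables f(m|n), then parameter re-estimation, with the
    model fuzziness (sigma) decreasing from iteration to iteration."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(X.min(), X.max(), n_models)    # random init of S_m
    for it in range(n_iters):
        sigma = max(2.0 * (1 - it / n_iters), 0.1)  # anneal fuzzy -> crisp
        # f(m|n): normalized association of signal n with model m
        ll = -0.5 * ((X[:, None] - mu[None, :]) / sigma) ** 2
        f = np.exp(ll - ll.max(axis=1, keepdims=True))
        f /= f.sum(axis=1, keepdims=True)
        # re-estimate each mean as an f-weighted average of the signals,
        # which increases the similarity L at each iteration
        mu = (f * X[:, None]).sum(axis=0) / f.sum(axis=0)
    return mu, f
```

On data drawn from two well-separated clusters, the two model means converge to the cluster centers while the associations f(m|n) sharpen from fuzzy to nearly crisp.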
The following theorem has been proved (Perlovsky 2001):
''Theorem''. Equations (3), (4), and (5) define a convergent dynamic NMF system with stationary states defined by max<sub>{'''S'''<sub>m</sub>}</sub> L.
In practice, when solving the equations through successive iterations, f(m|n) can be recomputed at every iteration using (3), as opposed to the incremental formula (5).
The proof of the above theorem contains a proof that the similarity L increases at each iteration. This has a psychological interpretation: the instinct for increasing knowledge is satisfied at each step, so learning is accompanied by positive emotions.
==Example of Dynamic Logic Operations==
Finding patterns below noise can be an exceedingly complex problem. If an exact pattern shape is not known and depends on unknown parameters, these parameters should be found by fitting the pattern model to the data. However, when the locations and orientations of patterns are not known, it is not clear which subset of the data points should be selected for fitting. A standard approach for solving this kind of problem is multiple hypothesis testing; however, because it exhaustively evaluates all combinations of subsets and models, it faces the problem of combinatorial complexity.
To apply NMF and dynamic logic to this problem one needs to develop parametric adaptive models of expected patterns. The models and conditional partial similarities for this case are described in detail in<ref>Linnehan, R., Mutz, C., Perlovsky, L.I., Weijers, B., Schindler, J., Brockett, R. (2003). Detection of Patterns Below Clutter in Images. Int. Conf. on Integration of Knowledge Intensive Multi-Agent Systems, Cambridge, MA, Oct. 1-3, 2003.</ref>: a uniform model for noise, Gaussian blobs for highly fuzzy, poorly resolved patterns, and parabolic models for 'smiles' and 'frowns'. The number of computer operations in this example was about 10<sup>10</sup>. Thus, a problem that was not solvable due to combinatorial complexity becomes solvable using dynamic logic.
During an adaptation process, initially fuzzy and uncertain models are associated with structures in the input signals, and the models become more definite and crisp with successive iterations.
There are several types of models: one uniform model describing noise (it is not shown) and a variable number of blob models and parabolic models; their number, ___location, and curvature are estimated from the data. Until about stage (g) the algorithm used simple blob models; at (g) and beyond, the algorithm decided that it needed more complex parabolic models to describe the data. Iterations stopped at (h), when similarity stopped increasing.
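The role of the uniform noise model can be sketched in a much-simplified, one-dimensional analogue of the cited experiment (the 2-D parabolic models are omitted): one uniform model absorbs the clutter while one Gaussian "blob" model, annealed from fuzzy to crisp, locks onto the pattern hidden in it. The association variables decide point by point which model each signal belongs to, so no combinatorial subset search is needed; all names here are illustrative:

```python
import numpy as np

def fit_blob_below_noise(X, lo, hi, n_iters=40, seed=1):
    """Simplified 1-D analogue: one uniform 'noise' model over [lo, hi]
    plus one Gaussian 'blob' model with unknown ___location mu.
    The blob's fuzziness (sigma) decreases over iterations."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(lo, hi)                          # random init
    for it in range(n_iters):
        sigma = max(1.0 * (1 - it / n_iters), 0.1)    # fuzzy -> crisp
        l_noise = np.full_like(X, 1.0 / (hi - lo))    # uniform clutter model
        l_blob = np.exp(-0.5 * ((X - mu) / sigma) ** 2) / (
            np.sqrt(2 * np.pi) * sigma)
        f_blob = l_blob / (l_blob + l_noise)          # f(blob|n)
        mu = (f_blob * X).sum() / f_blob.sum()        # weighted update of mu
    return mu, f_blob
```

Run on uniform clutter with a dense cluster of points hidden inside it, the blob model converges to the cluster ___location even though no subset of points was ever selected explicitly.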
==Neural modeling fields hierarchical organization==
Above, a single processing level in a hierarchical NMF system was described. At each level of the hierarchy there are input, bottom-up signals from lower levels, concept-models, similarity measures, emotions (defined as changes in similarity), and actions, including the adaptation of models that maximizes similarity.
The activated models initiate other actions. They serve as input signals to the next processing level, where more general concept-models are recognized or created. Output signals from a given level, serving as input to the next level, are the model activation signals, a<sub>m</sub>, defined as a<sub>m</sub> = &Sigma;<sub>n</sub> f(m|n).