'''Neural modeling field''' ('''NMF''') is a mathematical framework for [[machine learning]] which combines ideas from [[neural network]]s, [[fuzzy logic]], and model-based recognition. It has also been referred to as modeling field theory (MFT) and the maximum likelihood adaptive neural system (MLANS).<ref>Automatic buried mine detection using the maximum likelihood adaptive neural system (MLANS), in Proceedings of ''Intelligent Control (ISIC)'', 1998. Held jointly with ''IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Intelligent Systems and Semiotics (ISAS)''</ref><ref>{{usurped|1=[https://archive.today/20130221212719/http://www.mdatechnology.net/techprofile.aspx?id=227]}}</ref><ref>[http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=4274797]{{dead link|date=September 2024|bot=medic}}{{cbignore|bot=medic}}: Cangelosi, A.; Tikhanoff, V.; Fontanari, J.F.; Hourdakis, E., Integrating Language and Cognition: A Cognitive Robotics Approach, ''Computational Intelligence Magazine'', IEEE, Volume 2, Issue 3, Aug. 2007, pp. 65–70</ref><ref>[http://spie.org/x648.xml?product_id=521387&showAbstracts=true&origin_id=x648]: Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense III (Proceedings Volume), Editor(s): Edward M. Carapezza, 15 September 2004, {{ISBN|978-0-8194-5326-6}}, See Chapter: ''Counter-terrorism threat prediction architecture''</ref>
This framework has been developed by [[Leonid Perlovsky]] at the [[AFRL]]. NMF is interpreted as a mathematical description of the [[Cognition|mind's mechanisms]], including [[concept]]s, [[emotions]], [[instincts]], [[imagination]], [[thinking]], and [[understanding]]. NMF is a multi-level, [[Heterarchy|hetero-hierarchical]] system. At each level in NMF there are [[Schema (psychology)|concept-models]] encapsulating the knowledge; they generate so-called [[Bottom–up and top–down design|top-down signals, interacting with input, bottom-up signals]]. These interactions are governed by [[Dynamical system|dynamic equations]], which drive concept-model learning, adaptation, and formation of new concept-models for better correspondence to the input, bottom-up signals.
==Concept models and similarity measures==
:<math> L( \{\vec X(n)\}, \{\vec M_m( \vec S_m, n)\} ) = \prod_{n=1}^N{l(\vec X(n))}.</math> (1)
This expression contains a product of partial similarities, l('''X'''(n)), over all bottom-up signals; therefore, it forces the NMF system to account for every signal (if even one term in the product is zero, the product is zero, the similarity is low, and the knowledge instinct is not satisfied); this is a reflection of the first principle. Second, before perception occurs, the mind does not know which object gave rise to a signal from a particular retinal neuron. Therefore, a partial similarity measure is constructed so that it treats each model as an alternative (a sum over concept-models) for each input neuron signal. Its constituent elements are conditional partial similarities between signal '''X'''(n) and model '''M<sub>m</sub>''', l('''X'''(n)|m). This measure is
:<math> L( \{\vec X(n)\}, \{\vec M_m( \vec S_m, n)\} ) = \prod_{n=1}^N{ \sum_{m=1}^M { r(m) l(\vec X(n) | m) } }.</math> (2)
The structure of the expression above follows standard principles of probability theory: a summation is taken over alternatives, m, and various pieces of evidence, n, are multiplied. This expression is not necessarily a probability, but it has a probabilistic structure. If learning is successful, it approximates a probabilistic description and leads to near-optimal Bayesian decisions.
Note that in probability theory, a product of probabilities usually assumes that evidence is independent. The expression for L contains a product over n, but it does not assume independence among various signals '''X'''(n). There is a dependence among signals due to concept-models: each model '''M<sub>m</sub>'''('''S<sub>m</sub>''',n) predicts expected signal values in many neurons n.
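Expressions (1) and (2) can be made concrete with a small numerical sketch. The snippet below assumes one-dimensional signals and Gaussian conditional partial similarities; the model shapes, parameter values, and function names are illustrative choices, not part of NMF itself.

```python
import math

def l_cond(x, mean, sigma):
    # Conditional partial similarity l(X(n)|m): a Gaussian shape (illustrative choice)
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def total_similarity(signals, means, sigmas, priors):
    # Expression (2): a product over signals n of a sum over models m of r(m) l(X(n)|m)
    L = 1.0
    for x in signals:
        L *= sum(r * l_cond(x, mu, s) for r, mu, s in zip(priors, means, sigmas))
    return L

signals = [0.1, 0.2, 2.9, 3.1]   # hypothetical bottom-up signals X(n)
good_fit = total_similarity(signals, means=[0.0, 3.0], sigmas=[1.0, 1.0], priors=[0.5, 0.5])
poor_fit = total_similarity(signals, means=[10.0, 13.0], sigmas=[1.0, 1.0], priors=[0.5, 0.5])
print(good_fit > poor_fit)   # models that match the signals give higher similarity
```

Because each signal contributes a sum over all models, no hard assignment of signals to models is required; at the same time, a single near-zero factor (a signal no model accounts for) drives the whole product toward zero.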
During the learning process, concept-models are constantly modified. Usually, the functional forms of models, '''M<sub>m</sub>'''('''S<sub>m</sub>''',n), are all fixed and learning-adaptation involves only model parameters, '''S<sub>m</sub>'''. From time to time a system forms a new concept, while retaining an old one as well; alternatively, old concepts are sometimes merged or eliminated. This requires a modification of the similarity measure L; the reason is that more models always result in a better fit between the models and data. This is a well-known problem; it is addressed by reducing the similarity L with a penalty that grows with the number of models, so that creating a new model is justified only if it improves the fit sufficiently.
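Since L never decreases when models are added, any comparison between model sets must charge for complexity. The sketch below uses an AIC-style penalty of one unit of log-similarity per parameter purely as an illustration; it is not NMF's specific penalty function, and the numbers are hypothetical.

```python
def penalized_log_similarity(log_L, num_params):
    # Subtract a complexity penalty so extra models must "pay for themselves"
    # (an AIC-like form, used here only for illustration)
    return log_L - num_params

# A richer model set fits slightly better but carries many more parameters:
simple = penalized_log_similarity(log_L=-120.0, num_params=4)
rich = penalized_log_similarity(log_L=-119.5, num_params=12)
print(simple > rich)   # the small gain in fit does not justify the extra models
```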
==Learning in NMF using dynamic logic algorithm==
The learning process consists of estimating model parameters '''S''' and associating signals with concepts by maximizing the similarity L. Note that all possible combinations of signals and models are accounted for in expression (2) for L. This can be seen by expanding the sum and multiplying all the terms, resulting in M<sup>N</sup> items, a huge number. This is the number of combinations between all signals (N) and all models (M). This is the source of combinatorial complexity, which is solved in NMF by utilizing the idea of [[Perlovsky|dynamic logic]].
The maximization of similarity L is done as follows. First, the unknown parameters {'''S'''<sub>m</sub>} are randomly initialized. Then the association variables f(m|n) are computed,
:<math> f(m|n) = \frac{ r(m) l(\vec X(n) | m) }{ \sum_{m'=1}^M { r(m') l(\vec X(n) | m') } }.</math>
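A minimal one-dimensional sketch of this iteration, assuming Gaussian conditional similarities and a fixed schedule for reducing model vagueness (both illustrative choices, not the published algorithm's exact form), might look like:

```python
import math

def l_cond(x, mean, sigma):
    # Conditional partial similarity l(X(n)|m): a Gaussian shape (illustrative choice)
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def dynamic_logic(signals, n_models, iters=60):
    lo, hi = min(signals), max(signals)
    # Initial concept-models: evenly spread means, maximal vagueness (large sigma)
    means = [lo + (m + 0.5) * (hi - lo) / n_models for m in range(n_models)]
    priors = [1.0 / n_models] * n_models          # r(m)
    sigma = hi - lo
    for _ in range(iters):
        # Association variables f(m|n) = r(m) l(X(n)|m) / sum_m' r(m') l(X(n)|m')
        f = []
        for x in signals:
            row = [priors[m] * l_cond(x, means[m], sigma) for m in range(n_models)]
            z = sum(row)
            f.append([v / z for v in row])
        # Re-estimate model parameters S_m from the soft associations
        for m in range(n_models):
            w = sum(f[n][m] for n in range(len(signals)))
            means[m] = sum(f[n][m] * signals[n] for n in range(len(signals))) / w
            priors[m] = w / len(signals)
        sigma = max(0.1, 0.9 * sigma)             # scheduled reduction of vagueness
    return sorted(means)

print(dynamic_logic([0.0, 0.1, 0.2, 2.9, 3.0, 3.1], n_models=2))
```

Starting from vague (large-sigma) models avoids enumerating the M<sup>N</sup> signal-model combinations: every signal is softly associated with every model, and the associations sharpen only as the models become crisper.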
==Example of dynamic logic operations==
Finding patterns below noise can be an exceedingly complex problem. If an exact pattern shape is not known and depends on unknown parameters, these parameters should be found by fitting the pattern model to the data. However, when the locations and orientations of patterns are not known, it is not clear which subset of the data points should be selected for fitting. A standard approach for solving this kind of problem is multiple hypothesis testing (Singer et al. 1974). Since all combinations of subsets and models are exhaustively searched, this method faces the problem of combinatorial complexity. In the current example, noisy patterns are sought below the clutter level.
To apply NMF and dynamic logic to this problem one needs to develop parametric adaptive models of expected patterns. The models and conditional partial similarities for this case are described in detail in:<ref>Linnehan, R., Mutz, C., Perlovsky, L.I., Weijers, B., Schindler, J., Brockett, R. (2003). Detection of Patterns Below Clutter in Images. Int. Conf. on Integration of Knowledge Intensive Multi-Agent Systems, Cambridge, MA, Oct. 1–3, 2003.</ref> a uniform model for noise, Gaussian blobs for highly fuzzy, poorly resolved patterns, and parabolic models for the patterns being detected.
During an adaptation process, initially fuzzy and uncertain models are associated with structures in the input signals, and fuzzy models become more definite and crisp with successive iterations. The type, shape, and number of models are selected so that the internal representation within the system is similar to input signals: the NMF concept-models represent structure-objects in the signals. The figure below illustrates operations of dynamic logic. In Fig. 1(a) the true patterns are shown without noise; Fig. 1(b) shows the actual image available for recognition, with the patterns below the noise level.
There are several types of models: one uniform model describing noise (it is not shown) and a variable number of blob models and parabolic models; their number, ___location, and curvature are estimated from the data. Until about stage (g) the algorithm used simple blob models; at (g) and beyond, it decided that it needed more complex parabolic models to describe the data. Iterations stopped at (h), when similarity stopped increasing.
[[File:ExampleOfApplicationOfDynamicLogicToNoisyImage.JPG|center|frame| Fig.1. Finding patterns below noise, an example of dynamic logic operation: (a) true patterns shown without noise; (b) the actual image available for recognition; (c) an initial fuzzy model, its vagueness corresponding to the uncertainty of knowledge; (d)–(h) increasingly improved models at successive iterations.]]
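The three model types used in this example can be sketched schematically. The 2-D conditional partial similarities below are hypothetical simplifications (the function names, shapes, and parameter values are not taken from the cited paper); they show how a uniform noise model, a fuzzy blob, and a parabolic pattern model compete to account for each data point.

```python
import math

def uniform_l(point, area):
    # Noise model: a data point is equally likely anywhere in the image
    return 1.0 / area

def blob_l(point, center, sigma):
    # Fuzzy Gaussian blob for a poorly resolved pattern
    dx, dy = point[0] - center[0], point[1] - center[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2)) / (2.0 * math.pi * sigma ** 2)

def parabola_l(point, vertex, curvature, sigma):
    # Parabolic model: similarity falls off with distance from y = a(x - x0)^2 + y0
    x0, y0 = vertex
    d = point[1] - (curvature * (point[0] - x0) ** 2 + y0)
    return math.exp(-0.5 * (d / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

p = (2.0, 4.0)   # a point lying exactly on the parabola y = x^2
print(parabola_l(p, vertex=(0.0, 0.0), curvature=1.0, sigma=0.5) > uniform_l(p, area=100.0))
```

A point that lies on a pattern matches that pattern's model far better than the uniform noise model, so during adaptation the association variables shift such points away from the noise model.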
==Neural modeling fields hierarchical organization==
The activated models initiate other actions. They serve as input signals to the next processing level, where more general concept-models are recognized or created. Output signals from a given level, serving as input to the next level, are the model activation signals, a<sub>m</sub>, defined as
:<math> a_m = \sum_{n=1}^N { f(m|n) }.</math>
The hierarchical NMF system is illustrated in Fig. 2. Within the hierarchy of the mind, each concept-model finds its "mental" meaning and purpose at a higher level (in addition to other purposes).
[[File:NMF Hierarchy.JPG|center|frame| Fig.2. Hierarchical NMF system. At each level of a hierarchy there are models, similarity measures, and actions (including adaptation, maximizing the knowledge instinct - similarity). High levels of partial similarity measures correspond to concepts recognized at a given level. Concept activations are output signals at this level and they become input signals to the next level, propagating knowledge up the hierarchy.]]
From time to time a system forms a new concept or eliminates an old one. At every level, the NMF system always keeps a reserve of vague (fuzzy) inactive concept-models. They are inactive in that their parameters are not adapted to the data; therefore their similarities to signals are low. Yet, because of their large vagueness (covariance) the similarities are not exactly zero. When a new signal does not fit well into any of the active models, its similarities to inactive models automatically increase (because, first, every piece of data is accounted for, and, second, inactive models are vague-fuzzy and potentially can match any signal). When such a similarity becomes large enough, the corresponding vague model is activated and begins adapting to the new signals.
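The activation signals and the thresholding of weakly activated reserve models can be illustrated schematically; the association values and the activation threshold below are hypothetical numbers, not from the NMF literature.

```python
def activations(f, n_models):
    # Model activations a_m: total association of model m over all bottom-up signals n
    return [sum(row[m] for row in f) for m in range(n_models)]

def next_level_input(f, n_models, threshold=1.0):
    # Only sufficiently activated concept-models pass signals up the hierarchy;
    # weakly activated (vague reserve) models stay inactive
    return [(m, a) for m, a in enumerate(activations(f, n_models)) if a >= threshold]

# f[n][m]: associations of 4 bottom-up signals with 3 models (model 2 is a vague reserve)
f = [[0.90, 0.05, 0.05],
     [0.80, 0.15, 0.05],
     [0.10, 0.85, 0.05],
     [0.20, 0.75, 0.05]]
print(next_level_input(f, 3))   # models 0 and 1 activate; the reserve model does not
```

If a new cluster of signals appeared that no active model fits, its associations would accumulate on the reserve model, eventually pushing its activation over the threshold.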
==References==
* [[Leonid Perlovsky]]
[[Category:Machine learning]]