Neural modeling fields

'''Neural modeling field''' ('''NMF''') is a mathematical framework for [[machine learning]] which combines ideas from [[neural networks]], [[fuzzy logic]], and [[model based recognition]]. It has also been referred to as '''modeling fields''', '''modeling fields theory''' (MFT), or '''maximum likelihood artificial neural networks''' (MLANS).<ref>[http://www.oup.com/us/catalog/he/subject/Engineering/ElectricalandComputerEngineering/ComputerEngineering/NeuralNetworks/?view=usa&ci=9780195111620]: Perlovsky, L.I. (2001). Neural Networks and Intellect: Using Model-Based Concepts. New York: Oxford University Press.</ref><ref>Perlovsky, L.I. (2006). Toward Physics of the Mind: Concepts, Emotions, Consciousness, and Symbols. Phys. Life Rev. 3(1), pp. 22-55.</ref><ref>[http://ieeexplore.ieee.org/xpl/absprintf.jsp?arnumber=713700&page=FREE]{{dead link|date=September 2024|bot=medic}}{{cbignore|bot=medic}}: Deming, R.W.,
Automatic buried mine detection using the maximum likelihood adaptive neural system (MLANS), in Proceedings of ''Intelligent Control (ISIC)'', 1998. Held jointly with ''IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Intelligent Systems and Semiotics (ISAS)''</ref><ref>{{usurped|1=[https://archive.today/20130221212719/http://www.mdatechnology.net/techprofile.aspx?id=227 ]}}: MDA Technology Applications Program web site</ref>
<ref>[http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=4274797]{{dead link|date=September 2024|bot=medic}}{{cbignore|bot=medic}}: Cangelosi, A.; Tikhanoff, V.; Fontanari, J.F.; Hourdakis, E. (2007). Integrating Language and Cognition: A Cognitive Robotics Approach. IEEE Computational Intelligence Magazine, 2(3), pp. 65-70.</ref><ref>[http://spie.org/x648.xml?product_id=521387&showAbstracts=true&origin_id=x648]: Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense III (Proceedings Volume), ed. Edward M. Carapezza, 15 September 2004, {{ISBN|978-0-8194-5326-6}}. See chapter: ''Counter-terrorism threat prediction architecture''.</ref>
The framework was developed by [[Leonid Perlovsky]] at the [[AFRL]]. NMF is interpreted as a mathematical description of the [[Cognition|mind's mechanisms]], including [[concept]]s, [[emotions]], [[instincts]], [[imagination]], [[thinking]], and [[understanding]]. NMF is a multi-level, [[Heterarchy|hetero-hierarchical]] system. At each level in NMF there are [[Schema (psychology)|concept-models]] encapsulating the knowledge; they generate so-called [[Bottom–up and top–down design|top-down signals]], which interact with the input, bottom-up signals. These interactions are governed by [[Dynamical system|dynamic equations]], which drive concept-model learning, adaptation, and the formation of new concept-models for better correspondence to the input, bottom-up signals.
 
==Concept models and similarity measures==
==Learning in NMF using the dynamic logic algorithm==
 
The learning process consists of estimating the model parameters '''S''' and associating signals with concepts by maximizing the similarity L. Note that all possible combinations of signals and models are accounted for in expression (2) for L. This can be seen by expanding the sum and multiplying all the terms, which results in M<sup>N</sup> items, a huge number. This is the number of combinations between all signals (N) and all models (M), and it is the source of combinatorial complexity, which is solved in NMF by utilizing the idea of [[Perlovsky|dynamic logic]].<ref>Perlovsky, L.I. (1996). Mathematical Concepts of Intellect. Proc. World Congress on Neural Networks, San Diego, CA; Lawrence Erlbaum Associates, NJ, pp. 1013-16.</ref><ref>Perlovsky, L.I. (1997). Physical Concepts of Intellect. Proc. Russian Academy of Sciences, 354(3), pp. 320-323.</ref> An important aspect of dynamic logic is ''matching vagueness or fuzziness of similarity measures to the uncertainty of models''. Initially, parameter values are not known and the uncertainty of the models is high; so is the fuzziness of the similarity measures. In the process of learning, the models become more accurate and the similarity measure crisper, and the value of the similarity increases.
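
Expression (2) itself is not reproduced in this excerpt; assuming the usual factorized form of the total similarity (a product over signals of a sum over models), the following minimal Python sketch only contrasts the M<sup>N</sup> count of brute-force signal-to-model assignments with the roughly N·M partial-similarity evaluations needed per dynamic-logic iteration. The sizes N and M are illustrative, not from the source.

<syntaxhighlight lang="python">
# Illustrative sizes (not from the source): N bottom-up signals, M concept-models.
N, M = 100, 10

# Brute-force association enumerates every assignment of one model to each
# signal: M**N alternatives -- the combinatorial complexity mentioned above.
print(f"brute-force assignments M**N: {float(M**N):.3e}")   # ~1e100

# Keeping the sum over models inside the product over signals (dynamic logic)
# needs only about N*M partial-similarity evaluations per iteration.
print(f"partial similarities per iteration N*M: {N * M}")
</syntaxhighlight>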
 
The maximization of the similarity L is done as follows. First, the unknown parameters {'''S'''<sub>m</sub>} are randomly initialized. Then the association variables f(m|n) are computed.
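
The explicit equations for f(m|n) and for the subsequent parameter updates are part of the full derivation and are not reproduced in this excerpt. Purely as an illustration of the iteration described above, the following Python sketch assumes one-dimensional Gaussian concept-models, so that the association-and-re-estimation cycle takes an EM-like form; the data, variable names, and number of iterations are hypothetical, not taken from the source.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D bottom-up signals drawn from two clusters.
signals = np.concatenate([rng.normal(-2.0, 0.5, 60),
                          rng.normal(3.0, 0.5, 40)])
N, M = signals.size, 2

# Random initialization of the model parameters S_m: vague models
# (large sigmas) correspond to high initial fuzziness.
means = rng.normal(0.0, 1.0, M)
sigmas = np.full(M, 5.0)
rates = np.full(M, 1.0 / M)

for _ in range(50):
    # Conditional partial similarities l(n|m), here taken to be Gaussian.
    l = (np.exp(-0.5 * ((signals[:, None] - means) / sigmas) ** 2)
         / (np.sqrt(2.0 * np.pi) * sigmas))

    # Association variables f(m|n): normalized over the models for each signal.
    f = rates * l
    f /= f.sum(axis=1, keepdims=True)

    # Re-estimate the parameters from the fuzzy associations; the sigmas
    # shrink, so the similarity measure becomes crisper as learning proceeds.
    weights = f.sum(axis=0)
    means = (f * signals[:, None]).sum(axis=0) / weights
    sigmas = np.sqrt((f * (signals[:, None] - means) ** 2).sum(axis=0)
                     / weights) + 1e-6
    rates = weights / N

print("estimated means:", means.round(2))
print("estimated sigmas:", sigmas.round(2))
</syntaxhighlight>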
* [[Leonid Perlovsky]]
 
[[Category:Artificial intelligence]]
[[Category:Machine learning]]