[[Image:ExampleOfApplicationOfDynamicLogicToNoisyImage.JPG |center |frame| Fig.1. Finding ‘smile’ and ‘frown’ patterns in noise, an example of dynamic logic operation: (a) true ‘smile’ and ‘frown’ patterns are shown without noise; (b) actual image available for recognition (signal is below noise; signal-to-noise ratio is between –2dB and –0.7dB); (c) an initial fuzzy blob-model, whose fuzziness corresponds to uncertainty of knowledge; (d) through (m) show improved models at various iteration stages (22 iterations in total). Between stages (d) and (e) the algorithm tried to fit the data with more than one model and decided that it needed three blob-models to ‘understand’ the content of the data. There are several types of models: one uniform model describing noise (not shown) and a variable number of blob-models and parabolic models, whose number, ___location, and curvature are estimated from the data. Until about stage (g) the algorithm ‘thought’ in terms of simple blob models; at (g) and beyond, it decided that it needed more complex parabolic models to describe the data. Iterations stopped at (m), when the similarity L stopped increasing. This example is discussed in more detail in (Linnehan et al 2003).]]
==Neural modeling fields hierarchical organization==
Above, we described a single processing level in a hierarchical NMF system. At each level of the hierarchy there are input signals from lower levels, models, similarity measures (L), emotions (which are changes in similarity), and actions; actions include adaptation, the behavior satisfying the knowledge instinct – maximization of similarity. An input to each level is a set of signals '''X'''(n), or, in neural terminology, an input field of neuronal activations. The result of signal processing at a given level is a set of activated models, or concepts m, recognized in the input signals n; these models, along with the corresponding instinctual signals and emotions, may activate behavioral models and generate behavior at this level.
The activated models initiate other actions. They serve as input signals to the next processing level, where more general concept-models are recognized or created. Output signals from a given level, serving as input to the next level, are the model activation signals, a<sub>m</sub>, defined as
a<sub>m</sub> = ∑<sub>n=1..N</sub> f(m|n).
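As a minimal illustrative sketch (not from the NMF literature), the activation signal a<sub>m</sub> can be computed by summing the fuzzy association weights f(m|n) over all input signals; the matrix shape and values below are hypothetical:

```python
import numpy as np

def activation_signals(f):
    """Compute model activation signals a_m = sum_n f(m|n).

    f : array of shape (M, N) holding fuzzy association weights
        f(m|n) between each model m and each input signal n.
        Each column sums to 1 over models, so each piece of data
        is fully accounted for among the models.
    Returns a length-M vector of activations a_m.
    """
    return f.sum(axis=1)

# Hypothetical example: 3 models, 5 input signals (columns sum to 1).
f = np.array([
    [0.7, 0.6, 0.1, 0.0, 0.2],
    [0.2, 0.3, 0.8, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.9, 0.7],
])
a = activation_signals(f)  # one activation value per model
```

Because the columns of f sum to 1, the activations themselves sum to N, the number of input signals: each signal contributes exactly one unit of activation, distributed among the models.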
The hierarchical NMF system is illustrated in Fig. 2. Within the hierarchy of the mind, each concept-model finds its “mental” meaning and purpose at a higher level (in addition to other purposes). For example, consider the concept-model “chair.” It has a “behavioral” purpose of initiating sitting behavior (if sitting is required by the body); this is its “bodily” purpose at the same hierarchical level. In addition, it has a “purely mental” purpose at a higher level in the hierarchy: helping to recognize a more general concept, say that of a “concert hall,” whose model contains rows of chairs.
Fig.2. Hierarchical NMF system. At each level of a hierarchy there are models, similarity measures, and actions (including adaptation, maximizing the knowledge instinct - similarity). High levels of partial similarity measures correspond to concepts recognized at a given level. Concept activations are output signals at this level and they become input signals to the next level, propagating knowledge up the hierarchy.
Models at higher levels in the hierarchy are more general than models at lower levels. For example, at the very bottom of the hierarchy, in the visual system, models correspond (roughly speaking) to retinal [[ganglion cells]] and perform similar functions: they detect simple features in the visual field. At higher levels, models correspond to functions performed at V1 and higher up in the [[visual cortex]], that is, detection of more complex features, such as contrast edges, their directions, elementary movements, etc. Visual hierarchical structures and models are studied in detail<ref>Grossberg, S. (1988). Neural Networks and Natural Intelligence. MIT Press, Cambridge, MA.</ref><ref>Zeki, S. (1993). A Vision of the Brain. Blackwell, Oxford, England.</ref>. At still higher cognitive levels, models correspond to objects, to relationships among objects, to situations, to relationships among situations, etc. (Perlovsky 2001, 2006). Still higher up are even more general models of complex cultural notions and relationships, like family, love, and friendship, and of abstract concepts, like law, rationality, etc. Contents of these models correspond to the cultural wealth of knowledge, including the writings of Shakespeare and Tolstoy; mechanisms of development of these models are reviewed in (Perlovsky 2006). At the top of the hierarchy of the mind, according to Kantian analysis<ref>Kant, I. (1790). Critique of Judgment. Tr. J.H. Bernard, 1914, 2nd ed., Macmillan & Co., London.</ref>, are models of the meaning and purpose of our existence, unifying our knowledge, and the corresponding behavioral models aimed at achieving this meaning. Improvement of these top models of meaning and purpose satisfies the knowledge instinct at the highest level of the hierarchy and is felt as emotions of the [[beautiful and sublime (NMF)|beautiful and sublime]].
From time to time, as discussed, a system forms a new concept or eliminates an old one. Many pattern recognition algorithms and neural networks lack this important ability of the mind. It can be modeled mathematically in several ways; adaptive resonance theory (ART) uses a vigilance threshold, which is compared to a similarity measure<ref>Carpenter, G.A. & Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics and Image Processing, 37, 54-115.</ref>. A somewhat different mechanism of NMF works as follows. At every level, the system always keeps a reserve of vague (fuzzy) inactive concept-models. They are inactive in that their parameters are not adapted to the data; therefore their similarities to signals are low. Yet, because of their large vagueness (covariance), the similarities are not exactly zero. When a new signal does not fit well into any of the active models, its similarities to inactive models automatically increase (first, because every piece of data is accounted for, and second, because inactive models are vague-fuzzy and can potentially “grab” every signal that does not fit into more specific, less fuzzy, active models). When the activation signal a<sub>m</sub> for an inactive model m exceeds a certain threshold, the model is activated. Similarly, when the activation signal for a particular model falls below a threshold, the model is deactivated. Thresholds for activation and deactivation are usually set based on information existing at a higher hierarchical level (prior information, system resources, numbers of activated models of various types, etc.). Activation signals for active models at a particular level, { a<sub>m</sub> }, form a “neuronal field,” which serves as the input to the next level, where more abstract and more general concepts are formed, and so on up the hierarchy toward higher models of meaning and purpose.
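The activation/deactivation mechanism just described can be sketched as a simple thresholding rule. This is an illustrative toy, not an implementation from the NMF literature; the threshold values are hypothetical stand-ins for the higher-level information that would set them in a real system:

```python
import numpy as np

def update_active_set(a, active, theta_on=0.5, theta_off=0.1):
    """Toggle model activity based on activation signals.

    a         : length-M array of activation signals a_m = sum_n f(m|n)
    active    : boolean length-M array, current active/inactive status
    theta_on  : activation threshold for a vague reserve model
    theta_off : deactivation threshold for a poorly supported model
    Returns the updated status array (the input is left unchanged).
    """
    active = active.copy()
    # An inactive reserve model that has "grabbed" enough signals
    # is activated; an active model that lost support is deactivated.
    active[~active & (a > theta_on)] = True
    active[active & (a < theta_off)] = False
    return active

# Hypothetical state: model 0 is well supported, model 1 has lost
# support, and reserve model 2 has accumulated unexplained signals.
a = np.array([2.0, 0.05, 0.8])
active = np.array([True, True, False])
new_active = update_active_set(a, active)
```

Using separate thresholds (theta_off < theta_on) gives hysteresis, so a model whose activation hovers near a single cutoff does not flicker on and off between iterations.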
==References==