From time to time, as discussed, a system forms a new concept or eliminates an old one. Many pattern recognition algorithms and neural networks lack this important ability of the mind. It can be modeled mathematically in several ways; adaptive resonance theory (ART) uses a vigilance threshold, which is compared to a similarity measure.<ref>Carpenter, G.A. & Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics and Image Processing, 37, 54-115.</ref> NMF uses a somewhat different mechanism, which works as follows. At every level, the system always keeps a reserve of vague (fuzzy) inactive concept-models. They are inactive in that their parameters are not adapted to the data; therefore their similarities to signals are low. Yet, because of their large vagueness (covariance), the similarities are not exactly zero. When a new signal does not fit well into any of the active models, its similarities to the inactive models automatically increase (first, because every piece of data is accounted for, and second, because inactive models are vague-fuzzy and can potentially “grab” every signal that does not fit into the more specific, less fuzzy, active models). When the activation signal a<sub>m</sub> for an inactive model, m, exceeds a certain threshold, the model is activated. Similarly, when the activation signal for a particular model falls below a threshold, the model is deactivated. Thresholds for activation and deactivation are usually set based on information existing at a higher hierarchical level (prior information, system resources, numbers of activated models of various types, etc.). The activation signals for active models at a particular level, { a<sub>m</sub> }, form a “neuronal field,” which serves as the input to the next level, where more abstract and more general concepts are formed, and so on along the hierarchy toward higher models of meaning and purpose.
==Experimental evidence==
Perception as a process of interaction between top-down and bottom-up signals, as described by dynamic logic, was confirmed in fMRI experiments.<ref>Schacter, D.L., Dobbins, I.G., & Schnyer, D.M. (2004). Specificity of priming: A cognitive neuroscience perspective. Nature Reviews Neuroscience, 5, 853-862.</ref><ref>Bar, M., Kassam, K.S., Ghuman, A.S., Boshyan, J., Schmid, A.M., Dale, A.M., Hamalainen, M.S., Marinkovic, K., Schacter, D.L., Rosen, B.R., & Halgren, E. (2006). Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences USA, 103, 449-54.</ref><ref>Schacter, D.L. & Addis, D.R. (2007). The ghosts of past and future. Nature, 445, 27.</ref> Top-down signals are activated before conscious perception occurs, and the initial top-down signals are driven by the low-spatial-frequency (vague) contents of images.
Imagination is created in the visual cortex by top-down signals (Grossberg 1988; Zeki 1993). Given this knowledge, dynamic logic as a foundation of perception can be demonstrated in seconds: close your eyes and imagine an object in front of you. The imagined object is vaguer than the same object perceived with open eyes. Thus, perception proceeds according to the dynamic logic process ''from vague to crisp''.