Neural modeling fields

The learning process consists of estimating the model parameters S and associating signals with concepts by maximizing the similarity L. Note that all possible combinations of signals and models are accounted for in the expression for L. This can be seen by expanding the sum and multiplying all the terms; the result is M<sup>N</sup> items, a huge number: the number of combinations between all signals (N) and all models (M). This is the source of the combinatorial complexity of many algorithms used in the past. For example, a popular multiple hypothesis testing algorithm<ref>Singer, R.A., Sea, R.G. and Housewright, R.B. (1974). Derivation and Evaluation of Improved Tracking Filters for Use in Dense Multitarget Environments. IEEE Transactions on Information Theory, IT-20, pp. 423–432.</ref> attempts to maximize the similarity L over model parameters and associations between signals and models in two steps. First, it takes one of the M<sup>N</sup> items, that is, one particular association between signals and models, and maximizes it over the model parameters. Second, the largest item is selected (that is, the best association for the best set of parameters). Such a program inevitably faces a wall of combinatorial complexity: the number of computations is on the order of M<sup>N</sup>.
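The scale of this enumeration is easy to see in outline. The sketch below (Python; the helper <code>fit_parameters</code> is a hypothetical stand-in for step one, the per-association maximization over model parameters) enumerates every association of signals with models, so its loop runs M<sup>N</sup> times:
<syntaxhighlight lang="python">
import itertools

def brute_force_mht(signals, num_models, fit_parameters):
    """Exhaustive multiple-hypothesis testing (sketch): enumerate every
    assignment of the N signals to the M models, maximize similarity over
    parameters for each assignment, and keep the best one."""
    best_score, best_assignment = float('-inf'), None
    # itertools.product yields all M**N signal-to-model associations
    for assignment in itertools.product(range(num_models), repeat=len(signals)):
        # Step 1: best parameters (and resulting similarity) for this association
        score = fit_parameters(signals, assignment)
        # Step 2: keep the association with the largest similarity
        if score > best_score:
            best_score, best_assignment = score, assignment
    return best_assignment, best_score
</syntaxhighlight>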
NMF solves this problem by using [[dynamic logic (neural)|dynamic logic]].<ref>Perlovsky, L.I. (1996). Mathematical Concepts of Intellect. Proc. World Congress on Neural Networks, San Diego, CA; Lawrence Erlbaum Associates, NJ, pp. 1013–1016.</ref><ref>Perlovsky, L.I. (1997). Physical Concepts of Intellect. Proc. Russian Academy of Sciences, 354(3), pp. 320–323.</ref> An important aspect of dynamic logic is ''matching the vagueness or fuzziness of similarity measures to the uncertainty of models''. Initially, parameter values are not known, and the uncertainty of the models is high; so is the fuzziness of the similarity measures. In the process of learning, the models become more accurate and the similarity measure more crisp, and the value of the similarity increases. This is the mechanism of dynamic logic. The mathematics of dynamic logic is described in a separate article.
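As a minimal illustration, one dynamic-logic iteration for Gaussian concept-models might look as follows. This is a sketch under assumptions: the variable names are hypothetical, the Gaussian form is only one choice of conditional similarity, and in Perlovsky's formulation the fuzziness is additionally annealed from a large initial value, whereas here it is simply re-estimated from the weighted data:
<syntaxhighlight lang="python">
import numpy as np

def dynamic_logic_step(X, mu, sigma, r):
    """One dynamic-logic iteration for M Gaussian concept-models (sketch).
    X: (N, d) signals; mu: (M, d) model means; sigma: (M,) model
    fuzziness (standard deviation); r: (M,) model priors."""
    N, d = X.shape
    # Conditional similarities l(n|m); large sigma (vague models)
    # gives nearly uniform association weights
    dist2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)        # (N, M)
    l = r * np.exp(-0.5 * dist2 / sigma**2) / (2*np.pi*sigma**2)**(d/2)
    f = l / l.sum(axis=1, keepdims=True)                           # f(m|n)
    # Re-estimate parameters from the weighted signals; as the fit
    # improves, sigma shrinks and the similarity measure turns crisp
    w = f.sum(axis=0)                                              # (M,)
    mu_new = (f[..., None] * X[:, None, :]).sum(axis=0) / w[:, None]
    dist2_new = ((X[:, None, :] - mu_new[None, :, :]) ** 2).sum(-1)
    sigma_new = np.sqrt((f * dist2_new).sum(axis=0) / (d * w))
    r_new = w / N
    return mu_new, sigma_new, r_new
</syntaxhighlight>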
 
==Example of Dynamic Logic Operations==
 
Finding patterns below noise can be an exceedingly complex problem. If the exact pattern shape is not known and depends on unknown parameters, these parameters should be found by fitting the pattern model to the data. However, when the locations and orientations of the patterns are not known, it is not clear which subset of the data points should be selected for fitting. A standard approach for solving this kind of problem, which has already been discussed, is multiple hypothesis testing (Singer et al. 1974). Since all combinations of subsets and models are exhaustively searched, it faces the problem of combinatorial complexity. In the current example, we are looking for ‘smile’ and ‘frown’ patterns in noise, shown in Fig. 1a without noise and in Fig. 1b with noise, as actually measured. The true number of patterns is 3, which is not known. Therefore, at least 4 patterns should be fit to the data, to decide that 3 patterns fit best. The image size in this example is 100×100 = 10,000 points. If one attempts to fit 4 models to all subsets of 10,000 data points, the computational complexity is M<sup>N</sup> ~ 10<sup>6000</sup>. An alternative computation, searching through the parameter space, yields lower complexity: each pattern is characterized by a 3-parameter parabolic shape. Fitting 4×3 = 12 parameters to the 100×100 grid by brute-force testing would take about 10<sup>32</sup> to 10<sup>40</sup> operations, still a prohibitive computational complexity.
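These order-of-magnitude estimates can be checked with a few lines of arithmetic (the grid resolutions K per parameter are assumptions chosen to bracket the quoted range):
<syntaxhighlight lang="python">
import math

# Combinatorial search: M = 4 models assigned to N = 10,000 points
print(10_000 * math.log10(4))        # ~6020, i.e. M**N ~ 10**6000

# Brute-force parameter search: 12 parameters, K grid values each,
# with ~10**4 image points evaluated per parameter combination
for K in (200, 1000):
    print(12 * math.log10(K) + 4)    # ~31.6 and 40, i.e. 10**32 to 10**40
</syntaxhighlight>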
To apply NMF and dynamic logic to this problem, one needs to develop parametric adaptive models of the expected patterns. The models and conditional partial similarities for this case are described in detail in<ref>Linnehan, R., Mutz, C., Perlovsky, L.I., Weijers, B., Schindler, J. and Brockett, R. (2003). Detection of Patterns Below Clutter in Images. Int. Conf. on Integration of Knowledge Intensive Multi-Agent Systems, Cambridge, MA, Oct. 1–3, 2003.</ref>: a uniform model for noise, Gaussian blobs for highly fuzzy, poorly resolved patterns, and parabolic models for ‘smiles’ and ‘frowns’. The number of computer operations in this example was about 10<sup>10</sup>. Thus, a problem that was not solvable due to combinatorial complexity becomes solvable using dynamic logic.
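For illustration, the three kinds of conditional partial similarities could be sketched as below; the exact functional forms used in the cited paper may differ, and the parameter names are hypothetical:
<syntaxhighlight lang="python">
import numpy as np

def l_uniform(X, area):
    """Uniform model for noise: every pixel is equally likely."""
    return np.full(len(X), 1.0 / area)

def l_blob(X, mu, sigma):
    """Gaussian blob for a highly fuzzy, poorly resolved pattern."""
    d2 = ((X - mu) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / sigma**2) / (2 * np.pi * sigma**2)

def l_parabola(X, x0, y0, a, sigma):
    """Parabolic 'smile'/'frown' model: a fuzzy ridge around the curve
    y = y0 + a*(x - x0)**2, i.e. three shape parameters (x0, y0, a)
    plus the fuzziness sigma."""
    resid = X[:, 1] - (y0 + a * (X[:, 0] - x0) ** 2)
    return np.exp(-0.5 * resid**2 / sigma**2) / np.sqrt(2 * np.pi * sigma**2)
</syntaxhighlight>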
 
During the adaptation process, initially fuzzy and uncertain models are associated with structures in the input signals, and the fuzzy models become more definite and crisp with successive iterations. The type, shape, and number of models are selected so that the internal representation within the system is similar to the input signals: the NMF concept-models come to represent the structure-objects in the signals. The figure below illustrates the operations of dynamic logic. In Fig. 1, (a) shows the true ‘smile’ and ‘frown’ patterns without noise; (b) the actual image available for recognition (the signal is below noise; the signal-to-noise ratio is between −2 dB and −0.7 dB); (c) an initial fuzzy model, whose large fuzziness corresponds to the uncertainty of knowledge; (d) through (m) show improved models at various iteration stages (22 iterations in total). Every five iterations the algorithm tried to increase or decrease the number of models. Between iterations (d) and (e) the algorithm decided that it needed three Gaussian models for the ‘best’ fit.
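Schematically, this outer loop might be organized as follows (a sketch: <code>step</code>, <code>similarity</code>, and <code>proposals</code> are hypothetical callables standing for the per-iteration update, the similarity L, and the attempts to add or remove models):
<syntaxhighlight lang="python">
def nmf_fit(X, models, step, similarity, proposals, max_iter=100, tol=1e-4):
    """Outer dynamic-logic loop (sketch): iterate the model update, and
    every five iterations try to increase or decrease the number of
    models, keeping a change only if it raises the similarity L."""
    L_old = float('-inf')
    for it in range(max_iter):
        models = step(X, models)           # associate signals, update parameters
        if (it + 1) % 5 == 0:
            for candidate in proposals(models):
                if similarity(X, candidate) > similarity(X, models):
                    models = candidate
        L = similarity(X, models)
        if L - L_old < tol:                # stop when L stops increasing
            break
        L_old = L
    return models
</syntaxhighlight>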
 
There are several types of models: one uniform model describing the noise (not shown), and a variable number of blob models and parabolic models, whose number, ___location, and curvature are estimated from the data. Until about stage (g) the algorithm used simple blob models; at (g) and beyond, it decided that it needed the more complex parabolic models to describe the data. Iterations stopped at (m), when the similarity stopped increasing.