Cellular neural network

{{Cleanup rewrite|There is a list of references, but no in-line citations are provided. Also break into smaller, more-readable sections.|date=December 2020}}
 
In [[computer science]] and [[machine learning]], '''cellular neural networks (CNN)''' or '''cellular nonlinear networks (CNN)''' are a [[parallel computing]] paradigm similar to [[neural networks]], with the difference that communication is allowed between neighbouring units only. Typical applications include [[image processing]], analyzing 3D surfaces, solving [[partial differential equation]]s, reducing non-visual problems to [[Geometry|geometric]] maps, modelling biological [[visual system|vision]] and other [[Sensory-motor coupling|sensory-motor]] organs.<ref>{{Cite book|last=Slavova|first=A.|url=https://books.google.com/books?id=bt4PUx8CZXIC&q=Cellular+neural+network|title=Cellular Neural Networks: Dynamics and Modelling|date=2003-03-31|publisher=Springer Science & Business Media|isbn=978-1-4020-1192-4|language=en}}</ref>
==Model of computation==
{{Section citations needed|date=December 2020}}
The dynamical behavior of CNN processors can be expressed mathematically as a system of ordinary [[differential equation]]s, where each equation represents the state of an individual processing unit. The behavior of the entire CNN processor is defined by its initial conditions, its inputs, the cell interconnections (topology and weights), and the cells themselves. One possible use of CNN processors is to generate and respond to signals with specific dynamical properties. For example, CNN processors have been used to generate multi-scroll chaos, [[Synchronization|synchronize]] with chaotic systems, and exhibit multi-level hysteresis. CNN processors are designed specifically to solve local, low-level, processor-intensive problems expressed as a function of space and time. For example, CNN processors can be used to implement high-pass and low-pass filters and [[Mathematical morphology|morphological]] operators. They can also be used to approximate a wide range of [[partial differential equation]]s (PDEs) such as heat dissipation and wave propagation.
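The standard single-layer cell dynamics (the Chua–Yang model) can be sketched numerically: each cell integrates dx/dt = −x + A∗y + B∗u + z, with the saturated output y = ½(|x+1| − |x−1|). The sketch below uses forward-Euler integration; the EDGE template values, the zero initial state, and the white (−1) boundary convention are illustrative assumptions, not the only possible choices.

```python
# Minimal simulation of the Chua-Yang CNN state equation
#   dx_ij/dt = -x_ij + sum(A * y_nbhd) + sum(B * u_nbhd) + z
# with piecewise-linear output y = 0.5*(|x+1| - |x-1|).
# Template values follow a commonly cited EDGE template; treat the exact
# numbers, boundary convention, and initial state as assumptions.
import numpy as np

def cnn_run(u, A, B, z, dt=0.05, steps=400):
    """Forward-Euler integration of a single-layer CNN.
    Initial state x(0) = 0; cells outside the grid are fixed at -1."""
    x = np.zeros_like(u)
    up = np.pad(u, 1, constant_values=-1.0)        # input with fixed boundary
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # saturated output
        yp = np.pad(y, 1, constant_values=-1.0)
        fb = sum(A[i, j] * yp[i:i + x.shape[0], j:j + x.shape[1]]
                 for i in range(3) for j in range(3))   # feedback term A*y
        ff = sum(B[i, j] * up[i:i + x.shape[0], j:j + x.shape[1]]
                 for i in range(3) for j in range(3))   # feedforward term B*u
        x += dt * (-x + fb + ff + z)               # Euler step of the cell ODE
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# EDGE template: self-feedback only, Laplacian-like control template.
A = np.array([[0., 0., 0.], [0., 2., 0.], [0., 0., 0.]])
B = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
z = -1.0

u = -np.ones((7, 7))       # white (-1) image ...
u[2:5, 2:5] = 1.0          # ... with a 3x3 black (+1) square
y = cnn_run(u, A, B, z)
# Edge pixels of the square settle to black (+1); its interior and the
# background settle to white (-1).
```

Each cell only reads its 3×3 neighbourhood, which is the defining locality restriction of the CNN paradigm.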
 
=== Reaction-Diffusion ===
CNN processors can be used as [[Reaction–diffusion system|reaction-diffusion]] (RD) processors. RD processors are spatially invariant, topologically invariant, analog, parallel processors characterized by reactions, in which two agents can combine to create a third agent, and [[diffusion]], the spreading of agents. RD processors are typically implemented with chemicals in a [[Petri dish]] (processor), light (input), and a camera (output); however, they can also be implemented with a multi-layer CNN processor. RD processors can be used to create [[Voronoi diagram]]s and perform [[Skeletonization|skeletonisation]]. The main differences between the chemical and CNN implementations are that CNN implementations are considerably faster than their chemical counterparts, and that chemical processors are spatially continuous whereas CNN processors are spatially discrete. The most researched RD processor, the Belousov–Zhabotinsky (BZ) processor, has been simulated using a four-layer CNN processor and has been implemented in a semiconductor.
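A discretized reaction-diffusion system of this kind can be sketched as coupled grids, one per chemical species, which is how a multi-layer CNN would represent it: each layer's diffusion term is a Laplacian template, and the reaction couples the layers pointwise. The two-species Gray-Scott model and the parameter values below are illustrative assumptions; the BZ processor mentioned above uses a different, four-layer model.

```python
# Two-species Gray-Scott reaction-diffusion on a grid. Each species maps
# onto one CNN layer; the discrete Laplacian plays the role of a
# diffusion template. All parameter values are illustrative assumptions.
import numpy as np

def laplacian(a):
    # 5-point stencil with wrap-around (toroidal) boundary
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gray_scott(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0, steps=200):
    for _ in range(steps):
        uvv = u * v * v                        # reaction: U + 2V -> 3V
        u = u + dt * (Du * laplacian(u) - uvv + F * (1 - u))
        v = v + dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u, v

n = 32
u = np.ones((n, n))                            # feed chemical U everywhere
v = np.zeros((n, n))
v[12:20, 12:20] = 0.25                         # seed a square of species V
u[12:20, 12:20] = 0.50
u, v = gray_scott(u, v)
# The seed reacts and diffuses outward; both fields remain bounded.
```

The wrap-around boundary is one of several conventions a hardware CNN could adopt; fixed-value boundary cells are equally common.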
 
As with [[cellular automata]] (CA), computations can be performed through the generation and propagation of signals that grow or change over time. [[Computation|Computations]] can occur within a signal or through the interaction between signals. One type of processing that uses signals and is gaining momentum is [[Signal processing|wave processing]], which involves the generation, expansion, and eventual collision of waves. Wave processing can be used to measure distances and find optimal paths. Computations can also occur through particles, gliders, [[soliton]]s, and filterons: localized structures that maintain their shape and velocity. Given how these structures interact and collide with each other and with static signals, they can be used to store information as states and to implement different [[Boolean function]]s. Computations can also occur through complex, potentially growing or evolving localized structures such as worms, ladders, and pixel-snakes. In addition to storing states and performing [[Boolean function|Boolean functions]], these structures can interact with, create, and destroy static structures.
 
=== Automata and Turing machines ===
Although CNN processors are primarily intended for analog computation, certain types of CNN processors can implement any Boolean function, allowing them to simulate CA. Since some CA are [[Universal Turing machine]]s (UTMs), capable of [[Simulation|simulating]] any algorithm that can be performed on processors based on the [[von Neumann architecture]], this type of CNN processor, the universal CNN, is itself a UTM. One such architecture consists of a CNN with an additional layer. CNN processors have yielded the simplest realizations of [[Conway’s Game of Life]] and [[Rule 110|Wolfram’s Rule 110]], the simplest known universal [[Turing machine|Turing Machine]]. This dynamical representation of older systems allows researchers to apply techniques and hardware developed for CNN to better understand important CA. Furthermore, the continuous state space of CNN processors, with slight modifications that have no equivalent in [[Cellular Automata]], creates [[Emergence|emergent]] behavior never seen before.
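The local update behind Rule 110 is easy to sketch directly: each cell's next state is a fixed Boolean function of its (left, center, right) neighbourhood, which is exactly the kind of local rule a universal CNN realizes through its template. This is a plain CA sketch, not a CNN implementation; the zero boundary is an assumption.

```python
# Rule 110: bit n of the number 110 gives the next state for the
# neighborhood value n = 4*left + 2*center + right. A universal CNN
# would compute the same local Boolean function via its template.
def rule110_step(row):
    padded = [0] + row + [0]                 # fixed zero boundary (assumed)
    return [(110 >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 15 + [1]                         # single live cell at the right
history = [row]
for _ in range(8):
    row = rule110_step(row)
    history.append(row)
# `history` shows the familiar left-growing Rule 110 triangle.
```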
 
Any information-processing platform that allows the construction of arbitrary [[Boolean function|Boolean functions]] is called universal, and as a result this class of CNN processors is commonly referred to as universal CNN processors. The original CNN processors can only perform linearly separable Boolean functions. By translating functions from the digital-logic or look-up-table domains into the CNN ___domain, some functions can be considerably simplified. For example, nine-bit odd-parity generation logic, which is typically implemented with eight nested exclusive-or gates, can also be represented by a sum function and four nested absolute-value functions. Not only is the function less complex, but the CNN implementation's parameters can be represented in the continuous, real-number ___domain.
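The parity example can be checked directly: summing the nine bits and folding the sum with four nested absolute values reproduces the XOR cascade. The fold points (5, 3, 2, 1) are one working choice, shown as an illustration rather than the canonical CNN parameters.

```python
from itertools import product

def parity9(bits):
    """Odd parity of nine bits via one sum and four nested absolute
    values, instead of eight nested XOR gates. The fold points are one
    working choice, not necessarily the canonical CNN parameters."""
    s = sum(bits)                                     # s in 0..9
    return 1 - abs(abs(abs(abs(s - 5) - 3) - 2) - 1)  # four nested |.|

# Agrees with XOR-based parity on all 512 nine-bit inputs.
assert all(parity9(b) == sum(b) % 2 for b in product((0, 1), repeat=9))
```

Because the expression is piecewise linear in the real-valued sum, the same parameters remain meaningful in the continuous ___domain that CNN cells operate in.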
 
There are two methods for selecting a CNN processor's template, or weights. The first is synthesis, which involves determining the coefficients offline. This can be done by leveraging previous work, i.e. libraries, papers, and articles, or by mathematically deriving coefficients that best suit the problem. The other is training the processor. Researchers have used [[back-propagation]] and [[genetic algorithm]]s to learn and perform functions. Back-propagation algorithms tend to be faster, but genetic algorithms are useful because they provide a mechanism to find a solution in a discontinuous, noisy search space.
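As a toy illustration of the training route, the sketch below uses a simple genetic algorithm (Gaussian mutation plus elitist selection) to recover a 3×3 feedforward template from input/output examples. The fitness function, mutation scheme, and Laplacian target template are all assumptions chosen for brevity, not a method taken from the literature.

```python
# Toy genetic algorithm recovering a 3x3 feedforward (B) template from
# examples. Fitness, mutation scale, and the target are illustrative
# assumptions, not a published CNN training procedure.
import numpy as np

rng = np.random.default_rng(0)

def apply_template(B, img):
    """Feedforward template applied with zero boundary (assumed)."""
    p = np.pad(img, 1)
    return sum(B[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3))

# Target: a Laplacian-like template the GA should approximate.
target = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
imgs = [rng.standard_normal((8, 8)) for _ in range(5)]
wanted = [apply_template(target, im) for im in imgs]

def fitness(B):
    # Negative mean squared error over the training examples.
    return -sum(np.mean((apply_template(B, im) - w) ** 2)
                for im, w in zip(imgs, wanted))

pop = [rng.standard_normal((3, 3)) for _ in range(30)]
init_best = max(fitness(B) for B in pop)
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                           # elitist selection
    pop = elite + [b + 0.1 * rng.standard_normal((3, 3))
                   for b in elite for _ in range(2)]  # Gaussian mutation
best = max(pop, key=fitness)
# Elites are never discarded, so fitness(best) >= init_best.
```

Because the fitness landscape here is smooth, back-propagation would converge faster, as the section notes; the GA's advantage appears when the objective is discontinuous or noisy.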
 
==Physical implementations==
{{Section citations needed|date=December 2020}}
There are toy models simulating CNN processors using [[Billiard ball|billiard balls]], but these serve only for theoretical study and teaching; an information-processing platform must be implementable in hardware and integrable into a system to be of practical use. In practice, CNN processors are physically implemented in hardware using current technologies such as [[Semiconductor|semiconductors]], include the necessary interfaces for programming and integration, and have been deployed in a variety of systems; there are plans to migrate CNN processors to emerging technologies in the future. What follows is a cursory examination of the different types of CNN processors available today, their advantages and disadvantages, and the future roadmap for CNN processors.
 
=== Semiconductors ===
CNN processors have been implemented with, and are currently available as, semiconductors, and there are plans to migrate CNN processors to emerging technologies in the future. Semiconductor-based CNN processors can be segmented into analog CNN processors, digital CNN processors, and CNN processors [[Emulator|emulated]] by digital processors. Analog CNN processors were the first to be developed. [[Analog computer]]s were fairly common during the 1950s and 1960s, but they were gradually replaced by digital computers in the 1970s. Analog processors were considerably faster for certain applications, such as solving differential equations and modeling nonlinearities, but analog computing lost favor because of its lack of precision and the difficulty of configuring an analog computer to solve a complex equation. Analog CNN processors share some of the advantages of their predecessors, specifically speed. The first analog CNN processors could perform real-time, ultra-high-frame-rate (>10,000 frame/s) processing unachievable by digital processors. The analog implementation of CNN processors requires less area and consumes less power than its digital counterpart. Although the accuracy of analog CNN processors does not compare to that of digital CNN processors, for many applications the noise and process variances are small enough not to perceptually affect the image quality.
 
The first [[algorithm]]ically programmable, analog CNN processor was created in 1993. It was named the CNN Universal Processor because its internal controller allowed multiple templates to be applied to the same data set, thus simulating multiple layers and allowing for universal computation. The design included a single-layer 8x8 CNN, interfaces, analog memory, switching logic, and software. The processor was developed in order to determine CNN processor producibility and utility. The CNN concept proved promising, and by 2000 there were at least six organizations designing algorithmically programmable, analog CNN processors. This is when AnaFocus, a mixed-signal semiconductor company that emerged from research at the University of Seville, introduced their ACE prototype CNN processor product line. Their first ACE processor contained 20x20 B/W processing units; their next ACE processor provided 48x48 grayscale processing units, and their latest ACE processor contains 128x128 grayscale processing units. Over time, not only did the number of processing elements increase, but their speed improved, the number of functions they can perform increased, and a seamless detector interface was integrated into the silicon, considerably improving the interface. The ability to embed the detector interface into the CNN processor allows for real-time interaction between sensing and processing. AnaFocus also has a multilayer CASE prototype CNN processor line; the latest CASE processor is a three-layer 32x32 CNN processor. Their work in CNN processors is currently culminating in their soon-to-be-released, commercially available Eye-RIS product line, which consists of all the processors, co-processors, software development kits, and support needed to program and integrate an analog processor into a system.