===Statistical prediction===
SDM has been applied to statistical [[prediction]], the task of associating extremely large perceptual state vectors with future events. At near or over capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor: each data counter in an SDM can be viewed as an independent estimate of the conditional probability of a binary function ''f'', conditioned on the activation set defined by the counter's memory location.<ref>Rogers, David. [https://proceedings.neurips.cc/paper/1988/file/9b8619251a19057cff70779273e95aa6-Paper.pdf "Statistical prediction with Kanerva's sparse distributed memory."] Advances in neural information processing systems. 1989.</ref>
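The sketch below is illustrative [[Python (programming language)|Python]], not code from Rogers' paper; the toy parameters and the names <code>activated</code> and <code>predict_bit_probs</code> are assumptions. It shows one way the counters of the activated locations can be pooled into per-bit conditional probability estimates:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 1000, 112          # word length, hard locations, Hamming radius

addresses = rng.integers(0, 2, size=(M, N), dtype=np.int8)  # random hard locations
ones = np.zeros((M, N))           # per-location count of 1-bits written to each counter
writes = np.zeros(M)              # per-location count of write operations

def activated(x):
    # Hard locations whose address lies within Hamming radius R of x.
    return np.count_nonzero(addresses != x, axis=1) <= R

def write(x, w):
    act = activated(x)
    ones[act] += w                # counters accumulate the observed bit values
    writes[act] += 1

def predict_bit_probs(x):
    # Each counter is an independent estimate of P(bit = 1 | location activated);
    # pooling the counts over the activation set of x gives a per-bit estimate.
    act = activated(x)
    return ones[act].sum(axis=0) / max(writes[act].sum(), 1)
</syntaxhighlight>

Read this way, the memory returns graded probabilities rather than a thresholded binary word.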
===Artificial general intelligence===
==Extensions==
Many extensions and improvements to SDM have been proposed, e.g.:
* Integer SDM that uses modular arithmetic integer vectors rather than binary vectors. This extension improves the representation capabilities of the memory and is more robust over normalization. It can also be extended to support forgetting and reliable sequence storage (a distance sketch appears after this list).<ref name="integerSDM">Snaider, Javier, and Stan Franklin. "[http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS12/paper/viewFile/4409/4781 Integer sparse distributed memory]." Twenty-fifth international flairs conference. 2012.</ref>
* Using word vectors of larger size than address vectors: This extension preserves many of the desirable properties of the original SDM: auto-associability, content addressability, distributed storage and robustness over noisy inputs. In addition, it adds new functionality, enabling an efficient auto-associative storage of sequences of vectors, as well as of other data structures such as trees (see the sequence-chaining sketch after this list).<ref>{{cite journal | last1 = Snaider | first1 = Javier | last2 = Franklin | first2 = Stan | year = 2012 | title = Extended sparse distributed memory and sequence storage | url = https://www.semanticscholar.org/paper/20298cddb815e5bcbc055415c6a62865c076b3b9| journal = Cognitive Computation | volume = 4 | issue = 2| pages = 172–180 | doi=10.1007/s12559-012-9125-8| s2cid = 14319722 }}</ref>
* Constructing SDM from [[Biological neuron model|Spiking Neurons]]: Despite the biological likeness of SDM, most of the work undertaken to demonstrate its capabilities to date has used highly artificial neuron models that abstract away the actual behaviour of [[neurons]] in the [[brain]]. Recent work by [[Steve Furber]]'s lab at the [[University of Manchester]]<ref>{{cite journal | last1 = Furber | first1 = Steve B. |display-authors=etal | year = 2004 | title = Sparse distributed memory using N-of-M codes | journal = Neural Networks | volume = 17 | issue = 10| pages = 1437–1451 | doi=10.1016/j.neunet.2004.07.003| pmid = 15541946 }}</ref><ref>Sharp, Thomas: "[https://studentnet.cs.manchester.ac.uk/resources/library/thesis_abstracts/MSc09/FullText/SharpThomas.pdf Application of sparse distributed memory to the Inverted Pendulum Problem]". Diss. University of Manchester, 2009.</ref><ref>Bose, Joy. [https://www.academia.edu/download/7385022/bose07_phd.pdf Engineering a Sequence Machine Through Spiking Neurons Employing Rank-order Codes]. Diss. University of Manchester, 2007.</ref> proposed adaptations to SDM, e.g. by incorporating N-of-M rank codes<ref>Simon Thorpe and Jacques Gautrais. [https://www.researchgate.net/profile/Jacques-Gautrais/publication/285068799_Rank_order_coding_Computational_neuroscience_trends_in_research/links/587ca2e108ae4445c069772a/Rank-order-coding-Computational-neuroscience-trends-in-research.pdf Rank order coding.] In Computational Neuroscience: Trends in research, pages 113–118. Plenum Press, 1998.</ref><ref>{{cite journal | last1 = Furber | first1 = Stephen B. |display-authors=etal | year = 2007 | title = Sparse distributed memory using rank-order neural codes | journal = IEEE Transactions on Neural Networks| volume = 18 | issue = 3| pages = 648–659 | doi=10.1109/tnn.2006.890804| pmid = 17526333 | citeseerx = 10.1.1.686.6196 | s2cid = 14256161 }}</ref> into how [[Neural coding#Population coding|populations of neurons]] may encode information, which may make it possible to build an SDM variant from biologically plausible components (an N-of-M encoding sketch appears after this list). This work has been incorporated into [[SpiNNaker|SpiNNaker (Spiking Neural Network Architecture)]], which is being used as the [[Neuromorphic engineering|Neuromorphic Computing]] Platform for the [[Human Brain Project]].<ref>{{cite journal | last1 = Calimera | first1 = A | last2 = Macii | first2 = E | last3 = Poncino | first3 = M | year = 2013 | title = The Human Brain Project and neuromorphic computing | journal = Functional Neurology | volume = 28 | issue = 3| pages = 191–6 | pmid = 24139655 | pmc=3812737}}</ref>
* Non-random distribution of locations:<ref>{{cite journal | last1 = Hely | first1 = Tim | last2 = Willshaw | first2 = David J. | last3 = Hayes | first3 = Gillian M. | year = 1997 | title = A new approach to Kanerva's sparse distributed memory | url = https://semanticscholar.org/paper/2f55ae4083ca073344badc416b83b00fef0db04f| journal = IEEE Transactions on Neural Networks| volume = 8 | issue = 3| pages = 791–794 | doi=10.1109/72.572115| pmid = 18255679 | s2cid = 18628649 }}</ref><ref>Caraig, Lou Marvin. "[https://arxiv.org/abs/1207.5774 A New Training Algorithm for Kanerva's Sparse Distributed Memory]." arXiv preprint arXiv:1207.5774 (2012).</ref> Although the storage locations are initially distributed randomly in the binary N address space, the final distribution of locations depends upon the input patterns presented, and may be non-random, allowing better flexibility and [[Generalization error|generalization]]. The data pattern is first stored at the locations that lie closest to the input address. The signal (i.e. the data pattern) then spreads throughout the memory, and a small percentage of the signal strength (e.g. 5%) is lost at each subsequent location encountered. Distributing the signal in this way removes the need to select a read/write radius, one of the problematic features of the original SDM. Locations selected in a write operation no longer receive a copy of the original binary pattern with equal strength; instead they receive a copy of the pattern weighted with a real value from 1.0 down to 0.05, stored in real-valued counters (rather than the binary counters of Kanerva's SDM). This rewards the nearest locations with a greater signal strength, and uses the natural architecture of the SDM to attenuate the signal strength. Similarly, when reading from the memory, output from the nearest locations is given a greater weight than output from more distant locations. The new signal method allows the total signal strength received by a location to be used as a measure of the fitness of a location, and is flexible to varying input, since the loss factor does not have to be changed for input patterns of different lengths (a weighted write/read sketch appears after this list).
* SDMSCue (Sparse Distributed Memory for Small Cues): Ashraf Anwar and Stan Franklin at the University of Memphis introduced this variant of SDM, capable of handling small cues, in 2002. The key idea is to use multiple reads/writes and space projections to reach a successively longer cue.<ref>{{Cite book|title = A Sparse Distributed Memory Capable of Handling Small Cues, SDMSCue|publisher = Springer US|date = 2005-01-01|isbn = 978-0-387-24048-0|pages = 23–38|series = IFIP — The International Federation for Information Processing|language = en|first1 = Ashraf|last1 = Anwar|first2 = Stan|last2 = Franklin|editor-first = Michael K.|editor-last = Ng|editor-first2 = Andrei|editor-last2 = Doncescu|editor-first3 = Laurence T.|editor-last3 = Yang|editor-first4 = Tau|editor-last4 = Leng|doi = 10.1007/0-387-24049-7_2}}</ref>
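For the Integer SDM above, a minimal sketch of the circular (modular) distance that can replace Hamming distance, assuming vector components live in '''Z'''<sub>''r''</sub> (illustrative Python; <code>mod_distance</code> is a hypothetical name, not taken from the cited paper):

<syntaxhighlight lang="python">
import numpy as np

def mod_distance(a, b, r):
    # Component-wise circular distance in Z_r, summed over the vector:
    # |7 - 1| in Z_8 is 2 (wrap-around), not 6.
    d = np.abs(a - b)
    return np.minimum(d, r - d).sum()

rng = np.random.default_rng(1)
r, n = 16, 100                                 # modulus and dimension
a = rng.integers(0, r, size=n)
b = (a + rng.integers(-2, 3, size=n)) % r      # noisy copy of a, with wrap-around
print(mod_distance(a, b, r))                   # small despite components wrapping
</syntaxhighlight>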
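For the larger-word-vector extension, one illustrative way to store sequences is to let each stored word carry both its content and the address of the next element, so a sequence is recovered by repeated reads. This is a sketch of the idea only: a Python dictionary stands in for a real SDM, and <code>sdm_write</code>/<code>sdm_read</code> are hypothetical names rather than the cited paper's API.

<syntaxhighlight lang="python">
import numpy as np

N = 256                                    # address length; words are 2N bits here
store = {}                                 # a dict stands in for a real (noisy) SDM

def sdm_write(addr, word): store[addr.tobytes()] = word
def sdm_read(addr):        return store[addr.tobytes()]

def write_sequence(items):
    # Word = content ++ address of the next element in the sequence.
    for cur, nxt in zip(items, items[1:]):
        sdm_write(cur, np.concatenate([cur, nxt]))

def read_sequence(start, length):
    out, addr = [start], start
    for _ in range(length - 1):
        addr = sdm_read(addr)[N:]          # the word's tail points to the next item
        out.append(addr)
    return out
</syntaxhighlight>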
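For the spiking-neuron adaptations, an N-of-M code constrains exactly N of M components to be active. A minimal illustrative encoder (the <code>n_of_m</code> helper is hypothetical, not the cited papers' implementation):

<syntaxhighlight lang="python">
import numpy as np

def n_of_m(values, n):
    # N-of-M code: exactly n of the m components fire; here the n largest
    # inputs are chosen (a rank-order code would additionally keep their order).
    code = np.zeros(len(values), dtype=np.int8)
    code[np.argsort(values)[-n:]] = 1
    return code

print(n_of_m(np.array([0.1, 0.9, 0.4, 0.7]), 2))   # -> [0 1 0 1]
</syntaxhighlight>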
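For the non-random distribution of locations, a sketch of the weighted write and read described above, assuming a 5% signal loss per location encountered and a 0.05 cutoff (illustrative Python; all names are hypothetical):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 1000
addresses = rng.integers(0, 2, size=(M, N), dtype=np.int8)
counters = np.zeros((M, N))               # real-valued counters, not binary

LOSS, FLOOR = 0.05, 0.05                  # 5% lost per location; stop below 0.05

def weights(x):
    # Visit locations in order of distance from x, attenuating the signal by
    # LOSS at each one, so no explicit read/write radius has to be selected.
    order = np.argsort(np.count_nonzero(addresses != x, axis=1))
    w, strength = np.zeros(M), 1.0
    for loc in order:
        if strength < FLOOR:
            break
        w[loc] = strength
        strength *= 1.0 - LOSS
    return w

def write(x, data):
    counters[:] += np.outer(weights(x), np.where(data == 1, 1.0, -1.0))

def read(x):
    # Nearer locations contribute more strongly to the recalled word.
    return (weights(x) @ counters > 0).astype(np.int8)
</syntaxhighlight>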
==Implementation==
* [[Python (programming language)|Python]] and [[OpenCL]] implementation: https://github.com/msbrogli/sdm-framework<ref name="PMC4009432"/>
* [[APL (programming language)|APL]] implementation<ref>{{cite journal|doi=10.1145/144052.144142|title=WSDM: Weighted sparse distributed memory prototype expressed in APL|journal=ACM SIGAPL APL Quote Quad|volume=23|pages=235–242|year=1992|last1=Surkan|first1=Alvin J.}}</ref>
* [[LISP]] implementation for the [[Connection Machine]]<ref>Turk, Andreas, and Günther Görz. [https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.47.3163&rep=rep1&type=pdf "Kanerva's sparse distributed memory: an object-oriented implementation on the connection machine."] IJCAI. 1995.</ref>
* [[FPGA]] implementation<ref>{{cite journal | last1 = Silva | first1 = Marcus Tadeu Pinheiro | last2 = Braga | first2 = Antônio Pádua | last3 = Lacerda | first3 = Wilian Soares | year = 2004 | title = Reconfigurable co-processor for Kanerva's sparse distributed memory | url = https://www.researchgate.net/publication/220055514 | format = PDF | journal = Microprocessors and Microsystems | volume = 28 | issue = 3| pages = 127–134 | doi = 10.1016/j.micpro.2004.01.003 }}</ref>
* The original hardware implementation developed by [[NASA]]<ref name="flynn89"/>
==See also==
* [[Associative neural memories]]<ref>Hassoun, Mohamad H. Associative neural memories. Oxford University Press, Inc., 1993.</ref>
* [[Autoassociative memory]]
* [[Binary spatter codes]]<ref>Kanerva, Pentti. [https://link.springer.com/chapter/10.1007/3-540-61510-5_146 "Binary spatter-coding of ordered K-tuples."] Artificial Neural Networks—ICANN 96. Springer Berlin Heidelberg, 1996. 869-873.</ref>
* [[Cerebellar model articulation controller|Associative-memory models of the cerebellum]]
* [[Content-addressable memory]]
* [[Memory networks]]<ref>Weston, Jason, Sumit Chopra, and Antoine Bordes. "Memory networks." arXiv preprint arXiv:1410.3916 (2014).</ref>
* [[Memory-prediction framework]]
* Pointer Networks<ref>Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. "Pointer networks." arXiv preprint arXiv:1506.03134 (2015).</ref>
* [[Random-access memory]] (as a special case of SDM)<ref name="psu"/>
* [[Random indexing]]<ref>Joshi, Aditya, Johan Halseth, and Pentti Kanerva. "[https://arxiv.org/abs/1412.7026 Language Recognition using Random Indexing]." arXiv preprint arXiv:1412.7026 (2014).</ref>