{{Short description|Mathematical model of memory}}
'''Sparse distributed memory''' ('''SDM''') is a mathematical model of human [[long-term memory]] introduced by [[Pentti Kanerva]] in 1988 while he was at [[Ames Research Center|NASA Ames Research Center]]. It is a generalized [[random-access memory]] (RAM) for long (e.g., 1,000 bit) binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits (i.e., the [[Hamming distance]] between [[memory address]]es).<ref name=book>{{cite book|last=Kanerva|first=Pentti|title=Sparse Distributed Memory|year=1988|publisher=The MIT Press|isbn=978-0-262-11132-4}}</ref>
This memory exhibits behaviors, both in theory and in experiment, that machines had previously not approached – e.g., rapid recognition of faces or odors, or discovery of new connections between seemingly unrelated ideas. Sparse distributed memory is used for storing and retrieving large amounts (<math>2^{1000}</math> [[Bit|bits]]) of information, focusing not on the accuracy but on the similarity of information.<ref name="book" /> There are some recent applications in robot navigation<ref>{{cite book |last1=Mendes |first1=Mateus |title=2008 IEEE International Conference on Robotics and Automation |last2=Crisostomo |first2=Manuel |last3=Coimbra |first3=A. Paulo |year=2008 |isbn=978-1-4244-1646-2 |pages=53–58 |chapter=Robot navigation using a sparse distributed memory |doi=10.1109/ROBOT.2008.4543186 |s2cid=10977460}}</ref> and experience-based robot manipulation.<ref>{{cite book |last1=Jockel |first1=S. |title=2008 IEEE International Conference on Robotics and Biomimetics |last2=Lindner |first2=F. |last3=Jianwei Zhang |year=2009 |isbn=978-1-4244-2678-2 |pages=1298–1303 |chapter=Sparse distributed memory for experience-based robot manipulation |doi=10.1109/ROBIO.2009.4913187 |s2cid=16650992}}</ref>
== General principle ==
SDM implements transformation from logical space to physical space using distributed data representation and storage, similarly to [[Encoding (memory)|encoding]] processes in human memory.<ref>{{cite journal | last1 = Rissman | first1 = Jesse | last2 = Wagner | first2 = Anthony D. | year = 2012 | title = Distributed representations in memory: insights from functional brain imaging | journal = Annual Review of Psychology | volume = 63 | pages = 101–28 | doi=10.1146/annurev-psych-120710-100344| pmc = 4533899 | pmid=21943171}}</ref> A value corresponding to a logical address is stored into many physical addresses. This way of storing is robust and not deterministic. A memory cell is not addressed directly. Even if the input data (logical addresses) are partially damaged, we can still recover the correct output data.<ref name="greb2"/>
==Definition==
Human memory has a tendency to [[Multiple trace theory|congregate memories]] based on similarities between them (although they may not be related), such as "firetrucks are red and apples are red".<ref name=ship>{{cite web|title=General Psychology|url=http://webspace.ship.edu/cgboer/memory.html|publisher=Shippensburg University|author=C. George Boeree|year=2002}}</ref> Sparse distributed memory is a mathematical representation of human memory, and uses [[Clustering high-dimensional data|high-dimensional space]] to help model the large amounts of memory that mimic the human neural network.<ref name=psu>{{cite journal|title=Sparse Distributed Memory and Related Models|pages=50–76|citeseerx=10.1.1.2.8403|publisher=Pennsylvania State University|author=Pentti Kanerva|year=1993}}</ref> An important property of such high-dimensional spaces is that two randomly chosen vectors are relatively far away from each other, meaning that they are uncorrelated.<ref name=integerSDM/> SDM can be considered a realization of [[locality-sensitive hashing]].
The underlying idea behind an SDM is the mapping of a huge binary memory onto a smaller set of physical locations, so-called ''hard locations''. As a general guideline, those hard locations should be uniformly distributed in the [[virtual memory|virtual space]], to mimic the existence of the larger virtual space as accurately as possible. Every datum is stored distributed over a set of hard locations, and retrieved by averaging those locations. Therefore, recall may not be perfect; accuracy depends on the saturation of the memory.
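The write and read mechanism described above can be sketched as follows. This is a minimal, illustrative Python toy, not a reference implementation: the dimensions, the number of hard locations, and the activation radius are arbitrary small values rather than Kanerva's canonical parameters, and the class and function names are made up for this sketch.

```python
import random

random.seed(0)

N_DIM = 256    # word length n (illustrative; Kanerva's canonical example uses 1,000)
N_HARD = 2000  # number of hard locations (illustrative)
RADIUS = 112   # activation radius in Hamming distance (illustrative)

def rand_word():
    return random.getrandbits(N_DIM)

def hamming(a, b):
    return bin(a ^ b).count("1")

class SDM:
    def __init__(self):
        self.addresses = [rand_word() for _ in range(N_HARD)]
        # one signed counter per bit position in every hard ___location
        self.counters = [[0] * N_DIM for _ in range(N_HARD)]

    def _active(self, addr):
        # all hard locations within RADIUS of the reference address
        return [i for i, a in enumerate(self.addresses)
                if hamming(addr, a) <= RADIUS]

    def write(self, addr, word):
        for i in self._active(addr):
            for k in range(N_DIM):
                self.counters[i][k] += 1 if (word >> k) & 1 else -1

    def read(self, addr):
        active = self._active(addr)
        sums = [sum(self.counters[i][k] for i in active) for k in range(N_DIM)]
        # majority vote per bit over the activated locations
        return sum(1 << k for k in range(N_DIM) if sums[k] > 0)

mem = SDM()
target = rand_word()
mem.write(target, target)  # autoassociative storage: address = data

# probe with a noisy address: flip 30 of the 256 bits
noisy = target
for k in random.sample(range(N_DIM), 30):
    noisy ^= 1 << k
recovered = mem.read(noisy)
print(hamming(recovered, target))
```

Because the noisy probe still activates many of the hard locations used during the write, the majority vote recovers the stored word even though no single ___location was addressed exactly.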
Kanerva's proposal is based on four basic ideas:<ref>Mendes, Mateus Daniel Almeida. "Intelligent robot navigation using a sparse distributed memory." Phd thesis, (2010). URL: https://eg.sib.uc.pt/handle/10316/17781 {{Webarchive|url=https://web.archive.org/web/20160304073500/https://eg.sib.uc.pt/handle/10316/17781 |date=2016-03-04 }}</ref>
===The binary space N ===
The SDM works with n-dimensional vectors with binary components. Depending on the context, the vectors are called points, patterns, addresses, words, memory items, data, or events. This section is mostly about the properties of the vector space N = <math>\{0,1\}^n</math>. Let n be the number of dimensions of the space. The number of points, or possible memory items, is then <math>2^n</math>. We will denote this number by N and will use N and <math>2^n</math> to stand also for the space itself.<ref name="greb2">Grebeníček, František. "Sparse Distributed Memory – Pattern Data Analysis". URL: http://www.fit.vutbr.cz/~grebenic/Publikace/mosis2000.pdf</ref>
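The concentration of distances in this space, the property that two randomly chosen vectors lie close to n/2 bits apart, can be checked numerically. The following small Python experiment uses arbitrary sample sizes; it is illustrative only.

```python
import random

random.seed(1)
n = 1000  # number of dimensions, as in Kanerva's canonical example

def hamming(a, b):
    return bin(a ^ b).count("1")

# distances between 2,000 pairs of independent random 1,000-bit words
dists = [hamming(random.getrandbits(n), random.getrandbits(n))
         for _ in range(2000)]

mean = sum(dists) / len(dists)
spread = max(dists) - min(dists)
print(mean)    # close to n/2 = 500
print(spread)  # small relative to n: distances concentrate near 500
```

The standard deviation of the distance is <math>\sqrt{n}/2 \approx 16</math> for n = 1000, so virtually all random pairs differ in roughly 450–550 bits, which is why randomly chosen vectors can be treated as uncorrelated.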
The space N can be represented by the vertices of the unit cube in n-dimensional [[Euclidean space]]. The vertices lie on the surface of an n-dimensional sphere with (Euclidean-metric) radius <math>\sqrt{n}/2 </math>. This gives rise to the [[sphere]] analogy. We will call a space spherical if any point of it can be mapped onto any other point by an automorphism of the space, so that the space "looks the same" from every point.
The surface of a sphere (in Euclidean 3d-space) clearly is spherical. According to this definition, N is also spherical, since for any two points x and y the map <math>z \mapsto y \oplus x \oplus z</math> is an automorphism that maps x to y.
===As neural network===
The SDM may be regarded either as a [[content-addressable memory|content-addressable]] extension of a classical [[random-access memory]] (RAM) or as a special type of three layer [[feedforward neural network]]. The main SDM alterations to the RAM are:<ref name="Grebenıcek">Grebenıcek, František. Neural Nets as Associative Memories. Diss. Brno University of Technology, 2001. URL: http://www.vutium.vutbr.cz/tituly/pdf/ukazka/80-214-1914-8.pdf {{Webarchive|url=https://web.archive.org/web/20160304132149/http://www.vutium.vutbr.cz/tituly/pdf/ukazka/80-214-1914-8.pdf |date=2016-03-04 }}</ref>
*The SDM calculates [[Hamming distance]]s between the reference address and each ___location address. Each ___location whose distance is less than or equal to the given radius is selected.
* a contents portion that is M bits wide and that can accumulate multiple M-bit data patterns written into the ___location. The contents portion is not fixed; it is modified by data patterns written into the memory.
In SDM, a word can be stored in memory by writing it to a free storage ___location while providing the ___location with the appropriate address decoder. A neuron acting as an address decoder selects a ___location based on the similarity of the ___location's address to the retrieval cue. Unlike conventional [[Turing machines]], SDM takes advantage of ''[[parallel computing]] by the address decoders''. Merely ''accessing the memory'' is regarded as computing, the amount of which increases with memory size.<ref name="book"/>
====Address pattern====
===Critical distance===
Kanerva's model of memory has a concept of a ''critical point'': prior to this point, a previously stored item can be easily retrieved; but beyond this point an item cannot be retrieved. Kanerva has methodically calculated this point for a particular set of (fixed) parameters.
The corresponding ''critical distance'' of a sparse distributed memory can be approximately evaluated by minimizing an expression over <math>d</math>, with the restrictions <math>d \in \mathbb{N}</math> and <math>d \leqslant n</math>. The expression to be minimized, and its proof, can be found in the following references.<ref name="msbrogli">{{Cite thesis |author=Brogliato, Marcelo Salhab |title=Understanding Critical Distance in Sparse Distributed Memory |year=2012 |hdl=10438/13095 }}</ref><ref>{{cite journal|last1=Brogliato|first1=Marcelo Salhab|last2=Chada|first2=Daniel de Magalhães|last3=Linhares|first3=Alexandre|title=Sparse Distributed Memory: understanding the speed and robustness of expert memory|journal=Frontiers in Human Neuroscience|date=2014|volume=8|issue=222|pages=222|doi=10.3389/fnhum.2014.00222|pmid=24808842|pmc=4009432|doi-access=free }}</ref>
The quantities involved are:
* <math>d</math>: is the distance to the target;
* <math>n</math>: is the number of dimensions;
* <math>h</math>: is the number of hard locations activated by a read operation;
* <math>w</math>: is the number of times the target bitstring was written in memory;
* <math>\theta</math>: is the total of random bitstrings in all <math>h</math> hard-locations activated by a read operation; i.e., the size of a cell assembly;
* <math>shared(d)</math>: is the mean number of shared hard-locations activated by two bitstrings <math>d</math> bits away from each other. One can find some values for a 1000-dimensional SDM in Kanerva's book, Table 7.1, p. 63, or the equations to calculate to any SDM in Appendix B, p. 125 of the same book.
==Probabilistic interpretation==
An [[associative memory (psychology)|associative memory]] system using [[Hierarchical temporal memory#Sparse distributed representations|sparse, distributed representations]] can be reinterpreted as an [[Importance sampling|importance sampler]], a [[Monte Carlo method|Monte Carlo]] method of approximating [[Bayesian inference]].<ref>Abbott, Joshua T., Jessica B. Hamrick, and Thomas L. Griffiths. "[https://web.archive.org/web/20170911115555/https://pdfs.semanticscholar.org/7f50/8bb0bf0010884a4be72f2774635514fc58ec.pdf Approximating Bayesian inference with a sparse distributed memory system]." Proceedings of the 35th annual conference of the cognitive science society. 2013.</ref> The SDM can be considered a Monte Carlo approximation to a multidimensional [[conditional probability]] integral. The SDM will produce acceptable responses from a training set when this approximation is valid, that is, when the training set contains sufficient data to provide good estimates of the underlying [[Joint probability distribution|joint probabilities]] and there are enough Monte Carlo samples to obtain an accurate estimate of the integral.<ref>{{cite book|doi=10.1109/ijcnn.1989.118597|chapter=A conditional probability interpretation of Kanerva's sparse distributed memory|title=International Joint Conference on Neural Networks|pages=415–417|volume=1|year=1989|last1=Anderson|s2cid=13935339}}</ref>
[[Sparse coding]] may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such a task requires implementing stimulus-specific [[associative memory (psychology)|associative memories]] in which only a few neurons out of a [[Neural ensemble|population]] respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.
Theoretical work on SDM by Kanerva has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision,<ref>{{cite journal | last1 = Vinje | first1 = WE | last2 = Gallant | first2 = JL | year = 2000 | title = Sparse coding and decorrelation in primary visual cortex during natural vision | url = https://pdfs.semanticscholar.org/3efc/4ac8f70edde57661b908105f4fd21a43fbab.pdf | archive-url = https://web.archive.org/web/20170911115737/https://pdfs.semanticscholar.org/3efc/4ac8f70edde57661b908105f4fd21a43fbab.pdf | url-status = dead | archive-date = 2017-09-11 | journal = Science | volume = 287 | issue = 5456| pages = 1273–1276 | pmid = 10678835 | doi = 10.1126/science.287.5456.1273 | citeseerx = 10.1.1.456.2467 | bibcode = 2000Sci...287.1273V | s2cid = 13307465 }}</ref> audition,<ref>{{cite journal | last1 = Hromádka | first1 = T | last2 = Deweese | first2 = MR | last3 = Zador | first3 = AM | year = 2008 | title = Sparse representation of sounds in the unanesthetized auditory cortex | journal = PLOS Biol | volume = 6 | issue = 1| page = e16 | pmid = 18232737 | doi=10.1371/journal.pbio.0060016 | pmc=2214813 | doi-access = free }}</ref> touch,<ref>{{cite journal | last1 = Crochet | first1 = S | last2 = Poulet | first2 = JFA | last3 = Kremer | first3 = Y | last4 = Petersen | first4 = CCH | year = 2011 | title = Synaptic mechanisms underlying sparse coding of active touch | journal = Neuron | volume = 69 | issue = 6| pages = 1160–1175 | pmid = 21435560 | doi=10.1016/j.neuron.2011.02.022| s2cid = 18528092 | doi-access = free }}</ref> and olfaction.<ref>{{cite journal | last1 = Ito | first1 = I | last2 = Ong | first2 = RCY | last3 = Raman | first3 = B | last4 = Stopfer | first4 = M | year = 2008 | title = Sparse odor representation and olfactory learning | journal = Nat Neurosci | volume = 11 | issue = 10| 
pages = 1177–1184 | pmid = 18794840 | pmc=3124899 | doi=10.1038/nn.2192}}</ref> However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been lacking until recently.
Some progress was made in 2014 by [[Gero Miesenböck]]'s lab at the [[University of Oxford]] analyzing the [[Drosophila]] [[olfactory system]].<ref>A sparse memory is a precise memory. Oxford Science blog. 28 Feb 2014. http://www.ox.ac.uk/news/science-blog/sparse-memory-precise-memory</ref>
In Drosophila, sparse odor coding by the [[Kenyon cell]]s of the [[Mushroom bodies|mushroom body]] is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. Lin et al.<ref>{{cite journal | last1 = Lin | first1 = Andrew C. |display-authors=etal | year = 2014 | title = Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination | journal = Nature Neuroscience | volume = 17 | issue = 4| pages = 559–568 | pmc=4000970 | pmid=24561998 | doi=10.1038/nn.3660}}</ref> demonstrated that sparseness is controlled by a negative feedback circuit between Kenyon cells and the [[GABAergic]] anterior paired lateral (APL) neuron. Systematic activation and blockade of each leg of this feedback circuit showed that Kenyon cells activate APL and APL inhibits Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories. A 2017 publication in [[Science_(journal)|Science]]<ref>{{cite journal|doi=10.1126/science.aam9868|pmid=29123069|title=A neural algorithm for a fundamental computing problem|journal=Science|volume=358|issue=6364|pages=793–796|year=2017|last1=Dasgupta|first1=Sanjoy|last2=Stevens|first2=Charles F.|last3=Navlakha|first3=Saket|bibcode=2017Sci...358..793D|doi-access=free}}</ref> showed that the fly olfactory circuit implements an improved version of binary [[locality sensitive hashing]] via sparse, random projections.
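The hashing scheme studied by Dasgupta et al. can be sketched roughly as follows. This Python toy is illustrative only: the sizes (`D_IN`, `D_OUT`, `FANIN`, `K`) and all function names are made up for this sketch, not taken from the paper; each model "Kenyon cell" sums a few randomly chosen input dimensions (a sparse random projection), and a winner-take-all step keeps only the most active cells as the sparse tag.

```python
import random

random.seed(2)

D_IN, D_OUT, FANIN, K = 50, 2000, 6, 100  # illustrative sizes, not the paper's

# each model "Kenyon cell" sums a few randomly chosen input dimensions
conn = [random.sample(range(D_IN), FANIN) for _ in range(D_OUT)]

def fly_hash(x):
    acts = [sum(x[i] for i in idx) for idx in conn]
    # winner-take-all: the hash is the set of the K most active cells
    return set(sorted(range(D_OUT), key=lambda j: acts[j], reverse=True)[:K])

def overlap(a, b):
    return len(a & b) / K

x = [random.random() for _ in range(D_IN)]
near = [v + random.gauss(0, 0.01) for v in x]  # slightly perturbed input
far = [random.random() for _ in range(D_IN)]   # unrelated input

h_x, h_near, h_far = fly_hash(x), fly_hash(near), fly_hash(far)
print(overlap(h_x, h_near))  # high: similar inputs keep most active cells
print(overlap(h_x, h_far))   # low: unrelated inputs share few active cells
```

Similar inputs produce tags with large overlap while dissimilar inputs produce nearly disjoint tags, which is the locality-sensitive property that links this circuit to SDM-style addressing.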
==Applications==
At the University of Memphis, Uma Ramamurthy, Sidney K. D'Mello, and Stan Franklin created a modified version of the sparse distributed memory system that represents "realizing forgetting". It uses a decay equation to better show interference in data. The sparse distributed memory system distributes each pattern into approximately one hundredth of the locations,{{Clarify|date=March 2013}} so interference can have detrimental results.<ref name=memphis>{{cite web|title=Realizing Forgetting in a Modified Sparse Distributed Memory System|url=http://csjarchive.cogsci.rpi.edu/proceedings/2006/docs/p1992.pdf|work=Computer Science Department and The Institute for Intelligent Systems|publisher=The University of Memphis|access-date=1 November 2011|author1=Uma Ramamurthy}}</ref>
Two possible examples of decay from this modified sparse distributed memory are presented in that work.
Ashraf Anwar, Stan Franklin, and Dipankar Dasgupta at the University of Memphis proposed a model for SDM initialization using genetic algorithms and genetic programming (1999).
[[Genetic memory (computer science)|Genetic memory]] uses a genetic algorithm and sparse distributed memory as a pseudo artificial neural network. It has been considered for use in creating artificial life.<ref name="Rocha">{{cite journal |vauthors=Rocha LM, Hordijk W |title=Material representations: From the genetic code to the evolution of cellular automata |journal=Artificial Life |volume=11}}</ref>
===Statistical prediction===
SDM has been applied to statistical [[prediction]], the task of associating extremely large perceptual state vectors with future events. In conditions of near- or over-capacity, where the associative memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor, and each data counter in an SDM can be viewed as an independent estimate of the conditional probability of a binary function f being equal to the activation set defined by the counter's memory ___location.<ref>Rogers, David. [https://proceedings.neurips.cc/paper/1988/file/9b8619251a19057cff70779273e95aa6-Paper.pdf "Statistical prediction with Kanerva's sparse distributed memory."] Advances in Neural Information Processing Systems. 1989.</ref>
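The counter-as-probability-estimate view can be illustrated with a toy calculation. In this Python sketch the bit probability (0.7) and the number of writes are arbitrary assumptions: a counter is incremented when the written bit is 1 and decremented when it is 0, so its normalized value recovers an estimate of the underlying bit probability.

```python
import random

random.seed(3)

# simulate 1,000 writes reaching one ___location: each write's bit is 1 with p = 0.7
writes = [1 if random.random() < 0.7 else 0 for _ in range(1000)]

# an SDM data counter: +1 for a written 1, -1 for a written 0
counter = sum(1 if b else -1 for b in writes)

# invert the +1/-1 coding: counter/n ≈ 2p - 1, so p ≈ (counter/n + 1)/2
p_hat = (counter / len(writes) + 1) / 2
print(p_hat)  # close to 0.7
```

In this sense each counter behaves like an independent Monte Carlo estimate of a conditional probability, which is what allows the over-capacity memory to be read as a statistical predictor.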
===Artificial general intelligence===
{{see also|Cognitive architecture|Artificial general intelligence}}
*[[LIDA (cognitive architecture)|LIDA]] uses sparse distributed memory to help model [[cognition]] in biological systems. The sparse distributed memory space recalls or recognizes an object in relation to other objects. It was developed by Stan Franklin, the creator of the "realizing forgetting" modified sparse distributed memory system.<ref name=psdm>{{cite journal | last1 = Rao | first1 = R. P. N. | last2 = Fuentes | first2 = O. | year = 1998 | title = Hierarchical Learning of Navigational Behaviors in an Autonomous Robot using a Predictive Sparse Distributed Memory }}</ref>
*[[CMatie]] is a [[Artificial consciousness|'conscious']] software agent developed to manage seminar announcements in the Mathematical Sciences Department at the [[University of Memphis]]. It uses sparse distributed memory as its [[associative memory (psychology)|associative memory]].
*[[Hierarchical temporal memory]] utilizes SDM for storing sparse distributed representations of the data.
===Reinforcement learning===
SDMs provide a linear, local [[function approximation]] scheme, designed to work when a very large/high-dimensional input (address) space has to be mapped into a much smaller [[Computer memory|physical memory]]. In general, local architectures, SDMs included, can be subject to the [[curse of dimensionality]], as some target functions may require, in the worst case, an exponential number of local units to be approximated accurately across the entire input space. However, it is widely believed that most [[Decision support system|decision-making systems]] need high accuracy only around low-dimensional [[manifolds]] of the [[state space]], or important state "highways".<ref>Ratitch, Bohdana, Swaminathan Mahadevan, and [[Doina Precup]]. "Sparse distributed memories in reinforcement learning: Case studies." Proc. of the Workshop on Learning and Planning in Markov Processes-Advances and Challenges. 2004.</ref> The work in Ratitch et al.<ref>Ratitch, Bohdana, and Doina Precup. "[http://www.cs.mcgill.ca/~dprecup/temp/ecml2004.pdf Sparse distributed memories for on-line value-based reinforcement learning] {{Webarchive|url=https://web.archive.org/web/20150824061329/http://www.cs.mcgill.ca/~dprecup/temp/ecml2004.pdf |date=2015-08-24 }}." Machine Learning: ECML 2004. Springer Berlin Heidelberg, 2004. 347-358.</ref> combined the SDM memory model with the ideas from [[instance-based learning|memory-based learning]], which provides an approximator that can dynamically adapt its structure and resolution in order to locate regions of the state space that are "more interesting"<ref>Bouchard-Côté, Alexandre. "[https://www.stat.ubc.ca/~bouchard/pub/report-ml.pdf Sparse Memory Structures Detection]." (2004).</ref> and allocate proportionally more memory resources to model them accurately.
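This use of SDM as a local value-function approximator can be sketched as follows. The Python toy below is a simplified assumption-laden illustration, not the scheme of Ratitch et al.: states are assumed to be binary-coded, parameters are arbitrary, and the update simply distributes the prediction error over the activated hard locations.

```python
import random

random.seed(4)

N_DIM, N_HARD, RADIUS, LR = 64, 500, 24, 0.1  # illustrative parameters

def hamming(a, b):
    return bin(a ^ b).count("1")

# hard locations, each holding one scalar value estimate
addresses = [random.getrandbits(N_DIM) for _ in range(N_HARD)]
values = [0.0] * N_HARD

def active(state):
    # locations whose address is within RADIUS of the (binary-coded) state
    return [i for i, a in enumerate(addresses)
            if hamming(state, a) <= RADIUS]

def predict(state, acts=None):
    acts = active(state) if acts is None else acts
    return sum(values[i] for i in acts) / len(acts) if acts else 0.0

def update(state, target):
    # distribute the prediction error over all active locations
    acts = active(state)
    err = target - predict(state, acts)
    for i in acts:
        values[i] += LR * err

state = random.getrandbits(N_DIM)
for _ in range(50):
    update(state, 1.0)  # repeatedly observe a return of 1.0 in this state
print(predict(state))   # approaches 1.0
```

Because each observed target only adjusts the locations near the visited state, the approximator is local: estimates generalize to nearby states (which share active locations) while leaving distant regions of the state space untouched.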
===Object indexing in computer vision===
Many extensions and improvements to SDM have been proposed, e.g.:
* Ternary memory space: This enables the memory to be used as a Transient Episodic Memory (TEM) in [[Cognitive architecture|cognitive software agents]]. TEM is a memory with high specificity and low retention, used for events having features of a particular time and place.<ref>D'Mello, Sidney K., Ramamurthy, U., & Franklin, S. 2005. [http://escholarship.org/uc/item/2b78w526.pdf Encoding and Retrieval Efficiency of Episodic Data in a Modified Sparse Distributed Memory System]. In Proceedings of the 27th Annual Meeting of the Cognitive Science Society. Stresa, Ital</ref><ref>Ramamaurthy, U., Sidney K. D'Mello, and Stan Franklin. "[https://www.academia.edu/download/43397052/modifed_20sparse_20Distributed_20Memory_20as_20TSM_20for_20CSA.pdf Modified sparse distributed memory as transient episodic memory for cognitive software agents]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}." Systems, Man and Cybernetics, 2004 IEEE International Conference on. Vol. 6. IEEE, 2004.</ref>
* Integer SDM that uses modular arithmetic integer vectors rather than binary vectors. This extension improves the representation capabilities of the memory and is more robust over normalization. It can also be extended to support forgetting and reliable sequence storage.<ref name="integerSDM">Snaider, Javier, and Stan Franklin. "[http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS12/paper/viewFile/4409/4781 Integer sparse distributed memory] {{Webarchive|url=https://web.archive.org/web/20210802030951/https://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS12/paper/viewFile/4409/4781 |date=2021-08-02 }}." Twenty-fifth international flairs conference. 2012.</ref>
* Using word vectors of larger size than address vectors: This extension preserves many of the desirable properties of the original SDM: auto-associability, content addressability, distributed storage and robustness over noisy inputs. In addition, it adds new functionality, enabling an efficient auto-associative storage of sequences of vectors, as well as of other data structures such as trees.<ref>{{cite journal | last1 = Snaider | first1 = Javier | last2 = Franklin | first2 = Stan | year = 2012 | title = Extended sparse distributed memory and sequence storage }}</ref>
* Constructing SDM from [[Biological neuron model|Spiking Neurons]]: Despite the biological likeness of SDM most of the work undertaken to demonstrate its capabilities to date has used highly artificial neuron models which abstract away the actual behaviour of [[neurons]] in the [[brain]]. Recent work by [[Steve Furber]]'s lab at the [[University of Manchester]]<ref>{{cite journal | last1 = Furber | first1 = Steve B. |display-authors=etal | year = 2004 | title = Sparse distributed memory using N-of-M codes | journal = Neural Networks | volume = 17 | issue = 10| pages = 1437–1451 | doi=10.1016/j.neunet.2004.07.003| pmid = 15541946 }}</ref><ref>Sharp, Thomas: "[https://studentnet.cs.manchester.ac.uk/resources/library/thesis_abstracts/MSc09/FullText/SharpThomas.pdf Application of sparse distributed memory to the Inverted Pendulum Problem]". Diss. University of Manchester, 2009. URL: http://studentnet.cs.manchester.ac.uk/resources/library/thesis_abstracts/MSc09/FullText/SharpThomas.pdf</ref><ref>Bose, Joy. [https://www.academia.edu/download/7385022/bose07_phd.pdf Engineering a Sequence Machine Through Spiking Neurons Employing Rank-order Codes]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}. Diss. University of Manchester, 2007.</ref> proposed adaptations to SDM, e.g. by incorporating N-of-M rank codes<ref>Simon Thorpe and Jacques Gautrais. [https://www.researchgate.net/profile/Jacques-Gautrais/publication/285068799_Rank_order_coding_Computational_neuroscience_trends_in_research/links/587ca2e108ae4445c069772a/Rank-order-coding-Computational-neuroscience-trends-in-research.pdf Rank order coding.] In Computational Neuroscience: Trends in research, pages 113–118. Plenum Press, 1998.</ref><ref>{{cite journal | last1 = Furber | first1 = Stephen B. 
|display-authors=etal | year = 2007 | title = Sparse distributed memory using rank-order neural codes | journal = IEEE Transactions on Neural Networks| volume = 18 | issue = 3| pages = 648–659 | doi=10.1109/tnn.2006.890804| pmid = 17526333 | citeseerx = 10.1.1.686.6196 | s2cid = 14256161 }}</ref> into how [[Neural coding#Population coding|populations of neurons]] may encode information—which may make it possible to build an SDM variant from biologically plausible components. This work has been incorporated into [[SpiNNaker|SpiNNaker (Spiking Neural Network Architecture)]] which is being used as the [[Neuromorphic engineering|Neuromorphic Computing]] Platform for the [[Human Brain Project]].<ref>{{cite journal | last1 = Calimera | first1 = A | last2 = Macii | first2 = E | last3 = Poncino | first3 = M | year = 2013 | title = The Human Brain Project and neuromorphic computing | journal = Functional Neurology | volume = 28 | issue = 3| pages = 191–6 | pmid = 24139655 | pmc=3812737}}</ref>
* Non-random distribution of locations.<ref>{{cite journal | last1 = Hely | first1 = Tim | last2 = Willshaw | first2 = David J. | last3 = Hayes | first3 = Gillian M. | year = 1997 | title = A new approach to Kanerva's sparse distributed memory }}</ref>
* SDMSCue (Sparse Distributed Memory for Small Cues): Ashraf Anwar & Stan Franklin at The University of Memphis, introduced a variant of SDM capable of Handling Small Cues; namely SDMSCue in 2002. The key idea is to use multiple Reads/Writes, and space projections to reach a successively longer cue.<ref>{{Cite book|title = A Sparse Distributed Memory Capable of Handling Small Cues, SDMSCue|publisher = Springer US|date = 2005-01-01|isbn = 978-0-387-24048-0|pages = 23–38|series = IFIP — The International Federation for Information Processing|language = en|first1 = Ashraf|last1 = Anwar|first2 = Stan|last2 = Franklin|editor-first = Michael K.|editor-last = Ng|editor-first2 = Andrei|editor-last2 = Doncescu|editor-first3 = Laurence T.|editor-last3 = Yang|editor-first4 = Tau|editor-last4 = Leng|doi = 10.1007/0-387-24049-7_2| s2cid=10290721 }}</ref>
==Related patents==
* Method and apparatus for a sparse distributed memory system US 5113507 A, [[Universities Space Research Association]], 1992<ref>Method and apparatus for a sparse distributed memory system US 5113507 A, by Louis A. Jaeckel, Universities Space Research Association, 1992</ref>
* Method and device for storing and recalling information implementing a kanerva memory system US 5829009 A, [[Texas Instruments]], 1998<ref>Method and device for storing and recalling information implementing a kanerva memory system US 5829009 A, by Gary A. Frazier, Texas Instruments Incorporated, 1998</ref>
* Digital memory, Furber, Stephen. US 7512572 B2, 2009<ref>Furber, Stephen B. "Digital memory." U.S. Patent No. 7,512,572. 31 Mar. 2009.</ref>
==Implementation==
{{external links|section|date=February 2023}}
* C Binary Vector Symbols (CBVS): includes SDM implementation in [[C (programming language)|C]] as a part of [[vector symbolic architecture]]<ref>{{cite journal|doi=10.1016/j.bica.2014.11.015|title=Vector space architecture for emergent interoperability of systems by learning from demonstration|journal=Biologically Inspired Cognitive Architectures|volume=11|pages=53–64|year=2015|last1=Emruli|first1=Blerim|last2=Sandin|first2=Fredrik|last3=Delsing|first3=Jerker|url=http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-4068 }}</ref> developed by EISLAB at [[Luleå University of Technology]]: http://pendicular.net/cbvs.php {{Webarchive|url=https://web.archive.org/web/20150925123906/http://pendicular.net/cbvs.php |date=2015-09-25 }}<ref>{{cite journal | last1 = Emruli | first1 = Blerim | last2 = Sandin | first2 = Fredrik | year = 2014 | title = Analogical mapping with sparse distributed memory: A simple model that learns to generalize from examples | url = http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-14994| journal = Cognitive Computation | volume = 6 | issue = 1| pages = 74–88 | doi=10.1007/s12559-013-9206-3| s2cid = 12139021 }}</ref>
* CommonSense ToolKit (CSTK) for realtime sensor data processing, developed at [[Lancaster University]], includes an implementation of SDM in [[C++]].
*[[Julia (programming language)|Julia]] implementation by [[Brian Hayes (scientist)|Brian Hayes]]: https://github.com/bit-player/sdm-julia <ref>The Mind Wanders by B. Hayes, 2018. url: http://bit-player.org/2018/the-mind-wanders</ref>
* [[LIDA (cognitive architecture)|Learning Intelligent Distribution Agent (LIDA)]] developed by [[Stan Franklin]]'s lab at the [[University of Memphis]] includes implementation of SDM in [[Java (programming language)|Java]]: http://ccrg.cs.memphis.edu/framework.html
* [[Python (programming language)|Python]] implementation: https://github.com/msbrogli/sdm<ref name="PMC4009432">{{cite journal | last1 = Brogliato | first1 = Marcelo S. | last2 = Chada | first2 = Daniel M. | last3 = Linhares | first3 = Alexandre | year = 2014| title = Sparse distributed memory: understanding the speed and robustness of expert memory | journal = Frontiers in Human Neuroscience | volume = 8| page = 222| pmc=4009432 | pmid=24808842 | doi=10.3389/fnhum.2014.00222| doi-access = free }}</ref>
* [[Python (programming language)|Python]] and [[OpenCL]] implementation: https://github.com/msbrogli/sdm-framework<ref name="PMC4009432"/>
* [[APL (programming language)|APL]] implementation<ref>{{cite journal|doi=10.1145/144052.144142|title=WSDM: Weighted sparse distributed memory prototype expressed in APL|journal=ACM SIGAPL APL Quote Quad|volume=23|pages=235–242|year=1992|last1=Surkan|first1=Alvin J.}}</ref>
* [[LISP]] implementation for the [[Connection Machine]]<ref>Turk, Andreas, and Günther Görz. "Kanerva's sparse distributed memory: an object-oriented implementation on the connection machine." IJCAI. 1995.</ref>
* [[FPGA]] implementation<ref>{{cite journal | last1 = Silva | last2 = Tadeu Pinheiro | first2 = Marcus | last3 = Pádua Braga | first3 = Antônio | last4 = Soares Lacerda | first4 = Wilian | year = 2004 | title = Reconfigurable co-processor for kanerva's sparse distributed memory | url = https://www.researchgate.net/publication/220055514 | format = PDF | journal = Microprocessors and Microsystems | volume = 28 | issue = 3| pages = 127–134 | doi = 10.1016/j.micpro.2004.01.003 }}</ref>
* The original hardware implementation developed by [[NASA]]<ref name="flynn89"/>
*An Implementation in C done at the Research Institute for Advanced Computer Science at NASA Ames<ref>{{Cite web|url=https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19870017134.pdf|title=Two Demonstrators and a Simulator for a Sparse Distributed Memory|last=Brown|first=Robert L.|date=June 1987|website=NASA Technical Reports Archive}}</ref>
==See also==
* [[Autoassociative memory]]
* [[Types of artificial neural networks#Dynamic|Dynamic memory networks]]
* [[Cerebellar model articulation controller|Associative-memory models of the cerebellum]]
* [[Deep learning#Memory networks|Deep learning § Memory networks]]
* [[Holographic associative memory]]
* [[Low-density parity-check code]]
* [[Memory-prediction framework]]
* [[Neural coding]]
* [[Neural Turing machine]]
* [[Random indexing]]
* [[Self-organizing map]]
* [[Semantic folding]]
* [[Semantic memory]]
* [[Semantic network]]
* Stacked [[autoencoder]]s
* [[Visual indexing theory]]
* [[Vector space model]]
==References==