{{Short description|Mathematical model of memory}}
'''Sparse distributed memory''' ('''SDM''') is a mathematical model of human [[long-term memory]] introduced by [[Pentti Kanerva]] in 1988 while he was at [[Ames Research Center|NASA Ames Research Center]].<ref name="book" />
 
 
==Definition==
Human memory has a tendency to [[Multiple trace theory|congregate memories]] based on similarities between them (although they may not be related), such as "firetrucks are red and apples are red".<ref name=ship>{{cite web|title=General Psychology|url=http://webspace.ship.edu/cgboer/memory.html|publisher=Shippensburg University|author=C. George Boeree|year=2002}}</ref> Sparse distributed memory is a mathematical representation of human memory that uses [[Clustering high-dimensional data|high-dimensional space]] to model the large amounts of memory stored in the human neural network.<ref name=psu>{{cite journal|title=Sparse Distributed Memory and Related Models|pages=50–76|citeseerx=10.1.1.2.8403|publisher=Pennsylvania State University|author=Pentti Kanerva|year=1993}}</ref><ref name=stanford>{{Cite web|title=Sparse Distributed Memory: Principles and Operation|url=ftp://reports.stanford.edu/pub/cstr/reports/csl/tr/89/400/CSL-TR-89-400.pdf|publisher=Stanford University|access-date=1 November 2011|author1=M. J. Flynn|author2=P. Kanerva|author3=N. Bhadkamkar|name-list-style=amp|date=December 1989}}{{dead link|date=May 2018|bot=InternetArchiveBot|fix-attempted=yes}}</ref> An important property of such high-dimensional spaces is that two randomly chosen vectors are relatively far away from each other, meaning that they are uncorrelated.<ref name=integerSDM/> SDM can be considered a realization of [[locality-sensitive hashing]].
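This distance property is easy to check numerically. The following minimal Python sketch (an illustration, not part of Kanerva's formulation) samples random pairs of n-bit vectors and confirms that their Hamming distances concentrate around n/2:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative check: in {0,1}^n the Hamming distance between two random vectors
# is Binomial(n, 1/2), so it concentrates around n/2 with standard deviation sqrt(n)/2.
n = 1000
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(10_000, n))
b = rng.integers(0, 2, size=(10_000, n))
d = (a != b).sum(axis=1)        # Hamming distances of 10,000 random pairs
print(d.mean(), d.std())        # ≈ 500 and ≈ 15.8 for n = 1000
</syntaxhighlight>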
 
The underlying idea behind an SDM is the mapping of a huge binary memory onto a smaller set of physical locations, so-called ''hard locations''. As a general guideline, those hard locations should be uniformly distributed in the [[virtual memory|virtual space]], to mimic the existence of the larger virtual space as accurately as possible. Every datum is stored distributed over a set of hard locations and retrieved by averaging those locations. Therefore, recall may not be perfect; accuracy depends on the saturation of the memory.
 
Kanerva's proposal is based on four basic ideas:<ref>Mendes, Mateus Daniel Almeida. "Intelligent robot navigation using a sparse distributed memory." Phd thesis, (2010). URL: https://eg.sib.uc.pt/handle/10316/17781 {{Webarchive|url=https://web.archive.org/web/20160304073500/https://eg.sib.uc.pt/handle/10316/17781 |date=2016-03-04 }}</ref>
 
# The boolean space <math> \{0,1\}^n</math>, consisting of <math>2^n</math> points in <math>n</math> dimensions (with <math>10^0 < n < 10^5</math>), exhibits properties similar to humans' intuitive notions of relationships between concepts. It therefore makes sense to store data as points of this space, with each memory item stored as an n-bit vector.
# Neurons with n inputs can be used as address decoders of a random-access memory.
# Unifying principle: data stored in the memory can be used as addresses to the same memory. The distance between two points is a measure of the similarity between two memory items: the closer the points, the more similar the stored vectors.
# Time can be traced in the memory as a function of where the data are stored, if the data are organized as sequences of events.
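Idea 2 in the list above can be made concrete with a short sketch (illustrative names and parameter values, not Kanerva's notation): if the input address and a ___location's address are coded with ±1 components, their dot product equals <math>n - 2d</math>, where d is the Hamming distance, so "within radius r" is a linear threshold test that a single neuron with n inputs can compute.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch of idea 2: an address decoder as a threshold neuron.
# With the input x and the ___location address w coded as ±1, x·w = n - 2d, where d is
# the Hamming distance, so "d <= r" is the linear threshold test "x·w >= n - 2r".
rng = np.random.default_rng(0)
n, r = 1000, 451                        # example dimension and activation radius

w = 2 * rng.integers(0, 2, size=n) - 1  # ___location address as a ±1 weight vector
x = 2 * rng.integers(0, 2, size=n) - 1  # reference (input) address as ±1

d = np.sum(w != x)                      # Hamming distance between the two addresses
fires_by_distance = d <= r
fires_by_threshold = np.dot(x, w) >= n - 2 * r
print(fires_by_distance == fires_by_threshold)   # True: the two tests are equivalent
</syntaxhighlight>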
 
===The binary space N ===
{{further|Vector space model}}
The SDM works with n-dimensional vectors with binary components. Depending on the context, the vectors are called points, patterns, addresses, words, memory items, data, or events. This section is mostly about the properties of the vector space N = <math>\{0,1\}^n</math>. Let n be the number of dimensions of the space. The number of points, or possible memory items, is then <math>2^n</math>. We will denote this number by N and will use N and <math>2^n</math> to stand for the space itself as well.<ref name="greb2">Grebeníček, František. "Sparse Distributed Memory – Pattern Data Analysis". URL: http://www.fit.vutbr.cz/~grebenic/Publikace/mosis2000.pdf</ref>
 
 
===As neural network===
The SDM may be regarded either as a [[content-addressable memory|content-addressable]] extension of a classical [[random-access memory]] (RAM) or as a special type of three-layer [[feedforward neural network]]. The main SDM alterations to the RAM are:<ref name="Grebenıcek">Grebeníček, František. Neural Nets as Associative Memories. Diss. Brno University of Technology, 2001. URL: http://www.vutium.vutbr.cz/tituly/pdf/ukazka/80-214-1914-8.pdf {{Webarchive|url=https://web.archive.org/web/20160304132149/http://www.vutium.vutbr.cz/tituly/pdf/ukazka/80-214-1914-8.pdf |date=2016-03-04 }}</ref>
 
*The SDM calculates [[Hamming distance]]s between the reference address and each ___location address. Every ___location whose distance is less than or equal to the given radius is selected.
* a contents portion that is M bits wide and that can accumulate multiple M-bit data patterns written into the ___location. The contents portion is not fixed; it is modified by the data patterns written into the memory.
 
In SDM, a word could be stored in memory by writing it in a free storage ___location and at the same time providing the ___location with the appropriate address decoder. A neuron as an address decoder would select a ___location based on the similarity of the ___location's address to the retrieval cue. Unlike conventional [[Turing machines]], SDM takes advantage of ''[[parallel computing]] by the address decoders''. Merely ''accessing the memory'' is regarded as computing, the amount of which increases with memory size.<ref name="book"/>
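The write and read operations described above can be summarized in a minimal Python sketch. It assumes randomly placed hard locations, per-bit counters as contents, and Hamming-radius activation; the class name and parameter values are illustrative rather than taken from any particular implementation.

<syntaxhighlight lang="python">
import numpy as np

class SparseDistributedMemory:
    """Minimal sketch of a Kanerva-style SDM (illustrative, not a reference implementation)."""

    def __init__(self, n_bits=256, n_hard_locations=2000, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        # Hard-___location addresses: random points of {0,1}^n acting as address decoders.
        self.addresses = rng.integers(0, 2, size=(n_hard_locations, n_bits))
        # Contents: one signed counter per bit and per hard ___location.
        self.counters = np.zeros((n_hard_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, address):
        # All hard locations within the Hamming radius of the cue are selected "in parallel".
        return (self.addresses != address).sum(axis=1) <= self.radius

    def write(self, address, word):
        # Increment counters for 1-bits and decrement for 0-bits at every active ___location.
        self.counters[self._active(address)] += 2 * word - 1

    def read(self, address):
        # Pool the counters of the active locations and take the bitwise majority.
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)

# Autoassociative use: store a pattern under its own address, then recall it from a noisy cue.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, size=256)
sdm.write(pattern, pattern)
cue = pattern.copy()
cue[:20] ^= 1                                # corrupt 20 of the 256 bits
print((sdm.read(cue) != pattern).sum())      # typically 0: the stored pattern is recovered
</syntaxhighlight>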
 
====Address pattern====
 
==Probabilistic interpretation==
An [[associative memory (psychology)|associative memory]] system using [[Hierarchical temporal memory#Sparse distributed representations|sparse, distributed representations]] can be reinterpreted as an [[Importance sampling|importance sampler]], a [[Monte Carlo method|Monte Carlo]] method of approximating [[Bayesian inference]].<ref>Abbott, Joshua T., Jessica B. Hamrick, and Thomas L. Griffiths. "[https://web.archive.org/web/20170911115555/https://pdfs.semanticscholar.org/7f50/8bb0bf0010884a4be72f2774635514fc58ec.pdf Approximating Bayesian inference with a sparse distributed memory system]." Proceedings of the 35th annual conference of the cognitive science society. 2013.</ref> The SDM can be considered a Monte Carlo approximation to a multidimensional [[conditional probability]] integral. The SDM will produce acceptable responses from a training set when this approximation is valid, that is, when the training set contains sufficient data to provide good estimates of the underlying [[Joint probability distribution|joint probabilities]] and there are enough Monte Carlo samples to obtain an accurate estimate of the integral.<ref>{{cite book|doi=10.1109/ijcnn.1989.118597|chapter=A conditional probability interpretation of Kanerva's sparse distributed memory|title=International Joint Conference on Neural Networks|pages=415–417|volume=1|year=1989|last1=Anderson|s2cid=13935339}}</ref>
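Under the simplifying assumption that the stored items are noisy samples of underlying prototypes, pooling the items whose addresses lie near a cue yields a Monte Carlo estimate of each bit's conditional probability given the cue. The following standalone Python sketch (an illustration of this reading, not code from the cited papers) shows the idea:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch: SDM-style retrieval as a Monte Carlo estimate of the
# per-bit conditional probability P(bit = 1 | address near cue).
rng = np.random.default_rng(0)
n, per_class, flip_p = 256, 300, 0.05

prototypes = rng.integers(0, 2, size=(2, n))          # two unrelated "concepts"
noise = rng.random((2, per_class, n)) < flip_p        # flip each bit with probability 0.05
samples = np.abs(prototypes[:, None, :] - noise).reshape(-1, n)

cue = prototypes[0].copy()
cue[:15] ^= 1                                         # a noisy cue for the first concept
near = (samples != cue).sum(axis=1) <= 90             # pool only the samples near the cue
p_hat = samples[near].mean(axis=0)                    # Monte Carlo estimate of P(bit = 1 | cue)
recalled = (p_hat > 0.5).astype(int)                  # per-bit threshold decision
print(near.sum(), (recalled != prototypes[0]).sum())  # ~300 samples pooled, typically 0 bit errors
</syntaxhighlight>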
 
[[Sparse coding]] may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such a task requires implementing stimulus-specific [[associative memory (psychology)|associative memories]] in which only a few neurons out of a [[Neural ensemble|population]] respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.
 
Theoretical work on SDM by Kanerva has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision,<ref>{{cite journal | last1 = Vinje | first1 = WE | last2 = Gallant | first2 = JL | year = 2000 | title = Sparse coding and decorrelation in primary visual cortex during natural vision | url = https://pdfs.semanticscholar.org/3efc/4ac8f70edde57661b908105f4fd21a43fbab.pdf | archive-url = https://web.archive.org/web/20170911115737/https://pdfs.semanticscholar.org/3efc/4ac8f70edde57661b908105f4fd21a43fbab.pdf | url-status = dead | archive-date = 2017-09-11 | journal = Science | volume = 287 | issue = 5456| pages = 1273–1276 | pmid = 10678835 | doi = 10.1126/science.287.5456.1273 | citeseerx = 10.1.1.456.2467 | bibcode = 2000Sci...287.1273V | s2cid = 13307465 }}</ref> audition,<ref>{{cite journal | last1 = Hromádka | first1 = T | last2 = Deweese | first2 = MR | last3 = Zador | first3 = AM | year = 2008 | title = Sparse representation of sounds in the unanesthetized auditory cortex | journal = PLOS Biol | volume = 6 | issue = 1| page = e16 | pmid = 18232737 | doi=10.1371/journal.pbio.0060016 | pmc=2214813 | doi-access = free }}</ref> touch,<ref>{{cite journal | last1 = Crochet | first1 = S | last2 = Poulet | first2 = JFA | last3 = Kremer | first3 = Y | last4 = Petersen | first4 = CCH | year = 2011 | title = Synaptic mechanisms underlying sparse coding of active touch | journal = Neuron | volume = 69 | issue = 6| pages = 1160–1175 | pmid = 21435560 | doi=10.1016/j.neuron.2011.02.022| s2cid = 18528092 | doi-access = free }}</ref> and olfaction.<ref>{{cite journal | last1 = Ito | first1 = I | last2 = Ong | first2 = RCY | last3 = Raman | first3 = B | last4 = Stopfer | first4 = M | year = 2008 | title = Sparse odor representation and olfactory learning | journal = Nat Neurosci | volume = 11 | issue = 10| pages = 1177–1184 | pmid = 18794840 | pmc=3124899 | doi=10.1038/nn.2192}}</ref> However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been lacking until recently.
 
Some progress was made in 2014 by [[Gero Miesenböck]]'s lab at the [[University of Oxford]], which analyzed the [[Drosophila]] [[olfactory system]].<ref>A sparse memory is a precise memory. Oxford Science blog. 28 Feb 2014. http://www.ox.ac.uk/news/science-blog/sparse-memory-precise-memory</ref>
Ashraf Anwar, Stan Franklin, and Dipankar Dasgupta at the University of Memphis proposed a model for SDM initialization using genetic algorithms and genetic programming (1999).
 
[[Genetic memory (computer science)|Genetic memory]] uses a genetic algorithm and sparse distributed memory as a pseudo artificial neural network. It has been considered for use in creating artificial life.<ref name="Rocha">{{cite journal |vauthors=Rocha LM, Hordijk W |title=Material representations: From the genetic code to the evolution of cellular automata |journal=Artificial Life |volume=11 |issue= 1–2 |pages=189–214 |year=2005 |pmid= 15811227 |doi=10.1162/1064546053278964 |url=http://informatics.indiana.edu/rocha/caalife04.html |citeseerx=10.1.1.115.6605 |s2cid=5742197 |access-date=2013-08-02 |archive-date=2013-09-20 |archive-url=https://web.archive.org/web/20130920205232/http://informatics.indiana.edu/rocha/caalife04.html |url-status=dead }}</ref>
 
===Statistical prediction===
===Artificial general intelligence===
{{see also|Cognitive architecture|Artificial general intelligence}}
*[[LIDA (cognitive architecture)|LIDA]] uses sparse distributed memory to help model [[cognition]] in biological systems. Here, the sparse distributed memory is used to recall or recognize an object in relation to other objects. It was developed by Stan Franklin, the creator of the "realizing forgetting" modified sparse distributed memory system.<ref name=psdm>{{cite journal | last1 = Rao | first1 = R. P. N. | last2 = Fuentes | first2 = O. | year = 1998 | title = Hierarchical Learning of Navigational Behaviors in an Autonomous Robot using a Predictive Sparse Distributed Memory | url = http://www.cs.utep.edu/ofuentes/raoML98.pdf | journal = Machine Learning | volume = 31 | pages = 87–113 | doi = 10.1023/a:1007492624519 | s2cid = 8305178 | doi-access = free | access-date = 2011-11-10 | archive-date = 2017-08-10 | archive-url = https://web.archive.org/web/20170810103354/http://www.cs.utep.edu/ofuentes/raoML98.pdf | url-status = dead }}</ref> Transient episodic and declarative memories have distributed representations in LIDA (based on a modified version of SDM<ref>Franklin, Stan, et al. "[http://www.brains-minds-media.org/archive/150/bmm-franklin-050704.pdf/?searchterm=franklin The role of consciousness in memory]." Brains, Minds and Media 1.1 (2005): 38.</ref>); there is evidence that this is also the case in the nervous system.<ref>{{cite journal|doi=10.1016/s1364-6613(02)01868-5|pmid=11912039|title=Episodic memory and cortico–hippocampal interactions|journal=Trends in Cognitive Sciences|volume=6|issue=4|pages=162–168|year=2002|last1=Shastri|first1=Lokendra|s2cid=15022802|url=https://www1.icsi.berkeley.edu/~shastri/psfiles/ShastriTicsEM02.pdf}}</ref>
*[[CMatie]] is a [[Artificial consciousness|'conscious']] software agent developed to manage seminar announcements in the Mathematical Sciences Department at the [[University of Memphis]]. It is based on SDM augmented with the use of [[genetic algorithm]]s as an [[Content-addressable memory|associative memory]].<ref>{{cite journal | last1 = Anwar | first1 = Ashraf | last2 = Franklin | first2 = Stan | year = 2003 | title = Sparse distributed memory for 'conscious' software agents | journal = Cognitive Systems Research | volume = 4 | issue = 4| pages = 339–354 | doi=10.1016/s1389-0417(03)00015-9| s2cid = 13380583 }}</ref>
*[[Hierarchical temporal memory]] utilizes SDM for storing sparse distributed representations of the data.
 
===Reinforcement learning===
SDMs provide a linear, local [[function approximation]] scheme, designed to work when a very large/high-dimensional input (address) space has to be mapped into a much smaller [[Computer memory|physical memory]]. In general, local architectures, SDMs included, can be subject to the [[curse of dimensionality]], as some target functions may require, in the worst case, an exponential number of local units to be approximated accurately across the entire input space. However, it is widely believed that most [[Decision support system|decision-making systems]] need high accuracy only around low-dimensional [[manifolds]] of the [[state space]], or important state "highways".<ref>Ratitch, Bohdana, Swaminathan Mahadevan, and [[Doina Precup]]. "Sparse distributed memories in reinforcement learning: Case studies." Proc. of the Workshop on Learning and Planning in Markov Processes-Advances and Challenges. 2004.</ref> The work in Ratitch et al.<ref>Ratitch, Bohdana, and Doina Precup. "[http://www.cs.mcgill.ca/~dprecup/temp/ecml2004.pdf Sparse distributed memories for on-line value-based reinforcement learning] {{Webarchive|url=https://web.archive.org/web/20150824061329/http://www.cs.mcgill.ca/~dprecup/temp/ecml2004.pdf |date=2015-08-24 }}." Machine Learning: ECML 2004. Springer Berlin Heidelberg, 2004. 347-358.</ref> combined the SDM memory model with the ideas from [[instance-based learning|memory-based learning]], which provides an approximator that can dynamically adapt its structure and resolution in order to locate regions of the state space that are "more interesting"<ref>Bouchard-Côté, Alexandre. "[https://www.stat.ubc.ca/~bouchard/pub/report-ml.pdf Sparse Memory Structures Detection]." (2004).</ref> and allocate proportionally more memory resources to model them accurately.
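The general idea can be sketched as follows (a hedged illustration, not the specific algorithm of Ratitch et al.): the set of hard locations activated by a state's binary encoding serves as a sparse feature vector, and a value function that is linear in those features is updated by temporal-difference learning.

<syntaxhighlight lang="python">
import numpy as np

# Sketch: an SDM-style coder maps a binary state encoding to a sparse feature vector
# (one feature per activated hard ___location); a linear value function over those
# features is updated with TD(0).  Names and parameter values are illustrative.
rng = np.random.default_rng(0)
n_bits, n_locations, radius = 64, 500, 24
hard_addresses = rng.integers(0, 2, size=(n_locations, n_bits))
weights = np.zeros(n_locations)                # one weight per hard ___location

def features(state_bits):
    """Binary activation vector: which hard locations lie within the Hamming radius."""
    return ((hard_addresses != state_bits).sum(axis=1) <= radius).astype(float)

def td0_update(state_bits, reward, next_state_bits, alpha=0.05, gamma=0.95):
    """One TD(0) step on the linear approximator V(s) = w . phi(s)."""
    global weights                             # module-level weight vector, updated in place
    phi, phi_next = features(state_bits), features(next_state_bits)
    td_error = reward + gamma * weights @ phi_next - weights @ phi
    weights += alpha * td_error * phi / max(phi.sum(), 1.0)   # scale step by active count

# Usage with made-up transitions: random binary state encodings and a reward of 1.
s, s_next = rng.integers(0, 2, size=n_bits), rng.integers(0, 2, size=n_bits)
td0_update(s, reward=1.0, next_state_bits=s_next)
print(weights @ features(s))                   # updated value estimate for state s
</syntaxhighlight>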
 
===Object indexing in computer vision===
 
* Ternary memory space: This enables the memory to be used as a Transient Episodic Memory (TEM) in [[Cognitive architecture|cognitive software agents]]. TEM is a memory with high specificity and low retention, used for events having features of a particular time and place.<ref>D'Mello, Sidney K., Ramamurthy, U., & Franklin, S. 2005. [http://escholarship.org/uc/item/2b78w526.pdf Encoding and Retrieval Efficiency of Episodic Data in a Modified Sparse Distributed Memory System]. In Proceedings of the 27th Annual Meeting of the Cognitive Science Society. Stresa, Italy.</ref><ref>Ramamurthy, U., Sidney K. D'Mello, and Stan Franklin. "[https://www.academia.edu/download/43397052/modifed_20sparse_20Distributed_20Memory_20as_20TSM_20for_20CSA.pdf Modified sparse distributed memory as transient episodic memory for cognitive software agents]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}." Systems, Man and Cybernetics, 2004 IEEE International Conference on. Vol. 6. IEEE, 2004.</ref>
* Integer SDM that uses modular arithmetic integer vectors rather than binary vectors. This extension improves the representation capabilities of the memory and is more robust over normalization. It can also be extended to support forgetting and reliable sequence storage.<ref name="integerSDM">Snaider, Javier, and Stan Franklin. "[http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS12/paper/viewFile/4409/4781 Integer sparse distributed memory] {{Webarchive|url=https://web.archive.org/web/20210802030951/https://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS12/paper/viewFile/4409/4781 |date=2021-08-02 }}." Twenty-fifth international flairs conference. 2012.</ref>
* Using word vectors of larger size than address vectors: This extension preserves many of the desirable properties of the original SDM: auto-associability, content addressability, distributed storage and robustness over noisy inputs. In addition, it adds new functionality, enabling an efficient auto-associative storage of sequences of vectors, as well as of other data structures such as trees.<ref>{{cite journal | last1 = Snaider | first1 = Javier | last2 = Franklin | first2 = Stan | year = 2012 | title = Extended sparse distributed memory and sequence storage | url = https://www.semanticscholar.org/paper/20298cddb815e5bcbc055415c6a62865c076b3b9| journal = Cognitive Computation | volume = 4 | issue = 2| pages = 172–180 | doi=10.1007/s12559-012-9125-8| s2cid = 14319722 }}</ref>
* Constructing SDM from [[Biological neuron model|Spiking Neurons]]: Despite the biological likeness of SDM, most of the work undertaken to demonstrate its capabilities to date has used highly artificial neuron models that abstract away the actual behaviour of [[neurons]] in the [[brain]]. Recent work by [[Steve Furber]]'s lab at the [[University of Manchester]]<ref>{{cite journal | last1 = Furber | first1 = Steve B. |display-authors=etal | year = 2004 | title = Sparse distributed memory using N-of-M codes | journal = Neural Networks | volume = 17 | issue = 10| pages = 1437–1451 | doi=10.1016/j.neunet.2004.07.003| pmid = 15541946 }}</ref><ref>Sharp, Thomas: "[https://studentnet.cs.manchester.ac.uk/resources/library/thesis_abstracts/MSc09/FullText/SharpThomas.pdf Application of sparse distributed memory to the Inverted Pendulum Problem]". Diss. University of Manchester, 2009. URL: http://studentnet.cs.manchester.ac.uk/resources/library/thesis_abstracts/MSc09/FullText/SharpThomas.pdf</ref><ref>Bose, Joy. [https://www.academia.edu/download/7385022/bose07_phd.pdf Engineering a Sequence Machine Through Spiking Neurons Employing Rank-order Codes]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}. Diss. University of Manchester, 2007.</ref> proposed adaptations to SDM, e.g. by incorporating N-of-M rank codes<ref>Simon Thorpe and Jacques Gautrais. [https://www.researchgate.net/profile/Jacques-Gautrais/publication/285068799_Rank_order_coding_Computational_neuroscience_trends_in_research/links/587ca2e108ae4445c069772a/Rank-order-coding-Computational-neuroscience-trends-in-research.pdf Rank order coding.] In Computational Neuroscience: Trends in research, pages 113–118. Plenum Press, 1998.</ref><ref>{{cite journal | last1 = Furber | first1 = Stephen B. |display-authors=etal | year = 2007 | title = Sparse distributed memory using rank-order neural codes | journal = IEEE Transactions on Neural Networks| volume = 18 | issue = 3| pages = 648–659 | doi=10.1109/tnn.2006.890804| pmid = 17526333 | citeseerx = 10.1.1.686.6196 | s2cid = 14256161 }}</ref> into how [[Neural coding#Population coding|populations of neurons]] may encode information, which may make it possible to build an SDM variant from biologically plausible components. This work has been incorporated into [[SpiNNaker|SpiNNaker (Spiking Neural Network Architecture)]], which is being used as the [[Neuromorphic engineering|Neuromorphic Computing]] Platform for the [[Human Brain Project]].<ref>{{cite journal | last1 = Calimera | first1 = A | last2 = Macii | first2 = E | last3 = Poncino | first3 = M | year = 2013 | title = The Human Brain Project and neuromorphic computing | journal = Functional Neurology | volume = 28 | issue = 3| pages = 191–6 | pmid = 24139655 | pmc=3812737}}</ref>
* Non-random distribution of locations:<ref>{{cite journal | last1 = Hely | first1 = Tim | last2 = Willshaw | first2 = David J. | last3 = Hayes | first3 = Gillian M. | year = 1997 | title = A new approach to Kanerva's sparse distributed memory | url = https://semanticscholar.org/paper/2f55ae4083ca073344badc416b83b00fef0db04f| journal = IEEE Transactions on Neural Networks| volume = 8 | issue = 3| pages = 791–794 | doi=10.1109/72.572115| pmid = 18255679 | s2cid = 18628649 }}</ref><ref>Caraig, Lou Marvin. "[https://arxiv.org/abs/1207.5774 A New Training Algorithm for Kanerva's Sparse Distributed Memory]." arXiv preprint arXiv:1207.5774 (2012).</ref> Although the storage locations are initially distributed randomly in the binary N address space, the final distribution of locations depends upon the input patterns presented, and may be non-random, allowing better flexibility and [[Generalization error|generalization]]. The data pattern is first stored at the locations which lie closest to the input address. The signal (i.e. data pattern) then spreads throughout the memory, and a small percentage of the signal strength (e.g. 5%) is lost at each subsequent ___location encountered. Distributing the signal in this way removes the need to select a read/write radius, one of the problematic features of the original SDM. Locations selected in a write operation no longer all receive a copy of the original binary pattern with equal strength; instead, they receive a copy of the pattern weighted with a real value from 1.0 down to 0.05, stored in real-valued counters (rather than the binary counters in Kanerva's SDM). This rewards the nearest locations with greater signal strength and uses the natural architecture of the SDM to attenuate the signal strength (see the sketch after this list). Similarly, in reading from the memory, output from the nearest locations is given greater weight than output from more distant locations. The new signal method allows the total signal strength received by a ___location to be used as a measure of the fitness of a ___location and is flexible to varying input (as the loss factor does not have to be changed for input patterns of different lengths).
* SDMSCue (Sparse Distributed Memory for Small Cues): Ashraf Anwar and Stan Franklin at the University of Memphis introduced a variant of SDM capable of handling small cues, namely SDMSCue, in 2002. The key idea is to use multiple reads/writes and space projections to reach a successively longer cue.<ref>{{Cite book|title = A Sparse Distributed Memory Capable of Handling Small Cues, SDMSCue|publisher = Springer US|date = 2005-01-01|isbn = 978-0-387-24048-0|pages = 23–38|series = IFIP — The International Federation for Information Processing|language = en|first1 = Ashraf|last1 = Anwar|first2 = Stan|last2 = Franklin|editor-first = Michael K.|editor-last = Ng|editor-first2 = Andrei|editor-last2 = Doncescu|editor-first3 = Laurence T.|editor-last3 = Yang|editor-first4 = Tau|editor-last4 = Leng|doi = 10.1007/0-387-24049-7_2| s2cid=10290721 }}</ref>
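The distance-weighted write and read of the "non-random distribution of locations" variant above can be read as in the following sketch (an illustrative interpretation of the description, not the authors' code): locations are ranked by distance to the input address and the pattern is accumulated into real-valued counters with a signal weight that decays by 5% per ___location, from 1.0 down to a cut-off of 0.05.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative reading of the distance-weighted write/read described above (not the
# authors' code): rank locations by distance to the address and attenuate the signal
# by 5% per ___location, from a weight of 1.0 down to a cut-off of 0.05.
rng = np.random.default_rng(0)
n_bits, n_locations, loss = 256, 2000, 0.05

addresses = rng.integers(0, 2, size=(n_locations, n_bits))
counters = np.zeros((n_locations, n_bits))              # real-valued, not binary, counters

def _ranked_weights(address):
    order = np.argsort((addresses != address).sum(axis=1))   # nearest locations first
    weights = (1.0 - loss) ** np.arange(n_locations)         # 1.0, 0.95, 0.9025, ...
    keep = weights >= 0.05                                   # stop once the weight drops below 0.05
    return order[keep], weights[keep][:, None]

def weighted_write(address, word):
    idx, w = _ranked_weights(address)
    counters[idx] += w * (2 * word - 1)                      # weighted, real-valued update

def weighted_read(address):
    idx, w = _ranked_weights(address)
    sums = (w * counters[idx]).sum(axis=0)                   # nearer locations weigh more
    return (sums > 0).astype(int)

pattern = rng.integers(0, 2, size=n_bits)
weighted_write(pattern, pattern)
print((weighted_read(pattern) != pattern).sum())             # 0: exact recall of the stored pattern
</syntaxhighlight>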
 
==Related patents==
* Method and apparatus for a sparse distributed memory system US 5113507 A, [[Universities Space Research Association]], 1992<ref>Method and apparatus for a sparse distributed memory system US 5113507 A, by Louis A. Jaeckel, Universities Space Research Association, 1992, URL: https://patents.google.com/patent/US5113507</ref>
* Method and device for storing and recalling information implementing a kanerva memory system US 5829009 A, [[Texas Instruments]], 1998<ref>Method and device for storing and recalling information implementing a kanerva memory system US 5829009 A, by Gary A. Frazier, Texas Instruments Incorporated, 1998, URL: https://patents.google.com/patent/US5829009</ref>
* Digital memory, Furber, Stephen. US 7512572 B2, 2009<ref>Furber, Stephen B. "Digital memory." U.S. Patent No. 7,512,572. 31 Mar. 2009. URL: https://patents.google.com/patent/US7512572</ref>
 
==Implementation==
{{external links|section|date=February 2023}}
* C Binary Vector Symbols (CBVS): includes SDM implementation in [[C (programming language)|C]] as a part of [[vector symbolic architecture]]<ref>{{cite journal|doi=10.1016/j.bica.2014.11.015|title=Vector space architecture for emergent interoperability of systems by learning from demonstration|journal=Biologically Inspired Cognitive Architectures|volume=11|pages=53–64|year=2015|last1=Emruli|first1=Blerim|last2=Sandin|first2=Fredrik|last3=Delsing|first3=Jerker|url=http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-4068 }}</ref> developed by EISLAB at [[Luleå University of Technology]]: http://pendicular.net/cbvs.php {{Webarchive|url=https://web.archive.org/web/20150925123906/http://pendicular.net/cbvs.php |date=2015-09-25 }}<ref>{{cite journal | last1 = Emruli | first1 = Blerim | last2 = Sandin | first2 = Fredrik | year = 2014 | title = Analogical mapping with sparse distributed memory: A simple model that learns to generalize from examples | url = http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-14994| journal = Cognitive Computation | volume = 6 | issue = 1| pages = 74–88 | doi=10.1007/s12559-013-9206-3| s2cid = 12139021 }}</ref>
* CommonSense ToolKit (CSTK) for realtime sensor data processing, developed at [[Lancaster University]], includes an implementation of SDM in [[C++]]: https://cstk.sourceforge.net/<ref>Berchtold, Martin. "Processing Sensor Data with the Common Sense Toolkit (CSTK)." (2005).</ref>
*[[Julia (programming language)|Julia]] implementation by [[Brian Hayes (scientist)|Brian Hayes]]: https://github.com/bit-player/sdm-julia <ref>The Mind Wanders by B. Hayes, 2018. url: http://bit-player.org/2018/the-mind-wanders</ref>
* [[LIDA (cognitive architecture)|Learning Intelligent Distribution Agent (LIDA)]] developed by [[Stan Franklin]]'s lab at the [[University of Memphis]] includes implementation of SDM in [[Java (programming language)|Java]]: http://ccrg.cs.memphis.edu/framework.html
 
==See also==
* [[Nearest neighbor search|Approximate nearest neighbor search]]
* [[Autoassociative memory]]
* [[Cerebellar model articulation controller|Associative-memory models of the cerebellum]]
* [[Types of artificial neural networks#Dynamic|Dynamic memory networks]]
* [[Content-addressable memory]]
* [[Feedforward neural network]]
* [[Hierarchical temporal memory]]
* [[Holographic associative memory]]
* [[Low-density parity-check code]]
* [[Types of artificial neural networks#Memory networks|Memory networks]]
* [[Locality-sensitive hashing]]
* [[Memory-prediction framework]]
* [[Neural coding]]
* [[Random-access memory]] (as a special case of SDM)<ref name="psu"/>
* [[Random indexing]]
* [[Self-organizing map]]
* [[Semantic folding]]<ref>{{cite arXiv|eprint=1511.08855|title=Semantic Folding Theory And its Application in Semantic Fingerprinting|last=De Sousa Webber|first=Francisco|date=2015|class=cs.AI}}</ref>
* [[Semantic memory]]
* [[Semantic network]]
* [[Neural coding|Sparse coding]]<ref>Lee, Honglak, et al. "[http://papers.nips.cc/paper/2979-efficient-sparse-coding-algorithms.pdf Efficient sparse coding algorithms]." Advances in neural information processing systems. 2006.</ref>
* [[Visual indexing theory]]
* [[Hierarchical temporal memory#Sparse distributed representations|Sparse distributed representations]]
* [[Neural Turing machine]]<ref>Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural Turing Machines." arXiv preprint {{arXiv|1410.5401}} (2014).</ref>
* Stacked [[autoencoder]]s<ref>{{cite journal | last1 = Vincent | first1 = Pascal |display-authors=etal | year = 2010 | title = Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion | url = http://www.jmlr.org/papers/volume11/vincent10a/vincent10a.pdf| journal = The Journal of Machine Learning Research | volume = 11 | pages = 3371–3408 }}</ref>
* [[Vector space model]]
* [[Virtual memory]]
 
==References==