Sparse distributed memory

'''Sparse distributed memory''' ('''SDM''') is a mathematical model of human [[long-term memory]] introduced by [[Pentti Kanerva]] in 1988 while he was at [[Ames Research Center|NASA Ames Research Center]]. It is a generalized [[random-access memory]] (RAM) for long (e.g., 1,000 bit) binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits (i.e., the [[Hamming distance]] between [[memory address]]es).<ref name="book">{{cite book|last=Kanerva|first=Pentti|title=Sparse Distributed Memory|year=1988|publisher=The MIT Press|isbn=978-0-262-11132-4}}</ref>
 
This memory exhibits behaviors, both in theory and in experiment, that resemble abilities previously unapproached by machines – e.g., rapid recognition of faces or odors, or discovery of new connections between seemingly unrelated ideas. Sparse distributed memory is used for storing and retrieving large amounts (<math>2^{1000}</math> [[Bit|bits]]) of information, focusing not on exact accuracy of recall but on similarity of information.<ref name="book2">{{cite book |last=Kanerva |first=Pentti |title=Sparse Distributed Memory |publisher=The MIT Press |year=1988 |isbn=978-0-262-11132-4}}</ref> There are recent applications in robot navigation<ref>{{cite book |last1=Mendes |first1=Mateus |title=2008 IEEE International Conference on Robotics and Automation |last2=Crisostomo |first2=Manuel |last3=Coimbra |first3=A. Paulo |year=2008 |isbn=978-1-4244-1646-2 |pages=53–58 |chapter=Robot navigation using a sparse distributed memory |doi=10.1109/ROBOT.2008.4543186 |s2cid=10977460}}</ref> and experience-based robot manipulation.<ref>{{cite book |last1=Jockel |first1=S. |title=2008 IEEE International Conference on Robotics and Biomimetics |last2=Lindner |first2=F. |last3=Jianwei Zhang |year=2009 |isbn=978-1-4244-2678-2 |pages=1298–1303 |chapter=Sparse distributed memory for experience-based robot manipulation |doi=10.1109/ROBIO.2009.4913187 |s2cid=16650992}}</ref>
 
== General principle ==
It is a generalized [[random-access memory]] (RAM) for long (e.g., 1,000 bit) binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity. This means that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits (i.e., the [[Hamming distance]] between [[memory address]]es).<ref name="book"/>
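The following is a minimal sketch (in Python; the 1,000-bit word length comes from the example above, while the helper names and the 5% noise level are illustrative choices, not taken from the cited sources) of how the similarity of two binary words is measured as a Hamming distance. A read address that differs from the write address in only a small fraction of its bits is, in this sense, close to it, whereas an unrelated random word differs in about half of its bits.

<syntaxhighlight lang="python">
import random

WORD_LENGTH = 1000  # each word serves both as an address and as data

def hamming_distance(a, b):
    """Number of bit positions in which two equally long binary words differ."""
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

write_address = [random.randint(0, 1) for _ in range(WORD_LENGTH)]
# A "close" read address: a copy of the write address with roughly 5% of its bits flipped.
read_address = [bit ^ (random.random() < 0.05) for bit in write_address]
unrelated = [random.randint(0, 1) for _ in range(WORD_LENGTH)]

print(hamming_distance(write_address, read_address))  # around 50 out of 1,000
print(hamming_distance(write_address, unrelated))     # around 500 out of 1,000
</syntaxhighlight>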
 
SDM implements transformation from logical space to physical space using distributed data representation and storage, similarly to [[Encoding (memory)|encoding]] processes in human memory.<ref>{{cite journal | last1 = Rissman | first1 = Jesse | last2 = Wagner | first2 = Anthony D. | year = 2012 | title = Distributed representations in memory: insights from functional brain imaging | journal = Annual Review of Psychology | volume = 63 | pages = 101–28 | doi=10.1146/annurev-psych-120710-100344| pmc = 4533899 | pmid=21943171}}</ref> A value corresponding to a logical address is stored into many physical addresses. This way of storing information is robust and not deterministic: a memory cell is not addressed directly, and even if the input data (logical addresses) are partially damaged, the correct output data can still be retrieved.<ref name="greb2"/>
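As a rough illustration of this distributed storage, the sketch below follows a common counter-based formulation of SDM; the word length, number of hard locations, activation radius, and function names are illustrative choices rather than values from the cited sources. A write adds the data word to the bit counters of every hard location whose fixed random address lies within a Hamming-distance radius of the logical address, and a read sums the counters of the activated locations and thresholds the sums to recover a word.

<syntaxhighlight lang="python">
import random

N = 256          # word length (kept small for illustration)
M = 2000         # number of hard (physical) locations
RADIUS = 112     # activation radius in Hamming distance (illustrative value for N = 256)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def random_word():
    return [random.randint(0, 1) for _ in range(N)]

# Fixed, randomly chosen addresses of the hard locations, each with N bit counters.
hard_addresses = [random_word() for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def write(address, data):
    """Add the data word (as +1/-1 per bit) to every location within RADIUS of the address."""
    for loc, loc_address in enumerate(hard_addresses):
        if hamming(address, loc_address) <= RADIUS:
            for i, bit in enumerate(data):
                counters[loc][i] += 1 if bit else -1

def read(address):
    """Sum the counters of the activated locations and threshold to recover a word."""
    sums = [0] * N
    for loc, loc_address in enumerate(hard_addresses):
        if hamming(address, loc_address) <= RADIUS:
            for i in range(N):
                sums[i] += counters[loc][i]
    return [1 if s > 0 else 0 for s in sums]

# Store a word at its own address, then read it back from a partially corrupted address.
word = random_word()
write(word, word)
probe = [bit ^ (random.random() < 0.05) for bit in word]  # flip about 5% of the bits
print(hamming(read(probe), word))                         # usually 0: the original word is recovered
</syntaxhighlight>

Because many of the locations activated by the corrupted probe address were also activated during the write, their summed counters still point to the stored word, which illustrates why partially damaged input can still yield the correct output.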