Many extensions and improvements to SDM have been proposed, e.g.:
 
* Ternary memory space: This enables the memory to be used as a Transient Episodic Memory (TEM) in [[Cognitive architecture|cognitive software agents]]. TEM is a memory with high specificity and low retention, used for events having features of a particular time and place.<ref>D'Mello, Sidney K., Ramamurthy, U., & Franklin, S. 2005. [http://escholarship.org/uc/item/2b78w526.pdf Encoding and Retrieval Efficiency of Episodic Data in a Modified Sparse Distributed Memory System]. In Proceedings of the 27th Annual Meeting of the Cognitive Science Society. Stresa, Italy.</ref><ref>Ramamurthy, U., Sidney K. D'Mello, and Stan Franklin. "[https://www.academia.edu/download/43397052/modifed_20sparse_20Distributed_20Memory_20as_20TSM_20for_20CSA.pdf Modified sparse distributed memory as transient episodic memory for cognitive software agents]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}." Systems, Man and Cybernetics, 2004 IEEE International Conference on. Vol. 6. IEEE, 2004.</ref>
* Integer SDM: uses integer vectors with modular arithmetic rather than binary vectors. This extension improves the representation capabilities of the memory and is more robust over normalization. It can also be extended to support forgetting and reliable sequence storage (see the modular-arithmetic sketch after this list).<ref name="integerSDM">Snaider, Javier, and Stan Franklin. "[http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS12/paper/viewFile/4409/4781 Integer sparse distributed memory]." Twenty-Fifth International FLAIRS Conference. 2012.</ref>
* Using word vectors of larger size than address vectors: This extension preserves many of the desirable properties of the original SDM: auto-associativity, content addressability, distributed storage and robustness over noisy inputs. In addition, it adds new functionality, enabling efficient auto-associative storage of sequences of vectors, as well as of other data structures such as trees (see the sequence-storage sketch after this list).<ref>{{cite journal | last1 = Snaider | first1 = Javier | last2 = Franklin | first2 = Stan | year = 2012 | title = Extended sparse distributed memory and sequence storage | url = https://www.semanticscholar.org/paper/20298cddb815e5bcbc055415c6a62865c076b3b9| journal = Cognitive Computation | volume = 4 | issue = 2| pages = 172–180 | doi=10.1007/s12559-012-9125-8| s2cid = 14319722 }}</ref>
* Constructing SDM from [[Biological neuron model|spiking neurons]]: Despite the biological likeness of SDM, most of the work undertaken to demonstrate its capabilities to date has used highly artificial neuron models that abstract away the actual behaviour of [[neurons]] in the [[brain]]. Recent work by [[Steve Furber]]'s lab at the [[University of Manchester]]<ref>{{cite journal | last1 = Furber | first1 = Steve B. |display-authors=etal | year = 2004 | title = Sparse distributed memory using N-of-M codes | journal = Neural Networks | volume = 17 | issue = 10| pages = 1437–1451 | doi=10.1016/j.neunet.2004.07.003| pmid = 15541946 }}</ref><ref>Sharp, Thomas. "[https://studentnet.cs.manchester.ac.uk/resources/library/thesis_abstracts/MSc09/FullText/SharpThomas.pdf Application of sparse distributed memory to the Inverted Pendulum Problem]". Diss. University of Manchester, 2009.</ref><ref>Bose, Joy. [https://www.academia.edu/download/7385022/bose07_phd.pdf Engineering a Sequence Machine Through Spiking Neurons Employing Rank-order Codes]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}. Diss. University of Manchester, 2007.</ref> proposed adaptations to SDM, e.g. incorporating N-of-M rank codes<ref>Simon Thorpe and Jacques Gautrais. [https://www.researchgate.net/profile/Jacques-Gautrais/publication/285068799_Rank_order_coding_Computational_neuroscience_trends_in_research/links/587ca2e108ae4445c069772a/Rank-order-coding-Computational-neuroscience-trends-in-research.pdf Rank order coding.] In Computational Neuroscience: Trends in Research, pages 113–118. Plenum Press, 1998.</ref><ref>{{cite journal | last1 = Furber | first1 = Stephen B. |display-authors=etal | year = 2007 | title = Sparse distributed memory using rank-order neural codes | journal = IEEE Transactions on Neural Networks| volume = 18 | issue = 3| pages = 648–659 | doi=10.1109/tnn.2006.890804| pmid = 17526333 | citeseerx = 10.1.1.686.6196 | s2cid = 14256161 }}</ref> into how [[Neural coding#Population coding|populations of neurons]] may encode information, which may make it possible to build an SDM variant from biologically plausible components (see the rank-order coding sketch after this list). This work has been incorporated into [[SpiNNaker|SpiNNaker (Spiking Neural Network Architecture)]], which is being used as the [[Neuromorphic engineering|neuromorphic computing]] platform for the [[Human Brain Project]].<ref>{{cite journal | last1 = Calimera | first1 = A | last2 = Macii | first2 = E | last3 = Poncino | first3 = M | year = 2013 | title = The Human Brain Project and neuromorphic computing | journal = Functional Neurology | volume = 28 | issue = 3| pages = 191–6 | pmid = 24139655 | pmc=3812737}}</ref>
* Non-random distribution of locations:<ref>{{cite journal | last1 = Hely | first1 = Tim | last2 = Willshaw | first2 = David J. | last3 = Hayes | first3 = Gillian M. | year = 1997 | title = A new approach to Kanerva's sparse distributed memory | url = https://semanticscholar.org/paper/2f55ae4083ca073344badc416b83b00fef0db04f| journal = IEEE Transactions on Neural Networks| volume = 8 | issue = 3| pages = 791–794 | doi=10.1109/72.572115| pmid = 18255679 | s2cid = 18628649 }}</ref><ref>Caraig, Lou Marvin. "[https://arxiv.org/abs/1207.5774 A New Training Algorithm for Kanerva's Sparse Distributed Memory]." arXiv preprint arXiv:1207.5774 (2012).</ref> Although the storage locations are initially distributed randomly in the binary N address space, the final distribution of locations depends upon the input patterns presented and may be non-random, allowing better flexibility and [[Generalization error|generalization]]. The data pattern is first stored at the locations that lie closest to the input address. The signal (i.e. the data pattern) then spreads throughout the memory, and a small percentage of the signal strength (e.g. 5%) is lost at each subsequent ___location encountered. Distributing the signal in this way removes the need for a fixed read/write radius, one of the problematic features of the original SDM. Locations selected in a write operation no longer all receive a copy of the original binary pattern with equal strength; instead, each receives a copy of the pattern weighted with a real value from 1.0 down to 0.05, stored in real-valued counters (rather than the binary counters in Kanerva's SDM). This rewards the nearest locations with a greater signal strength and uses the natural architecture of the SDM to attenuate the signal. Similarly, when reading from the memory, output from the nearest locations is given a greater weight than output from more distant locations. The total signal strength received by a ___location can then be used as a measure of that ___location's fitness, and the scheme is flexible to varying input, as the loss factor does not have to be changed for input patterns of different lengths (see the attenuated-signal sketch after this list).
* SDMSCue (Sparse Distributed Memory for Small Cues): Ashraf Anwar and Stan Franklin at the University of Memphis introduced this variant of SDM, capable of handling small cues, in 2002. The key idea is to use multiple reads/writes and space projections to reach a successively longer cue (see the projection sketch after this list).<ref>{{Cite book|title = A Sparse Distributed Memory Capable of Handling Small Cues, SDMSCue|publisher = Springer US|date = 2005-01-01|isbn = 978-0-387-24048-0|pages = 23–38|series = IFIP — The International Federation for Information Processing|language = en|first1 = Ashraf|last1 = Anwar|first2 = Stan|last2 = Franklin|editor-first = Michael K.|editor-last = Ng|editor-first2 = Andrei|editor-last2 = Doncescu|editor-first3 = Laurence T.|editor-last3 = Yang|editor-first4 = Tau|editor-last4 = Leng|doi = 10.1007/0-387-24049-7_2}}</ref>
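The sketches below illustrate some of these extensions in Python. They are toy illustrations under stated assumptions, not the authors' implementations; all parameter values (vector lengths, numbers of hard locations, activation radii) are arbitrary demo choices. First, a minimal sketch of the Integer SDM idea: addresses and words are integer vectors, distance is component-wise circular (modular) distance, and each active ___location keeps a counter per dimension per possible value. This counter layout is an assumption based loosely on the paper's description.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

N, R = 256, 16          # vector length and modulus (demo values)
M, RADIUS = 2000, 950   # hard locations and activation radius (demo values)

locations = rng.integers(0, R, size=(M, N))      # random integer addresses
counters = np.zeros((M, N, R), dtype=np.int32)   # per-___location, per-dimension, per-value counters

def circ_dist(a, b):
    """Component-wise circular (modular) distance on Z_R."""
    d = np.abs(a - b)
    return np.minimum(d, R - d)

def active(addr):
    """Hard locations whose modular Manhattan distance to addr is within RADIUS."""
    return np.nonzero(circ_dist(locations, addr).sum(axis=1) <= RADIUS)[0]

def write(addr, word):
    for loc in active(addr):
        counters[loc, np.arange(N), word] += 1   # each active ___location votes for the written values

def read(addr):
    votes = counters[active(addr)].sum(axis=0)   # (N, R) accumulated votes
    return votes.argmax(axis=1)                  # most-voted value per dimension

v = rng.integers(0, R, size=N)
write(v, v)                                      # auto-associative store
noisy = (v + rng.integers(-1, 2, size=N)) % R    # small modular perturbation
print((read(noisy) == v).mean())                 # fraction of components recovered
</syntaxhighlight>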
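Next, a sketch of sequence storage with word vectors longer than address vectors, in the spirit of extended SDM: each stored word holds the current element together with the next one, so the address (the first half of the word) acts as a pointer that lets recall iterate through the sequence. The concrete layout (word = current element concatenated with the next) is a simplifying assumption; the paper uses a more general scheme.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

A, W = 128, 256        # address length and (longer) word length, W = 2*A here
M, RADIUS = 1000, 52   # hard locations and Hamming activation radius (demo values)

hard = rng.integers(0, 2, size=(M, A))           # random binary hard addresses
counters = np.zeros((M, W), dtype=np.int32)      # word-sized counters per ___location

def active(addr):
    return np.nonzero((hard != addr).sum(axis=1) <= RADIUS)[0]

def write(addr, word):
    counters[active(addr)] += 2 * word - 1       # +1 for a 1 bit, -1 for a 0 bit

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

# Store a sequence: each word is (element, next element); the address is
# the element itself, i.e. the first A bits of the word.
seq = [rng.integers(0, 2, size=A) for _ in range(4)]
for cur, nxt in zip(seq, seq[1:]):
    write(cur, np.concatenate([cur, nxt]))

# Recall the sequence from its first element by following the "pointer"
# stored in the second half of each retrieved word.
cur = seq[0]
for expected in seq[1:]:
    cur = read(cur)[A:]                          # next element in the sequence
    print((cur == expected).all())
</syntaxhighlight>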
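A sketch of an N-of-M rank-order code of the kind referenced above: a stimulus is encoded by which N of M lines fire first and in what order, and decoding assigns geometrically decreasing significance to later ranks. The attenuation factor and the dot-product match score are assumptions for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

M, N = 32, 8        # M lines, of which the N earliest-firing carry the code
ALPHA = 0.9         # assumed significance attenuation per rank

def encode(x):
    """N-of-M rank-order code: indices of the N largest inputs, most
    significant first (interpreted as the earliest spikes)."""
    return np.argsort(x)[::-1][:N]

def decode(code):
    """Reconstruct a significance vector: weight ALPHA**rank on each fired
    line, zero elsewhere. Only the rank order is preserved, not amplitudes."""
    y = np.zeros(M)
    y[code] = ALPHA ** np.arange(N)
    return y

def similarity(c1, c2):
    """Dot product of decoded significance vectors, used as a match score."""
    return decode(c1) @ decode(c2)

x = rng.random(M)
c = encode(x)
print(c)                                      # firing order of the 8 earliest lines
print(similarity(c, c))                       # self-similarity (maximum score)
print(similarity(c, encode(rng.random(M))))   # an unrelated code scores lower
</syntaxhighlight>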
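A sketch of the non-random-distribution scheme with attenuated signal strength: locations are visited nearest-first with no fixed read/write radius, the deposited (and read) signal loses 5% of its strength per ___location, weights run from 1.0 down to the 0.05 cutoff, and counters are real-valued. Parameter values are demo choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

N, M = 128, 1000     # address length and number of hard locations (demo values)
LOSS = 0.05          # fraction of signal strength lost per ___location visited
FLOOR = 0.05         # weights run from 1.0 down to this cutoff

hard = rng.integers(0, 2, size=(M, N))
counters = np.zeros((M, N))                    # real-valued counters, not binary

def weights(addr):
    """Visit locations nearest-first; attenuate the signal by LOSS at each
    step until it drops below FLOOR. No fixed radius is needed."""
    order = np.argsort((hard != addr).sum(axis=1))      # nearest-first ordering
    n = int(np.ceil(np.log(FLOOR) / np.log(1 - LOSS)))  # locations the signal reaches
    return order[:n], (1 - LOSS) ** np.arange(n)

def write(addr, word):
    locs, w = weights(addr)
    counters[locs] += w[:, None] * (2 * word - 1)       # weighted deposit

def read(addr):
    locs, w = weights(addr)
    return (w @ counters[locs] > 0).astype(int)         # nearer locations weigh more

v = rng.integers(0, 2, size=N)
write(v, v)
noisy = v.copy()
noisy[:10] ^= 1                                         # flip 10 bits of the cue
print((read(noisy) == v).mean())                        # fraction of bits recovered
</syntaxhighlight>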
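Finally, a loose sketch of the SDMSCue idea of space projections and multiple reads: a short cue is matched only in the subspace of its known bits (with a proportionally smaller radius), and the retrieved word serves as a successively longer cue for the next read. The masking scheme and the two-pass loop are illustrative assumptions, not Anwar and Franklin's algorithm.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

N, M, RADIUS = 128, 1000, 52                   # demo values
hard = rng.integers(0, 2, size=(M, N))
counters = np.zeros((M, N), dtype=np.int32)

def write(addr, word):
    act = (hard != addr).sum(axis=1) <= RADIUS
    counters[act] += 2 * word - 1

def read_masked(cue, mask, radius):
    """Read using only the positions where mask is True: a projection of
    the address space onto the known part of the cue."""
    d = ((hard != cue) & mask).sum(axis=1)     # distance over known bits only
    act = d <= radius
    return (counters[act].sum(axis=0) > 0).astype(int)

v = rng.integers(0, 2, size=N)
write(v, v)

# A small cue: only the first 48 bits are known.
mask = np.zeros(N, dtype=bool)
mask[:48] = True
cue = np.where(mask, v, 0)

# Pass 1: read in the projected 48-bit space with a proportionally smaller
# radius; the result is a longer candidate cue.
cue = read_masked(cue, mask, radius=int(RADIUS * 48 / N))
# Pass 2: re-read with the full-length candidate as an ordinary cue.
result = read_masked(cue, np.ones(N, dtype=bool), RADIUS)
print((result == v).mean())                    # fraction of bits recovered
</syntaxhighlight>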