Neural coding
 
=== Population coding ===
Population coding is a method to represent signals by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine something about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience. It grasps the essential features of neural coding and yet is simple enough for theoretic analysis.<ref name="Wu">{{cite journal |vauthors=Wu S, Amari S, Nakahara H |title=Population coding and decoding in a neural field: a computational study |journal=Neural Comput |volume=14 |issue=5 |pages=999–1026 |date=May 2002 |pmid=11972905 |doi=10.1162/089976602753633367 |s2cid=1122223 }}</ref> Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain.
 
For example, in the visual [[Medial temporal lobe|medial temporal]] area (MT), neurons are tuned to the moving direction.<ref name="Maunsell">{{cite journal |vauthors=Maunsell JH, Van Essen DC |title=Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation |journal=J. Neurophysiol. |volume=49 |issue=5 |pages=1127–47 |date=May 1983 |pmid=6864242 |doi=10.1152/jn.1983.49.5.1127 |s2cid=8708245 |url=https://semanticscholar.org/paper/0bb3df8cfca9f04bc5ad21cd9851603a7a1fb31f }}</ref> In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and [[Normal distribution|bell-shaped]] activity pattern across the population. The moving direction of the object is retrieved from the population activity and is thereby immune to the fluctuations in a single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron will fire for multiple target directions. However, it fires fastest for one direction and more slowly depending on how close the target is to the neuron's "preferred" direction.<ref>{{Cite web|url=http://homepage.psy.utexas.edu/homepage/class/psy394U/hayhoe/IntroSensoryMotorSystems/week6/Ch38.pdf|title=Intro to Sensory Motor Systems Ch. 38 page 766}}</ref><ref>Science. 1986 Sep 26;233(4771):1416-9</ref> If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion.{{citation needed|date=November 2013}} This particular population code is referred to as [[population vector]] coding.
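The vector-sum readout can be illustrated with a short numerical sketch. The cosine tuning, neuron count, and Gaussian noise below are illustrative assumptions, not values taken from the cited studies:

```python
import numpy as np

# Hypothetical population of direction-tuned neurons (e.g. MT or motor cortex).
# Each neuron's rate falls off with the angle between the stimulus direction
# and its preferred direction (cosine tuning, an illustrative assumption).
preferred = np.linspace(0, 2 * np.pi, 16, endpoint=False)  # preferred directions

def firing_rates(stimulus_angle, baseline=10.0, gain=8.0):
    """Cosine-tuned mean firing rates (spikes/s) for the whole population."""
    return baseline + gain * np.cos(stimulus_angle - preferred)

def population_vector(rates):
    """Decode direction as the rate-weighted vector sum of preferred directions."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

stimulus = np.deg2rad(75)
rng = np.random.default_rng(0)
noisy = firing_rates(stimulus) + rng.normal(0, 2.0, preferred.size)  # one noisy trial
decoded = population_vector(noisy)
print(np.rad2deg(decoded))  # close to 75 despite single-neuron noise
```

Because the noise of individual neurons is averaged out by the vector sum, the decoded direction is far more reliable than any single neuron's response.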
 
In general, the population version of the code simply indicates that signal representations are the result of the activity of many neurons. It cannot be called a separate coding model, as the question of how individual neurons encode their part of the signal representation remains open.
Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for the neural representation of auditory acoustic stimuli. These exploit both the place, or tuning, within the auditory nerve and the phase-locking within each auditory nerve fiber. The first ALSR representation was for steady-state vowels;<ref>{{cite journal|last1=Sachs|first1=Murray B.|last2=Young|first2=Eric D.|title=Representation of steady-state vowels in the temporal aspects of the discharge patterns of populations of auditory-nerve fibers|journal= The Journal of the Acoustical Society of America|date=November 1979|volume=66|issue=5|pages=1381–1403|doi=10.1121/1.383532|pmid=500976|bibcode=1979ASAJ...66.1381Y}}</ref> ALSR representations of pitch and formant frequencies in complex, non-steady-state stimuli were later demonstrated for voiced pitch,<ref>{{cite journal|last1=Miller|first1=M.I.|last2=Sachs|first2=M.B.|title=Representation of voice pitch in discharge patterns of auditory-nerve fibers|journal=Hearing Research|date=June 1984|volume=14|issue=3|pages=257–279|pmid=6480513|doi=10.1016/0378-5955(84)90054-6|s2cid=4704044}}</ref> and formant representations in consonant-vowel syllables.<ref>{{cite journal|last1=Miller|first1=M.I.|last2=Sachs|first2=M.B.|title=Representation of stop consonants in the discharge patterns of auditory-nerve fibers|journal= The Journal of the Acoustical Society of America|date=1983|volume=74|issue=2|pages=502–517|doi=10.1121/1.389816|pmid=6619427|bibcode=1983ASAJ...74..502M}}</ref>
The advantage of such representations is that global features such as pitch or formant transition profiles can be represented across the entire nerve simultaneously via both rate and place coding.
 
Some models try to sidestep this difficulty by claiming that the individual activity does not contain any information and that the meaning should be sought in the combined patterns. In such models, neurons are considered to fire in random order with a Poisson distribution, and such chaos creates order in the form of a population code.<ref>{{Cite journal|last=Freeman|first=Walter J.|date=1992|title=TUTORIAL ON NEUROBIOLOGY: FROM SINGLE NEURONS TO BRAIN CHAOS|url=https://www.worldscientific.com/doi/abs/10.1142/S0218127492000653|journal=International Journal of Bifurcation and Chaos|language=en|volume=02|issue=03|pages=451–482|doi=10.1142/S0218127492000653|issn=0218-1274}}</ref> This hypothesis can be seen as a reaction to the fact that decades of attempts to decipher the neural code by counting spikes and searching for meaning in the rate or temporal structure of spike sequences have not led to a conclusive result.
Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronal [[Statistical variability|variability]] and the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously.<ref name="Hubel">{{cite journal |vauthors=Hubel DH, Wiesel TN |title=Receptive fields of single neurones in the cat's striate cortex |journal=J. Physiol. |volume=148 |issue= 3|pages=574–91 |date=October 1959 |pmid=14403679 |pmc=1363130 |url=http://www.jphysiol.org/cgi/pmidlookup?view=long&pmid=14403679 |doi=10.1113/jphysiol.1959.sp006308}}</ref> Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus.
 
But such population models say nothing about the mechanism of operation or the rules of such a code. Moreover, they contradict the observed reality of neural activity. Sensitive measurement methods using implantable electrodes and detailed study of the temporal structure of spikes and interspike intervals show that neural activity does not follow a Poisson distribution, and that each stimulus attribute changes not only the absolute number of spikes but also their temporal pattern.<ref>{{Cite journal|last=Victor|first=J. D.|last2=Purpura|first2=K. P.|date=1996|title=Nature and precision of temporal coding in visual cortex: a metric-space analysis|url=https://www.physiology.org/doi/10.1152/jn.1996.76.2.1310|journal=Journal of Neurophysiology|language=en|volume=76|issue=2|pages=1310–1326|doi=10.1152/jn.1996.76.2.1310|issn=0022-3077}}</ref>
Typically an encoding function has a peak value such that activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value.{{citation needed|date=November 2013}} It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons.

Despite the enormous variability in neuronal activity, spike sequences are very precise. This precision is essential for the transmission of information using a high-resolution code. Each neuron has its place in forming meanings and a specialisation as a filter processing specific signal parameters. However, the question arises of how the activity patterns of individual neurons integrate into a general representation of a signal with all its parameters, and how representations of individual signals merge into a single and coherent model of reality while maintaining their individuality. In neuroscience, this is called the "[[binding problem]]".

Some population code models describe this process mathematically as the sum of the vectors of all neurons involved in encoding a given signal. This particular population code is referred to as [[population vector]] coding and is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of [[maximum likelihood]] based on a multivariate distribution of the neuronal responses.<ref name="Wu" /> These models can assume independence, second order correlations,<ref>{{Citation|author=Schneidman, E|author2=Berry, MJ|author3=Segev, R|author4=Bialek, W|year=2006|title=Weak Pairwise Correlations Imply Strongly Correlated Network States in a Neural Population|journal=Nature|volume=440|issue=7087|pages=1007–1012|doi=10.1038/nature04701|pmid=16625187|pmc=1785327|arxiv=q-bio/0512013|bibcode=2006Natur.440.1007S}}</ref> or even more detailed dependencies such as higher order [[Maximum entropy probability distribution|maximum entropy models]],<ref>{{Citation|author=Amari, SL|year=2001|title=Information Geometry on Hierarchy of Probability Distributions|journal=IEEE Transactions on Information Theory|volume=47|issue=5|pages=1701–1711|citeseerx=10.1.1.46.5226|doi=10.1109/18.930911}}</ref> or [[Copula (statistics)|copulas]].<ref>{{Citation|author=Onken, A|author2=Grünewälder, S|author3=Munk, MHJ|author4=Obermayer, K|year=2009|title=Analyzing Short-Term Noise Dependencies of Spike-Counts in Macaque Prefrontal Cortex Using Copulas and the Flashlight Transformation|journal=PLOS Comput Biol|volume=5|issue=11|page=e1000577|doi=10.1371/journal.pcbi.1000577|pmid=19956759|pmc=2776173|bibcode=2009PLSCB...5E0577O}}</ref>

However, a common problem with such mathematical models is the lack of an explanation of the physical mechanism that could implement the observed unity of the model of reality created by the brain while preserving the individuality of signal representations.
 
====Correlation coding====
The correlation coding model of [[neuron]]al firing claims that correlations between [[action potential]]s, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total [[mutual information]] present in the two spike trains about a stimulus feature.<ref>{{cite journal | last1 = Johnson | first1 = KO | date = Jun 1980 | title = Sensory discrimination: neural processes preceding discrimination decision | journal = J Neurophysiol | volume = 43 | issue = 6| pages = 1793–815 | pmid=7411183| doi = 10.1152/jn.1980.43.6.1793 }}</ref> However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign.<ref>{{cite journal | last1 = Panzeri | last2 = Schultz | last3 = Treves | last4 = Rolls | year = 1999 | title = Correlations and the encoding of information in the nervous system|pmc=1689940| doi = 10.1098/rspb.1999.0736| journal = Proc Biol Sci | volume = 266 | issue = 1423| pages = 1001–12 | pmid=10610508}}</ref> Correlations can also carry information not present in the average firing rate of pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.<ref>{{cite journal | date = Jun 1996 | title = Primary cortical representation of sounds by the coordination of action-potential timing| journal = Nature | volume = 381 | issue = 6583| pages = 610–3 | doi=10.1038/381610a0 | pmid=8637597 | last1 = Merzenich | first1 = MM| bibcode =1996Natur.381..610D| s2cid = 4258853}}</ref>
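The distinction between rate and correlation can be illustrated with a toy simulation. The firing probabilities and synchrony model below are arbitrary choices for illustration, not parameters from the cited experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

def spike_trains(p, corr, n_bins=20000):
    """Two binary spike trains with firing probability ~p per bin; a fraction
    `corr` of the spiking probability comes from shared (synchronous) events,
    so mean rate stays approximately p while synchrony varies."""
    shared = rng.random(n_bins) < p * corr            # synchronous spikes
    a = shared | (rng.random(n_bins) < p * (1 - corr))
    b = shared | (rng.random(n_bins) < p * (1 - corr))
    return a, b

# "Tone" condition: extra synchrony; "silence": independent firing.
a_tone, b_tone = spike_trains(p=0.05, corr=0.5)
a_sil,  b_sil  = spike_trains(p=0.05, corr=0.0)

def sync_rate(a, b):
    """Fraction of bins in which both neurons spike together."""
    return np.mean(a & b)

print(a_tone.mean(), a_sil.mean())                        # similar mean rates
print(sync_rate(a_tone, b_tone), sync_rate(a_sil, b_sil)) # synchrony differs
```

A decoder reading only mean firing rates cannot separate the two conditions, while one reading coincident spikes can, mirroring the marmoset result described above.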
 
The idea of correlations between action potentials can be seen as a step from the average rate code towards a more adequate model that recognizes the information density of the spatio-temporal patterns of neuronal activity. However, it cannot be called a neural code per se.
 
==== Independent-spike coding ====
==== Position coding ====
[[File:PopulationCode.svg|thumb|Plot of typical position coding]]
A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate.
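The two readouts mentioned above can be compared in a small simulation, assuming Gaussian tuning curves and independent Poisson spiking (both modelling choices for illustration, not experimental claims):

```python
import numpy as np

# Assumed Gaussian tuning curves over a 1-D stimulus with Poisson spike counts.
centers = np.linspace(0, 10, 21)        # preferred stimulus values
width, peak_rate = 1.0, 10.0            # tuning width and peak count

def mean_counts(s):
    return peak_rate * np.exp(-0.5 * ((s - centers) / width) ** 2)

def peak_decode(counts):
    """Read out the mean of the most active neuron."""
    return centers[np.argmax(counts)]

def ml_decode(counts, grid=np.linspace(0, 10, 501)):
    """Maximum likelihood under independent Poisson noise (constant dropped)."""
    ll = [np.sum(counts * np.log(mean_counts(s)) - mean_counts(s)) for s in grid]
    return grid[np.argmax(ll)]

rng = np.random.default_rng(1)
true_s = 4.2
trials = [rng.poisson(mean_counts(true_s)) for _ in range(100)]
peak_err = np.mean([abs(peak_decode(c) - true_s) for c in trials])
ml_err = np.mean([abs(ml_decode(c) - true_s) for c in trials])
print(peak_err, ml_err)  # ML error is typically the smaller of the two
```

The peak readout is limited by noise and by the spacing of the preferred values, whereas maximum likelihood pools evidence from the whole population.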
 
This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in [[grid cell]]s that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.<ref name="Mat">{{cite journal |vauthors=Mathis A, Herz AV, Stemmler MB |title=Resolution of nested neuronal representations can be exponential in the number of neurons |journal=Phys. Rev. Lett. |volume=109 |issue=1 |pages=018103 |date=July 2012 |pmid=23031134 |bibcode=2012PhRvL.109a8103M |doi=10.1103/PhysRevLett.109.018103|doi-access=free }}</ref>
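The exponential scaling of multi-peak codes can be illustrated with a toy modular code reminiscent of grid cells. The periods below are arbitrary illustrative choices, not values from the cited study:

```python
from math import gcd

# Toy 'grid cell' modules: each module reports position modulo its period.
periods = [5, 7, 9]  # pairwise coprime periods
assert all(gcd(a, b) == 1 for i, a in enumerate(periods) for b in periods[i + 1:])

def encode(x):
    """Phase of position x in every module (position modulo each period)."""
    return tuple(x % p for p in periods)

# With coprime periods, every position in [0, 5*7*9) maps to a distinct phase
# vector, so the encodable range grows with the *product* of the periods
# rather than their sum, as with single-peak tuning curves.
codes = {encode(x) for x in range(5 * 7 * 9)}
print(len(codes))  # 315 distinct codes from three small modules
```

Adding a module multiplies the range, which is the sense in which precision can scale exponentially with the number of neurons.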
[[File:NoisyNeuralResponse.png|thumb|Neural responses are noisy and unreliable.]]
 
This coding scheme tries to overcome the problems of the rate coding model by stating that even if any individual neuron is too noisy to faithfully encode the variable using rate coding, an entire population ensures greater fidelity and precision, since the maximum likelihood estimate is more accurate. It remains to answer the question: if individual neurons are too slow to encode the signals, how can the population be fast enough? This returns us to the question of the essence of the neural code.
 
=== Sparse coding ===
The sparse code is when each item is encoded by the strong activation of a relatively small set of neurons. For each item to be encoded, this is a different subset of all available neurons. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known.

Code sparseness may refer to temporal sparseness ("a relatively small number of time periods are active") or to sparseness in an activated population of neurons. In the latter case, sparseness may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since, compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces [[wavelet]]-like oriented filters that resemble the receptive fields of simple cells in the visual cortex.<ref>{{cite journal | last1 = Olshausen | first1 = Bruno A | last2 = Field | first2 = David J | year = 1996 | title = Emergence of simple-cell receptive field properties by learning a sparse code for natural images | url = http://www.cs.ubc.ca/~little/cpsc425/olshausen_field_nature_1996.pdf | journal = Nature | volume = 381 | issue = 6583 | pages = 607–609 | doi = 10.1038/381607a0 | pmid = 8637596 | bibcode = 1996Natur.381..607O | s2cid = 4358477 | access-date = 2016-03-29 | archive-url = https://web.archive.org/web/20151123113216/http://www.cs.ubc.ca/~little/cpsc425/olshausen_field_nature_1996.pdf | archive-date = 2015-11-23 | url-status = dead }}</ref> The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system.<ref>{{cite journal|last1=Gupta|first1=N|last2=Stopfer|first2=M|title=A temporal channel for information in sparse sensory coding.|journal=Current Biology|date=6 October 2014|volume=24|issue=19|pages=2247–56|pmid=25264257|doi=10.1016/j.cub.2014.08.021|pmc=4189991}}</ref>
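Both notions of sparseness can be computed directly from a spike-count matrix. The Poisson rates and the active/inactive threshold below are illustrative assumptions:

```python
import numpy as np

# Population sparseness: fraction of neurons active (above threshold) in one
# time bin. Temporal sparseness: fraction of bins in which a given neuron is
# active. Thresholding counts at zero is a simplifying choice.
rng = np.random.default_rng(3)
counts = rng.poisson(0.3, size=(100, 50))    # 100 neurons x 50 time bins

active = counts > 0
population_sparseness = active.mean(axis=0)  # per bin: fraction of neurons active
temporal_sparseness = active.mean(axis=1)    # per neuron: fraction of bins active

# For Poisson(0.3) the expected active fraction is 1 - exp(-0.3), about 0.26.
print(population_sparseness.mean())
```

Low values on either axis correspond to a sparse code in the respective sense described above.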
 
Code sparseness may also refer to a small number of basic patterns used to encode the signals.
 
==== Mathematical modelling ====
Given a potentially large set of input patterns, sparse coding algorithms (e.g. [[Autoencoder#Sparse autoencoder|sparse autoencoder]]) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.
 
==== Linear generative model ====
Most models of sparse coding are based on the linear generative model.<ref name=Rehn>{{cite journal|first1=Martin|last1=Rehn|first2=Friedrich T.|last2=Sommer|title=A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields|journal=Journal of Computational Neuroscience|year=2007|volume=22|issue=2|pages=135–146|doi=10.1007/s10827-006-0003-9|pmid=17053994|s2cid=294586|url=http://redwood.berkeley.edu/fsommer/papers/rehnsommer07jcns.pdf}}</ref> In this model, the symbols are combined in a [[Linear combination|linear fashion]] to approximate the input.
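A minimal sketch of the linear generative model follows, with sparse coefficients inferred by iterative soft-thresholding (ISTA, a standard algorithm chosen here for illustration, not one prescribed by the cited work; the dictionary and sparsity level are arbitrary):

```python
import numpy as np

# Linear generative model: input x is approximated as D @ z, where the columns
# of D are basis patterns (a 'dictionary') and z is a sparse coefficient vector.
rng = np.random.default_rng(4)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary columns

z_true = np.zeros(64)
z_true[[3, 17, 40]] = [1.5, -2.0, 1.0]   # only 3 of 64 coefficients non-zero
x = D @ z_true                            # input generated by the model

def ista(x, D, lam=0.05, steps=500, lr=0.1):
    """Iterative soft-thresholding: gradient step on ||x - Dz||^2, then shrink."""
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = z + lr * D.T @ (x - D @ z)                          # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft threshold
    return z

z = ista(x, D)
print(np.sum(np.abs(z) > 0.5), np.linalg.norm(x - D @ z))  # few active, small error
```

The inferred `z` is sparse: a handful of strongly active "neurons" suffice to reconstruct the input as a linear combination of basis patterns.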
 
Other models are based on [[matching pursuit]], a [[sparse approximation]] algorithm which finds the "best matching" projections of multidimensional data, and [[Sparse dictionary learning|dictionary learning]], a representation learning method which aims to find a [[sparse matrix]] representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.<ref>{{Cite journal|last1=Zhang|first1=Zhifeng|last2=Mallat|first2=Stephane G.|last3=Davis|first3=Geoffrey M.|date=July 1994|title=Adaptive time-frequency decompositions|journal=Optical Engineering|volume=33|issue=7|pages=2183–2192|doi=10.1117/12.173207|issn=1560-2303|bibcode=1994OptEn..33.2183D}}</ref><ref>{{Cite book|last1=Pati|first1=Y. C.|last2=Rezaiifar|first2=R.|last3=Krishnaprasad|first3=P. S.|date=November 1993|title=Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition|journal=Proceedings of 27th Asilomar Conference on Signals, Systems and Computers|pages=40–44 vol.1|doi=10.1109/ACSSC.1993.342465|isbn=978-0-8186-4120-6|citeseerx=10.1.1.348.5735|s2cid=16513805}}</ref><ref>{{Cite journal|date=2009-05-01|title=CoSaMP: Iterative signal recovery from incomplete and inaccurate samples|journal=Applied and Computational Harmonic Analysis|volume=26|issue=3|pages=301–321|doi=10.1016/j.acha.2008.07.002|issn=1063-5203|last1=Needell|first1=D.|last2=Tropp|first2=J.A.|arxiv=0803.2392}}</ref>
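Matching pursuit itself is short enough to sketch directly; the random dictionary and two-atom signal below are a toy example:

```python
import numpy as np

# Matching pursuit: greedily pick the dictionary column ('atom') most
# correlated with the current residual, record its coefficient, subtract its
# contribution, and repeat.
rng = np.random.default_rng(5)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

x = 2.0 * D[:, 7] - 1.0 * D[:, 23]        # signal built from two atoms

def matching_pursuit(x, D, n_iter=30, tol=1e-6):
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        dots = D.T @ residual             # correlation with every atom
        k = np.argmax(np.abs(dots))       # best-matching atom
        coeffs[k] += dots[k]              # accumulate its coefficient
        residual -= dots[k] * D[:, k]     # remove its contribution
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

coeffs, residual = matching_pursuit(x, D)
print(np.linalg.norm(residual))           # residual shrinks as atoms are added
```

Only a few atoms receive large coefficients, giving a sparse representation of the input in terms of the dictionary.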
 
Overall, despite their rigorous mathematical descriptions, the above models stumble when it comes to describing a physical mechanism that could implement such algorithms.
==== Biological evidence ====
 
[[Sparse coding]] may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific [[associative memory (psychology)|associative memories]] in which only a few neurons out of a [[Neural ensemble|population]] respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli. Theoretical work on [[sparse distributed memory]] has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations.<ref>Kanerva, Pentti. Sparse distributed memory. MIT press, 1988</ref> Experimentally, sparse representations of sensory information have been observed in many systems, including vision,<ref>{{cite journal | last1 = Vinje | first1 = WE | last2 = Gallant | first2 = JL | year = 2000 | title = Sparse coding and decorrelation in primary visual cortex during natural vision | journal = Science | volume = 287 | issue = 5456| pages = 1273–1276 | pmid = 10678835 | doi=10.1126/science.287.5456.1273| bibcode = 2000Sci...287.1273V | citeseerx = 10.1.1.456.2467 }}</ref> audition,<ref>{{cite journal | last1 = Hromádka | first1 = T | last2 = Deweese | first2 = MR | last3 = Zador | first3 = AM | year = 2008 | title = Sparse representation of sounds in the unanesthetized auditory cortex | journal = PLOS Biol | volume = 6 | issue = 1| page = e16 | pmid = 18232737 | doi=10.1371/journal.pbio.0060016 | pmc=2214813}}</ref> touch,<ref>{{cite journal | last1 = Crochet | first1 = S | last2 = Poulet | first2 = JFA | last3 = Kremer | first3 = Y | last4 = Petersen | first4 = CCH | year = 2011 | title = Synaptic mechanisms underlying sparse coding of active touch | journal = Neuron | volume = 69 | issue = 6| pages = 1160–1175 | pmid = 21435560 | doi=10.1016/j.neuron.2011.02.022| doi-access = free }}</ref> and olfaction.<ref>{{cite journal | last1 = Ito | first1 = I | last2 = Ong | first2 = RCY | last3 = Raman | first3 = B | last4 = Stopfer | first4 = M | year = 2008 | title = Sparse odor representation and olfactory learning | journal = Nat Neurosci | volume = 11 | issue = 10| pages = 1177–1184 | pmid = 18794840 | doi=10.1038/nn.2192 | pmc=3124899}}</ref>
 
However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.
In the ''[[Drosophila]]'' [[olfactory system]], sparse odor coding by the [[Kenyon cell]]s of the [[Mushroom bodies|mushroom body]] is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories.<ref>A sparse memory is a precise memory. Oxford Science blog. 28 Feb 2014. http://www.ox.ac.uk/news/science-blog/sparse-memory-precise-memory</ref> Sparseness is controlled by a negative feedback circuit between Kenyon cells and [[GABAergic]] anterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.<ref>Lin, Andrew C., et al. "[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4000970/ Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination]." Nature Neuroscience 17.4 (2014): 559-568.</ref>
 
== See also ==