=== Population coding ===
Population coding is a method to represent stimuli by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. For example, in the visual area MT, neurons are tuned to the direction of motion, so a stimulus direction can be read out from the pooled activity of the population.
In general, the population version of the code simply indicates that signal representations are the result of the activity of many neurons. It cannot be called a separate coding model as the question of how individual neurons encode their part of the signal representation remains.
Some models try to surpass this difficulty by claiming that the individual activity does not contain any information and the meaning should be sought in the combined patterns. In such models, neurons are considered to fire in random order with a Poisson distribution, and such chaos creates order in the form of a population code.<ref>{{Cite journal|last=Freeman|first=Walter J.|date=1992|title=TUTORIAL ON NEUROBIOLOGY: FROM SINGLE NEURONS TO BRAIN CHAOS|url=https://www.worldscientific.com/doi/abs/10.1142/S0218127492000653|journal=International Journal of Bifurcation and Chaos|language=en|volume=02|issue=03|pages=451–482|doi=10.1142/S0218127492000653|issn=0218-1274}}</ref> This hypothesis can be called a reaction to the fact that decades of attempts to decipher the neural code by counting spikes and searching for meaning in the rate or temporal structure of their sequences have not led to a meaningful result.
But such population models do not say anything about the mechanism of operation and the rules of such a code. Moreover, they contradict the reality of neural activity. Subtle measurement methods using implantable electrodes and a detailed study of the temporal structure of the spikes and interspike intervals show that it does not have the character of a Poisson distribution, and each of the stimulus attributes changes not only the absolute number of spikes but also their temporal pattern.<ref>{{Cite journal|last=Victor|first=J. D.|last2=Purpura|first2=K. P.|date=1996|title=Nature and precision of temporal coding in visual cortex: a metric-space analysis|url=https://www.physiology.org/doi/10.1152/jn.1996.76.2.1310|journal=Journal of Neurophysiology|language=en|volume=76|issue=2|pages=1310–1326|doi=10.1152/jn.1996.76.2.1310|issn=0022-3077}}</ref>
Despite the enormous variability in neuronal activity, spike sequences are very accurate. This accuracy is essential for the transmission of information using a high-resolution code. Each neuron has its own place in forming meanings, specialising as a filter that processes specific signal parameters. However, the question arises of how the activity patterns of individual neurons integrate into a general representation of a signal with all its parameters, and how representations of individual signals merge into a single, coherent model of reality while maintaining their individuality. In neuroscience, this is called the "[[binding problem]]."
Some population code models describe this process mathematically as the sum of the vectors of all neurons involved in encoding a given signal. This particular population code is referred to as [[population vector]] coding and is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of [[maximum likelihood]] based on a multivariate distribution of the neuronal responses.<ref name="Wu">{{cite journal|vauthors=Wu S, Amari S, Nakahara H|date=May 2002|title=Population coding and decoding in a neural field: a computational study|journal=Neural Comput|volume=14|issue=5|pages=999–1026|doi=10.1162/089976602753633367|pmid=11972905|s2cid=1122223}}</ref> These models can assume independence, second order correlations, <ref>{{Citation|author=Schneidman, E|title=Weak Pairwise Correlations Imply Strongly Correlated Network States in a Neural Population|journal=Nature|volume=440|issue=7087|pages=1007–1012|year=2006|arxiv=q-bio/0512013|bibcode=2006Natur.440.1007S|doi=10.1038/nature04701|pmc=1785327|pmid=16625187|author2=Berry, MJ|author3=Segev, R|author4=Bialek, W}}</ref> or even more detailed dependencies such as higher order [[Maximum entropy probability distribution|maximum entropy models]],<ref>{{Citation|author=Amari, SL|title=Information Geometry on Hierarchy of Probability Distributions|journal=IEEE Transactions on Information Theory|volume=47|issue=5|pages=1701–1711|year=2001|citeseerx=10.1.1.46.5226|doi=10.1109/18.930911}}</ref> or [[Copula (statistics)|copulas]].<ref>{{Citation|author=Onken, A|title=Analyzing Short-Term Noise Dependencies of Spike-Counts in Macaque Prefrontal Cortex Using Copulas and the Flashlight Transformation|journal=PLOS Comput Biol|volume=5|issue=11|page=e1000577|year=2009|bibcode=2009PLSCB...5E0577O|doi=10.1371/journal.pcbi.1000577|pmc=2776173|pmid=19956759|author2=Grünewälder, S|author3=Munk, MHJ|author4=Obermayer, K}}</ref>
However, a common problem with such mathematical models is the lack of an explanation of the physical mechanism that could implement the observed unity of the model of reality created by the brain while preserving the individuality of signal representations.
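The population vector idea can be illustrated with a minimal sketch. Here a hypothetical population of eight cosine-tuned, direction-selective neurons (in the spirit of classic motor-cortex studies) encodes a movement direction, which is decoded as the rate-weighted vector sum of the preferred directions; all tuning parameters are invented for illustration:

```python
import numpy as np

# Hypothetical population: 8 direction-tuned neurons with evenly spaced
# preferred directions and cosine tuning curves (all parameters invented).
preferred = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def firing_rates(stimulus_direction, baseline=10.0, gain=8.0):
    """Mean firing rate of each neuron for a given stimulus direction."""
    return baseline + gain * np.cos(stimulus_direction - preferred)

def population_vector(rates):
    """Decode direction as the rate-weighted vector sum of preferred directions."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

stimulus = np.deg2rad(75.0)
decoded = population_vector(firing_rates(stimulus))
print(np.rad2deg(decoded))  # recovers ~75 degrees for noise-free rates
```

For noise-free cosine tuning with evenly spaced preferred directions, the baseline terms cancel in the vector sum and the stimulus direction is recovered exactly; with noisy rates the same estimator performs the simple averaging described above.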
====Correlation coding====
The correlation coding model of [[neuron]]al firing claims that correlations between [[action potential]]s, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total [[mutual information]] present in the two spike trains about a stimulus feature.<ref>{{cite journal | last1 = Johnson | first1 = KO | date = Jun 1980 | title = Sensory discrimination: neural processes preceding discrimination decision | journal = J Neurophysiol | volume = 43 | issue = 6| pages = 1793–815 | pmid=7411183| doi = 10.1152/jn.1980.43.6.1793 }}</ref> However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign.<ref>{{cite journal | last1 = Panzeri | last2 = Schultz | last3 = Treves | last4 = Rolls | year = 1999 | title = Correlations and the encoding of information in the nervous system|pmc=1689940| doi = 10.1098/rspb.1999.0736| journal = Proc Biol Sci | volume = 266 | issue = 1423| pages = 1001–12 | pmid=10610508}}</ref> Correlations can also carry information not present in the average firing rates of pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.<ref>{{cite journal | date = Jun 1996 | title = Primary cortical representation of sounds by the coordination of action-potential timing| journal = Nature | volume = 381 | issue = 6583| pages = 610–3 | doi=10.1038/381610a0 | pmid=8637597 | last1 = Merzenich | first1 = MM| bibcode =1996Natur.381..610D| s2cid = 4258853}}</ref>
The idea of correlations between action potentials marks a movement away from the pure average-rate code towards a more adequate model, one that acknowledges the information density of the spatio-temporal patterns of neuronal activity. However, it cannot be called a neural code per se.
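The distinction between rate and correlation can be shown with a toy example. The spike trains below are invented for illustration (not data from the cited studies): two conditions produce identical firing rates in each neuron, yet differ sharply in the number of coincident spikes, so only a coincidence count separates them:

```python
import numpy as np

# Two hypothetical conditions on a 20-bin discretized time axis; spike
# counts per neuron are identical, but coincident firing differs.
neuron_a_tone = np.array([1,0,1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,1,0,0])
neuron_b_tone = np.array([1,0,1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,1,0,0])  # synchronous

neuron_a_silence = np.array([1,0,1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,1,0,0])
neuron_b_silence = np.array([0,1,0,1,1,0,0,1,0,1,0,1,0,0,0,0,0,0,0,0])  # asynchronous

def rate(train):
    """Fraction of bins containing a spike."""
    return train.sum() / len(train)

def coincidences(a, b):
    """Number of bins in which both neurons fire."""
    return int(np.sum(a * b))

print(rate(neuron_b_tone), rate(neuron_b_silence))       # identical rates
print(coincidences(neuron_a_tone, neuron_b_tone))        # many coincidences
print(coincidences(neuron_a_silence, neuron_b_silence))  # none
```

A rate-based decoder sees no difference between the conditions, while a coincidence-based one separates them perfectly, mirroring the marmoset auditory cortex finding described above.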
==== Independent-spike coding ====
==== Position coding ====
[[File:PopulationCode.svg|thumb|Plot of typical position coding]]
A typical population code involves neurons with Gaussian tuning curves whose means vary linearly with the stimulus intensity, meaning that each neuron responds most strongly (in terms of spikes per second) to a stimulus near its mean. The actual intensity can then be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response.
This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in [[grid cell]]s that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.<ref name="Mat">{{cite journal |vauthors=Mathis A, Herz AV, Stemmler MB |title=Resolution of nested neuronal representations can be exponential in the number of neurons |journal=Phys. Rev. Lett. |volume=109 |issue=1 |pages=018103 |date=July 2012 |pmid=23031134 |bibcode=2012PhRvL.109a8103M |doi=10.1103/PhysRevLett.109.018103|doi-access=free }}</ref>
This coding scheme tries to overcome the problems of the rate coding model by stating that, even if any individual neuron is too noisy to faithfully encode the variable using rate coding, an entire population ensures greater fidelity and precision, since the maximum likelihood estimate is more accurate. The question remains: if individual neurons are too slow to encode signals on their own, how can the population be fast enough? We are back to the issue of the essence of the neural code.
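The maximum likelihood reconstruction mentioned above can be sketched as follows, assuming a population of Gaussian tuning curves and independent Poisson spiking; the tuning width, peak rate, and stimulus range are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

centers = np.linspace(0.0, 10.0, 21)   # preferred stimulus values (assumed)
sigma, peak_rate = 1.0, 50.0           # tuning width and peak spike count (assumed)

def tuning(s):
    """Mean spike count of each neuron for stimulus value s (Gaussian tuning)."""
    return peak_rate * np.exp(-0.5 * ((s - centers) / sigma) ** 2)

def ml_decode(counts, candidates=np.linspace(0.0, 10.0, 1001)):
    """Maximum-likelihood stimulus estimate under independent Poisson spiking:
    log P(counts | s) = sum_i [counts_i * log f_i(s) - f_i(s)] + const."""
    log_like = [np.sum(counts * np.log(tuning(s) + 1e-12) - tuning(s))
                for s in candidates]
    return candidates[np.argmax(log_like)]

true_s = 4.3
counts = rng.poisson(tuning(true_s))   # one noisy population response
print(ml_decode(counts))               # close to 4.3 despite Poisson noise
```

Even though each neuron's Poisson count is individually unreliable, pooling the whole population through the likelihood recovers the stimulus with high precision, which is the claim of the position coding scheme.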
=== Sparse coding ===
Code sparseness may also refer to a small number of basic patterns used to encode the signals.
As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, this may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since, compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces [[wavelet]]-like oriented filters that resemble the receptive fields of simple cells in the visual cortex.<ref>{{cite journal | last1 = Olshausen | first1 = Bruno A | last2 = Field | first2 = David J | year = 1996 | title = Emergence of simple-cell receptive field properties by learning a sparse code for natural images | url = http://www.cs.ubc.ca/~little/cpsc425/olshausen_field_nature_1996.pdf | journal = Nature | volume = 381 | issue = 6583 | pages = 607–609 | doi = 10.1038/381607a0 | pmid = 8637596 | bibcode = 1996Natur.381..607O | s2cid = 4358477 | access-date = 2016-03-29 | archive-url = https://web.archive.org/web/20151123113216/http://www.cs.ubc.ca/~little/cpsc425/olshausen_field_nature_1996.pdf | archive-date = 2015-11-23 | url-status = dead }}</ref> The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system.<ref>{{cite journal|last1=Gupta|first1=N|last2=Stopfer|first2=M|title=A temporal channel for information in sparse sensory coding.|journal=Current Biology|date=6 October 2014|volume=24|issue=19|pages=2247–56|pmid=25264257|doi=10.1016/j.cub.2014.08.021|pmc=4189991}}</ref>
==== Mathematical modelling ====
Given a potentially large set of input patterns, sparse coding algorithms (e.g. [[Autoencoder#Sparse autoencoder|sparse autoencoder]]) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.
Most models of sparse coding are based on the linear generative model.<ref name=Rehn>{{cite journal|first1=Martin|last1=Rehn|first2=Friedrich T.|last2=Sommer|title=A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields|journal=Journal of Computational Neuroscience|year=2007|volume=22|issue=2|pages=135–146|doi=10.1007/s10827-006-0003-9|pmid=17053994|s2cid=294586|url=http://redwood.berkeley.edu/fsommer/papers/rehnsommer07jcns.pdf}}</ref> In this model, the symbols are combined in a [[Linear combination|linear fashion]] to approximate the input.
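A minimal sketch of this linear generative model is given below. The dictionary here is a fixed random one (no dictionary learning), and the sparse code is found by iterative soft-thresholding (ISTA), one standard algorithm for this objective; all sizes and coefficients are invented for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Shrink values toward zero by t; exact zeros enforce sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam=0.05, steps=500):
    """Find a sparse code a minimizing 0.5*||x - D@a||^2 + lam*||a||_1
    by iterative soft-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = soft_threshold(a + D.T @ (x - D @ a) / L, lam / L)
    return a

# Invented overcomplete dictionary: 8 unit-norm patterns in a 4-D input space.
rng = np.random.default_rng(1)
D = rng.standard_normal((4, 8))
D /= np.linalg.norm(D, axis=0)

x = 1.5 * D[:, 2] - 0.8 * D[:, 5]          # input mixed from two patterns
a = ista(D, x)
print(np.flatnonzero(np.abs(a) > 0.05))    # only a few coefficients are active
```

The recovered code reconstructs the input from a handful of active dictionary elements, which is the "small number of representative patterns" described above.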
Other models are based on [[matching pursuit]], a [[sparse approximation]] algorithm which finds the "best matching" projections of multidimensional data, and [[Sparse dictionary learning|dictionary learning]], a representation learning method which aims to find a [[sparse matrix]] representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.<ref>{{Cite journal|last1=Zhang|first1=Zhifeng|last2=Mallat|first2=Stephane G.|last3=Davis|first3=Geoffrey M.|date=July 1994|title=Adaptive time-frequency decompositions|journal=Optical Engineering|volume=33|issue=7|pages=2183–2192|doi=10.1117/12.173207|issn=1560-2303|bibcode=1994OptEn..33.2183D}}</ref><ref>{{Cite book|last1=Pati|first1=Y. C.|last2=Rezaiifar|first2=R.|last3=Krishnaprasad|first3=P. S.|date=November 1993|title=Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition|journal=Proceedings of 27th Asilomar Conference on Signals, Systems and Computers|pages=40–44 vol.1|doi=10.1109/ACSSC.1993.342465|isbn=978-0-8186-4120-6|citeseerx=10.1.1.348.5735|s2cid=16513805}}</ref><ref>{{Cite journal|date=2009-05-01|title=CoSaMP: Iterative signal recovery from incomplete and inaccurate samples|journal=Applied and Computational Harmonic Analysis|volume=26|issue=3|pages=301–321|doi=10.1016/j.acha.2008.07.002|issn=1063-5203|last1=Needell|first1=D.|last2=Tropp|first2=J.A.|arxiv=0803.2392}}</ref>
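The greedy step at the heart of matching pursuit can be sketched on a toy dictionary (the atoms and input below are invented for illustration): at each iteration, the dictionary column best correlated with the residual is selected and its projection subtracted.

```python
import numpy as np

def matching_pursuit(D, x, n_atoms=2):
    """Greedy matching pursuit over unit-norm dictionary columns ("atoms")."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        projections = D.T @ residual          # inner product with each atom
        k = int(np.argmax(np.abs(projections)))
        coeffs[k] += projections[k]           # keep the best-matching atom
        residual -= projections[k] * D[:, k]  # remove its contribution
    return coeffs, residual

# Toy dictionary: three orthonormal atoms plus one redundant diagonal atom.
D = np.column_stack([np.eye(3), np.ones(3) / np.sqrt(3)])
x = 2.0 * D[:, 0] + 0.5 * D[:, 2]             # input built from two atoms

coeffs, residual = matching_pursuit(D, x)
print(coeffs)                   # atoms 0 and 2 recovered with weights 2.0 and 0.5
print(np.linalg.norm(residual)) # 0.0
```

Because the two generating atoms are orthogonal here, two greedy steps reconstruct the input exactly; with a coherent dictionary the residual shrinks more gradually.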
Overall, despite rigorous mathematical descriptions, the above models stumble when it comes to describing a physical mechanism that could implement such algorithms.
==== Biological evidence ====
[[Sparse coding]] may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific [[associative memory (psychology)|associative memories]] in which only a few neurons out of a [[Neural ensemble|population]] respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli. Theoretical work on [[sparse distributed memory]] has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations.<ref>Kanerva, Pentti. Sparse distributed memory. MIT press, 1988</ref> Experimentally, sparse representations of sensory information have been observed in many systems, including vision,<ref>{{cite journal | last1 = Vinje | first1 = WE | last2 = Gallant | first2 = JL | year = 2000 | title = Sparse coding and decorrelation in primary visual cortex during natural vision | journal = Science | volume = 287 | issue = 5456| pages = 1273–1276 | pmid = 10678835 | doi=10.1126/science.287.5456.1273| bibcode = 2000Sci...287.1273V | citeseerx = 10.1.1.456.2467 }}</ref> audition,<ref>{{cite journal | last1 = Hromádka | first1 = T | last2 = Deweese | first2 = MR | last3 = Zador | first3 = AM | year = 2008 | title = Sparse representation of sounds in the unanesthetized auditory cortex | journal = PLOS Biol | volume = 6 | issue = 1| page = e16 | pmid = 18232737 | doi=10.1371/journal.pbio.0060016 | pmc=2214813}}</ref> touch,<ref>{{cite journal | last1 = Crochet | first1 = S | last2 = Poulet | first2 = JFA | last3 = Kremer | first3 = Y | last4 = Petersen | first4 = CCH | year = 2011 | title = Synaptic mechanisms underlying sparse coding of active touch | journal = Neuron | volume = 69 | issue = 6| pages = 1160–1175 | pmid = 21435560 | doi=10.1016/j.neuron.2011.02.022| 
doi-access = free }}</ref> and olfaction.<ref>{{cite journal | last1 = Ito | first1 = I | last2 = Ong | first2 = RCY | last3 = Raman | first3 = B | last4 = Stopfer | first4 = M | year = 2008 | title = Sparse odor representation and olfactory learning | journal = Nat Neurosci | volume = 11 | issue = 10| pages = 1177–1184 | pmid = 18794840 | doi=10.1038/nn.2192 | pmc=3124899}}</ref>
However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.
== See also ==