For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Because this single spike must carry all the information about the brief stimulus, it would seem that the timing of the spike itself must convey more information than the average frequency of action potentials over a given period of time. This model is especially important for [[sound localization]], which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information from a relatively short neural response. Additionally, if a low firing rate on the order of ten spikes per second must be distinguished from an arbitrarily close rate encoding a different stimulus, a neuron trying to discriminate the two stimuli may need to wait a second or more to accumulate enough information. This is inconsistent with the observation that many organisms can discriminate between stimuli within milliseconds, suggesting that a rate code is not the only model at work.<ref name="Theunissen F 1995">{{cite journal | last1 = Theunissen | first1 = F | last2 = Miller | first2 = JP | year = 1995 | title = Temporal Encoding in Nervous Systems: A Rigorous Definition | journal = Journal of Computational Neuroscience | volume = 2 | issue = 2| pages = 149–162 | doi=10.1007/bf00961885| pmid = 8521284 | s2cid = 206786736 }}</ref>
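The waiting-time argument above can be sketched with Poisson counting statistics. This is an illustrative back-of-the-envelope calculation, not taken from the cited source: to separate two rates, the expected spike-count difference over a window must exceed the Poisson counting noise.

```python
def discrimination_time(r1, r2, d_prime=1.0):
    """Rough observation window T (in seconds) needed so that the mean
    spike-count difference |r1 - r2| * T exceeds d_prime times the
    Poisson counting noise sqrt(r_mean * T).

    Solving |r1 - r2| * T = d_prime * sqrt(r_mean * T) for T gives
    T = d_prime**2 * r_mean / (r1 - r2)**2.
    Assumes Poisson spiking; purely illustrative."""
    r_mean = (r1 + r2) / 2.0
    return (d_prime ** 2) * r_mean / (r1 - r2) ** 2

# Distinguishing 10 Hz from a nearby 12 Hz rate requires a window of
# seconds, while 10 Hz vs. 20 Hz is resolvable in a fraction of a second.
slow = discrimination_time(10.0, 12.0)
fast = discrimination_time(10.0, 20.0)
```

Under these assumptions the required window grows as the inverse square of the rate difference, which is why close-together rates are slow to discriminate by spike counting alone.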
To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and first action potential, also called latency to first spike or time-to-first-spike.<ref>{{cite journal|last=Gollisch|first=T.|author2=Meister, M.|title=Rapid Neural Coding in the Retina with Relative Spike Latencies|journal=Science|date=22 February 2008|volume=319|issue=5866|pages=1108–1111|doi=10.1126/science.1149639|pmid=18292344|bibcode=2008Sci...319.1108G|s2cid=1032537}}</ref>
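The intuition behind time-to-first-spike coding can be illustrated with a toy integrate-to-threshold neuron (all parameters here are hypothetical, chosen only for illustration): a stronger stimulus drives the membrane to threshold sooner, so the latency of the very first spike already carries intensity information.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_spike_latency(intensity, threshold=1.0, dt=1e-4, noise=0.05):
    """Toy integrate-to-threshold model: the membrane variable v ramps up
    at a rate set by the stimulus intensity (plus a little noise), and the
    first spike fires when v crosses threshold. Hypothetical parameters."""
    v, t = 0.0, 0.0
    while v < threshold:
        v += (intensity + noise * rng.standard_normal()) * dt
        t += dt
    return t

# Mean latency is roughly threshold / intensity, so latency decreases with
# stimulus strength and a decoder can rank stimuli from single-spike timing.
latencies = [first_spike_latency(i) for i in (2.0, 5.0, 10.0)]
```

In this sketch the weakest stimulus yields the longest latency, mirroring the idea that relative spike latencies alone can encode a rapidly presented image.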
The mammalian [[gustatory system]] is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism.<ref>{{cite journal | last1 = Hallock | first1 = Robert M. | last2 = Di Lorenzo | first2 = Patricia M. | year = 2006 | title = Temporal coding in the gustatory system | doi = 10.1016/j.neubiorev.2006.07.005 | pmid = 16979239 | journal = Neuroscience & Biobehavioral Reviews | volume = 30 | issue = 8| pages = 1145–1160 | s2cid = 14739301 }}</ref> Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system – rate for basic tastant type, temporal for more specific differentiation.<ref name="Carleton A 2010">{{cite journal | last1 = Carleton | first1 = Alan | last2 = Accolla | first2 = Riccardo | last3 = Simon | first3 = Sidney A. | year = 2010 | title = Coding in the mammalian gustatory system | doi = 10.1016/j.tins.2010.04.002 | pmid = 20493563 | journal = Trends in Neurosciences | volume = 33 | issue = 7| pages = 326–334 | pmc = 2902637 }}</ref> Research on the mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and that this information is distinct from what rate coding schemes capture. Groups of neurons may synchronize in response to a stimulus.
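The claim that temporal patterns carry information beyond spike counts can be made concrete with two hypothetical spike trains (the timings below are invented for illustration): they have identical counts, so a pure rate code cannot separate them, yet a decoder that bins spike times distinguishes them easily.

```python
import numpy as np

# Two hypothetical responses to different bitter tastants over a 0.8 s
# window: equal spike counts, different temporal structure.
train_a = np.array([0.01, 0.03, 0.05, 0.30, 0.32])   # early burst + doublet
train_b = np.array([0.10, 0.25, 0.40, 0.55, 0.70])   # evenly spread

def binned(train, t_max=0.8, bins=8):
    """Histogram of spike times: a minimal temporal-pattern representation."""
    counts, _ = np.histogram(train, bins=bins, range=(0.0, t_max))
    return counts

# Same total count (same rate), but a nonzero pattern distance.
distance = np.abs(binned(train_a) - binned(train_b)).sum()
```

A rate decoder sees two identical responses of five spikes; the binned temporal decoder sees clearly different patterns, mirroring how two bitter tastants with similar spike counts could still be told apart.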
In studies of the primate frontal cortex, precise spike patterns lasting only a few milliseconds were found across small populations of neurons and were correlated with certain information-processing behaviors. However, little information could be extracted from these patterns; one possible theory is that they represent higher-order processing taking place in the brain.<ref name="Zador, Stevens"/>
The codings generated by algorithms implementing a linear generative model can be classified into codings with ''soft sparseness'' and those with ''hard sparseness''.<ref name=Rehn/> These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth [[Normal distribution|Gaussian]]-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, ''no'' or ''hardly any'' small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.<ref name=Rehn/>
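The difference between the two coefficient distributions can be sketched numerically. This is an illustrative construction, not from the cited source: "soft" sparseness is modeled here as a heavy-tailed Laplacian-like distribution (peakier than Gaussian, mostly small nonzero values), while "hard" sparseness is obtained by zeroing all small coefficients, leaving only a few active basis vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Soft sparseness (illustrative): Laplacian-distributed coefficients --
# many small absolute values, few large ones, nearly all nonzero, so
# many basis vectors contribute to each reconstruction.
soft = rng.laplace(0.0, 1.0, size=1000)

# Hard sparseness (illustrative): zero out every small coefficient, so
# hardly any small absolute values remain and few basis vectors are active.
hard = soft.copy()
hard[np.abs(hard) < 2.0] = 0.0

def active_fraction(coeffs):
    """Fraction of basis vectors with a nonzero coefficient."""
    return float(np.mean(np.abs(coeffs) > 1e-12))
```

Under this construction the hard-sparse code activates only a small fraction of the basis vectors, which is the metabolic appeal noted above: fewer active coefficients correspond to fewer firing neurons.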
Another measure of coding is whether it is ''critically complete'' or ''overcomplete''. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is ''overcomplete''. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise.<ref name=Olshausen>{{cite journal|first1=Bruno A.|last1=Olshausen|first2=David J.|last2=Field|title=Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?|journal=Vision Research|year=1997|volume=37|number=23|pages=3311–3325}}</ref>
Other models are based on [[matching pursuit]], a [[sparse approximation]] algorithm which finds the "best matching" projections of multidimensional data, and [[Sparse dictionary learning|dictionary learning]], a representation learning method which aims to find a [[sparse matrix]] representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.<ref>{{Cite journal|last1=Zhang|first1=Zhifeng|last2=Mallat|first2=Stephane G.|last3=Davis|first3=Geoffrey M.|date=July 1994|title=Adaptive time-frequency decompositions|journal=Optical Engineering|volume=33|issue=7|pages=2183–2192|doi=10.1117/12.173207|issn=1560-2303|bibcode=1994OptEn..33.2183D}}</ref><ref>{{Cite book|last1=Pati|first1=Y. C.|last2=Rezaiifar|first2=R.|last3=Krishnaprasad|first3=P. S.|title=Proceedings of 27th Asilomar Conference on Signals, Systems and Computers |chapter=Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition |date=November 1993|pages=40–44 vol.1|doi=10.1109/ACSSC.1993.342465|isbn=978-0-8186-4120-6|citeseerx=10.1.1.348.5735|s2cid=16513805}}</ref><ref>{{Cite journal|date=2009-05-01|title=CoSaMP: Iterative signal recovery from incomplete and inaccurate samples|journal=Applied and Computational Harmonic Analysis|volume=26|issue=3|pages=301–321|doi=10.1016/j.acha.2008.07.002|issn=1063-5203|last1=Needell|first1=D.|last2=Tropp|first2=J.A.|arxiv=0803.2392|s2cid=1642637 }}</ref>
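The greedy matching-pursuit procedure is short enough to sketch directly. The dictionary below is a random overcomplete one invented for illustration; at each step the algorithm picks the unit-norm atom most correlated with the residual and subtracts its projection.

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Minimal sketch of matching pursuit: greedily select the dictionary
    atom (column of D, assumed unit-norm) best matching the residual,
    accumulate its coefficient, and subtract its contribution."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        correlations = D.T @ residual          # match each atom to residual
        k = int(np.argmax(np.abs(correlations)))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * D[:, k]  # remove the chosen projection
    return coeffs, residual

# Illustrative overcomplete dictionary: 8 unit-norm atoms in 4 dimensions.
rng = np.random.default_rng(2)
D = rng.standard_normal((4, 8))
D /= np.linalg.norm(D, axis=0)

# A signal built sparsely from two atoms; the residual shrinks as the
# greedy selection proceeds, and D @ coeffs + residual always equals x.
x = 3.0 * D[:, 1] - 2.0 * D[:, 5]
coeffs, residual = matching_pursuit(x, D, n_iter=30)
```

Dictionary-learning methods go one step further than this sketch: they optimize the columns of D themselves so that typical inputs admit such sparse decompositions.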