===Neuroscience===
The activity of [[neuron]]s in the brain can be modelled statistically. Each neuron at any time is either active (+) or inactive (−). The active neurons are those that send an [[action potential]] down the axon in any given time window, and the inactive ones are those that do not.
Following the general approach of Jaynes,<ref>{{Citation| author=Jaynes, E. T.| title= Information Theory and Statistical Mechanics | journal= Physical Review| volume = 106 | pages= 620–630 | year= 1957| doi=10.1103/PhysRev.106.620| postscript=.|bibcode = 1957PhRv..106..620J| issue=4 | s2cid= 17870175 }}</ref><ref>{{Citation| author= Jaynes, Edwin T.| title = Information Theory and Statistical Mechanics II |journal = Physical Review |volume =108 | pages = 171–190 | year = 1957| doi= 10.1103/PhysRev.108.171| postscript= .|bibcode = 1957PhRv..108..171J| issue= 2 }}</ref> a later interpretation of Schneidman, Berry, Segev and Bialek<ref>{{Citation|author1=Elad Schneidman |author2=Michael J. Berry |author3=Ronen Segev |author4=William Bialek | title= Weak pairwise correlations imply strongly correlated network states in a neural population| journal=Nature| volume= 440 | pages= 1007–1012| year=2006| doi= 10.1038/nature04701| pmid= 16625187| issue= 7087| pmc= 1785327| postscript= .|arxiv = q-bio/0512013 |bibcode = 2006Natur.440.1007S }}</ref> is that the Ising model is useful for any model of neural function, because a statistical model for neural activity should be chosen using the principle of maximum entropy.
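A pairwise maximum-entropy model of this kind is fit so that it reproduces the measured mean activities and pairwise correlations of the recorded neurons. The following is a minimal sketch of those constraint statistics, assuming (hypothetically) that the spike trains have already been discretized into ±1 values per time bin; the array names and sizes are illustrative only:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical example data: 5 neurons observed over 1000 time bins,
# coded as +1 (spiking in the bin) or -1 (silent).
rng = np.random.default_rng(0)
spikes = rng.choice([-1, 1], size=(1000, 5), p=[0.8, 0.2])

# A pairwise maximum-entropy (Ising) model is constrained to reproduce
# exactly these two sets of measured statistics:
mean_activity = spikes.mean(axis=0)                # <s_i> per neuron
pairwise_corr = spikes.T @ spikes / len(spikes)    # <s_i s_j> per pair

print(mean_activity)
print(pairwise_corr)
</syntaxhighlight>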
===Spin glasses===
The Ising model can also describe so-called [[spin glasses]], with the usual Hamiltonian <math display="inline">H=-\frac{1}{2}\,\sum J_{i,k}\,S_i\,S_k,</math> where the ''S''-variables describe the Ising spins, while the ''J<sub>i,k</sub>'' are taken from a random distribution. For spin glasses a typical distribution chooses antiferromagnetic bonds with probability ''p'' and ferromagnetic bonds with probability 1 − ''p'' (also known as the random-bond Ising model). These bonds stay fixed, or "quenched", even in the presence of thermal fluctuations. When ''p'' = 0 we recover the original Ising model. This system is of interest in its own right; in particular, it has "non-ergodic" properties leading to strange relaxation behaviour. Much attention has also been attracted by the related bond- and site-diluted Ising model, especially in two dimensions, which shows intriguing critical behavior.<ref>{{Citation|author= J-S Wang, [[Walter Selke|W Selke]], VB Andreichenko, and VS Dotsenko| title= The critical behaviour of the two-dimensional dilute model|journal= Physica A|volume= 164| issue= 2| pages= 221–239 |year= 1990|doi=10.1016/0378-4371(90)90196-Y|bibcode = 1990PhyA..164..221W }}</ref>
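As a concrete illustration, the quenched couplings of the random-bond model on a two-dimensional square lattice can be drawn once and then held fixed while the spins fluctuate. The sketch below is an assumption-laden illustration (the lattice size, the value of ''p'', and the ±1 coupling strengths are all chosen arbitrarily), not a prescribed implementation:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)
L = 16    # lattice side length (illustrative value)
p = 0.3   # probability of an antiferromagnetic bond

# Quenched random bonds: J = -1 (antiferromagnetic) with probability p,
# J = +1 (ferromagnetic) with probability 1 - p. Drawn once, then fixed.
J_right = rng.choice([-1.0, 1.0], size=(L, L), p=[p, 1 - p])
J_down = rng.choice([-1.0, 1.0], size=(L, L), p=[p, 1 - p])

def energy(spins):
    """H = -sum J_ik S_i S_k, each nearest-neighbour bond counted once
    (equivalent to the 1/2-prefactor double sum), periodic boundaries."""
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -np.sum(J_right * spins * right) - np.sum(J_down * spins * down)

spins = rng.choice([-1, 1], size=(L, L))
print(energy(spins))
</syntaxhighlight>

Setting ''p'' = 0 makes every bond ferromagnetic, recovering the original Ising model, in line with the remark above.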
=== Artificial neural network ===
{{Main|Hopfield network}}
The Ising model was instrumental in the development of the [[Hopfield network]]. The original Ising model is a model of equilibrium. In 1963, [[Roy J. Glauber]] studied the Ising model evolving in time, as a process towards thermal equilibrium ([[Glauber dynamics]]), thereby adding the component of time.<ref name=":222">{{cite journal |last1=Glauber |first1=Roy J. |date=February 1963 |title=Time-Dependent Statistics of the Ising Model |url=https://aip.scitation.org/doi/abs/10.1063/1.1703954 |journal=Journal of Mathematical Physics |volume=4 |issue=2 |pages=294–307 |doi=10.1063/1.1703954 |access-date=2021-03-21}}</ref> Kaoru Nakano (1971)<ref name="Nakano1971">{{cite book |last1=Nakano |first1=Kaoru |title=Pattern Recognition and Machine Learning |date=1971 |isbn=978-1-4615-7568-9 |pages=172–186 |chapter=Learning Process in a Model of Associative Memory |doi=10.1007/978-1-4615-7566-5_15}}</ref><ref name="Nakano1972">{{cite journal |last1=Nakano |first1=Kaoru |date=1972 |title=Associatron-A Model of Associative Memory |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=SMC-2 |issue=3 |pages=380–388 |doi=10.1109/TSMC.1972.4309133}}</ref> and [[Shun'ichi Amari]] (1972)<ref name="Amari19722">{{cite journal |last1=Amari |first1=Shun-Ichi |date=1972 |title=Learning patterns and pattern sequences by self-organizing nets of threshold elements |journal=IEEE Transactions |volume=C |issue=21 |pages=1197–1206}}</ref> proposed modifying the weights of an Ising model by the [[Hebbian theory|Hebbian learning]] rule as a model of associative memory. The same idea was published by {{ill|William A. Little (physicist)|lt=William A. Little|de|William A. Little}} (1974),<ref name="little74">{{cite journal |last=Little |first=W. A. |year=1974 |title=The Existence of Persistent States in the Brain |journal=Mathematical Biosciences |volume=19 |issue=1–2 |pages=101–120 |doi=10.1016/0025-5564(74)90031-5}}</ref> who was cited by Hopfield in his 1982 paper.
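To make the Hebbian construction concrete, the following sketch stores ±1 patterns in the coupling matrix and retrieves one by repeated sign updates, which descend the Ising-type energy <math display="inline">E=-\tfrac{1}{2}\,s^\mathsf{T} W s</math>. It is a minimal illustration with arbitrarily chosen example patterns, not a reconstruction of any of the cited papers' code:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Hebbian rule: W_ij = (1/N) * sum_mu x_i^mu x_j^mu, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=10):
    """Asynchronous sign updates; each flip is non-increasing in the
    energy E = -1/2 s^T W s, so the state settles into a stored pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store two +/-1 patterns, then recover the first from a corrupted copy.
patterns = np.array([[1, 1, -1, -1, 1, -1],
                     [-1, 1, 1, -1, -1, 1]])
W = hebbian_weights(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]        # flip one bit
print(recall(W, noisy))     # expected: the first stored pattern
</syntaxhighlight>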
The [[Spin glass#Sherrington–Kirkpatrick model|Sherrington–Kirkpatrick model]] of spin glass, published in 1975,<ref>{{Cite journal |last1=Sherrington |first1=David |last2=Kirkpatrick |first2=Scott |date=1975-12-29 |title=Solvable Model of a Spin-Glass |url=https://link.aps.org/doi/10.1103/PhysRevLett.35.1792 |journal=Physical Review Letters |volume=35 |issue=26 |pages=1792–1796 |bibcode=1975PhRvL..35.1792S |doi=10.1103/PhysRevLett.35.1792 |issn=0031-9007}}</ref> is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that the energy function of the SK model is highly likely to have many local minima. In his 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.<ref name="Hopfield1982">{{cite journal |last1=Hopfield |first1=J. J. |date=1982 |title=Neural networks and physical systems with emergent collective computational abilities |journal=Proceedings of the National Academy of Sciences |volume=79 |issue=8 |pages=2554–2558 |bibcode=1982PNAS...79.2554H |doi=10.1073/pnas.79.8.2554 |pmc=346238 |pmid=6953413 |doi-access=free}}</ref> In a 1984 paper he extended this to continuous activation functions.<ref name=":03">{{cite journal |last1=Hopfield |first1=J. J. |date=1984 |title=Neurons with graded response have collective computational properties like those of two-state neurons |journal=Proceedings of the National Academy of Sciences |volume=81 |issue=10 |pages=3088–3092 |bibcode=1984PNAS...81.3088H |doi=10.1073/pnas.81.10.3088 |pmc=345226 |pmid=6587342 |doi-access=free}}</ref> It became a standard model for the study of neural networks through statistical mechanics.<ref>{{Cite book |last1=Engel |first1=A. |title=Statistical mechanics of learning |last2=Broeck |first2=C. van den |date=2001 |publisher=Cambridge University Press |isbn=978-0-521-77307-2 |___location=Cambridge, UK; New York, NY}}</ref><ref>{{Cite journal |last1=Seung |first1=H. S. |last2=Sompolinsky |first2=H. |last3=Tishby |first3=N. |date=1992-04-01 |title=Statistical mechanics of learning from examples |url=https://journals.aps.org/pra/abstract/10.1103/PhysRevA.45.6056 |journal=Physical Review A |volume=45 |issue=8 |pages=6056–6091 |bibcode=1992PhRvA..45.6056S |doi=10.1103/PhysRevA.45.6056 |pmid=9907706}}</ref>
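The ruggedness Sherrington and Kirkpatrick identified can be illustrated numerically: with Gaussian couplings of variance 1/''N'', greedy zero-temperature descent from different random starts typically ends in different local minima. The sketch below assumes this standard coupling distribution; the system size and the number of restarts are arbitrary choices:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N = 50
# SK model: symmetric Gaussian couplings with variance 1/N, zero diagonal.
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def descend(s):
    """Greedy zero-temperature dynamics: flip any spin that lowers
    E = -1/2 s^T J s until no single flip helps (a local minimum)."""
    s = s.copy()
    improved = True
    while improved:
        improved = False
        for i in range(N):
            if s[i] * (J[i] @ s) < 0:   # flipping spin i lowers the energy
                s[i] = -s[i]
                improved = True
    return s

# Different random starts typically reach different local minima
# (configurations related by a global spin flip count separately here).
minima = {tuple(descend(rng.choice([-1, 1], size=N))) for _ in range(20)}
print(len(minima), "distinct local minima found from 20 starts")
</syntaxhighlight>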
===Sea ice===
===Cayley tree topologies and large neural networks===