Large memory storage and retrieval neural network

{{primary sources|date=August 2019}}
 
A '''large memory storage and retrieval neural network''' (LAMSTAR)<ref name="book2013">{{cite book|url={{google books |plainurl=y |id=W6W6CgAAQBAJ&pg=PP1}}|title=Principles of Artificial Neural Networks |last=Graupe |first=Daniel |publisher=World Scientific|year=2013|isbn=978-981-4522-74-8|pages=1–|ref=harv}}</ref><ref name="GrPatent">{{Patent|US|5920852 A|D. Graupe, "Large memory storage and retrieval (LAMSTAR) network", April 1996}}</ref> is a fast [[deep learning]] [[neural network]] of many layers that can use many filters simultaneously. These filters may be nonlinear, stochastic, logic, [[non-stationary]], or even non-analytical. They are biologically motivated and learn continuously.
 
A LAMSTAR neural network may serve as a dynamic neural network in the spatial or time domain, or both. Its speed is provided by [[Hebbian]] link-weights{{sfn|Graupe|2013|pp=203–274}} that integrate the various, usually different, filters (preprocessing functions) into its many layers and dynamically rank the significance of the various layers and functions relative to a given learning task. This vaguely imitates biological learning, which integrates various preprocessors ([[cochlea]], [[retina]], ''etc.'') and cortexes ([[Auditory cortex|auditory]], [[Visual cortex|visual]], ''etc.'') and their various regions. Its deep learning capability is further enhanced by using inhibition and correlation, and by its ability to cope with incomplete data, or with "lost" neurons or layers, even in the middle of a task. It is fully transparent due to its link weights. The link weights allow dynamic determination of innovation and redundancy, and facilitate the ranking of layers, of filters, or of individual neurons relative to a task.
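The link-weight idea above can be sketched as a toy example: reward-driven Hebbian updates strengthen the links from each layer's winning neuron to the firing output neuron, and the resulting weight magnitudes rank the layers (filters) by significance. This is an illustrative sketch under assumed parameters (the layer counts, learning rate <code>eta</code>, and reward scheme are all hypothetical), not Graupe's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_layers = 3    # e.g. three different preprocessing filters (SOM-like modules)
n_neurons = 5   # candidate winning neurons per layer
n_outputs = 2   # output decision neurons

# Link weights L[k][i, j]: layer k, winning neuron i -> output neuron j.
L = [np.zeros((n_neurons, n_outputs)) for _ in range(n_layers)]

def hebbian_update(L, winners, output, reward=1.0, eta=0.2):
    """Strengthen (or, with negative reward, inhibit) the links from each
    layer's winning neuron to the firing output neuron."""
    for k, i in enumerate(winners):
        L[k][i, output] += eta * reward

# Simulated training: layer 0's winner is consistently informative for
# output 1, while the other layers' winners fire essentially at random.
for _ in range(50):
    winners = [0, rng.integers(n_neurons), rng.integers(n_neurons)]
    hebbian_update(L, winners, output=1)

# Rank layers by their strongest link weight: a layer whose reward is
# concentrated on one link is the most significant for this task.
ranking = sorted(range(n_layers), key=lambda k: -L[k].max())
print(ranking[0])  # prints 0: layer 0 ranks first
```

Because the informative layer accumulates all of its reward on a single link while the uninformative layers spread theirs across many, ranking by link-weight magnitude separates them without any retraining, which also makes the ranking transparent to inspection.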
 
LAMSTAR has been applied to many domains, including medical<ref>{{Cite journal|last1=Nigam|first1=Vivek Prakash|last2=Graupe|first2=Daniel|date=2004-01-01|title=A neural-network-based detection of epilepsy|journal=Neurological Research|volume=26|issue=1|pages=55–60|doi=10.1179/016164104773026534|issn=0161-6412|pmid=14977058|s2cid=10764633}}</ref><ref name=":11">{{Cite journal|last1=Waxman|first1=Jonathan A.|last2=Graupe|first2=Daniel|last3=Carley|first3=David W.|date=2010-04-01|title=Automated Prediction of Apnea and Hypopnea, Using a LAMSTAR Artificial Neural Network|journal=American Journal of Respiratory and Critical Care Medicine|volume=181|issue=7|pages=727–733|doi=10.1164/rccm.200907-1146oc|issn=1073-449X|pmid=20019342}}</ref><ref name="GrGrZh">{{cite journal|last1=Graupe|first1=D.|last2=Graupe|first2=M. H.|last3=Zhong|first3=Y.|last4=Jackson|first4=R. K.|year=2008|title=Blind adaptive filtering for non-invasive extraction of the fetal electrocardiogram and its non-stationarities|journal=Proc. Inst. Mech. Eng. H|volume=222|issue=8|pages=1221–1234|doi=10.1243/09544119jeim417|pmid=19143416|s2cid=40744228}}</ref> and financial predictions,{{sfn|Graupe|2013|pp=240–253}} adaptive filtering of noisy speech in unknown noise,<ref name="GrAbon">{{cite journal|last1=Graupe|first1=D.|last2=Abon|first2=J.|year=2002|title=A Neural Network for Blind Adaptive Filtering of Unknown Noise from Speech|url=https://www.tib.eu/en/search/id/BLCP:CN019373941/Blind-Adaptive-Filtering-of-Speech-from-Noise-of/|journal=Intelligent Engineering Systems Through Artificial Neural Networks|volume=12|pages=683–688|access-date=2017-06-14}}</ref> still-image recognition,{{sfn|Graupe|2013|pp=253–274}} video image recognition,<ref name="Girado">{{cite journal|last1=Girado|first1=J. I.|last2=Sandin|first2=D. J.|last3=DeFanti|first3=T. A.|editor-first1=Nasser M.|editor-first2=Aggelos K.|editor-last1=Nasrabadi|editor-last2=Katsaggelos|year=2003|title=Real-time camera-based face detection using a modified LAMSTAR neural network system|journal=Proc. SPIE 5015, Applications of Artificial Neural Networks in Image Processing VIII|series=Applications of Artificial Neural Networks in Image Processing VIII|volume=5015|pages=36–46|bibcode=2003SPIE.5015...36G|doi=10.1117/12.477405|s2cid=15918252}}</ref> software security<ref name="VenkSel">{{cite journal|last1=Venkatachalam|first1=V.|last2=Selvan|first2=S.|year=2007|title=Intrusion Detection using an Improved Competitive Learning Lamstar Network|journal=International Journal of Computer Science and Network Security|volume=7|issue=2|pages=255–263}}</ref> and adaptive control of non-linear systems.<ref>{{Cite web|url=https://www.researchgate.net/publication/262316982|title=Control of unstable nonlinear and nonstationary systems using LAMSTAR neural networks|last1=Graupe|first1=D.|last2=Smollack|first2=M.|date=2007|website=ResearchGate|publisher=Proceedings of 10th IASTED on Intelligent Control, Sect.592|pages=141–144|access-date=2017-06-14}}</ref> In 20 comparative studies, LAMSTAR showed a much faster learning speed and a somewhat lower error rate than a CNN based on [[ReLU]]-function filters and max pooling.<ref name="book1016">{{cite book|url={{google books |plainurl=y |id=e5hIDQAAQBAJ |page=57}}|title=Deep Learning Neural Networks: Design and Case Studies|last=Graupe|first=Daniel|date=7 July 2016|publisher=World Scientific Publishing Co Inc|isbn=978-981-314-647-1|pages=57–110}}</ref>
 
These applications demonstrate the network's ability to uncover aspects of the data that are hidden from shallow learning networks and from the human senses, as in predicting the onset of [[sleep apnea]] events,<ref name=":11" /> extracting the electrocardiogram of a fetus recorded from skin-surface electrodes placed on the mother's abdomen early in pregnancy,<ref name="GrGrZh" /> financial prediction,<ref name="book2013" /> and blind filtering of noisy speech.<ref name="GrAbon" />
 
LAMSTAR was proposed in 1996 and was further developed by Graupe and Kordylewski from 1997 to 2002.<ref>{{Cite book|last1=Graupe|first1=D.|last2=Kordylewski|first2=H.|date=August 1996|chapter=Network based on SOM (Self-Organizing-Map) modules combined with statistical decision tools|title=Proceedings of the 39th Midwest Symposium on Circuits and Systems|volume=1|pages=471–474|doi=10.1109/mwscas.1996.594203|isbn=978-0-7803-3636-0|s2cid=62437626}}</ref><ref>{{Cite journal|last1=Graupe|first1=D.|last2=Kordylewski|first2=H.|date=1998-03-01|title=A Large Memory Storage and Retrieval Neural Network for Adaptive Retrieval and Diagnosis|journal=International Journal of Software Engineering and Knowledge Engineering|volume=08|issue=1|pages=115–138|doi=10.1142/s0218194098000091|issn=0218-1940}}</ref><ref name="Kordylew">{{cite journal|last1=Kordylewski|first1=H.|last2=Graupe|first2=D.|last3=Liu|first3=K.|year=2001|title=A novel large-memory neural network as an aid in medical diagnosis applications|journal=IEEE Transactions on Information Technology in Biomedicine|volume=5|issue=3|pages=202–209|doi=10.1109/4233.945291|pmid=11550842|s2cid=11783734}}</ref> A modified version, known as LAMSTAR 2, was developed by Schneider and Graupe in 2008.<ref name="Schn">{{cite journal|last1=Schneider|first1=N. C.|last2=Graupe|year=2008|title=A modified LAMSTAR neural network and its applications|journal=International Journal of Neural Systems|volume=18|issue=4|pages=331–337|doi=10.1142/s0129065708001634|pmid=18763732}}</ref>{{sfn|Graupe|2013|p=217}}
 
== References ==
{{Reflist}}
 
[[Category:Deep learning]]