Large memory storage and retrieval neural network

{{primary sources|date=August 2019}}
 
A '''large memory storage and retrieval neural network''' (LAMSTAR)<ref name="book2013">{{cite book|url={{google books |plainurl=y |id=W6W6CgAAQBAJ&pg=PP1}}|title=Principles of Artificial Neural Networks|last=Graupe|first=Daniel|publisher=World Scientific|year=2013|isbn=978-981-4522-74-8|pages=1–|ref=harv}}</ref><ref name="GrPatent">{{Patent|US|5920852 A|D. Graupe, "Large memory storage and retrieval (LAMSTAR) network", April 1996}}</ref> is a fast [[deep learning]] [[neural network]] of many layers that can use many filters simultaneously. These filters may be nonlinear, stochastic, logic, [[non-stationary]], or even non-analytical. They are biologically motivated and learn continuously.
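As an illustrative sketch only (the filter functions, signal, and names below are hypothetical examples, not part of the LAMSTAR specification), the following Python code shows how heterogeneous preprocessing filters of the kinds listed above might each produce a ''subword'' that is routed to its own layer:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical preprocessing filters of the kinds the article lists:
# nonlinear, stochastic, and logic-based, applied simultaneously.
def spectral_filter(x):
    return np.abs(np.fft.rfft(x))              # nonlinear: magnitude spectrum

def stochastic_filter(x, rng=np.random.default_rng(0)):
    return x + rng.normal(0.0, 0.01, x.shape)  # stochastic perturbation

def logic_filter(x):
    return (x > x.mean()).astype(float)        # logic / threshold filter

filters = [spectral_filter, stochastic_filter, logic_filter]

# Each filter output becomes a "subword" routed to its own layer.
signal = np.sin(np.linspace(0, 4 * np.pi, 64))
subwords = [f(signal) for f in filters]
for i, sw in enumerate(subwords):
    print(f"subword {i}: length {len(sw)}")
</syntaxhighlight>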
 
A LAMSTAR neural network may serve as a dynamic neural network in the spatial domain, the time domain, or both. Its speed is provided by [[Hebbian]] link-weights<ref name="book2013a">D. Graupe, "[https://books.google.com/books?id=W6W6CgAAQBAJ&printsec=frontcover#v=onepage&q&f=false Principles of Artificial Neural Networks]", 3rd Edition, World Scientific Publishers, 2013, pp. 203–274.</ref> that integrate the various, usually different filters (preprocessing functions) into its many layers and dynamically rank the significance of the various layers and functions relative to a given learning task. This vaguely imitates biological learning, which integrates various preprocessors ([[cochlea]], [[retina]], ''etc.'') and cortices ([[Auditory cortex|auditory]], [[Visual cortex|visual]], ''etc.'') and their various regions. Its deep learning capability is further enhanced by the use of inhibition and correlation, and by its ability to cope with incomplete data or with "lost" neurons or layers even in mid-task. The network is fully transparent due to its link weights, which allow dynamic determination of innovation and redundancy and facilitate the ranking of layers, filters, or individual neurons relative to a task.
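A minimal Python sketch of this link-weight mechanism follows. It assumes a simplified reward-and-punishment Hebbian update, as commonly described for LAMSTAR; the layer counts, learning rates, and toy training loop are illustrative choices, not values from the sources above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

n_layers, neurons_per_layer, n_outputs = 3, 8, 2
# link_weights[k][i, j]: Hebbian link from winning neuron i of
# layer k to output neuron j.
link_weights = [np.zeros((neurons_per_layer, n_outputs)) for _ in range(n_layers)]

def train_step(winners, correct_output, reward=0.05, punish=0.02):
    """Reward-and-punishment update: links from each layer's winning
    neuron to the correct output neuron are strengthened; links to
    the other outputs are weakened."""
    for k, i in enumerate(winners):
        for j in range(n_outputs):
            if j == correct_output:
                link_weights[k][i, j] += reward
            else:
                link_weights[k][i, j] = max(0.0, link_weights[k][i, j] - punish)

# Toy training: random winning neurons per layer, alternating target class.
for t in range(200):
    winners = rng.integers(0, neurons_per_layer, n_layers)
    train_step(winners, correct_output=t % 2)

# The same link-weights rank each layer's significance for the task:
# a layer whose links carry more total weight contributed more decisions.
ranking = sorted(range(n_layers),
                 key=lambda k: link_weights[k].sum(), reverse=True)
print("layer ranking by accumulated link-weight:", ranking)
</syntaxhighlight>

Because the link weights are inspected directly, ranking layers (or individual neurons) amounts to reading off accumulated weights, which is the sense in which the network is described as transparent.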