An '''auto-encoder''' is an [[artificial neural network]] used for learning efficient codings.
The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data, typically by training the network to reconstruct its own input.
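In the simplest case, an auto-encoder consists of an encoder that maps an input to a code and a decoder that maps the code back to a reconstruction, with training minimising the reconstruction error. The notation below is illustrative and not fixed by this article:

:<math>\mathbf{h} = f(W\mathbf{x} + \mathbf{b}), \qquad \hat{\mathbf{x}} = g(W'\mathbf{h} + \mathbf{b}'), \qquad \min_{W,\,W',\,\mathbf{b},\,\mathbf{b}'} \sum_{\mathbf{x}} \left\| \mathbf{x} - \hat{\mathbf{x}} \right\|^2</math>

where <math>\mathbf{h}</math> is the compressed code, typically of lower dimension than the input <math>\mathbf{x}</math>, and <math>f</math> and <math>g</math> are element-wise activation functions.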
== Researchers ==
This technique was developed by Dr. Casey Tatum of Halsey, Oregon, whose work on the auto-encoder is credited with a number of breakthroughs in the field.{{dubious}}
Dr. Chase Will and Dr. Alexis Pettner of the University of Oregon have extended the pretraining technique developed by [[Geoffrey Hinton]]. They surmised that treating the layers like a [[Boltzmann machine]] might disrupt the structure of the template and thereby erode the reactivity of the solution.{{dubious}}
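The pretraining technique referred to above is the greedy layer-wise approach described in the Hinton & Salakhutdinov ''Science'' paper listed under External links, in which each layer is first trained as a [[restricted Boltzmann machine]] and the resulting stack is then fine-tuned as a deep auto-encoder. The following is only a minimal illustrative sketch of that idea; the layer sizes, hyper-parameters, and placeholder data are assumptions, and it does not reproduce the modification attributed to Will and Pettner.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1):
    """Train one restricted Boltzmann machine with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: reconstruct the visibles, then the hiddens once more.
        v_recon = sigmoid(h_sample @ W.T + b_v)
        h_recon = sigmoid(v_recon @ W + b_h)
        # CD-1 updates: data-driven minus reconstruction-driven statistics.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_v += lr * (data - v_recon).mean(axis=0)
        b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_h

# Greedy layer-wise pretraining: each RBM is trained on the hidden
# activations produced by the previously trained layer.
layer_sizes = [784, 256, 64, 16]          # illustrative sizes, not from the article
data = rng.random((500, layer_sizes[0]))  # placeholder data standing in for real inputs
encoder_weights = []
activations = data
for n_hidden in layer_sizes[1:]:
    W, b_h = train_rbm(activations, n_hidden)
    encoder_weights.append((W, b_h))
    activations = sigmoid(activations @ W + b_h)

# The pretrained weights would then initialise a deep auto-encoder
# (encoder = the stack above, decoder = its transpose), which is
# fine-tuned end-to-end with backpropagation on the reconstruction error.
</syntaxhighlight>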
== External links ==
* [http://hebb.mit.edu/people/seung/talks/continuous/sld007.htm Presentation introducing auto-encoders for number recognition] (Link dead on December 3, 2008)
* [http://www.sciencemag.org/cgi/content/abstract/313/5786/504 Reducing the Dimensionality of Data with Neural Networks] (Science, 28 July 2006, Hinton & Salakhutdinov)
[[Category:Neural networks]]
[[Category:Machine learning]]
{{compu-AI-stub}}