{{WikiProject banner shell|
{{WikiProject Computer science|importance=high}}
{{WikiProject Computing|importance=high}}
}}
== Proposal: Interpretability ==
It would probably be beneficial to include some details, especially with regard to sparse autoencoders (SAEs), on the relationship between autoencoders and neural network interpretability.[[User:madeleinesinging]] <!--Template:Undated--><small class="autosigned">— Preceding [[Wikipedia:Signatures|undated]] comment added 06:31, 8 June 2025 (UTC)</small> <!--Autosigned by SineBot-->
== Training section ==
Answer: The outputs are the same as the inputs, i.e. y_i = x_i. The autoencoder tries to learn the identity function. Although it might seem that if the number of hidden units is greater than or equal to the number of input (and output) units the learned weights would reduce to the trivial identity, in practice this does not turn out to be the case (probably because the weights start so small). Sparse autoencoders, where only a limited number of hidden units can be activated at once, avoid this problem even in theory. [[Special:Contributions/216.169.216.1|216.169.216.1]] ([[User talk:216.169.216.1|talk]]) 16:47, 17 September 2013 (UTC) Dave Rimshnick
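:A minimal sketch illustrating the point above (my own illustration, not from the article): a tiny NumPy autoencoder with as many hidden units as inputs, initialized with small random weights and trained with an L1 sparsity penalty on the hidden activations. All dimensions and hyperparameters here are arbitrary assumptions.
<syntaxhighlight lang="python">
# Hypothetical illustration: autoencoder with hidden dim == input dim,
# small initial weights, and an L1 sparsity penalty on hidden activations.
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 200, 8, 8                      # samples, input dim, hidden dim (h >= d)
X = rng.normal(size=(n, d))

W1 = 0.01 * rng.normal(size=(d, h)); b1 = np.zeros(h)   # encoder parameters
W2 = 0.01 * rng.normal(size=(h, d)); b2 = np.zeros(d)   # decoder parameters
lr, lam = 0.05, 0.001                    # learning rate, sparsity weight

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)             # encoder activations
    Y = H @ W2 + b2                      # linear decoder; the target is X itself
    dY = 2 * (Y - X) / n                 # gradient of mean squared error
    dW2, db2 = H.T @ dY, dY.sum(0)
    dH = dY @ W2.T + lam * np.sign(H) / n   # L1 penalty pushes H toward zero
    dZ = dH * (1 - H**2)                 # tanh derivative
    dW1, db1 = X.T @ dZ, dZ.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

H = np.tanh(X @ W1 + b1)
print("reconstruction MSE:", float(np.mean((X - (H @ W2 + b2)) ** 2)))
</syntaxhighlight>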
== Where is the structure section taken from? ==
I was wondering if there are any book sources that could be added as references in which a similar approach to describing the autoencoder is taken. Also, what are the W and b terms? It is not very clear what role W and b play in the encoding and decoding process.
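:For what it's worth, in the usual formulation (an assumption about what the section intends, not a sourced statement), ''W'' and ''b'' are the weight matrix and bias vector of the affine map inside each layer: the encoder computes <math>z = \sigma(Wx + b)</math> and the decoder computes <math>x' = \sigma'(W'z + b')</math>, where <math>W'</math> and <math>b'</math> are a separate weight matrix and bias for the decoder, and <math>\sigma, \sigma'</math> are activation functions.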
Hi there, for anyone struggling to find the correct scientific citation for the autoencoder and where the argmin formulation can be found: the source you're looking for is "Threaded Ensembles of Supervised and Unsupervised Neural Networks for Stream Learning". Anyone who, unlike me, cares enough could add that citation to the article. glhf <!-- Template:Unsigned IP --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/2003:EB:6724:3F08:B8F2:3F33:F768:C858|2003:EB:6724:3F08:B8F2:3F33:F768:C858]] ([[User talk:2003:EB:6724:3F08:B8F2:3F33:F768:C858#top|talk]]) 16:46, 9 November 2019 (UTC)</small> <!--Autosigned by SineBot-->
== Split proposed ==
I think it would make sense to split out the "variational autoencoder" section, given that they are generative models and their purpose differs significantly from classic autoencoders. Thoughts? [[User:Skjn|Skjn]] ([[User talk:Skjn|talk]]) 15:48, 19 May 2020 (UTC)
:I feel like the current content in that section is already too [[WP:TEXTBOOK]] and if anything should be trimmed or gutted, rather than expanded into its own article. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 04:54, 21 May 2020 (UTC)
:I second this proposal. The sheer number of variational-autoencoder-based methods developed in the past two years is immense. Definitely worth an independent article. <!-- Template:Unsigned --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Parthzoozoo|Parthzoozoo]] ([[User talk:Parthzoozoo#top|talk]] • [[Special:Contributions/Parthzoozoo|contribs]]) 17:22, 18 June 2020 (UTC)</small> <!--Autosigned by SineBot-->
: Also agree with the proposal - they really are a quite different concept, as asserted in the text <!-- Template:Unsigned IP --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/193.129.26.79|193.129.26.79]] ([[User talk:193.129.26.79#top|talk]]) 15:18, 17 August 2020 (UTC)</small> <!--Autosigned by SineBot-->
: I agree, they use variational inference which is very different from standard autoencoders. <!-- Template:Unsigned IP --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/62.226.49.10|62.226.49.10]] ([[User talk:62.226.49.10#top|talk]]) 22:34, 30 August 2020 (UTC)</small> <!--Autosigned by SineBot-->
== Autoencoder variational equation ==
The equation that yields the parametrization of the autoencoder and its conjugate decoder appears to be wrong. The minimum extends over all x in X and over all sampled parametrizations of phi and psi, and the "arg" that realizes the minimum yields the optimized phi and psi parametrization. [[User:RutiWinkler|RutiWinkler]] ([[User talk:RutiWinkler|talk]]) 14:37, 3 December 2021 (UTC)
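:One consistent reading of the objective described above (my reconstruction from the notation in this thread, assuming encoder <math>\phi</math> and decoder <math>\psi</math>, not a quotation from the article): <math display="block">\phi, \psi = \underset{\phi,\, \psi}{\operatorname{arg\,min}} \sum_{x \in X} \lVert x - (\psi \circ \phi)(x) \rVert^2 .</math> The sum runs over all <math>x \in X</math>, the minimum is taken over all parametrizations of <math>\phi</math> and <math>\psi</math>, and the arg min returns the optimized pair.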
== India Education Program course assignment ==
[[File:Wikipedia-Ambassador-Program-Logo.png|50px]] This article was the subject of an educational assignment at College Of Engineering Pune supported by [[Wikipedia:Education program/Ambassadors|Wikipedia Ambassadors]] through the [[Wikipedia:India Education Program|India Education Program]] during the 2011 Q3 term. Further details are available [[Wikipedia:India Education Program/Courses/Fall 2011/Artificial Intelligence|on the course page]].[[Category:India Education Program student projects, 2011 Q3|{{PAGENAME}}]] [[Category:India Education Program student projects]]
{{small|The above message was substituted from {{tlc|IEP assignment}} by [[User:PrimeBOT|PrimeBOT]] ([[User talk:PrimeBOT|talk]]) on 20:09, 1 February 2023 (UTC)}}
== Convolutional Auto-Encoders (CAE) ==
Under the 'Variations' subtopic we are missing another variant: the convolutional autoencoder (CAE). [[User:Mojtaba Mohammadi|Mojtaba Mohammadi]] ([[User talk:Mojtaba Mohammadi|talk]]) 23:23, 7 April 2025 (UTC)