{{WikiProject banner shell|class=
C|1=
{{WikiProject Computing |importance=Low}}
}}
{{Technical|date=September 2010}}
== "Some math" ==
An expression occurring in existential sentences. "For some x" is the same as "there exists x." Unlike in everyday language, it does not necessarily refer to a plurality of elements, and so might be more clearly rendered in colloquial English as "for at least one." ([[User:Turkialjrees|Turkialjrees]] ([[User talk:Turkialjrees|talk]]) 16:44, 14 March 2015 (UTC)).
During some of my college courses I came across some math that could be a nice addition to this page; I just don't have enough mathematical background to prove it.
== The Math ==
Create a set of prototypes <math>w^1,w^2,\dots,w^K,\; w^k \in \mathbb{R}^n</math>
and a data set <math>\xi^1,\xi^2,\dots,\xi^p,\; \xi^\mu \in \mathbb{R}^n</math>.
Using the [[Euclidean_distance#Squared_Euclidean_Distance|squared Euclidean distance]]
we can measure the distance in <math>n</math> dimensions between a prototype and a data point:
<math>d(a,b) = \sum_{i=1}^n (a_i-b_i)^2</math>
Based on this we can find the closest prototype to a given data point,
<math>w^{i^*} = \operatorname{argmin}_j \, d(w^j,\xi),</math>
and assign <math>\xi</math> to prototype <math>i^*</math>.
In this winner-takes-all scheme, the closest prototype is then moved using
<math>w^{i^*} \to w^{i^*} + \eta(\xi - w^{i^*}),</math>
where <math>0 < \eta < 1</math> is the learning rate.
[[User:Spidfire|Spidfire]] ([[User talk:Spidfire|talk]]) 15:29, 31 January 2013 (UTC)
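The winner-takes-all update above can be sketched in a few lines of NumPy (the prototype positions, data point, and learning rate here are illustrative values, not taken from the post above):

```python
import numpy as np

def sq_euclidean(a, b):
    """Squared Euclidean distance: d(a, b) = sum_i (a_i - b_i)^2."""
    return np.sum((a - b) ** 2)

def wta_step(prototypes, xi, eta=0.1):
    """Find the closest prototype to xi and move it toward xi.

    Returns the index i* of the winning prototype; prototypes is
    modified in place.
    """
    dists = [sq_euclidean(w, xi) for w in prototypes]
    i_star = int(np.argmin(dists))                  # w^{i*} = argmin_j d(w^j, xi)
    prototypes[i_star] += eta * (xi - prototypes[i_star])
    return i_star

# Two prototypes in R^2; the data point (1, 1) is closest to prototype 0,
# so with eta = 0.5 that prototype moves halfway toward it.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
i = wta_step(prototypes, np.array([1.0, 1.0]), eta=0.5)
```

Repeating this step over many data points drags each prototype toward the mean of the points it wins, which is the online form of the k-means/competitive-learning idea discussed elsewhere on this page.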
==clarity?==
Damn. This article made me feel dumb. --[[User:NoPetrol|NoPetrol]] 06:41, 24 Nov 2004 (UTC)
:I have modified the article to give a clear explanation of what vector quantization is, together with some uses for it. It still needs tidying up and referencing [[User:Pogsquog|Pog]] 21:46, 1 August 2007 (UTC)
::also want to see pictures <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/138.246.7.74|138.246.7.74]] ([[User talk:138.246.7.74|talk]]) 13:50, 15 July 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
== Unclear sentence ==
What does "<distance-sensitivity>" mean? Does it mean sensitivity? Or does it mean distance minus sensitivity? -[[User:Pgan002|Pgan002]] 00:17, 18 August 2007 (UTC)
I expanded it as distance minus sensitivity. But I think this is not a very good algorithm, and it may have been original research. So I added citation-needed because we need an established algorithm from e.g. some book. <!-- Template:Unsigned IP --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/213.16.80.50|213.16.80.50]] ([[User talk:213.16.80.50#top|talk]]) 14:42, 8 November 2016 (UTC)</small> <!--Autosigned by SineBot-->
== Spam ==
Why the hell is there a picture of an aeroplane on this page? <small>—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Criffer|Criffer]] ([[User talk:Criffer|talk]] • [[Special:Contributions/Criffer|contribs]]) 16:24, 11 October 2007 (UTC)</small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
== Definition ==
Is there a kind of '''agreed''' definition on this term?
At least [http://www.mqasem.net/vectorquantization/vq.html] attempts to define it. Should Wikipedia adopt this definition? Are there alternative definitions somewhere?
[[User:Arkadi kagan|Arkadi kagan]] ([[User talk:Arkadi kagan|talk]]) 21:11, 25 January 2010 (UTC)
:Another option from [http://www.answers.com/topic/vector-quantization]:
:<blockquote>A [[data compression]] technique in which a finite sequence of values is presented as resembling the template (from among the choices available to a given [[codebook]]) that minimizes a [[Distortion (mathematics)|distortion measure]].</blockquote>
:[[User:Arkadi kagan|Arkadi kagan]] ([[User talk:Arkadi kagan|talk]]) 08:38, 28 January 2010 (UTC)
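The quoted definition amounts to a simple encode/decode pair: each input vector is replaced by the index of the codebook entry that minimizes the distortion measure. A minimal sketch, assuming squared error as the distortion and a made-up three-entry codebook:

```python
import numpy as np

# Hypothetical codebook; real codebooks are trained (e.g. by LBG/k-means).
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])

def encode(x, codebook):
    """Return the index of the codeword minimizing squared-error distortion."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

def decode(index, codebook):
    """Reconstruct (approximately) the vector from its codebook index."""
    return codebook[index]

idx = encode(np.array([0.9, 1.2]), codebook)   # nearest codeword is [1, 1]
approx = decode(idx, codebook)
```

The compression comes from transmitting only `idx` (a small integer) instead of the full vector, at the cost of the reconstruction error.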
== Use in data compression ==
"All possible combinations of the N-dimensional vector [y1,y2,...,yn] form the Gaurav."
What the hell is a Gaurav?
Secondly, even if there is a correct technical term for all possible combinations of an N-dimensional vector, it is completely out of context in that particular article. It should be removed, or corrected and given a context. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/198.151.130.16|198.151.130.16]] ([[User talk:198.151.130.16|talk]]) 21:46, 1 April 2011 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
== Where is a block diagram? ==
From the article:
''Block Diagram: A simple vector quantizer is shown below''
Huh? Where is it? [[User:Cuddlyable3|Cuddlyable3]] ([[User talk:Cuddlyable3|talk]]) 09:15, 7 June 2011 (UTC)
== Each cluster the same number of points?! ==
"It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them."
This is not true, is it?
E.g. clustering 1-D normally distributed data (10k samples) with k-means (6 clusters) results in groups with very different numbers of points assigned to each group (700 to 2400).
I would not call this difference "approximately the same". Or am I missing something?
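The claim is easy to check with the setup the comment describes (10k samples, 6 clusters). The sketch below uses a generic 1-D Lloyd iteration rather than any particular library; the seed and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)          # 1-D standard normal samples

k = 6
centers = np.sort(rng.choice(data, size=k, replace=False))
for _ in range(50):                     # Lloyd (k-means) iterations
    labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
    # Update each center to the mean of its cluster (keep it if empty).
    centers = np.array([
        data[labels == j].mean() if np.any(labels == j) else centers[j]
        for j in range(k)
    ])

counts = np.bincount(labels, minlength=k)
```

Because the data are concentrated around the mean, the central clusters end up with far more points than the tail clusters, supporting the comment: the group sizes are not "approximately the same".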
== VERY approximate ==
From my limited experience, most groups will have similar numbers of elements, but a few groups (clusters) will end up with very few or very many. So most clusters (maybe 60~80 %) will have a similar number of elements, while the remainder have very few or very many.
[[User:Hydradix|Hydradix]] ([[User talk:Hydradix|talk]]) 04:53, 13 October 2014 (UTC)
== No mention of LBG or other methods ==
The article's "alternate training" method seems biased towards simulated annealing. No mention is made at all of the Linde–Buzo–Gray algorithm, which is a fundamental starting point for most VQ implementations and the most widely cited paper in VQ work. No mention is made of PNN (Pairwise Nearest Neighbor) or other codebook generation methods either.
--[[User:Trixter|Trixter]] ([[User talk:Trixter|talk]]) 19:49, 26 August 2013 (UTC)
'''Agreed!''' The LBG algorithm is fundamental for the topic, Vector Quantization. This, and other code-book generation methods, need to be referenced/linked. Although I have some experience with VQ, I am not an expert in VQ, so am not confident to update the page... [[User:Hydradix|Hydradix]] ([[User talk:Hydradix|talk]]) 07:43, 5 October 2014 (UTC)
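For reference, the splitting procedure that LBG is known for is short enough to sketch: start from the global mean as a one-entry codebook, split every codeword into a perturbed pair, and refine with Lloyd iterations until the desired size is reached. The perturbation size and iteration count below are illustrative assumptions, and this sketch assumes the target size is a power of two:

```python
import numpy as np

def lbg(data, codebook_size, eps=0.01, iters=20):
    """Linde-Buzo-Gray splitting sketch for a (power-of-two) codebook size."""
    codebook = data.mean(axis=0, keepdims=True)        # start: one codeword
    while len(codebook) < codebook_size:
        # Split: replace each codeword with a slightly perturbed pair.
        codebook = np.vstack([codebook + eps, codebook - eps])
        for _ in range(iters):                         # Lloyd refinement
            d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for j in range(len(codebook)):
                if np.any(labels == j):                # skip empty clusters
                    codebook[j] = data[labels == j].mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 2))
cb = lbg(data, codebook_size=4)     # 4-entry codebook for 2-D data
```

A proper article treatment would of course cite the 1980 Linde, Buzo and Gray paper rather than a sketch like this.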
=== update ===
I decided to be bold and added in-page links to LBG and K-Means. I also added LBG to the References. I tried/wanted to add Enhanced LBG to External References, but when I tried Wikipedia Preview the link would always fail (http://anale-informatica.tibiscus.ro/download/lucrari/2-1-02-balint.pdf), so ELBG was not referenced. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Hydradix|Hydradix]] ([[User talk:Hydradix|talk]] • [[Special:Contributions/Hydradix|contribs]]) 08:34, 5 October 2014 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
== Article is too technical and abstract ==
I have no mathematical background. Despite my interest in signal processing, I didn't understand a word of the lede and used external information to add a sentence for the mortals among us. Once I gain a good understanding of the topic, I will update the article with more understandable information. --[[User:Holzklöppel|Holzklöppel]] ([[User talk:Holzklöppel|talk]]) 09:32, 11 October 2023 (UTC)