Learning vector quantization

The algorithm's input is:
* the label <math>c_i</math> assigned to each neuron <math>\vec{w_i}</math>
* the learning rate <math>\eta</math>, which determines how fast the neurons learn
* and an input list <math>L</math> containing all the vectors whose labels are already known (the training set).
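For concreteness, a sketch of how these inputs might be set up in code, using NumPy; the toy data and the choice of one neuron per class are assumptions for illustration only:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Training set L: vectors whose labels are already known (toy data).
X = rng.normal(size=(100, 2))          # input vectors x
y = (X[:, 0] > 0).astype(int)          # their labels c

# One neuron w_i per class, initialised from a random training example
# of that class; c_i is the neuron's fixed label, eta the learning rate.
classes = np.unique(y)
weights = np.stack([X[rng.choice(np.flatnonzero(y == k))] for k in classes])
labels = classes.copy()
eta = 0.1
</syntaxhighlight>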
 
The algorithm's flow is:
# For the next input <math>(\vec{x},c)</math> in <math>L</math>, find the closest neuron <math>\vec{w_m}</math>,<br>i.e. the one for which <math>d(\vec{x},\vec{w_m}) = \min\limits_i {d(\vec{x},\vec{w_i})}</math>, where <math>\, d \,</math> is the metric used ([[Euclidean distance|Euclidean]], etc.).
# Update <math>\vec{w_m}</math>: move it closer to the input <math>\vec{x}</math> if <math>\vec{x}</math> and <math>\vec{w_m}</math> belong to the same label, and move them further apart if they don't:<br><math> \vec{w_m} \gets \vec{w_m} + \eta \cdot \left( \vec{x} - \vec{w_m} \right) </math> (closer together)<br>or <math> \vec{w_m} \gets \vec{w_m} - \eta \cdot \left( \vec{x} - \vec{w_m} \right) </math> (further apart).
# While there are vectors left in <math>L</math>, go to step 1; otherwise terminate.
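A minimal sketch of one training pass implementing these steps, assuming the Euclidean metric and NumPy arrays; the function name <code>lvq1_pass</code> and its argument layout are illustrative, not taken from any particular library:

<syntaxhighlight lang="python">
import numpy as np

def lvq1_pass(weights, labels, training_set, eta):
    """One pass of the update rule over the training set L.

    weights      -- array (n_neurons, n_features): the vectors w_i
    labels       -- array (n_neurons,): the label c_i of each neuron
    training_set -- iterable of (x, c) pairs with known labels
    eta          -- learning rate
    """
    for x, c in training_set:
        # Step 1: find the closest neuron w_m under the Euclidean metric d.
        m = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Step 2: pull w_m toward x if the labels agree, push it away otherwise.
        if labels[m] == c:
            weights[m] += eta * (x - weights[m])
        else:
            weights[m] -= eta * (x - weights[m])
    # Step 3 amounts to exhausting the loop over L.
    return weights
</syntaxhighlight>

With the arrays from the earlier sketch, repeated calls such as <code>weights = lvq1_pass(weights, labels, zip(X, y), eta)</code> would gradually move each neuron toward the inputs of its own class and away from the others.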