Mathematics of neural networks in machine learning

These weights are computed in turn: first compute <math>w_i</math> using only <math>(x_i, y_i, w_{i-1})</math> for <math>i = 1, \dots, p</math>. The output of the algorithm is then <math>w_p</math>, giving a new function <math>x \mapsto f_N(w_p, x)</math>. The computation is the same in each step, hence only the case <math>i = 1</math> is described.
 
<math>w_1</math> is calculated from <math>(x_1, y_1, w_0)</math> by considering a variable weight <math>w</math> and applying [[gradient descent]] to the function <math>w\mapsto E(f_N(w, x_1), y_1)</math> to find a local minimum, starting at <math>w = w_0</math>.
 
This makes <math>w_1</math> the minimizing weight found by gradient descent.
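The sequential procedure above can be sketched in Python. This is a minimal illustration, not the article's algorithm: it assumes a scalar model <math>f_N(w, x) = w x</math> and squared error <math>E(\hat y, y) = (\hat y - y)^2</math>, and all function names are hypothetical.

```python
def grad_descent_step(w, x, y, lr=0.1, steps=50):
    """Apply gradient descent to w -> E(f_N(w, x), y), starting at w.

    Assumes the toy model f_N(w, x) = w * x with squared error,
    so dE/dw = 2 * (w * x - y) * x.
    """
    for _ in range(steps):
        y_hat = w * x                  # f_N(w, x)
        grad = 2 * (y_hat - y) * x     # derivative of E with respect to w
        w -= lr * grad
    return w

def fit(samples, w0):
    """Compute w_1, ..., w_p in turn; each w_i starts from w_{i-1}."""
    w = w0
    for x, y in samples:
        w = grad_descent_step(w, x, y)
    return w  # w_p, the final weight

# Data consistent with the true weight 2.0
w_p = fit([(1.0, 2.0), (2.0, 4.0)], w0=0.0)
```

Because each call to <code>grad_descent_step</code> starts from the previous weight, this matches the description of computing <math>w_i</math> from <math>(x_i, y_i, w_{i-1})</math> one sample at a time.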