Mathematics of neural networks in machine learning: Difference between revisions

:: <math> a_j(t+1) = f(a_j(t), p_j(t), \theta_j), </math>
 
* and an ''output function'' <math>f_\text{out}</math> computing the output from the activation
 
:: <math> o_j(t) = f_\text{out}(a_j(t)). </math>
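The discrete-time update above can be sketched in Python. The function names and the particular choices of <math>f</math> and <math>f_\text{out}</math> below are illustrative assumptions, not definitions from the article:

```python
# Hypothetical sketch of the neuron update described above:
#   a_j(t+1) = f(a_j(t), p_j(t), theta_j)   -- activation function
#   o_j(t)   = f_out(a_j(t))                -- output function
def update_neuron(a_prev, p, theta, f, f_out):
    """Return the new activation a_j(t+1) and the resulting output."""
    a_new = f(a_prev, p, theta)   # apply the activation function f
    o_new = f_out(a_new)          # apply the output function f_out
    return a_new, o_new

# Example with an additive activation rule and a step-threshold output
# (both chosen only for illustration):
a_new, o_new = update_neuron(
    a_prev=0.5, p=1.0, theta=0.2,
    f=lambda a, p, th: a + p - th,
    f_out=lambda a: 1.0 if a > 0.0 else 0.0,
)
```

Here the threshold <math>\theta_j</math> enters through the activation function, matching the signature <math>f(a_j(t), p_j(t), \theta_j)</math> given above.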
The ''propagation function'' computes the ''input'' <math>p_j(t)</math> to the neuron <math>j</math> from the outputs <math>o_i(t)</math> and typically has the form<ref name="Zell1994ch5.22">{{Cite book|url=http://worldcat.org/oclc/249017987|title=Simulation neuronaler Netze|last=Zell|first=Andreas|date=2003|publisher=Addison-Wesley|isbn=978-3-89319-554-1|edition=1st|language=German|trans-title=Simulation of Neural Networks|chapter=5.2|oclc=249017987}}</ref>
 
: <math> p_j(t) = \sum_{i} o_i(t) w_{ij}. </math>
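This weighted sum can be written directly in Python; the function name and the example values are assumptions for illustration only:

```python
# Illustrative sketch of the propagation function
#   p_j(t) = sum_i o_i(t) * w_ij
def propagate(outputs, weights_j):
    """Weighted sum of upstream outputs o_i(t) with weights w_ij feeding neuron j."""
    return sum(o * w for o, w in zip(outputs, weights_j))

# Three upstream neurons with outputs o_i and weights w_ij into neuron j:
p_j = propagate([1.0, 2.0, 3.0], [0.5, -1.0, 2.0])  # 0.5 - 2.0 + 6.0 = 4.5
```

In matrix form this is the familiar <math>p = W^\mathsf{T} o</math> computed layer-wise in most neural-network libraries.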
 
=== Bias ===