Reverse accumulation is more efficient than forward accumulation for functions {{math|''f'' : ℝ<sup>''n''</sup> → ℝ<sup>''m''</sup>}} with {{math|''m'' ≪ ''n''}} as only {{math|''m''}} sweeps are necessary, compared to {{math|''n''}} sweeps for forward accumulation.
Reverse mode AD was first published in 1970 by [[Seppo Linnainmaa]] in his master's thesis.<ref name="lin1970">Linnainmaa, S. (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7.</ref><ref name="lin1976">Linnainmaa, S. (1976). Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 16(2), 146-160.</ref><ref name="grie2012">Griewank, A. (2012). Who Invented the Reverse Mode of Differentiation? Optimization Stories, Documenta Mathematica, Extra Volume ISMP (2012), 389-400.</ref>
[[Backpropagation]] of errors in multilayer perceptrons, a technique used in [[machine learning]], is a special case of reverse mode AD.
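The single reverse sweep described above can be sketched with a minimal tape-style implementation. The `Var` class below is a hypothetical illustration (not from any particular AD library): each operation records its inputs together with the local partial derivatives, and `backward()` propagates adjoints from the output back to the inputs, so one sweep yields all partial derivatives of a scalar-valued function.

```python
import math

# Hypothetical minimal reverse-mode AD sketch: each Var records
# (parent, local_derivative) pairs; backward() accumulates adjoints.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs (parent_var, d(self)/d(parent))
        self.grad = 0.0         # accumulated adjoint

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value,
                   [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        # Add this path's contribution, then push it to the parents.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

def sin(v):
    return Var(math.sin(v.value), [(v, math.cos(v.value))])

# f(x, y) = x*y + sin(x); one reverse sweep gives both partials:
# df/dx = y + cos(x), df/dy = x.
x, y = Var(2.0), Var(3.0)
z = x * y + sin(x)
z.backward()
print(x.grad, y.grad)
```

Because the variable `x` feeds into both `x*y` and `sin(x)`, its adjoint receives two contributions that sum to `y + cos(x)`, illustrating how reverse accumulation handles shared subexpressions; production systems additionally process nodes in reverse topological order so each node is visited only once.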