{{Short description|Numerical calculations carrying along derivatives}}
In [[mathematics]] and [[computer algebra]], '''automatic differentiation''' ('''auto-differentiation''', '''autodiff''', or '''AD'''), also called '''algorithmic differentiation''', '''computational differentiation''', and '''differentiation arithmetic'''<ref>{{cite journal|last=Neidinger|first=Richard D.|title=Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming|journal=SIAM Review| year=2010| volume=52| issue=3| pages=545–563| url=http://academics.davidson.edu/math/neidinger/SIAMRev74362.pdf|doi=10.1137/080743627| citeseerx=10.1.1.362.6580|s2cid=17134969 }}</ref><ref name="baydin2018automatic">{{cite journal|last1=Baydin|first1=Atilim Gunes|last2=Pearlmutter| first2=Barak|last3=Radul|first3=Alexey Andreyevich|last4=Siskind|first4=Jeffrey|title=Automatic differentiation in machine learning: a survey| journal=Journal of Machine Learning Research|year=2018|volume=18|pages=1–43|url=http://jmlr.org/papers/v18/17-468.html}}</ref><ref name="Dawood.Megahed.2023">[[Hend Dawood]] and [[Nefertiti Megahed]] (2023). Automatic differentiation of uncertainties: an interval computational differentiation for first and higher derivatives with implementation. PeerJ Computer Science 9:e1301 https://doi.org/10.7717/peerj-cs.1301.</ref><ref name="Dawood.Megahed.2019">[[Hend Dawood]] and [[Nefertiti Megahed]] (2019). A Consistent and Categorical Axiomatization of Differentiation Arithmetic Applicable to First and Higher Order Derivatives. Punjab University Journal of Mathematics. 51(11). pp. 77-100. doi: 10.5281/zenodo.3479546. http://doi.org/10.5281/zenodo.3479546.</ref> is a set of techniques to evaluate the [[partial derivative]] of a function specified by a computer program. Automatic differentiation is a subtle and central tool to automate the simultaneous computation of the numerical values of arbitrarily complex functions and their derivatives with no need for the symbolic representation of the derivative, only the function rule or an algorithm thereof is required.<ref name="Dawood.Megahed.2023"/><ref name="Dawood.Megahed.2019"/> Auto-differentiation is thus neither numeric nor symbolic, nor is it a combination of both. It is also preferable to ordinary numerical methods: In contrast to the more traditional numerical methods based on finite differences, auto-differentiation is 'in theory' exact, and in comparison to symbolic algorithms, it is computationally inexpensive.<ref name="Dawood-attribution">{{Creative Commons text attribution notice|cc=bysa4|url=https://pmc.ncbi.nlm.nih.gov/articles/PMC10280627/|author(s)=Dawood and Megahed}}</ref><ref name="Dawood.Megahed.2023"/><ref name="Dawood.Dawood.2022">[[Hend Dawood]] and [[Yasser Dawood]] (2022). Interval Root Finding and Interval Polynomials: Methods and Applications in Science and Engineering. In S. Chakraverty, editor, Polynomial Paradigms: Trends and Applications in Science and Engineering, chapter 15. IOP Publishing. ISBN 978-0-7503-5065-5. doi: 10.1088/978-0-7503-5067-9ch15. URL https://doi.org/10.1088/978-0-7503-5067-9ch15.</ref>
 
Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions ([[exponential function|exp]], [[natural logarithm|log]], [[sine|sin]], [[cosine|cos]], etc.). By applying the [[chain rule]] repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
 
== Difference from other differentiation methods ==
[[Image:AutomaticDifferentiationNutshell.png|right|thumb|300px|Figure 1: How automatic differentiation relates to symbolic differentiation]]
Automatic differentiation is distinct from [[symbolic differentiation]] and [[numerical differentiation]].
Symbolic differentiation faces the difficulty of converting a computer program into a single [[mathematical expression]] and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce [[round-off error]]s in the [[discretization]] process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to ''many'' inputs, as is needed for [[gradient descent|gradient]]-based [[Optimization (mathematics)|optimization]] algorithms. Automatic differentiation solves all of these problems.
 
== Applications ==
Currently, for its efficiency and accuracy in computing first and higher order [[derivative]]s, auto-differentiation is a celebrated technique with diverse applications in [[scientific computing]] and [[mathematics]]. It should therefore come as no surprise that there are numerous computational implementations of auto-differentiation. Among these, one mentions [[INTLAB]], Sollya, and InCLosure.<ref name="Rump.1999">[[Siegfried M. Rump]] (1999). INTLAB–INTerval LABoratory. In T. Csendes, editor, Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht.</ref><ref name="Chevillard.Joldes.Lauter.2010">S. Chevillard, M. Joldes, and C. Lauter. Sollya (2010). An Environment for the Development of Numerical Codes. In K. Fukuda, J. van der Hoeven, M. Joswig, and N. Takayama, editors, Mathematical Software - ICMS 2010, volume 6327 of Lecture Notes in Computer Science, pages 28–31, Heidelberg, Germany. Springer.</ref><ref name="Dawood.Inc4.2022">[[Hend Dawood]] (2022). [[InCLosure]] (Interval enCLosure)–A Language and Environment for Reliable Scientific Computing. Computer Software, Version 4.0. Department of Mathematics. Faculty of Science, Cairo University, Giza, Egypt, September 2022. url: https://doi.org/10.5281/zenodo.2702404.</ref> In practice, there are two types (modes) of algorithmic differentiation: a forward-type and a reversed-type.<ref name="Dawood.Megahed.2023" /><ref name="Dawood.Megahed.2019" /> Presently, the two types are highly correlated and complementary and both have a wide variety of applications in, e.g., non-linear [[optimization]], [[sensitivity analysis]], [[robotics]], [[machine learning]], [[computer graphics]], and [[computer vision]].<ref name="Dawood-attribution"/><ref name="Fries.2019">Christian P. Fries (2019). Stochastic Automatic Differentiation: Automatic Differentiation for Monte-Carlo Simulations. Quantitative Finance, 19(6):1043–1059. doi: 10.1080/14697688.2018.1556398. url: https://doi.org/10.1080/14697688.2018.1556398.</ref><ref name="Dawood.Megahed.2023"/><ref name="Dawood.Megahed.2019"/><ref name="Dawood.Dawood.2020">[[Hend Dawood]] and [[Yasser Dawood]] (2020). Universal Intervals: Towards a Dependency-Aware Interval Algebra. In S. Chakraverty, editor, Mathematical Methods in Interdisciplinary Sciences. chapter 10, pages 167–214. John Wiley & Sons, Hoboken, New Jersey. ISBN 978-1-119-58550-3. doi: 10.1002/9781119585640.ch10. url: https://doi.org/10.1002/9781119585640.ch10.</ref><ref name="Dawood.2014">[[Hend Dawood]] (2014). Interval Mathematics as a Potential Weapon against Uncertainty. In S. Chakraverty, editor, Mathematics of Uncertainty Modeling in the Analysis of Engineering and Science Problems. chapter 1, pages 1–38. IGI Global, Hershey, PA. ISBN 978-1-4666-4991-0.</ref> Automatic differentiation is particularly important in the field of [[machine learning]]. For example, it allows one to implement [[backpropagation]] in a [[neural network (machine learning)|neural network]] without a manually-computed derivative.
 
== Forward and reverse accumulation ==
=== Chain rule of partial derivatives of composite functions ===
Fundamental to automatic differentiation is the decomposition of differentials provided by the [[chain rule]] of [[partial derivative]]s of [[function composition|composite functions]]. For the simple composition
<math display="block">\begin{align}
y &= f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3 \\
w_0 &= x \\
w_1 &= h(w_0) \\
w_2 &= g(w_1) \\
w_3 &= f(w_2) = y
\end{align}</math>
the chain rule gives
<math display="block">\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1} \frac{\partial w_1}{\partial x} = \frac{\partial f(w_2)}{\partial w_2} \frac{\partial g(w_1)}{\partial w_1} \frac{\partial h(w_0)}{\partial x}</math>
 
=== Two types of automatic differentiation ===
Usually, two distinct modes of automatic differentiation are presented.
* '''forward accumulation''' (also called '''bottom-up''', '''forward mode''', or '''tangent mode''')
* '''reverse accumulation''' (also called '''top-down''', '''reverse mode''', or '''adjoint mode''')
Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute <math>\partial w_1/ \partial x</math> and then <math>\partial w_2/\partial w_1</math> and lastly <math>\partial y/\partial w_2</math>), while reverse accumulation traverses from outside to inside (first compute <math>\partial y/\partial w_2</math> and then <math>\partial w_2/\partial w_1</math> and lastly <math>\partial w_1/\partial x</math>). More succinctly,
* Forward accumulation computes the recursive relation: <math>\frac{\partial w_i}{\partial x} = \frac{\partial w_i}{\partial w_{i-1}} \frac{\partial w_{i-1}}{\partial x}</math> with <math>w_3 = y</math>, and,
* Reverse accumulation computes the recursive relation: <math>\frac{\partial y}{\partial w_i} = \frac{\partial y}{\partial w_{i+1}} \frac{\partial w_{i+1}}{\partial w_{i}}</math> with <math>w_0 = x</math>.
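For a concrete illustration, take <math>h(x) = x^2</math>, <math>g(w_1) = \sin w_1</math>, and <math>f(w_2) = \exp w_2</math>, so that <math>y = \exp(\sin(x^2))</math>. Both modes evaluate the same product of local derivatives,
<math display="block">\frac{\partial y}{\partial x} = \underbrace{\exp(w_2)}_{\partial y/\partial w_2} \cdot \underbrace{\cos(w_1)}_{\partial w_2/\partial w_1} \cdot \underbrace{2x}_{\partial w_1/\partial x},</math>
but forward accumulation forms the product from the right (starting with <math>2x</math>), while reverse accumulation forms it from the left (starting with <math>\exp(w_2)</math>).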
 
The value of the partial derivative, called the ''seed'', is propagated forward or backward and is initially <math>\frac{\partial x}{\partial x}=1</math> or <math>\frac{\partial y}{\partial y}=1</math>. Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable <math>x_1,x_2,\dots,x_n</math> a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one (<math>\frac{\partial x_1}{\partial x_1}=1</math>) and of all others to zero (<math>\frac{\partial x_2}{\partial x_1}= \dots = \frac{\partial x_n}{\partial x_1} = 0</math>). In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass.
 
Which of these two types should be used depends on the sweep count. The [[Computational complexity theory|computational complexity]] of one sweep is proportional to the complexity of the original code.
* Forward accumulation is more efficient than reverse accumulation for functions {{math|''f'' : '''R'''<sup>''n''</sup> → '''R'''<sup>''m''</sup>}} with {{math|''n'' ≪ ''m''}} as only {{math|''n''}} sweeps are necessary, compared to {{math|''m''}} sweeps for reverse accumulation.
* Reverse accumulation is more efficient than forward accumulation for functions {{math|''f'' : '''R'''<sup>''n''</sup> → '''R'''<sup>''m''</sup>}} with {{math|''n'' ≫ ''m''}} as only {{math|''m''}} sweeps are necessary, compared to {{math|''n''}} sweeps for forward accumulation.
[[Backpropagation]] of errors in multilayer perceptrons, a technique used in [[machine learning]], is a special case of reverse accumulation.<ref name="baydin2018automatic" />
 
Forward accumulation was introduced by R.E. Wengert in 1964.<ref name="Wengert1964"/> According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown.<ref name="grie2012">{{cite book |last=Griewank |first=Andreas |title=Optimization Stories |chapter=Who invented the reverse mode of differentiation? |year=2012 |series=Documenta Mathematica Series |volume= 6|pages=389–400 |doi=10.4171/dms/6/38 |doi-access=free |isbn=978-3-936609-58-5 |chapter-url=https://ftp.gwdg.de/pub/misc/EMIS/journals/DMJDMV/vol-ismp/52_griewank-andreas-b.pdf }}</ref> [[Seppo Linnainmaa]] published reverse accumulation in 1976.<ref name="lin1976">{{cite journal |last=Linnainmaa |first=Seppo |year=1976 |title=Taylor Expansion of the Accumulated Rounding Error |journal=BIT Numerical Mathematics |volume=16 |issue=2 |pages=146–160 |doi=10.1007/BF01931367 |s2cid=122357351 }}</ref>
 
=== Forward accumulation ===
[[File:ForwardAD.png|thumb|Forward accumulation]]
In forward accumulation AD, one first fixes the ''independent variable'' with respect to which differentiation is performed and computes the derivative of each sub-[[expression (mathematics)|expression]] recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the ''inner'' functions in the chain rule:
<math display="block">\begin{align}
\frac{\partial y}{\partial x}
&= \frac{\partial y}{\partial w_{n-1}} \frac{\partial w_{n-1}}{\partial x} \\[6pt]
&= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \frac{\partial w_{n-2}}{\partial x}\right) \\[6pt]
&= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \left(\frac{\partial w_{n-2}}{\partial w_{n-3}} \frac{\partial w_{n-3}}{\partial x}\right)\right) \\[6pt]
&= \cdots
\end{align}</math>
This can be generalized to multiple variables as a matrix product of [[Jacobian matrix and determinant|Jacobian]]s.
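Concretely, for a composition <math>y = f(g(x))</math> with <math>g : \mathbb{R}^n \to \mathbb{R}^k</math> and <math>f : \mathbb{R}^k \to \mathbb{R}^m</math>, the chain rule for [[Jacobian matrix and determinant|Jacobian]] matrices reads
<math display="block">J_{f \circ g}(x) = J_f(g(x)) \, J_g(x),</math>
and a forward sweep corresponds to multiplying this product by a seed vector from the right (a Jacobian–vector product), while a reverse sweep multiplies it by a seed covector from the left (a vector–Jacobian product).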
 
Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable <math>w_i</math> is augmented with its derivative <math>\dot w_i</math> (stored as a numerical value, not a symbolic expression),
<math display="block">\dot w_i = \frac{\partial w_i}{\partial x}</math>
as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule.
 
Using the chain rule, if <math>w_i</math> has predecessors in the computational graph:
:<math>\dot w_i = \sum_{j \in \{\text{predecessors of i}\}} \frac{\partial w_i}{\partial w_j} \dot w_j</math>
 
[[Image:ForwardAccumulationAutomaticDifferentiation.png|right|thumb|300px|Figure 2: Example of forward accumulation with computational graph]]
 
As an example, consider the function:
<math display="block">\begin{align}
y
&= f(x_1, x_2) \\
&= x_1 x_2 + \sin x_1 \\
&= w_1 w_2 + \sin w_1 \\
&= w_3 + w_4 \\
&= w_5
\end{align}</math>
For clarity, the individual sub-expressions have been labeled with the variables <math>w_i</math>.
 
The choice of the independent variable to which differentiation is performed affects the ''seed'' values {{math|''ẇ''<sub>1</sub>}} and {{math|''ẇ''<sub>2</sub>}}. Given interest in the derivative of this function with respect to {{math|''x''<sub>1</sub>}}, the seed values should be set to:
<math display="block">\begin{align}
\dot w_1 = \frac{\partial w_1}{\partial x_1} = \frac{\partial x_1}{\partial x_1} = 1 \\
\dot w_2 = \frac{\partial w_2}{\partial x_1} = \frac{\partial x_2}{\partial x_1} = 0
\end{align}</math>
 
With the seed values set, the values propagate using the chain rule as shown in the table below. Figure 2 shows a pictorial depiction of this process as a computational graph.
 
:<math>\begin{array}{l|l}
\text{Operations to compute value} &
\text{Operations to compute derivative}
\\
\hline
w_1 = x_1 &
\dot w_1 = 1 \text{ (seed)}
\\
w_2 = x_2 &
\dot w_2 = 0 \text{ (seed)}
\\
w_3 = w_1 \cdot w_2 &
\dot w_3 = w_2 \cdot \dot w_1 + w_1 \cdot \dot w_2
\\
w_4 = \sin w_1 &
\dot w_4 = \cos w_1 \cdot \dot w_1
\\
w_5 = w_3 + w_4 &
\dot w_5 = \dot w_3 + \dot w_4
\end{array}</math>
 
To compute the [[gradient]] of this example function, which requires not only <math>\tfrac{\partial y}{\partial x_1}</math> but also <math>\tfrac{\partial y}{\partial x_2}</math>, an ''additional'' sweep is performed over the computational graph using the seed values <math>\dot w_1 = 0; \dot w_2 = 1</math>.

==== Implementation ====
===== Pseudocode =====
Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression ''Z'' to be differentiated with respect to a variable ''V''. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1; otherwise it is 0. Then the partial function as well as the partial derivative are evaluated.<ref name=demm22>{{cite book|author = Maximilian E. Schüle, Maximilian Springer, [[Alfons Kemper]], [[Thomas Neumann]] |title=Proceedings of the Sixth Workshop on Data Management for End-To-End Machine Learning |chapter=LLVM code optimisation for automatic differentiation |date=2022|pages=1–4 |doi = 10.1145/3533028.3533302|isbn=9781450393751 |s2cid=248853034 |language=English}}</ref>
<syntaxhighlight lang="cpp">
tuple<float, float> evaluateAndDerive(Expression Z, Variable V) {
   if isVariable(Z)
      if (Z = V) return {valueOf(Z), 1};
      else return {valueOf(Z), 0};
   else if (Z = A + B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a + b, a' + b'};
   else if (Z = A - B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a - b, a' - b'};
   else if (Z = A * B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a * b, b * a' + a * b'};
}
</syntaxhighlight>
 
===== C++ =====
<syntaxhighlight lang="cpp">
#include <iostream>
struct ValueAndPartial { float value, partial; };
struct Variable;
struct Expression {
   virtual ValueAndPartial evaluateAndDerive(Variable *variable) = 0;
};
struct Variable: public Expression {
   float value;
   Variable(float value): value(value) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      float partial = (this == variable) ? 1.0f : 0.0f;
      return {value, partial};
   }
};
struct Plus: public Expression {
   Expression *a, *b;
   Plus(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      auto [valueA, partialA] = a->evaluateAndDerive(variable);
      auto [valueB, partialB] = b->evaluateAndDerive(variable);
      return {valueA + valueB, partialA + partialB};
   }
};
struct Multiply: public Expression {
   Expression *a, *b;
   Multiply(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      auto [valueA, partialA] = a->evaluateAndDerive(variable);
      auto [valueB, partialB] = b->evaluateAndDerive(variable);
      return {valueA * valueB, valueB * partialA + valueA * partialB};
   }
};
int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
   float xPartial = z.evaluateAndDerive(&x).partial;
   float yPartial = z.evaluateAndDerive(&y).partial;
   std::cout << "∂z/∂x = " << xPartial << ", "
             << "∂z/∂y = " << yPartial << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
</syntaxhighlight>
 
=== Reverse accumulation ===
[[File:AutoDiff.webp|thumb|Reverse accumulation]]
In reverse accumulation AD, the ''dependent variable'' to be differentiated is fixed and the derivative is computed ''with respect to'' each sub-[[expression (mathematics)|expression]] recursively. In a pen-and-paper calculation, the derivative of the ''outer'' functions is repeatedly substituted in the chain rule:
<math display="block">\begin{align}
\frac{\partial y}{\partial x}
&= \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x}\\
&= \left(\frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x}\\
&= \left(\left(\frac{\partial y}{\partial w_3} \frac{\partial w_3}{\partial w_2}\right) \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x}\\
&= \cdots
\end{align}</math>
 
In reverse accumulation, the quantity of interest is the ''adjoint'', denoted with a bar <math>\bar w_i</math>; it is a derivative of a chosen dependent variable with respect to a subexpression <math>w_i</math>:
<math display="block">\bar w_i = \frac{\partial y}{\partial w_i}</math>
 
Using the chain rule, if <math>w_i</math> has successors in the computational graph:
:<math>\bar w_i = \sum_{j \in \{\text{successors of i}\}} \bar w_j \frac{\partial w_j}{\partial w_i}</math>
 
Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only [[space–time tradeoff|half the work]] when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables {{math|''w''<sub>''i''</sub>}} as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list<ref>{{cite journal|last1=Bartholomew-Biggs|first1=Michael| last2=Brown|first2=Steven|last3=Christianson|first3=Bruce|last4=Dixon|first4=Laurence|date=2000|title=Automatic differentiation of algorithms|journal=Journal of Computational and Applied Mathematics| volume=124|issue=1–2|pages=171–190| doi=10.1016/S0377-0427(00)00422-2|bibcode=2000JCoAM.124..171B|hdl=2299/3010|hdl-access=free}}</ref> (however, Wengert published forward accumulation, not reverse accumulation<ref name="Wengert1964">{{cite journal|author=R.E. Wengert|title=A simple automatic derivative evaluation program|journal=Comm. ACM|volume=7|issue=8|year=1964|pages=463–464|doi=10.1145/355586.364791|s2cid=24039274|doi-access=free}}</ref>), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as [[rematerialization]]. [[checkpointing scheme|Checkpointing]] is also used to save intermediary states.
 
[[Image:ReverseaccumulationAD.png|right|thumb|300px|Figure 3: Example of reverse accumulation with computational graph]]
 
The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order):
:<math>\begin{array}{l}
\text{Operations to compute derivative}
\\
\hline
\bar w_5 = 1 \text{ (seed)}
\\
\bar w_4 = \bar w_5 \cdot 1
\\
\bar w_3 = \bar w_5 \cdot 1
\\
\bar w_2 = \bar w_3 \cdot w_1
\\
\bar w_1 = \bar w_3 \cdot w_2 + \bar w_4 \cdot \cos w_1
\end{array}</math>
The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint;{{efn|In terms of weight matrices, the adjoint is the [[transpose]]. Addition is the [[covector]] <math>[1 \cdots 1]</math>, since <math>[1 \cdots 1]\left[\begin{smallmatrix}x_1 \\ \vdots \\ x_n \end{smallmatrix}\right] = x_1 + \cdots + x_n,</math> and fanout is the vector <math>\left[\begin{smallmatrix}1 \\ \vdots \\ 1 \end{smallmatrix}\right],</math> since <math>\left[\begin{smallmatrix}1 \\ \vdots \\ 1 \end{smallmatrix}\right][x] = \left[\begin{smallmatrix}x \\ \vdots \\ x \end{smallmatrix}\right].</math>}} a [[unary operation|unary]] function {{math|1=''y'' = ''f''(''x'')}} in the primal causes {{math|1=''x̄'' = ''ȳ'' ''f''′(''x'')}} in the adjoint; etc.
 
 
 
==== Implementation ====
===== Pseudocode =====
Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression ''Z'' to be derived and ''seeded'' with the derived value of the parent expression. For the top expression, Z differentiated with respect to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current ''seed'' value to the derivative expression.<ref name=ssdbm21>{{cite book|author= Maximilian E. Schüle, Harald Lang, Maximilian Springer, [[Alfons Kemper]], [[Thomas Neumann]], Stephan Günnemann|title=33rd International Conference on Scientific and Statistical Database Management |chapter=In-Database Machine Learning with SQL on GPUs |date=2021|pages=25–36 |doi = 10.1145/3468791.3468840|isbn=9781450384131 |s2cid=235386969 |language=English}}</ref><ref name=dpd>{{cite journal|author= Maximilian E. Schüle, Harald Lang, Maximilian Springer, [[Alfons Kemper]], [[Thomas Neumann]], Stephan Günnemann|title=Recursive SQL and GPU-support for in-database machine learning|journal=Distributed and Parallel Databases|date=2022|volume=40 |issue=2–3 |pages=205–259 |doi = 10.1007/s10619-022-07417-7|s2cid=250412395 |language=English|doi-access=free}}</ref>
<syntaxhighlight lang="cpp">
void derive(Expression Z, float seed) {
   if isVariable(Z)
      partialDerivativeOf(Z) += seed;
   else if (Z = A + B)
      derive(A, seed);
      derive(B, seed);
   else if (Z = A - B)
      derive(A, seed);
      derive(B, -seed);
   else if (Z = A * B)
      derive(A, valueOf(B) * seed);
      derive(B, valueOf(A) * seed);
}
</syntaxhighlight>
 
===== C++ =====
<syntaxhighlight lang="cpp">
#include <iostream>
struct Expression {
   float value;
   virtual void evaluate() = 0;
   virtual void derive(float seed) = 0;
};
struct Variable: public Expression {
   float partial;
   Variable(float value) {
      this->value = value;
      partial = 0.0f;
   }
   void evaluate() {}
   void derive(float seed) {
      partial += seed;
   }
};
struct Plus: public Expression {
   Expression *a, *b;
   Plus(Expression *a, Expression *b): a(a), b(b) {}
   void evaluate() {
      a->evaluate();
      b->evaluate();
      value = a->value + b->value;
   }
   void derive(float seed) {
      a->derive(seed);
      b->derive(seed);
   }
};
struct Multiply: public Expression {
   Expression *a, *b;
   Multiply(Expression *a, Expression *b): a(a), b(b) {}
   void evaluate() {
      a->evaluate();
      b->evaluate();
      value = a->value * b->value;
   }
   void derive(float seed) {
      a->derive(b->value * seed);
      b->derive(a->value * seed);
   }
};
int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
   z.evaluate();
   std::cout << "z = " << z.value << std::endl;
   // Output: z = 19
   z.derive(1);
   std::cout << "∂z/∂x = " << x.partial << ", "
             << "∂z/∂y = " << y.partial << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
</syntaxhighlight>
 
=== Beyond forward and reverse accumulation ===
 
Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of {{math|''f'' : '''R'''<sup>''n''</sup> → '''R'''<sup>''m''</sup>}} with a minimum number of arithmetic operations is known as the ''optimal Jacobian accumulation'' (OJA) problem, which is [[NP-complete]].<ref>{{Cite journal|first=Uwe|last=Naumann|title=Optimal Jacobian accumulation is NP-complete|journal=Mathematical Programming|volume=112|issue=2|pages=427–441|date=April 2008|doi=10.1007/s10107-006-0042-z|citeseerx=10.1.1.320.5665|s2cid=30219572}}</ref> Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.
 
== Automatic differentiation using dual numbers ==
 
Forward mode automatic differentiation is accomplished by augmenting the [[Algebra over a field|algebra]] of [[real numbers]] and obtaining a new [[arithmetic]]. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of [[dual numbers]].
 
Replace every number <math>\,x</math> with the number <math>x + x'\varepsilon</math>, where <math>x'</math> is a real number, but <math>\varepsilon</math> is an [[abstract number]] with the property <math>\varepsilon^2=0</math> (an [[infinitesimal]]; see ''[[Smooth infinitesimal analysis]]''). Using only this, regular arithmetic gives
<math display="block">\begin{align}
(x + x'\varepsilon) + (y + y'\varepsilon) &= x + y + (x' + y')\varepsilon \\
(x + x'\varepsilon) - (y + y'\varepsilon) &= x - y + (x' - y')\varepsilon \\
(x + x'\varepsilon) \cdot (y + y'\varepsilon) &= xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (x y' + yx')\varepsilon \\
(x + x'\varepsilon) / (y + y'\varepsilon) &= (x/y + x'\varepsilon/y) / (1 + y'\varepsilon/y) = (x/y + x'\varepsilon/y) \cdot (1 - y'\varepsilon/y) = x/y + (x'/y - xy'/y^2)\varepsilon
\end{align}</math>
using <math>(1 + y'\varepsilon/y) \cdot (1 - y'\varepsilon/y) = 1</math>.
 
Now, [[polynomials]] can be calculated in this augmented arithmetic. If <math>P(x) = p_0 + p_1 x + p_2x^2 + \cdots + p_n x^n</math>, then
<math display="block">\begin{align}
P(x + x'\varepsilon) &= p_0 + p_1(x + x'\varepsilon) + \cdots + p_n (x + x'\varepsilon)^n \\
&= p_0 + p_1 x + \cdots + p_n x^n + p_1x'\varepsilon + 2p_2xx'\varepsilon + \cdots + np_n x^{n-1} x'\varepsilon \\
&= P(x) + P^{(1)}(x)x'\varepsilon
\end{align}</math>
 
where <math>P^{(1)}</math> denotes the derivative of <math>P</math> with respect to its first argument, and <math>x'</math>, called a ''seed'', can be chosen arbitrarily.
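For example, for <math>P(x) = x^2 + 3x</math> evaluated at <math>x = 2</math> with seed <math>x' = 1</math>, the augmented arithmetic gives
<math display="block">P(2 + \varepsilon) = (2 + \varepsilon)^2 + 3(2 + \varepsilon) = 4 + 4\varepsilon + \varepsilon^2 + 6 + 3\varepsilon = 10 + 7\varepsilon,</math>
recovering the value <math>P(2) = 10</math> and the derivative <math>P'(2) = 7</math> in a single evaluation.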
 
The new arithmetic consists of [[ordered pair]]s, elements written <math>\langle x, x' \rangle</math>, with ordinary arithmetic on the first component, and first order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to [[analytic functions]] gives a list of the basic arithmetic and some standard functions for the new arithmetic:
<math display="block">\begin{align}
\left\langle u,u'\right\rangle + \left\langle v,v'\right\rangle &= \left\langle u + v, u' + v' \right\rangle \\
\left\langle u,u'\right\rangle - \left\langle v,v'\right\rangle &= \left\langle u - v, u' - v' \right\rangle \\
\left\langle u,u'\right\rangle \cdot \left\langle v,v'\right\rangle &= \left\langle u v, u'v + uv' \right\rangle \\
\left\langle u,u'\right\rangle / \left\langle v,v'\right\rangle &= \left\langle \frac{u}{v}, \frac{u'v - uv'}{v^2} \right\rangle \quad (v \ne 0) \\
\sin\left\langle u,u'\right\rangle &= \left\langle \sin(u) , u' \cos(u) \right\rangle \\
\cos\left\langle u,u'\right\rangle &= \left\langle \cos(u) , -u' \sin(u) \right\rangle \\
\exp\left\langle u,u'\right\rangle &= \left\langle \exp u , u' \exp u \right\rangle \\
\log\left\langle u,u'\right\rangle &= \left\langle \log(u) , u'/u \right\rangle \quad (u>0) \\
\left\langle u,u'\right\rangle^k &= \left\langle u^k , k u^{k - 1} u' \right\rangle \quad (u \ne 0) \\
\left| \left\langle u,u'\right\rangle \right| &= \left\langle \left| u \right| , u' \operatorname{sign} u \right\rangle \quad (u \ne 0)
\end{align}</math>
and in general for the primitive function <math>g</math>,
<math display="block">g(\langle u,u' \rangle , \langle v,v' \rangle ) = \langle g(u,v) , g_u(u,v) u' + g_v(u,v) v' \rangle</math>
where <math>g_u</math> and <math>g_v</math> are the derivatives of <math>g</math> with respect to its first and second arguments, respectively.
 
When a binary basic arithmetic operation is applied to mixed arguments—the pair <math>\langle u, u' \rangle</math> and the real number <math>c</math>—the real number is first lifted to <math>\langle c, 0 \rangle</math>. The derivative of a function <math>f : \mathbb{R}\to\mathbb{R}</math> at the point <math>x_0</math> is now found by calculating <math>f(\langle x_0, 1 \rangle)</math> using the above arithmetic, which gives <math>\langle f ( x_0 ) , f' ( x_0 ) \rangle </math> as the result.
 
=== Implementation ===
An example implementation based on the dual number approach follows.
 
==== Pseudocode ====
{{pre|1=
Dual plus(Dual A, Dual B) {
   return {
      realPartOf(A) + realPartOf(B),
      infinitesimalPartOf(A) + infinitesimalPartOf(B)
   };
}
Dual minus(Dual A, Dual B) {
   return {
      realPartOf(A) - realPartOf(B),
      infinitesimalPartOf(A) - infinitesimalPartOf(B)
   };
}
Dual multiply(Dual A, Dual B) {
   return {
      realPartOf(A) * realPartOf(B),
      realPartOf(B) * infinitesimalPartOf(A) + realPartOf(A) * infinitesimalPartOf(B)
   };
}
X = {x, 0};
Y = {y, 0};
Epsilon = {0, 1};
xPartial = infinitesimalPartOf(f(X + Epsilon, Y));
yPartial = infinitesimalPartOf(f(X, Y + Epsilon));
}}
 
==== C++ ====
<syntaxhighlight lang="cpp">
#include <iostream>
struct Dual {
   float realPart, infinitesimalPart;
   Dual(float realPart, float infinitesimalPart=0): realPart(realPart), infinitesimalPart(infinitesimalPart) {}
   Dual operator+(Dual other) {
      return Dual(
         realPart + other.realPart,
         infinitesimalPart + other.infinitesimalPart
      );
   }
   Dual operator*(Dual other) {
      return Dual(
         realPart * other.realPart,
         other.realPart * infinitesimalPart + realPart * other.infinitesimalPart
      );
   }
};
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Dual f(Dual x, Dual y) { return x * (x + y) + y * y; }
int main () {
   Dual x = Dual(2);
   Dual y = Dual(3);
   Dual epsilon = Dual(0, 1);
   Dual a = f(x + epsilon, y);
   Dual b = f(x, y + epsilon);
   std::cout << "∂z/∂x = " << a.infinitesimalPart << ", "
             << "∂z/∂y = " << b.infinitesimalPart << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
</syntaxhighlight>
 
===Vector arguments and functions===
 
Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute <math>y' = \nabla f(x)\cdot x'</math>, the directional derivative <math>y' \in \mathbb{R}^m</math> of <math>f:\mathbb{R}^n\to\mathbb{R}^m</math> at <math>x \in \mathbb{R}^n</math> in the direction <math>x' \in \mathbb{R}^n</math>, this may be calculated as <math>(\langle y_1,y'_1\rangle, \ldots, \langle y_m,y'_m\rangle) = f(\langle x_1,x'_1\rangle, \ldots, \langle x_n,x'_n\rangle)</math> using the same arithmetic as above. If all the elements of <math>\nabla f</math> are desired, then <math>n</math> function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient.
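As an illustrative sketch, the dual-number type and example function from the C++ example above can compute a directional derivative directly by seeding the infinitesimal part of each input with the corresponding component of the direction vector (here, for instance, the direction (1, −1)):
<syntaxhighlight lang="cpp">
#include <iostream>

// Dual number type as in the dual-number C++ example above.
struct Dual {
   float realPart, infinitesimalPart;
   Dual(float realPart, float infinitesimalPart = 0)
      : realPart(realPart), infinitesimalPart(infinitesimalPart) {}
   Dual operator+(Dual other) {
      return Dual(realPart + other.realPart,
                  infinitesimalPart + other.infinitesimalPart);
   }
   Dual operator*(Dual other) {
      return Dual(realPart * other.realPart,
                  other.realPart * infinitesimalPart + realPart * other.infinitesimalPart);
   }
};

// Example function z = x * (x + y) + y * y, as above.
Dual f(Dual x, Dual y) { return x * (x + y) + y * y; }

int main() {
   // Directional derivative at (x, y) = (2, 3) in the direction (x', y') = (1, -1):
   // each input carries its own direction component as the infinitesimal part.
   Dual z = f(Dual(2, 1), Dual(3, -1));
   // The gradient at (2, 3) is (7, 8), so the directional derivative is 7*1 + 8*(-1) = -1.
   std::cout << "z = " << z.realPart
             << ", directional derivative = " << z.infinitesimalPart << std::endl;
   return 0;
}
</syntaxhighlight>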
 
===High order and many variables===
 
The above arithmetic can be generalized to calculate second order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated [[Taylor series|Taylor polynomial]] algebra can be used (see [http://darioizzo.github.io/audi/theory.html the AuDi documentation] for the underlying theory). The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted. Currently there are a few software projects that implement this truncated Taylor polynomial algebra, among them [http://darioizzo.github.io/audi/index.html AuDi], [https://ctaylor.codeplex.com/SourceControl/latest CTaylor] and [http://bt.pa.msu.edu/index_cosy.htm COSY infinity]. Only the first two are open source.
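A minimal sketch of one such generalization carries the value together with the first and second derivative with respect to a single input and applies the product rule <math>(uv)'' = u''v + 2u'v' + uv''</math>; this is only an illustration, whereas full truncated Taylor polynomial implementations such as AuDi handle arbitrary orders and many variables:
<syntaxhighlight lang="cpp">
#include <iostream>

// Each number carries (value, first derivative, second derivative) with respect to
// one chosen input variable; a second-order forward-mode sketch, not a full
// truncated Taylor polynomial algebra.
struct Taylor2 {
   double val, d1, d2;
   Taylor2(double val, double d1 = 0, double d2 = 0) : val(val), d1(d1), d2(d2) {}
   Taylor2 operator+(Taylor2 o) const {
      return Taylor2(val + o.val, d1 + o.d1, d2 + o.d2);
   }
   Taylor2 operator*(Taylor2 o) const {
      // (uv)' = u'v + uv',  (uv)'' = u''v + 2u'v' + uv''
      return Taylor2(val * o.val,
                     d1 * o.val + val * o.d1,
                     d2 * o.val + 2 * d1 * o.d1 + val * o.d2);
   }
};

int main() {
   // f(x) = x^3 at x = 2, seeded as (x, x', x'') = (2, 1, 0).
   Taylor2 x(2, 1, 0);
   Taylor2 f = x * x * x;
   // Expected: f = 8, f' = 3x^2 = 12, f'' = 6x = 12.
   std::cout << "f = " << f.val << ", f' = " << f.d1 << ", f'' = " << f.d2 << std::endl;
   return 0;
}
</syntaxhighlight>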
 
== Implementation ==
 
Forward-mode AD is implemented by a [[nonstandard interpretation]] of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: ''source code transformation'' or ''operator overloading''.
 
=== Source code transformation (SCT) ===
The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.
 
[[Source code transformation]] can be implemented for all programming languages, and it is also easier for the compiler to do compile time optimizations. However, the implementation of the AD tool itself is more difficult and the build system is more complex.
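As a schematic illustration (a hand-written sketch of the kind of augmented code a source-transformation tool might emit; the names and the exact output of real tools such as TAPENADE differ), a function and its generated forward-mode derivative could look like:
<syntaxhighlight lang="cpp">
#include <cmath>
#include <iostream>

// Original function: y = x * sin(x)
float f(float x) { return x * std::sin(x); }

// Hypothetical generated function: every original statement is followed by a
// statement propagating the derivative dx of the input (forward mode).
float f_d(float x, float dx, float &y) {
   float w  = std::sin(x);
   float dw = std::cos(x) * dx;   // derivative statement for w = sin(x)
   y = x * w;                     // original statement
   return dx * w + x * dw;        // derivative statement for y = x * w (product rule)
}

int main() {
   float y;
   float dy = f_d(2.0f, 1.0f, y); // seeding dx = 1 yields dy = f'(2)
   std::cout << "f(2) = " << f(2.0f) << " = " << y << ", f'(2) = " << dy << std::endl;
   return 0;
}
</syntaxhighlight>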
 
=== Operator overloading (OO) ===
 
[[Image:OperatorOverloadingAutomaticDifferentiation.png|thumb|right|300px|Figure 5: Example of how operator overloading could work]]
[[Operator overloading]] is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations. Due to the inherent operator overloading overhead on each loop, this approach usually demonstrates weaker speed performance.
 
Operator overloading for forward accumulation is easy to implement, and also possible for reverse accumulation. However, current compilers lag behind in optimizing the code when compared to forward accumulation.

=== Operator overloading and source code transformation ===
Overloaded operators can be used to extract the valuation graph, followed by automatic generation of the AD version of the primal function at run-time. Unlike the classic OO AAD{{Clarify|date=July 2025}}, such an AD function does not change from one iteration to the next. Hence there is no OO or tape interpretation run-time overhead per Xi sample.
 
With the AD function being generated at runtime, it can be optimised to take into account the current state of the program and precompute certain values. In addition, it can be generated in a way that consistently utilizes native CPU vectorization to process 4 (8) double chunks of user data (AVX2/AVX-512 speedup of ×4–×8). Taking multithreading into account, such an approach can lead to a final acceleration of the order of 8 × #Cores compared to traditional AAD tools. A reference implementation is available on GitHub.<ref>{{Cite web|url=https://github.com/matlogica/aadc-prototype|title=AADC Prototype Library|date=June 22, 2022|via=GitHub}}</ref>
== Software ==
* '''C/C++'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [http://www.vivlabs.com/subpage_adc_v4.php ADC Version 4.0]
| nonfree
| OO
|
|-
| [http://www.met.rdg.ac.uk/clouds/adept/ Adept]
| [[Apache License|Apache]] 2.0
| OO
| First-order forward and reverse modes. Very fast due to its use of expression templates and an efficient tape structure.
|-
| [http://www.mcs.anl.gov/adic/ ADIC]
| free for noncommercial
| SCT
| forward mode
|-
| [[ADMB]]
| {{BSD-lic}}
| SCT+OO
| Uses a template approach. See http://admb-project.org/
|-
| [http://code.google.com/p/adnumber/ ADNumber]
| dual license
| OO
| arbitrary order forward/reverse
|-
| [http://www.coin-or.org/projects/ADOL-C.xml ADOL-C]
| CPL 1.0 or [[GPL]] 2.0
| OO
| arbitrary order forward/reverse, part of [[COIN-OR]]
|-
| [http://darioizzo.github.io/audi/ AuDi]
| GPL 2.0
| OO
| AuDI is an open source, header only, C++ library that allows for AUtomated DIfferentiation implementing the Taylor truncated polynomial algebra (forward mode automated differentiation). It was created with the aim to offer a generic solution to the user in need of an automated differentiation system. AuDi is thread-safe and, when possible, use of Piranha fine-grained parallelization of the truncated polynomial multiplication. The benefits of this fine grained parallelization are well visible for many variables and high orders.
|-
| [[AMPL]]
| free for students
| SCT
|
|-
| [http://www.fadbad.com/ FADBAD++]
| free for <br>noncommercial
| OO
| uses operator new
|-
| [http://www.casadi.org/ CasADi]
| [[LGPL]]
| SCT
| Forward/reverse modes, matrix-valued atomic operations.
|-
| [http://ceres-solver.org ceres-solver]
| [[BSD]]
| OO
| A portable C++ library that allows for modeling and solving large complicated nonlinear least squares problems
|-
| [http://www.coin-or.org/CppAD CppAD]
| [[Eclipse Public License|EPL]] 1.0 or [[GPL]] 3.0
| OO
| arbitrary order forward/reverse, AD<Base> for arbitrary Base including AD<Other_Base>, part of [[COIN-OR]]; can also be used to produce C source code using the [https://github.com/joaoleal/CppADCodeGen CppADCodeGen] library.
|-
| [http://eigen.tuxfamily.org/dox/unsupported/group__AutoDiff__Module.html Eigen Auto Diff]
| [[Mozilla Public License|MPL2]]
| OO
|
|-
| [http://www.mcs.anl.gov/OpenAD/ OpenAD]
| depends on components
| SCT
|
|-
| [http://trilinos.sandia.gov/packages/ Sacado]
| {{GPL-lic}}
| OO
| A part of the [http://trilinos.sandia.gov/ Trilinos] collection, forward/reverse modes.
|-
| [[Stan (software)]]
| {{BSD-lic}}
| OO
| forward- and reverse-mode automatic differentiation with library of special functions, probability functions, matrix operators, and linear algebra solvers
|-
| [http://www-sop.inria.fr/tropics/ TAPENADE]
| Free for noncommercial
| SCT
|
|-
| [https://ctaylor.codeplex.com/ CTaylor]
| free
| OO
| truncated taylor series, multi variable, high performance, calculating and storing only potentially nonzero derivatives, calculates higher order derivatives, order of derivatives increases when using matching operations until maximum order (parameter) is reached, example source code and executable available for testing performance
|-
| [https://github.com/wangmu0701/ReverseAD ReverseAD]
| free
| OO
| High order reverse mode which evaluates the high order derivative tensor directly instead of a forward/reverse modes hierarchy.
|}
 
* '''Fortran'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [http://www.vivlabs.com/subpage_adf_v4.php ADF Version 4.0]
| nonfree
| OO
|
|-
| [http://www-unix.mcs.anl.gov/autodiff/ADIFOR/ ADIFOR]
| [http://www.mcs.anl.gov/research/projects/adifor/AdiforPublicLicense.html >>>] <br>(free for non-commercial)
| SCT
|
|-
| [http://cpc.cs.qub.ac.uk/summaries/ADLS AUTO_DERIV]
| free for non-commercial
| OO
|-
| [http://www.mcs.anl.gov/OpenAD/ OpenAD]
| depends on components
| SCT
|
|-
| [http://fastopt.com/products/taf/taf.shtml TAF]
| nonfree
| SCT
|
|-
| [http://tapenade.inria.fr:8080/tapenade/index.jsp TAPENADE]
| Free for noncommercial
| SCT
|
|-
| [https://raullaasner.github.io/gadfit GADfit]
| Free ([[GPL]] 3)
| OO
| First (forward, reverse) and second (forward) order, principal use is nonlinear curve fitting, includes the differentiation of integrals
|}
 
* '''Matlab'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [http://www.mathworks.com/matlabcentral/fileexchange/15235 AD for MATLAB]
| {{GPL-lic}}
| OO
| Forward (1st & 2nd derivative, Uses MEX files & Windows DLLs)
|-
| [http://www.mathworks.com/matlabcentral/fileexchange/26807-automatic-differentiation-with-matlab-objects Adiff]
| {{BSD-lic}}
| OO
| Forward (1st derivative)
|-
| [http://matlabad.com/ MAD]
| Proprietary
| OO
|Forward (1st derivative) full/sparse storage of derivatives
|-
| [http://www.adimat.de/ ADiMat]
| Proprietary
| SCT
| Forward (1st & 2nd derivative) & Reverse (1st), proprietary server side transform
|-
| [https://github.com/gaika/madiff MADiff]
| {{GPL-lic}}
| OO
| Reverse
|}
 
* '''Python'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [https://pypi.python.org/pypi/ad/ ad]
| {{BSD-lic}}
| OO
| first and second-order, reverse accumulation, transparent on-the-fly calculations, basic [[NumPy]] support, written in pure python
|-
| [[FuncDesigner]]
| {{BSD-lic}}
| OO
| uses [[NumPy]] arrays and [[SciPy]] sparse matrices,<br>also allows to solve linear/non-linear/ODE systems and <br> to perform numerical optimizations by [[OpenOpt]]
|-
| [http://dirac.cnrs-orleans.fr/ScientificPython/ ScientificPython]
| [[CeCILL]]
| OO
| see modules Scientific.Functions.FirstDerivatives and <br>Scientific.Functions.Derivatives
|-
| [http://www.seanet.com/~bradbell/pycppad/index.htm pycppad]
| {{BSD-lic}}
| OO
| arbitrary order forward/reverse, implemented as wrapper for CppAD including AD<double> and AD< AD<double> >.
|-
|[https://github.com/b45ch1/pyadolc pyadolc]
| {{BSD-lic}}
| OO
|wrapper for ADOL-C, hence arbitrary order derivatives in the (combined) forward/reverse mode of AD, supports sparsity pattern propagation and sparse derivative computations
|-
| [http://packages.python.org/uncertainties/ uncertainties]
| {{BSD-lic}}
| OO
| first-order derivatives, reverse mode, transparent calculations
|-
| [https://pypi.python.org/pypi/algopy algopy]
| {{BSD-lic}}
| OO
| same approach as pyadolc and thus compatible, support to differentiate through numerical linear algebra functions like the matrix-matrix product, solution of linear systems, QR and Cholesky decomposition, etc.
|-
| [https://github.com/forrestv/pyderiv pyderiv]
| {{GPL-lic}}
| OO
| automatic differentiation and (co)variance calculation
|-
| [http://www.casadi.org/ CasADi]
| [[LGPL]]
| SCT
| Python front-end to CasADi. Forward/reverse modes, matrix-valued atomic operations.
|-
| [http://deeplearning.net/software/theano/ Theano]
| {{BSD-lic}}
| OO
| [[Theano (software)|Theano]] is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays both on CPU and GPU efficiently.
|-
| [http://github.com/hips/autograd Autograd]
| [[MIT License|MIT]]
| OO
| Autograd can reverse-mode differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures. It is closed under its own operation and hence can compute derivatives of any order.
|}
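
The "OO" entries in the Approach column of these tables denote operator overloading: a user-defined number type carries derivative information alongside each value, and every arithmetic operation applies the chain rule as it executes. The following is an illustrative sketch of this idea in plain Python; it is not taken from any of the packages listed above, and the <code>Dual</code> class and <code>sin</code> helper are hypothetical names used only for illustration.

<syntaxhighlight lang="python">
# Forward-mode AD by operator overloading: a dual number stores (value, derivative)
# and each overloaded operator applies the chain rule.
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def sin(x):
    # d/dx sin(x) = cos(x), scaled by the incoming derivative (chain rule)
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

def f(x1, x2):
    return sin(x1) + x1 * x2

# Seeding x1 with derivative 1 and x2 with derivative 0 yields df/dx1.
x1, x2 = Dual(2.0, 1.0), Dual(3.0, 0.0)
y = f(x1, x2)
print(y.value, y.deriv)   # f(2, 3) and df/dx1 = cos(2) + 3
</syntaxhighlight>

Source-code-transformation ("SCT") tools reach the same derivatives by generating new code instead of overloading operators; a sketch of that approach follows the last table in this section.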
 
* '''Lua'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [http://torch.ch Torch]
| {{BSD-lic}}
| OO
| [[Torch (machine learning)|Torch]] is a [[LuaJIT]] library used for [[deep learning]]. Its [[Torch (machine learning)#nn|nn]] package is divided into modular objects that share a common <code>Module</code> interface. Modules have a <code>forward</code> and a <code>backward</code> function that allow them to [[Feedforward_neural_network|feedforward]] and [[backpropagation|backpropagate]] (first-order derivatives); a minimal sketch of this forward/backward pattern follows the table. Modules can be joined together using module [[Composite_pattern|composites]] to create complex task-tailored graphs.
|-
| [http://www.scilua.org SciLua]
| [[MIT License|MIT]]
| OO
| SciLua, a framework for general purpose scientific computing in [[LuaJIT]], features [http://www.scilua.org/sci_diff.html complete and transparent support] for forward-mode automatic differentiation.
|}
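
The forward/backward pattern described in the Torch entry above is the operator-overloading form of reverse accumulation: the forward pass records, for each intermediate result, how an output gradient should be pushed back to its inputs, and the backward pass accumulates those contributions. The following minimal sketch uses Python rather than Lua and a hypothetical <code>Var</code> class; it is not Torch code.

<syntaxhighlight lang="python">
# Reverse-mode AD: the forward pass builds a graph of (parent, local derivative)
# edges, and backward() propagates gradients from the output to every input.
import math

class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local_deriv in self.parents:
            parent.backward(seed * local_deriv)

def sin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

x1, x2 = Var(2.0), Var(3.0)
y = sin(x1) + x1 * x2    # forward pass builds the computation graph
y.backward()             # backward pass accumulates gradients
print(x1.grad, x2.grad)  # cos(2) + 3 and 2
</syntaxhighlight>

Real implementations avoid this naive recursion by processing nodes in reverse topological order (a "tape"), so that each node is handled only once; the sketch is only meant to show the data flow.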
 
* '''.NET'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [http://autodiff.codeplex.com/ AutoDiff]
| {{LGPL-lic}}
| OO
| Automatic differentiation with C# operator overloading.
|-
| [http://funclib.codeplex.com/ FuncLib]
| [[MIT License|MIT]]
| OO
| Automatic differentiation and numerical optimization, operator overloading, unlimited order of differentiation, compilation to IL code for very fast evaluation.
|-
| [http://diffsharp.github.io/DiffSharp/ DiffSharp]
| {{LGPL-lic}}
| OO
| An automatic differentiation library implemented in the [[F_Sharp_(programming_language)|F#]] language. It supports [[C_Sharp_(programming_language)|C#]] and the other CLI languages. The library provides gradients, Hessians, Jacobians, directional derivatives, and matrix-free Hessian- and Jacobian-vector products, which can be incorporated with minimal change into existing algorithms. Operations can be nested to any level, meaning that exact higher-order derivatives can be computed and functions that internally make use of differentiation can themselves be differentiated.
|}
 
* '''Haskell'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [http://hackage.haskell.org/package/ad ad]
| {{BSD-lic}}
| OO
| Forward Mode (1st derivative or arbitrary order derivatives via lazy lists and sparse tries)<br>Reverse Mode<br>Combined forward-on-reverse Hessians.<br>Uses quantification to allow the implementation to automatically choose appropriate modes.<br>Quantification prevents perturbation/sensitivity confusion at compile time.
|-
| [http://hackage.haskell.org/package/fad fad]
| {{BSD-lic}}
| OO
| Forward Mode (lazy list). Quantification prevents perturbation confusion at compile time.
|-
| [http://hackage.haskell.org/package/rad rad]
| {{BSD-lic}}
| OO
| Reverse Mode. (Subsumed by 'ad'). <br> Quantification prevents sensitivity confusion at compile time.
|}
 
* '''Java'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [https://github.com/uniker9/JAutoDiff JAutoDiff]
| [[MIT License|MIT]]
| OO
| Provides a framework to compute derivatives of functions on arbitrary field types using generics. Written in 100% pure Java.
|-
| [http://commons.apache.org/proper/commons-math Apache Commons Math]
| [[Apache License]] v2
| OO
| Its <code>DerivativeStructure</code> class implements the extension of Rall's numbers described in Dan Kalman's paper<ref>{{cite journal|last=Kalman|first=Dan|title=Doubly Recursive Multivariate Automatic Differentiation|journal=Mathematics Magazine|date=June 2002|volume=75|issue=3|pages=187–202|url=http://www.math.american.edu/People/kalman/pdffiles/mmgautodiff.pdf|doi=10.2307/3219241}}</ref>
|-
| [https://github.com/lambder/Deriva Deriva]
| [[Eclipse Public License]] v1.0
| DSL+Code Generation
| Deriva automates algorithmic differentiation in Java and Clojure projects. It defines a DSL for building extended arithmetic expressions (the extension being support for conditionals, which allows non-analytic functions to be expressed). The DSL is used to generate flat byte-code at runtime, providing an implementation without the overhead of function calls.
|-
| [http://link.springer.com/chapter/10.1007/978-3-642-30023-3_22 Jap]
| [[Public]]
| OO/SCT
| Jap is a tool using virtual operator overloading for Java classes. Jap was developed in the [http://hal.archives-ouvertes.fr/docs/00/65/66/25/PDF/pham-quang_these.pdf thesis] of Phuong Pham-Quang (2008–2011).
|}
 
* '''[[Julia (programming language)|Julia]]'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [https://github.com/JuliaDiff/ForwardDiff.jl ForwardDiff.jl]
| [[MIT License|MIT]]
| OO
| A unified package for forward-mode automatic differentiation, combining both DualNumbers and vector-based gradient accumulations.
|-
| [https://github.com/JuliaDiff/DualNumbers.jl DualNumbers.jl]
| [[MIT License|MIT]]
| OO
| Implements a Dual number type which can be used for forward-mode automatic differentiation of first derivatives via operator overloading.
|-
| [https://github.com/JuliaDiff/HyperDualNumbers.jl HyperDualNumbers.jl]
| [[MIT License|MIT]]
| OO
| Implements a hyper-dual number type which can be used for forward-mode automatic differentiation of first and second derivatives via operator overloading; a minimal sketch follows the table.
|-
| [https://github.com/JuliaDiff/ReverseDiffSource.jl ReverseDiffSource.jl]
| [[MIT License|MIT]]
| SCT
| Implements reverse-mode automatic differentiation for gradients and high-order derivatives given user-supplied expressions or generic functions. Accepts a subset of valid Julia syntax, including intermediate assignments.
|-
| [https://github.com/JuliaDiff/TaylorSeries.jl TaylorSeries.jl]
| [[MIT License|MIT]]
| OO
| Implements truncated multivariate power series for high-order integration of ODEs and forward-mode automatic differentiation of arbitrary order derivatives via operator overloading.
|}
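
The HyperDualNumbers.jl entry above is based on hyper-dual numbers of the form <math>a + b\varepsilon_1 + c\varepsilon_2 + d\varepsilon_1\varepsilon_2</math> with <math>\varepsilon_1^2 = \varepsilon_2^2 = 0</math>, which deliver exact first and second derivatives in a single forward pass. The following illustrative Python sketch (a hypothetical <code>HyperDual</code> class, not the Julia package) shows the idea for a univariate function.

<syntaxhighlight lang="python">
# Hyper-dual forward-mode AD: seeding both first-order components with 1 makes
# the epsilon1 part carry f'(x) and the epsilon1*epsilon2 part carry f''(x).
class HyperDual:
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d   # a + b*e1 + c*e2 + d*e1*e2

    def __add__(self, o):
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    def __mul__(self, o):
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

def f(x):
    return x * x * x          # f(x) = x^3, so f'(x) = 3x^2 and f''(x) = 6x

x = HyperDual(2.0, 1.0, 1.0)  # seed: value 2, both first-order parts set to 1
y = f(x)
print(y.a, y.b, y.d)          # 8.0, 12.0 (= f'(2)), 12.0 (= f''(2))
</syntaxhighlight>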
 
* '''Clojure'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [https://github.com/lambder/Deriva Deriva]
| [[Eclipse Public License]] v1.0
| DSL+Code Generation
| Deriva automates algorithmic differentiation in Java and Clojure projects. It defines a DSL for building extended arithmetic expressions (the extension being support for conditionals, which allows non-analytic functions to be expressed). The DSL is used to generate flat byte-code at runtime, providing an implementation without the overhead of function calls.
|}
 
* '''[https://golang.org/ Golang]'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [https://github.com/xoba/ad ad]
| [http://www.apache.org/licenses/LICENSE-2.0 Apache 2.0]
| DSL+Code Generation
| Creates source code corresponding to algebraic expressions. See https://autodiff.info for a demo.
|}
 
* '''[https://cran.r-project.org/ R]'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief info
|-
| [https://cran.r-project.org/web/packages/PBSadmb/index.html PBSadmb]
| {{GPL-lic}}
| SCT+OO
| Uses a template approach. See http://admb-project.org/.
|}
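
In contrast to the operator-overloading ("OO") packages, the tools marked "SCT" in the Approach column perform source code transformation: they read the original program and emit new source code that computes derivatives alongside the original values, so no special number types are needed at run time. The following illustrative Python sketch (the <code>transform</code> helper is a hypothetical name, not any of the tools listed) generates forward-mode derivative statements for a short sequence of assignments.

<syntaxhighlight lang="python">
# A toy source-code transformation: for each three-address assignment we emit a
# derivative statement followed by the original statement.  Only +, * and sin
# are handled; real SCT tools parse and transform whole programs.
import math

def transform(assignments):
    lines = []
    for tgt, op, a, b in assignments:
        if op == "+":
            lines.append(f"d_{tgt} = d_{a} + d_{b}")
            lines.append(f"{tgt} = {a} + {b}")
        elif op == "*":
            lines.append(f"d_{tgt} = d_{a} * {b} + {a} * d_{b}")
            lines.append(f"{tgt} = {a} * {b}")
        elif op == "sin":
            lines.append(f"d_{tgt} = math.cos({a}) * d_{a}")
            lines.append(f"{tgt} = math.sin({a})")
    return "\n".join(lines)

# y = sin(x1) + x1*x2; seeding d_x1 = 1, d_x2 = 0 selects differentiation w.r.t. x1.
code = transform([("w", "sin", "x1", None),
                  ("v", "*", "x1", "x2"),
                  ("y", "+", "w", "v")])
env = {"math": math, "x1": 2.0, "x2": 3.0, "d_x1": 1.0, "d_x2": 0.0}
exec(code, env)
print(env["y"], env["d_y"])   # f(2, 3) and df/dx1 = cos(2) + 3
</syntaxhighlight>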
 
==Notes==
{{Notelist}}

==References==
{{Reflist|30em}}
 
== Further reading ==
 
* {{cite book
| last = Rall
| first = Louis B.
| title = Automatic Differentiation: Techniques and Applications
| publisher = [[Springer Science+Business Media|Springer]]
| series = [[Lecture Notes in Computer Science]]
| volume = 120
| year = 1981
| isbn = 978-3-540-10861-0
}}
* {{cite book
| last1 = Griewank
| first1 = Andreas
| last2 = Walther
| first2 = Andrea
| author2-link = Andrea Walther
| title = Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation
| edition = 2nd
| publisher = [[Society for Industrial and Applied Mathematics|SIAM]]
| series = Other Titles in Applied Mathematics
| volume = 105
| year = 2008
| doi = 10.1137/1.9780898717761
| isbn = 978-0-89871-659-7
| url = https://epubs.siam.org/doi/book/10.1137/1.9780898717761
}}
* {{cite journal
| last = Neidinger
| first = Richard D.
| title = Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming
| journal = SIAM Review
| year = 2010
| volume = 52
| issue = 3
| pages = 545–563
| doi = 10.1137/080743627
| url = http://academics.davidson.edu/math/neidinger/SIAMRev74362.pdf
| access-date = 2013-03-15
| citeseerx = 10.1.1.362.6580
| s2cid = 17134969
}}
* {{cite book
| last = Naumann
| first = Uwe
| title = The Art of Differentiating Computer Programs
| publisher = [[Society for Industrial and Applied Mathematics|SIAM]]
| series = Software, Environments, and Tools
| year = 2012
| isbn = 978-1-611972-06-1
}}
* {{cite book
| last = Henrard
| first = Marc
| title = Algorithmic Differentiation in Finance Explained
| publisher = [[Palgrave Macmillan]]
| series = Financial Engineering Explained
| year = 2017
| isbn = 978-3-319-53978-2
}}
 
==External links==
* [http://www.autodiff.org/ www.autodiff.org], An "entry site to everything you want to know about automatic differentiation"
* [http://www.autodiff.org/?module=Applications&application=HC1 Automatic Differentiation of Parallel OpenMP Programs]
* [https://www.researchgate.net/publication/241730000_Automatic_Differentiation_C_Templates_and_Photogrammetry Automatic Differentiation, C++ Templates and Photogrammetry]
* [https://web.archive.org/web/20070927120356/http://www.vivlabs.com/subpage_ad.php Automatic Differentiation, Operator Overloading Approach]
* [http://tapenade.inria.fr:8080/tapenade/index.jsp Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface] Automatic Differentiation of Fortran programs
* [http://www.win-vector.com/dfiles/AutomaticDifferentiationWithScala.pdf Description and example code for forward Automatic Differentiation in Scala] {{Webarchive|url=https://web.archive.org/web/20160803214549/http://www.win-vector.com/dfiles/AutomaticDifferentiationWithScala.pdf |date=2016-08-03 }}
* [https://www.finmath.net/finmath-lib/concepts/stochasticautomaticdifferentiation/ finmath-lib stochastic automatic differentiation], automatic differentiation for random variables (Java implementation of stochastic automatic differentiation).
* [https://web.archive.org/web/20140423121504/http://developers.opengamma.com/quantitative-research/Adjoint-Algorithmic-Differentiation-OpenGamma.pdf Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem]
* [http://www.quantandfinancial.com/2017/02/automatic-differentiation-templated.html C++ Template-based automatic differentiation article] and [https://github.com/omartinsky/QuantAndFinancial/tree/master/autodiff_templated implementation]
* [https://github.com/google/tangent Tangent] [https://research.googleblog.com/2017/11/tangent-source-to-source-debuggable.html Source-to-Source Debuggable Derivatives]
* [http://www.nag.co.uk/doc/techrep/pdf/tr5_10.pdf Exact First- and Second-Order Greeks by Algorithmic Differentiation]
* [http://www.nag.co.uk/Market/articles/adjoint-algorithmic-differentiation-of-gpu-accelerated-app.pdf Adjoint Algorithmic Differentiation of a GPU Accelerated Application]
* [http://www.nag.co.uk/Market/seminars/Uwe_AD_Slides_July13.pdf Adjoint Methods in Computational Finance Software Tool Support for Algorithmic Differentiation]
* [https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xva-pricing-application-financial-services-white-papers.pdf More than a Thousand Fold Speed Up for xVA Pricing Calculations with Intel Xeon Scalable Processors]
* [https://github.com/ExcessPhase/ctaylor Sparse truncated Taylor series implementation with VBIC95 example for higher order derivatives]
 
{{Differentiable computing}}
{{Authority control}}
 
{{DEFAULTSORT:Automatic Differentiation}}
[[Category:Differential calculus]]
[[Category:Computer algebra]]
[[Category:Articles with example pseudocode]]
[[Category:Articles with example Python (programming language) code]]
[[Category:Articles with example C++ code]]