Automatic Differentiation

Automatic Differentiation roughly means that a numerical value is equipped with a derivative part, which is updated accordingly on every function application. Let the number <math>x_0</math> be equipped with the derivative <math>x_1</math>: <math>\langle x_0,x_1 \rangle</math>. For example, the sine is defined as:

* <math>\sin\langle x_0,x_1 \rangle = \langle \sin x_0, x_1\cdot\cos x_0\rangle</math>

As you can see, that's just error propagation as known from physics. However, it becomes more interesting for vector functions.
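A minimal sketch of this idea in Haskell might look as follows (the type <hask>Dual</hask>, its instances, and the helper <hask>diff</hask> are illustrative names for this sketch, not taken from any particular library):

<haskell>
-- A value paired with its derivative: the pair <x0, x1> from above
data Dual = Dual { value :: Double, deriv :: Double }
  deriving Show

instance Num Dual where
  Dual x0 x1 + Dual y0 y1 = Dual (x0 + y0) (x1 + y1)
  Dual x0 x1 * Dual y0 y1 = Dual (x0 * y0) (x0 * y1 + x1 * y0)  -- product rule
  negate (Dual x0 x1)     = Dual (negate x0) (negate x1)
  abs    (Dual x0 x1)     = Dual (abs x0) (x1 * signum x0)
  signum (Dual x0 _)      = Dual (signum x0) 0
  fromInteger n           = Dual (fromInteger n) 0  -- constants carry derivative 0

-- The rule for sine from above: sin<x0,x1> = <sin x0, x1 * cos x0>
sinD :: Dual -> Dual
sinD (Dual x0 x1) = Dual (sin x0) (x1 * cos x0)

-- Differentiate f at x by seeding the derivative part with 1
diff :: (Dual -> Dual) -> Double -> Double
diff f x = deriv (f (Dual x 1))
</haskell>

For example, <hask>diff (\x -> sinD (x*x)) 2</hask> computes the derivative of <math>\sin(x^2)</math> at 2, namely <math>2\cdot 2\cdot\cos 4</math>.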

Implementations:

Power Series

Arithmetic with power series may also be counted as Automatic Differentiation, since it amounts to working with all derivatives simultaneously (see the sketch after the links below).

Implementation with Haskell 98 type classes: http://darcs.haskell.org/htam/src/PowerSeries/Taylor.hs

With advanced type classes in Numeric Prelude: http://hackage.haskell.org/packages/archive/numeric-prelude/0.0.5/doc/html/MathObj-PowerSeries.html
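A tiny illustration of the idea, independent of the two implementations linked above (the representation of a series as a lazy coefficient list and all function names are assumptions of this sketch):

<haskell>
-- A power series as the (lazy, possibly infinite) list of its
-- coefficients: [a0, a1, a2, ...] represents a0 + a1*x + a2*x^2 + ...
type Series = [Double]

-- Addition and multiplication (Cauchy product) of power series
addS, mulS :: Series -> Series -> Series
addS = zipWith (+)
mulS (a:as) bbs@(b:bs) = a*b : addS (map (a*) bs) (mulS as bbs)
mulS _ _ = []

-- Formal differentiation: drop the constant term, scale by the exponent
diffS :: Series -> Series
diffS as = zipWith (*) [1..] (tail as)

-- Formal integration with integration constant c
integrate :: Double -> Series -> Series
integrate c as = c : zipWith (/) as [1..]

-- sine and cosine, tied together via sin' = cos and cos' = -sin
sinS, cosS :: Series
sinS = integrate 0 cosS
cosS = integrate 1 (map negate sinS)
</haskell>

Here <hask>take 6 sinS</hask> yields the first Taylor coefficients of the sine at 0, namely 0, 1, 0, -1/6, 0, 1/120: all derivatives at once, divided by the factorials.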

See also