Corpus ID: 15568746

Who Invented the Reverse Mode of Differentiation

Andreas Griewank
Nick Trefethen [13] listed automatic differentiation as one of the 30 great numerical algorithms of the last century. He kindly credited the present author with facilitating the rebirth of the key idea, namely the reverse mode. In fact, there have been many incarnations of this reversal technique, which has been suggested by several people from various fields since the late 1960s, if not earlier. Seppo Linnainmaa (Lin76) of Helsinki says the idea came to him on a sunny afternoon in a Copenhagen… 
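The reversal technique the snippet alludes to — record the computation on a tape during the forward sweep, then propagate adjoints backwards through it — can be sketched in a few lines of Python. The tape-based design and all names below are illustrative, not any particular historical implementation:

```python
# Minimal reverse-mode sketch: a global tape records each intermediate
# value together with the local partials of the operation that produced
# it; the backward sweep then accumulates adjoints in reverse order.
tape = []

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # pairs (parent Var, local partial)
        self.adjoint = 0.0
        tape.append(self)           # creation order is a topological order

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(output):
    """One reverse sweep yields all partial derivatives of `output`."""
    output.adjoint = 1.0
    for node in reversed(tape):
        for parent, partial in node.parents:
            parent.adjoint += node.adjoint * partial

x, y = Var(3.0), Var(4.0)
f = x * y + x                       # f(x, y) = x*y + x
backward(f)
print(x.adjoint, y.adjoint)         # df/dx = y + 1 = 5.0, df/dy = x = 3.0
```

The key property, visible even at this scale, is that a single backward sweep produces every partial derivative at a cost proportional to one evaluation of f, independent of the number of inputs.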
Algorithm 1005
A set of Fortran subroutines for reverse-mode algorithmic differentiation of the basic linear algebra subprograms (BLAS) is presented, together with comprehensive tables of derivative formulae both for the BLAS and for several non-BLAS matrix operations commonly used in optimization.
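The tabulated reverse-mode rules for BLAS operations have the following flavor. As an illustration (not the paper's Fortran interface), for the matrix–vector product y = A x, an incoming output adjoint ȳ maps to the input adjoints Ā = ȳ xᵀ and x̄ = Aᵀ ȳ; the data below is arbitrary:

```python
# Reverse-mode rule for the BLAS level-2 operation y = A @ x, checked
# against a central finite difference on the scalar L = ybar . (A x).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
x = rng.standard_normal(2)
ybar = rng.standard_normal(3)        # adjoint arriving from downstream

Abar = np.outer(ybar, x)             # dL/dA = ybar x^T
xbar = A.T @ ybar                    # dL/dx = A^T ybar

# Finite-difference check of one entry of Abar:
L = lambda A_: ybar @ (A_ @ x)
E = np.zeros_like(A); E[1, 0] = 1.0
fd = (L(A + 1e-6 * E) - L(A - 1e-6 * E)) / 2e-6
print(abs(fd - Abar[1, 0]) < 1e-6)   # True
```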
Mini-symposium on automatic differentiation and its applications in the financial industry
This paper shows how automatic differentiation provides a partial answer to this recent explosion in the amount of computation to be performed, and gives short introductions to the typical cases that arise when one uses AAD on financial markets.
DiffSharp: Automatic Differentiation Library
DiffSharp aims to make an extensive array of AD techniques available, in convenient form, to the machine learning community, including arbitrary nesting of forward/reverse AD operations, AD with linear algebra primitives, and a functional API that emphasizes the use of higher-order functions and composition.
Automatic differentiation in machine learning: a survey
By precisely defining the main differentiation techniques and their interrelationships, this work aims to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
An introduction to algorithmic differentiation
This work provides an introduction to AD and presents its basic ideas and techniques, some of its most important results, the implementation paradigms it relies on, the connection it has to other domains including machine learning and parallel computing, and a few of the major open problems in the area.
The methods and applications of automatic differentiation, a research and development activity that has evolved in various computational fields since the mid-1950s, are reviewed; the technique also facilitates the treatment of nonsmooth problems by piecewise linearization.
Provably Correct Automatic Subdifferentiation for Qualified Programs
The main result shows that, under certain restrictions on the library of non-smooth functions, provably correct generalized sub-derivatives can be computed at a computational cost that is within a (dimension-free) factor of 6 of the cost of computing the scalar function itself.
Polyhedral DC Decomposition and DCA Optimization of Piecewise Linear Functions
It is shown how f̌ and f̂ can be expressed as a single maximum and a single minimum of affine functions, respectively, and how one can ensure finite convergence to a local minimizer of f, provided the Linear Independence Kink Qualification holds.
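The decomposition described in the abstract — writing a piecewise linear f as f̌ + f̂, where f̌ is a single maximum and f̂ a single minimum of affine functions — can be illustrated on a tiny example. The function and its split below are chosen for illustration and are not taken from the paper:

```python
# Illustrative polyhedral DC split of f(x) = |x| - |x - 1|:
# convex part   f_check(x) = max(x, -x)         (a single max of affines)
# concave part  f_hat(x)   = min(x - 1, 1 - x)  (a single min of affines)
def f_check(x):
    return max(x, -x)             # = |x|

def f_hat(x):
    return min(x - 1.0, 1.0 - x)  # = -|x - 1|

def f(x):
    return f_check(x) + f_hat(x)  # piecewise linear, equals |x| - |x - 1|

# Sanity check at a few kink-straddling points:
for x in (-2.0, 0.0, 0.3, 1.0, 5.0):
    assert f(x) == abs(x) - abs(x - 1.0)
print(f(0.0), f(2.0))             # -1.0 1.0
```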
Differentiable Visual Computing
This dissertation introduces three tools for addressing the challenges that arise when obtaining and applying the derivatives for complex graphics algorithms, and introduces the first general-purpose differentiable ray tracer that solves the full rendering equation, while correctly taking the geometric discontinuities into account.
New Integration Methods for Perturbed ODEs Based on Symplectic Implicit Runge–Kutta Schemes with Application to Solar System Simulations
A family of integrators, flow-composed implicit Runge–Kutta methods, is presented for perturbations of nonlinear ordinary differential equations; they consist of the composition of flows of the unperturbed part alternated with one step of an implicit Runge–Kutta (IRK) method applied to a transformed system, with potential application to long-term solar system simulations.


Compiling fast partial derivatives of functions given by algorithms
If the gradient of the function y = f(x_1, ..., x_n) is desired, where f is given by an algorithm Af(x, n, y), most numerical analysts will use numerical differencing. This is a sampling
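The numerical differencing the snippet refers to can be sketched as follows; note the n + 1 evaluations of f per gradient, which is precisely the cost that grows with dimension and that the reverse mode avoids. The test function and evaluation point are illustrative:

```python
# Forward-difference approximation of the gradient of f: R^n -> R.
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Approximate grad f at x with n + 1 calls to f."""
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        xi = x.copy()
        xi[i] += h                  # perturb one coordinate at a time
        g[i] = (f(xi) - fx) / h
    return g

f = lambda x: x[0] * x[1] + np.sin(x[0])
x = np.array([1.0, 2.0])
print(fd_gradient(f, x))            # approx [2 + cos(1), 1]
```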
Evaluating derivatives - principles and techniques of algorithmic differentiation, Second Edition
This second edition has been updated and expanded to cover recent developments in applications and theory, including an elegant NP completeness argument by Uwe Naumann and a brief introduction to scarcity, a generalization of sparsity.
A Fortran precompiler for automatic differentiation and estimates of rounding errors
We are developing a Fortran precompiler named Padre2 which is a tool for automatic differentiation. The precompiler reads Fortran subroutine/function subprograms which compute values of a function
On the discrete adjoints of adaptive time stepping algorithms
It is demonstrated that the discrete adjoint models of one-step, explicit adaptive algorithms, such as the Runge-Kutta schemes, can be made consistent with their continuous counterparts using simple code modifications.
Optimal Jacobian accumulation is NP-complete
  • U. Naumann
  • Mathematics, Computer Science
    Math. Program.
  • 2008
We show that the problem of accumulating Jacobian matrices by using a minimal number of floating-point operations is NP-complete by reduction from Ensemble Computation. The proof makes use of the
Computational differentiation : techniques, applications, and tools
This volume goes beyond the first volume published in 1991 (SIAM) in that it encompasses both the automatic transformation of computer programs and the methodologies for the efficient
GlobSol user guide
We explain the installation and use of the GlobSol package for mathematically rigorous bounds on all solutions to constrained and unconstrained global optimization problems, as well as non-linear
Constrained Optimization and Optimal Control for Partial Differential Equations
  • G. Leugering
  • Computer Science
    International series of numerical mathematics
  • 2012
Constrained Optimization, Identification and Control, and Shape and Topology Optimization are applied to model reduction and discretization.
Logical Reversibility of Computation
  • IBM Journal of Research and Development,
  • 1973
Documenta Mathematica · Extra Volume ISMP
  • Documenta Mathematica · Extra Volume ISMP
  • 2012