Personalized Algorithm Generation: A Case Study in Learning ODE Integrators

@article{Guo2022PersonalizedAG,
  title={Personalized Algorithm Generation: A Case Study in Learning ODE Integrators},
  author={Yue Guo and Felix Dietrich and Tom S. Bertalan and Danimir T. Doncevic and Manuel Dahmen and Ioannis G. Kevrekidis and Qianxiao Li},
  journal={SIAM J. Sci. Comput.},
  year={2022},
  volume={44},
  pages={1911-}
}
We study the learning of numerical algorithms for scientific computing, which combines mathematically driven, handcrafted design of general algorithm structure with a data-driven adaptation to specific classes of tasks. This represents a departure from the classical approaches in numerical analysis, which typically do not feature such learning-based adaptations. As a case study, we develop a machine learning approach that automatically learns effective solvers for initial value problems in the… 
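
For intuition, here is a minimal sketch of the general idea (an illustration only, not the authors' method): parameterize the coefficients of an explicit two-stage Runge–Kutta-like step and train them by gradient descent to minimize the one-step error over a family of tasks, here linear ODEs y' = a*y with known exact flow. All names and the toy task distribution below are assumptions.

import torch

# Trainable coefficients of a two-stage explicit step (a toy stand-in
# for the general "learn the integrator on a task class" idea).
b = torch.tensor([0.5, 0.5], requires_grad=True)   # stage weights
c2 = torch.tensor(1.0, requires_grad=True)         # stage abscissa

def step(f, y, h):
    # One parameterized explicit two-stage step.
    k1 = f(y)
    k2 = f(y + h * c2 * k1)
    return y + h * (b[0] * k1 + b[1] * k2)

opt = torch.optim.Adam([b, c2], lr=1e-2)
h = 0.1
for _ in range(2000):
    a = torch.empty(64).uniform_(-2.0, 2.0)   # sample tasks y' = a*y
    y0 = torch.ones(64)
    y1 = step(lambda y: a * y, y0, h)
    exact = y0 * torch.exp(a * h)             # exact flow of y' = a*y
    loss = ((y1 - exact) ** 2).mean()         # one-step integration error
    opt.zero_grad(); loss.backward(); opt.step()

Trained on such a family, the coefficients tend toward the classical second-order conditions b[0] + b[1] = 1 and b[1]*c2 = 1/2, plus data-driven corrections specific to the task distribution; this is the sense in which a learned solver is "personalized."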
2 Citations

Sixth Order Numerov-Type Methods with Coefficients Trained to Perform Best on Problems with Oscillating Solutions

Numerov-type methods using four stages per step and sharing sixth algebraic order are considered. The coefficients of such methods depend on two free parameters. For addressing problems with oscillating solutions, …
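
For orientation, Numerov-type methods target second-order problems y'' = f(x, y). The classical fourth-order Numerov scheme, of which the cited sixth-order, four-stage methods are higher-order relatives, reads

\[ y_{n+1} - 2y_n + y_{n-1} = \frac{h^2}{12}\left( f_{n+1} + 10 f_n + f_{n-1} \right), \qquad f_k = f(x_k, y_k). \]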

Evolutionary Derivation of Runge–Kutta Pairs of Orders 5(4) Specially Tuned for Problems with Periodic Solutions

The purpose of the present work is to construct a new Runge–Kutta pair of orders five and four that outperforms the state of the art in this class of methods when addressing problems with periodic solutions. …

References

SHOWING 1-10 OF 78 REFERENCES

Neural Ordinary Differential Equations

This work shows how to scalably backpropagate through any ODE solver, without access to its internal operations, which allows end-to-end training of ODEs within larger models.
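
A minimal sketch of the baseline this work improves on is direct backpropagation through an unrolled fixed-step solver, as below; the paper's adjoint method instead recovers the same gradients without storing the solver's internal operations. The vector field and loss here are placeholders.

import torch

def rk4_step(f, y, t, h):
    # Classical fourth-order Runge-Kutta step.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# A small neural vector field; its parameters receive gradients
# through the unrolled integration below.
field = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 2))
f = lambda t, y: field(y)

y = torch.tensor([[1.0, 0.0]])
t, h = 0.0, 0.05
for _ in range(20):          # unrolled solve from t=0 to t=1
    y = rk4_step(f, y, t, h)
    t += h
loss = (y ** 2).sum()        # any downstream loss works
loss.backward()              # gradients w.r.t. the field's parameters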

Optimal deterministic algorithm generation

A combination of brute force and general-purpose deterministic global optimization algorithms is employed to overcome the multimodality arising from nonconvexity in the optimization problem and to guarantee the optimality of the devised algorithm.

Revisiting "Qualitatively Characterizing Neural Network Optimization Problems"

It is concluded that, although Goodfellow et al.'s findings describe the "relatively easy to optimize" MNIST setting, behavior is qualitatively different in modern settings.

Multi-Task Learning with Deep Neural Networks: A Survey

An overview of multi-task learning methods for deep neural networks is given, with the aim of summarizing both the well-established and most recent directions within the field.
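
As a concrete instance of the most common design covered by such surveys, hard parameter sharing, a single trunk is shared across tasks while each task receives its own output head. Class and argument names below are illustrative.

import torch.nn as nn

class HardSharingMTL(nn.Module):
    # Shared trunk + per-task heads: the canonical hard-parameter-sharing
    # architecture for multi-task learning.
    def __init__(self, in_dim, hidden, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_out_dims)

    def forward(self, x):
        z = self.trunk(x)                    # features shared by all tasks
        return [head(z) for head in self.heads]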

Inverse modified differential equations for discovery of dynamics

Inverse modified differential equations (IMDEs) are introduced to contribute to the fundamental theory of the discovery of dynamics and to clarify the behavior of parameterizing some blocks in neural ODEs.

Pruning neural networks without any data by iteratively conserving synaptic flow

The data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important, and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models, datasets, and sparsity constraints.
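
A single scoring round in the spirit of the synaptic-flow idea can be sketched as follows; the paper's actual algorithm applies such scores iteratively, pruning a fraction of weights per round, and the function below is a hypothetical simplification.

import torch

def synflow_scores(model, input_shape):
    # Data-free scoring in the spirit of SynFlow: replace weights by their
    # absolute values, push an all-ones "input" through the network, and
    # score each weight by |w * dR/dw|, where R is the summed output.
    signs = [p.data.sign() for p in model.parameters()]
    for p in model.parameters():
        p.data.abs_()
    R = model(torch.ones(1, *input_shape)).sum()
    grads = torch.autograd.grad(R, list(model.parameters()))
    scores = [(g * p.data).abs() for g, p in zip(grads, model.parameters())]
    for p, s in zip(model.parameters(), signs):
        p.data.mul_(s)                       # restore original weight signs
    return scores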

A review on superstructure optimization approaches in process system engineering

Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming

The proposed MIP formulation generalizes the recently considered lottery ticket optimization by identifying multiple "lucky" subnetworks, resulting in an optimized architecture that not only performs well on a single dataset but also generalizes across multiple datasets upon retraining of the network weights.
...