Programming with neural surrogates of programs

@article{Renda2021ProgrammingWN,
  title={Programming with neural surrogates of programs},
  author={Alex Renda and Yi Ding and Michael Carbin},
  journal={Proceedings of the 2021 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software},
  year={2021}
}
  • Published 17 October 2021
  • Computer Science
Surrogates, models that mimic the behavior of programs, form the basis of a variety of development workflows. We study three surrogate-based design patterns, evaluating each in case studies on a large-scale CPU simulator. With surrogate compilation, programmers develop a surrogate that mimics the behavior of a program to deploy to end-users in place of the original program. Surrogate compilation accelerates the CPU simulator under study by 1.6×. With surrogate adaptation, programmers develop a… 
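To make the surrogate-compilation pattern concrete, here is a minimal sketch in PyTorch: train a small network on input/output pairs sampled from a program, then call the network in the program's place. The toy target function, MLP architecture, and hyperparameters below are illustrative assumptions, not the paper's actual CPU-simulator setup.

import torch
import torch.nn as nn

def original_program(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for an expensive program (e.g., one query to a CPU simulator).
    return torch.sin(3.0 * x) + 0.5 * x ** 2

# Surrogate: a small MLP (architecture chosen for illustration only).
surrogate = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Train on input/output pairs obtained by running the original program.
for step in range(2000):
    x = torch.rand(256, 1) * 4.0 - 2.0    # inputs from the program's domain, here [-2, 2]
    loss = nn.functional.mse_loss(surrogate(x), original_program(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Deployment: end-users invoke the surrogate in place of the original program.
with torch.no_grad():
    x = torch.tensor([[0.5]])
    print(surrogate(x).item(), original_program(x).item())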

References

SHOWING 1-10 OF 174 REFERENCES
NEUZZ: Efficient Fuzzing with Neural Program Smoothing
TLDR
Proposes a novel program smoothing technique in which surrogate neural network models incrementally learn smooth approximations of a complex, real-world program's branching behavior, and combines it with gradient-guided input generation schemes to significantly increase the efficiency of the fuzzing process.
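The gradient-guided step is worth sketching: once a surrogate predicts per-branch behavior from raw input bytes, its input gradients rank which bytes to mutate. The sketch below is a loose simplification of the NEUZZ idea; the architecture, mutation rule, and dimensions are assumptions, and training the surrogate on real execution traces is omitted.

import torch
import torch.nn as nn

INPUT_BYTES, NUM_BRANCHES = 128, 512

# Smooth surrogate of the target program's branching behavior; in practice it
# would be trained on (input, branch coverage) pairs from real executions.
surrogate = nn.Sequential(
    nn.Linear(INPUT_BYTES, 256), nn.ReLU(),
    nn.Linear(256, NUM_BRANCHES), nn.Sigmoid(),   # per-branch hit probabilities
)

def mutate(seed: torch.Tensor, branch: int, k: int = 8) -> torch.Tensor:
    # Nudge the k bytes the surrogate's gradient says most influence `branch`.
    x = seed.clone().requires_grad_(True)
    surrogate(x)[branch].backward()               # d(branch probability) / d(input bytes)
    hot = x.grad.abs().topk(k).indices            # most influential byte positions
    out = seed.clone()
    out[hot] = (out[hot] + 0.25 * torch.sign(x.grad[hot])).clamp(0.0, 1.0)
    return out

seed = torch.rand(INPUT_BYTES)                    # input bytes normalized to [0, 1]
candidate = mutate(seed, branch=7)                # new fuzzing input to execute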
PyTorch: An Imperative Style, High-Performance Deep Learning Library
TLDR
This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.
Transfer-Learning: Bridging the Gap between Real and Simulation Data for Machine Learning in Injection Molding
TLDR
This work evaluated two approaches based on artificial neural networks, soft-start and random initialization, on a real injection molding process, and showed better learning rates and more accurate predictions while using less experimental data.
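A minimal sketch of the soft-start idea described above, assuming toy stand-in data rather than the paper's injection-molding measurements: pretrain on plentiful simulation data and fine-tune on a few real samples, versus training from random initialization on the real samples alone.

import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Toy stand-ins: plentiful simulation data, sparse real measurements.
x_sim, y_sim = torch.randn(5000, 4), torch.randn(5000, 1)
x_real, y_real = torch.randn(40, 4), torch.randn(40, 1)

# Soft-start: pretrain on simulation data, then fine-tune gently on real data.
soft = train(make_model(), x_sim, y_sim, epochs=200, lr=1e-3)
soft = train(soft, x_real, y_real, epochs=100, lr=1e-4)

# Random initialization: train on the sparse real data alone.
rand = train(make_model(), x_real, y_real, epochs=100, lr=1e-3)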
Towards Automated Construction of Compiler Optimizations
  • Ph.D. Thesis, Massachusetts Institute of Technology
  • 2020
Transfer Learning as a Tool for Reducing Simulation Bias: Application to Inertial Confinement Fusion
We adopt a technique, known in the machine learning community as transfer learning, to reduce the bias of computer simulation using very sparse experimental data. Unlike the Bayesian calibration, …
Transfer Learning for Design-Space Exploration with High-Level Synthesis
  • Jihye Kwon, L. Carloni
  • Computer Science
  • 2020 ACM/IEEE 2nd Workshop on Machine Learning for CAD (MLCAD)
  • 2020
TLDR
A novel neural network model is developed that reuses knowledge obtained from previously explored design spaces when exploring a new target design space, and outperforms both single-domain and hard-sharing models in predicting performance and cost at early stages of HLS-driven DSE.
Deep Probabilistic Surrogate Networks for Universal Simulator Approximation
TLDR
Employing the surrogate modeling technique makes inference an order of magnitude faster, opening up the possibility of doing simulator-based, non-invasive, just-in-time parts quality testing; in this case inferring safety-critical latent internal temperature profiles of composite materials undergoing curing from surface temperature profile measurements.
Imitation-Projected Programmatic Reinforcement Learning
  • Advances in Neural Information Processing Systems
  • 2019
TLDR
The experiments show that PROPEL can significantly outperform state-of-the-art approaches for learning programmatic policies and exploit contemporary combinatorial methods for this task.
Attention is All you Need
TLDR
A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, having been applied successfully to English constituency parsing with both large and limited training data.
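The core of that architecture, scaled dot-product attention, fits in a few lines. This is a single-head sketch for illustration only (no projections, multi-head splitting, or positional encoding):

import math
import torch

def attention(q, k, v, mask=None):
    # q, k, v: (batch, seq_len, d_k); scores: (batch, seq_len, seq_len)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v      # weighted sum of values

q = k = v = torch.randn(2, 10, 64)                # self-attention on a toy sequence
out = attention(q, k, v)                          # shape (2, 10, 64)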