Corpus ID: 229642122

Apollo: Transferable Architecture Exploration

@article{Yazdanbakhsh2021ApolloTA,
  title={Apollo: Transferable Architecture Exploration},
  author={A. Yazdanbakhsh and Christof Angerm{\"u}ller and Berkin Akin and Yanqi Zhou and A. Jones and Milad Hashemi and Kevin Swersky and S. Chatterjee and Ravi Narayanaswami and J. Laudon},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.01723}
}
The looming end of Moore’s Law and the ascending use of deep learning drive the design of custom accelerators that are optimized for specific neural architectures. Architecture exploration for such accelerators forms a challenging constrained optimization problem over a complex, high-dimensional, and structured input space with a costly-to-evaluate objective function. Existing approaches for accelerator design are sample-inefficient and do not transfer knowledge between related optimization tasks…
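To make the problem framing in the abstract concrete, the sketch below shows a minimal constrained black-box search over a structured accelerator design space. This is not Apollo's actual method (the paper proposes transferable, sample-efficient optimizers); it only illustrates the setup: discrete design knobs, a feasibility constraint, and a costly objective that here is replaced by a toy latency model. All parameter names, values, and cost formulas are hypothetical.

```python
import random

# Hypothetical, simplified accelerator design space (not Apollo's actual
# parameterization): each knob takes one of a few discrete values.
DESIGN_SPACE = {
    "pe_rows":      [8, 16, 32, 64],
    "pe_cols":      [8, 16, 32, 64],
    "l2_kb":        [128, 256, 512, 1024],
    "io_bandwidth": [16, 32, 64],   # GB/s
}

AREA_BUDGET_MM2 = 30.0  # assumed feasibility constraint


def sample_config():
    """Draw a random point from the structured design space."""
    return {k: random.choice(v) for k, v in DESIGN_SPACE.items()}


def area_mm2(cfg):
    """Toy area model standing in for a real feasibility check."""
    return 0.005 * cfg["pe_rows"] * cfg["pe_cols"] + 0.01 * cfg["l2_kb"]


def evaluate_latency(cfg):
    """Stand-in for the costly objective (e.g. a cycle-accurate simulator)."""
    compute = 1e6 / (cfg["pe_rows"] * cfg["pe_cols"])
    memory = 5e4 / cfg["io_bandwidth"] + 2e4 / cfg["l2_kb"]
    return compute + memory


def random_search(num_trials=200, seed=0):
    """Constrained random search: skip infeasible points, keep the best."""
    random.seed(seed)
    best_cfg, best_latency = None, float("inf")
    for _ in range(num_trials):
        cfg = sample_config()
        if area_mm2(cfg) > AREA_BUDGET_MM2:
            continue  # violates the area constraint
        latency = evaluate_latency(cfg)
        if latency < best_latency:
            best_cfg, best_latency = cfg, latency
    return best_cfg, best_latency


if __name__ == "__main__":
    cfg, lat = random_search()
    print(f"best feasible config: {cfg}, latency proxy: {lat:.1f}")
```

Because each evaluation of the real objective is expensive, naive random search like this is exactly the sample-inefficient baseline the paper argues against; methods that learn from and transfer across related exploration tasks aim to reach good feasible designs with far fewer evaluations.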

