Corpus ID: 248177810

Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks

@inproceedings{Zela2022SurrogateNB,
  title={Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks},
  author={Arber Zela and Julien N. Siems and Lucas Zimmer and Jovita Lukasik and Margret Keuper and Frank Hutter},
  booktitle={ICLR},
  year={2022}
}
The most significant barrier to the advancement of Neural Architecture Search (NAS) is its demand for large computational resources, which hinders scientifically sound empirical evaluations of NAS methods. Tabular NAS benchmarks have alleviated this problem substantially, making it possible to properly evaluate NAS methods in seconds on commodity machines. However, an unintended consequence of tabular NAS benchmarks has been a focus on extremely small architectural search spaces since their… 
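
To make the surrogate idea concrete, here is a minimal sketch (not the authors' implementation; the architecture encoding, the gradient-boosted regressor, and the toy data are illustrative assumptions): rather than tabulating every architecture, a regression model is fit on a sample of (architecture encoding, validation accuracy) pairs and is then queried in place of training.

# Minimal surrogate-benchmark sketch: fit a regressor on a sample of evaluated
# architectures, then predict validation accuracy for unseen ones.
# Encoding, model choice, and data below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy stand-in for architecture encodings (e.g. one-hot operation choices)
# and their measured validation accuracies.
X_train = rng.integers(0, 2, size=(500, 30)).astype(float)
y_train = 0.7 + 0.2 * X_train.mean(axis=1) + rng.normal(0, 0.01, size=500)

surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# Querying the surrogate replaces training the candidate architecture from scratch.
new_arch = rng.integers(0, 2, size=(1, 30)).astype(float)
print(f"predicted validation accuracy: {surrogate.predict(new_arch)[0]:.3f}")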

Neural Architecture Search as Multiobjective Optimization Benchmarks: Problem Formulation and Performance Assessment

TLDR
This work formulates NAS tasks as general multi-objective optimization problems, analyzes their complex characteristics from an optimization point of view, and presents an end-to-end pipeline, dubbed EvoXBench, that generates benchmark test problems for EMO algorithms to run without requiring GPUs or PyTorch/TensorFlow.
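
As a hedged illustration of what treating NAS as a multi-objective problem entails (the objective values below are fabricated and the helper functions are not EvoXBench's API), each candidate maps to a vector of objectives, e.g. validation error and latency, and an EMO algorithm keeps the non-dominated set:

# Illustrative Pareto-front computation over (validation error, latency) pairs;
# the candidate values are made up for demonstration, not benchmark data.
from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    # a dominates b if it is no worse in every objective and strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Each tuple: (validation error, latency in ms) of one candidate architecture.
candidates = [(0.08, 12.0), (0.10, 6.0), (0.09, 9.0), (0.12, 5.5), (0.09, 14.0)]
print(pareto_front(candidates))   # the last candidate is dominated and drops out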

IGWO-SS: Improved Grey Wolf Optimization Based on Synaptic Saliency for Fast Neural Architecture Search in Computer Vision

TLDR
This study proposes Improved Grey Wolf Optimization based on Synaptic Saliency (IGWO-SS), which is much faster than existing NAS algorithms and achieves better final performance, and shows that the synaptic saliency of an untrained neural network positively correlates with its final accuracy.
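
A rough sketch of a SynFlow-style synaptic saliency score for an untrained network follows (a simplification for illustration, not the IGWO-SS implementation; the toy model and the use of absolute weights with an all-ones input are assumptions):

# Hedged sketch of a synaptic-saliency score on an untrained network (PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Replace weights by their absolute values and feed an all-ones input so that
# positive and negative signals cannot cancel out.
with torch.no_grad():
    for p in model.parameters():
        p.abs_()

model(torch.ones(1, 32)).sum().backward()

# Per-parameter saliency |theta * dR/dtheta|, summed into one network-level score.
score = sum((p * p.grad).abs().sum().item() for p in model.parameters())
print(f"synaptic saliency score: {score:.2f}")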

Evolution of neural networks


References


Local Search is State of the Art for NAS Benchmarks

TLDR
A thorough theoretical and empirical study explains the success of local search on smaller, structured search spaces, and shows that the simplest local search instantiations achieve state-of-the-art results on the most popular existing NAS benchmarks.
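
A minimal sketch of such a local search on a tabular benchmark is shown below (the cell encoding, neighbourhood, and accuracy lookup are illustrative assumptions, not a specific benchmark's API):

# Local search over a toy NAS search space: move to the best one-edit neighbour
# until no neighbour improves the (looked-up) validation accuracy.
import random

N_EDGES, N_OPS = 6, 5   # hypothetical cell: 6 edges, 5 candidate operations per edge

def query(arch):
    # Placeholder for a benchmark lookup of validation accuracy (deterministic toy value).
    return (sum((i + 1) * op for i, op in enumerate(arch)) % 101) / 101.0

def neighbours(arch):
    # All architectures differing from `arch` in exactly one operation choice.
    for i in range(N_EDGES):
        for op in range(N_OPS):
            if op != arch[i]:
                yield arch[:i] + (op,) + arch[i + 1:]

random.seed(0)
arch = tuple(random.randrange(N_OPS) for _ in range(N_EDGES))
while True:
    best_neighbour = max(neighbours(arch), key=query)
    if query(best_neighbour) <= query(arch):
        break                                  # local optimum reached
    arch = best_neighbour
print(arch, round(query(arch), 3))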

NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size

TLDR
NATS-Bench, a unified benchmark for searching over both architecture topology and size that supports (almost) any up-to-date NAS algorithm, is proposed; it enables a much larger community of researchers to focus on developing better NAS algorithms in a more comparable and computationally efficient environment.

NAS-Bench-101: Towards Reproducible Neural Architecture Search

TLDR
This work introduces NAS-Bench-101, the first public architecture dataset for NAS research, which allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset.
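
Querying such a tabular benchmark looks roughly like the snippet below, which mirrors the usage example from the public nasbench repository (the dataset path is a placeholder):

# Query a pre-computed NAS-Bench-101 entry instead of training the model.
from nasbench import api

nasbench = api.NASBench('/path/to/nasbench_only108.tfrecord')  # placeholder path

INPUT, OUTPUT = 'input', 'output'
CONV1X1, CONV3X3, MAXPOOL3X3 = 'conv1x1-bn-relu', 'conv3x3-bn-relu', 'maxpool3x3'

cell = api.ModelSpec(
    # Upper-triangular adjacency matrix of the cell (7 vertices, 9 edges).
    matrix=[[0, 1, 1, 1, 0, 1, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0]],
    # Operation at each vertex, in the same order as the matrix rows.
    ops=[INPUT, CONV1X1, CONV3X3, CONV3X3, CONV3X3, MAXPOOL3X3, OUTPUT])

data = nasbench.query(cell)   # returns one of the pre-computed training runs
print(data['validation_accuracy'], data['training_time'])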

NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search

TLDR
This work introduces a general framework for one-shot NAS that can be instantiated to many recently introduced variants, together with a general benchmarking framework that draws on the large-scale tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS methods.

NAS evaluation is frustratingly hard

TLDR
To overcome the hurdle of comparing methods with different search spaces, this work proposes using a method's relative improvement over the average randomly sampled architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols.
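
The relative-improvement criterion can be sketched directly (a simplified illustration; the paper's exact normalization may differ):

# Compare a NAS method's accuracy to the mean accuracy of randomly sampled
# architectures trained with the same protocol, expressed as a percentage.
def relative_improvement(method_acc, random_sample_accs):
    random_avg = sum(random_sample_accs) / len(random_sample_accs)
    return 100.0 * (method_acc - random_avg) / random_avg

# Hypothetical numbers for illustration only.
print(round(relative_improvement(94.3, [93.1, 93.4, 92.8, 93.2]), 2))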

Meta-Surrogate Benchmarking for Hyperparameter Optimization

TLDR
This work proposes to alleviate these issues by means of a meta-surrogate model for HPO tasks, trained on offline-generated data, that combines a probabilistic encoder with a multi-task model so that it can generate inexpensive and realistic tasks from the class of problems of interest.

Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization

TLDR
The goal of this paper is to facilitate a better empirical evaluation of HPO methods by providing benchmarks that are cheap to evaluate but still represent realistic use cases; various state-of-the-art methods from the hyperparameter optimization literature are exhaustively compared on these benchmarks.

Random Search and Reproducibility for Neural Architecture Search

TLDR
This work proposes new NAS baselines that build on the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization.
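
A random-search baseline in this spirit takes only a few lines (a sketch; the search-space size and accuracy lookup are illustrative placeholders, not a specific benchmark's API):

# Sample architectures uniformly at random and keep the best according to a
# (placeholder) benchmark lookup of validation accuracy.
import random

def query(arch_id):
    # Deterministic toy stand-in for a tabular/surrogate benchmark lookup.
    return (arch_id * 2654435761 % 1000) / 1000.0

random.seed(0)
SEARCH_SPACE_SIZE = 423_624   # roughly the number of unique cells in NAS-Bench-101
N_SAMPLES = 200

best = max(random.sample(range(SEARCH_SPACE_SIZE), N_SAMPLES), key=query)
print(best, query(best))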

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark

TLDR
HW-NAS-Bench, the first public dataset for HW-NAS research, is developed to democratize HW-NAS research for non-hardware experts and make it more reproducible and accessible; experiments verify that dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost trade-offs.
...