
NAS-Bench-360: Benchmarking Neural Architecture Search on Diverse Tasks

@inproceedings{Tu2021NASBench360BN,
  title={NAS-Bench-360: Benchmarking Neural Architecture Search on Diverse Tasks},
  author={Renbo Tu and Nicholas Roberts and Mikhail Khodak and Junhong Shen and Frederic Sala and Ameet S. Talwalkar},
  year={2021}
}
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize well-studied tasks, e.g., image classification on CIFAR or ImageNet. As a result, the performance of NAS approaches on more diverse tasks is poorly understood. In this paper, we present NAS-Bench-360, a benchmark suite for evaluating methods on domains beyond those traditionally studied in architecture search, and use it to address the following question: do state-of-the-art NAS methods perform well on diverse tasks? To… 
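As a rough illustration of the evaluation protocol such a suite implies, the sketch below loops every NAS method over every task and averages test error across seeds. Every name in it is hypothetical; this is a sketch of the cross-task comparison, not the paper's released API.

```python
# Hypothetical sketch of a cross-task NAS evaluation loop; none of these
# names come from the NAS-Bench-360 codebase.
def evaluate_suite(nas_methods, tasks, seeds=(0, 1, 2)):
    """Run every NAS method on every task and average test error over
    seeds, so methods tuned for CIFAR/ImageNet can be compared fairly
    on less-studied domains."""
    results = {}
    for task in tasks:
        for method in nas_methods:
            errors = [method.search_and_train(task, seed=s).test_error
                      for s in seeds]
            results[(task.name, method.name)] = sum(errors) / len(errors)
    return results
```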

References

Showing 1–10 of 60 references

NAS-Bench-101: Towards Reproducible Neural Architecture Search

TLDR
This work introduces NAS-Bench-101, the first public architecture dataset for NAS research, which allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset.
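The tabular-lookup workflow this describes looks roughly like the following, based on the public google-research/nasbench API; the dataset file is one of the released variants, and the example cell below is just an illustration.

```python
# Minimal sketch of querying a tabular NAS benchmark via the
# google-research/nasbench API (dataset file downloaded separately).
from nasbench import api

nasbench = api.NASBench('nasbench_only108.tfrecord')

# A cell is encoded as a DAG: adjacency matrix plus per-node operations.
model_spec = api.ModelSpec(
    matrix=[[0, 1, 1, 0, 0, 0, 0],   # input feeds nodes 1 and 2
            [0, 0, 0, 1, 0, 0, 0],
            [0, 0, 0, 1, 1, 0, 0],
            [0, 0, 0, 0, 0, 1, 0],
            [0, 0, 0, 0, 0, 1, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0]],  # output node
    ops=['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu',
         'conv3x3-bn-relu', 'maxpool3x3', 'conv3x3-bn-relu', 'output'])

# Lookup takes milliseconds instead of hours of training.
data = nasbench.query(model_spec)
print(data['validation_accuracy'], data['training_time'])
```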

TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search

TLDR
This work proposes TransNAS-Bench-101, a benchmark dataset containing network performance across seven tasks, covering classification, regression, pixel-level prediction, and self-supervised tasks, and explores two fundamentally different types of search space: cell-level and macro-level.

NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search

TLDR
This work proposes NAS-Bench-201, an extension of NAS-Bench-101 with a different search space, results on multiple datasets, and additional diagnostic information, such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.

NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search

TLDR
This work introduces a general framework for one-shot NAS that can be instantiated to many recently introduced variants, along with a benchmarking framework that draws on the large-scale tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS methods.

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

TLDR
This work presents ProxylessNAS, which directly learns architectures for large-scale target tasks and target hardware platforms, applies it to specialize neural architectures using direct hardware metrics (e.g., latency), and provides insights for efficient CNN architecture design.
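The hardware-aware part can be sketched as follows: the expected latency of a candidate edge is made differentiable by weighting profiled per-operation latencies with the architecture probabilities. This is a sketch of the idea, not the ProxylessNAS code; the latency numbers are hypothetical.

```python
import torch
import torch.nn.functional as F

def latency_aware_loss(task_loss, alpha, op_latencies, lam=0.1):
    """Sketch of a differentiable latency penalty in the ProxylessNAS
    spirit: expected latency = probability-weighted sum of per-op
    latencies measured on the target hardware."""
    probs = F.softmax(alpha, dim=0)           # architecture distribution
    expected_latency = (probs * op_latencies).sum()
    return task_loss + lam * expected_latency

# Hypothetical numbers: 3 candidate ops with profiled latencies in ms.
alpha = torch.zeros(3, requires_grad=True)
op_latencies = torch.tensor([4.0, 1.5, 0.2])
loss = latency_aware_loss(torch.tensor(0.9), alpha, op_latencies)
loss.backward()                               # gradients flow into alpha
```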

Geometry-Aware Gradient Algorithms for Neural Architecture Search

TLDR
This work presents a geometry-aware framework that exploits the underlying structure of the architecture-search optimization to return sparse architectural parameters, leading to simple yet novel algorithms that enjoy fast convergence guarantees and achieve state-of-the-art accuracy on the latest NAS benchmarks in computer vision.
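The core update this alludes to is an exponentiated-gradient (mirror-descent) step on the simplex of operation weights; a minimal sketch, with a made-up gradient:

```python
import numpy as np

def exponentiated_gradient_step(theta, grad, lr=0.1):
    """One mirror-descent step on the probability simplex.

    Multiplicative updates like this are the geometry-aware alternative
    to softmax-parameterized gradient descent; they tend to drive
    unhelpful operations' weights toward zero (sparsity).
    """
    theta = theta * np.exp(-lr * grad)
    return theta / theta.sum()   # re-normalize onto the simplex

# Hypothetical example: 4 candidate operations on one edge.
theta = np.full(4, 0.25)                 # uniform initialization
grad = np.array([0.5, -0.2, 0.1, 0.9])   # gradient of validation loss
theta = exponentiated_gradient_step(theta, grad)
print(theta)  # mass shifts toward ops with negative gradient
```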

Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation

TLDR
This paper presents a network-level search space that includes many popular designs, develops a formulation that allows efficient gradient-based architecture search, and demonstrates the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets.

DARTS: Differentiable Architecture Search

TLDR
The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
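The continuous relaxation at the core of DARTS is easy to sketch: each edge computes a softmax-weighted mixture of candidate operations, so the architecture parameters receive ordinary gradients. The candidate set below is deliberately tiny; the real DARTS space has eight operations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style continuous relaxation: an edge outputs a
    softmax-weighted sum of all candidate operations, so `alpha`
    can be trained by ordinary gradient descent."""

    def __init__(self, channels):
        super().__init__()
        # Illustrative candidates only (the real space also includes
        # separable and dilated convolutions).
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, the edge is discretized to argmax(alpha).
```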

Neural Architecture Search without Training

TLDR
This work examines how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and shows how this can be used as a measure of modelling flexibility that is highly indicative of a network's trained performance.
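Concretely, the score can be sketched as follows: record each input's binary ReLU activation pattern on an untrained network, build a Hamming-similarity kernel across the batch, and take log|det K|. This is a sketch from the paper's description; collecting the activation codes with forward hooks is omitted.

```python
import numpy as np

def naswot_score(binary_codes):
    """Training-free score: `binary_codes` is a (batch, num_relu_units)
    0/1 array of ReLU activation patterns. The kernel entry K[i, j] is
    the number of units on which inputs i and j agree; higher log-det
    means activation patterns stay distinguishable across inputs, which
    correlates with trained accuracy."""
    codes = binary_codes.astype(float)
    num_units = codes.shape[1]
    hamming = codes @ (1 - codes).T + (1 - codes) @ codes.T
    K = num_units - hamming          # Hamming-similarity kernel
    sign, logdet = np.linalg.slogdet(K)
    return logdet

# Hypothetical example: 4 inputs, 16 ReLU units, random untrained net.
rng = np.random.default_rng(0)
print(naswot_score(rng.integers(0, 2, size=(4, 16))))
```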

Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples

TLDR
This work proposes Meta-Dataset, a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks, along with a new set of baselines for quantifying the benefit of meta-learning on Meta-Dataset.
...