Corpus ID: 250311964

NAS-Bench-360: Benchmarking Neural Architecture Search on Diverse Tasks

Renbo Tu, Nicholas Roberts, Mikhail Khodak, Jun Shen, Frederic Sala, Ameet S. Talwalkar
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize well-studied tasks, e.g. image classification on CIFAR or ImageNet. This makes the performance of NAS approaches in more diverse areas poorly understood. In this paper, we present NAS-Bench-360, a benchmark suite to evaluate methods on domains beyond those traditionally studied in architecture search, and use it to address the following question: do state-of-the-art NAS methods perform well on diverse tasks? To… 


NAS-Bench-101: Towards Reproducible Neural Architecture Search
This work introduces NAS-Bench-101, the first public architecture dataset for NAS research, which allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset.
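The core idea of a tabular benchmark like NAS-Bench-101 is that every architecture in a fixed search space is trained ahead of time, so evaluation reduces to a lookup. A minimal sketch of that pattern, with hypothetical architecture encodings and accuracies (not the actual NAS-Bench-101 API):

```python
# A tabular NAS benchmark precomputes metrics for every architecture in a
# fixed search space, so "evaluating" an architecture is an O(1) lookup
# instead of hours of training. Encodings and values below are illustrative.
PRECOMPUTED = {
    "conv3x3-maxpool-conv1x1": 0.941,
    "conv3x3-conv3x3-conv1x1": 0.936,
}

def query(arch_encoding: str) -> float:
    """Return the precomputed test accuracy for an architecture encoding."""
    return PRECOMPUTED[arch_encoding]
```

A NAS algorithm benchmarked this way can be run thousands of times cheaply, which is what enables reproducible comparisons between search strategies.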
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search
This work proposes NAS-Bench-201, an extension of NAS-Bench-101 with a different search space, results on multiple datasets, and additional diagnostic information such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
This work proposes a novel framework called training-free neural architecture search (TE-NAS), which ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space and shows that these two measurements imply the trainability and expressivity of a neural network.
NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search
This work introduces a general framework for one-shot NAS that can be instantiated to many recently introduced variants, together with a benchmarking framework that draws on the large-scale tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS methods.
NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing
This work steps outside the computer vision domain by leveraging language modeling, a core task of natural language processing (NLP); the authors anticipate that the benchmark will yield more reliable empirical findings and stimulate progress on NAS methods well suited to recurrent architectures.
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
This work presents ProxylessNAS, which directly learns architectures for large-scale target tasks and target hardware platforms; it is applied to specialize neural architectures for hardware using direct hardware metrics (e.g. latency) and provides insights for efficient CNN architecture design.
Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation
This paper presents a network level search space that includes many popular designs, and develops a formulation that allows efficient gradient-based architecture search and demonstrates the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets.
DARTS: Differentiable Architecture Search
The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
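The differentiable search in DARTS rests on a continuous relaxation: each edge of the network computes a softmax-weighted mixture of candidate operations, making the architecture weights themselves trainable by gradient descent. A minimal sketch of that mixed operation, with hypothetical scalar-valued candidate ops standing in for real network layers:

```python
import math

def softmax(alphas):
    """Softmax-normalize the architecture weights alpha."""
    exps = [math.exp(a) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, ops, alphas):
    """DARTS-style mixed operation: weighted sum of candidate ops on x."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))

# Hypothetical candidate operations (stand-ins for conv, skip, zero, ...):
ops = [
    lambda x: x,        # identity / skip connection
    lambda x: 0.0,      # zero operation
    lambda x: 2.0 * x,  # stand-in for a parametric operation
]
```

With equal weights (all alphas zero), `mixed_op(1.0, ops, [0.0, 0.0, 0.0])` averages the three candidates to `1.0`; after search, the operation with the largest alpha on each edge is kept to form the discrete architecture.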
Neural Architecture Search without Training
This work examines how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and motivates how this can be used to give a measure of modelling flexibility which is highly indicative of a network's trained performance.
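The intuition behind such training-free scoring is that an untrained ReLU network partitions its input space into linear regions, and inputs that land in distinct regions induce distinct binary activation patterns. A rough sketch in that spirit (the network shape, weights, and scoring rule here are illustrative simplifications, not the paper's exact method):

```python
# Feed a small batch through an untrained single-layer ReLU network and
# measure how distinct the binary activation patterns are; more distinct
# patterns suggests greater modelling flexibility. Weights are hypothetical.
def relu_codes(batch, weights):
    """Binary activation pattern (ReLU on/off per unit) for each input."""
    codes = []
    for x in batch:
        pre = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
        codes.append(tuple(int(p > 0) for p in pre))
    return codes

def flexibility_score(batch, weights):
    """Fraction of inputs mapped to distinct activation patterns."""
    codes = relu_codes(batch, weights)
    return len(set(codes)) / len(codes)
```

A score near 1.0 means every input falls in its own linear region; scoring candidate architectures this way at initialization avoids any training during search.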
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
This work proposes Meta-Dataset, a new large-scale benchmark for training and evaluating few-shot learning models that consists of diverse datasets and presents more realistic tasks, along with a new set of baselines for quantifying the benefit of meta-learning on Meta-Dataset.