Corpus ID: 237532763

How to Simplify Search: Classification-wise Pareto Evolution for One-shot Neural Architecture Search

@article{Ma2021HowTS,
  title={How to Simplify Search: Classification-wise Pareto Evolution for One-shot Neural Architecture Search},
  author={Lianbo Ma and Nan Li and Guo Yu and Xiao Geng and Min Huang and Xingwei Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.07582}
}
  • Lianbo Ma, Nan Li, Guo Yu, Xiao Geng, Min Huang, Xingwei Wang
  • Published 14 September 2021
  • Computer Science
  • ArXiv
In the deployment of deep neural models, how to effectively and automatically find feasible deep models under diverse design objectives is fundamental. Most existing neural architecture search (NAS) methods utilize surrogates to predict the detailed performance (e.g., accuracy and model size) of a candidate architecture during the search, which, however, is complicated and inefficient. In contrast, we aim to learn an efficient Pareto classifier to simplify the search process of NAS by…
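The abstract's central idea, replacing per-objective performance predictors with a single classifier that judges whether a candidate architecture is Pareto-promising, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the architecture encoding, the random-forest classifier, the labeling scheme, and the 0.5 threshold are all placeholders chosen for the sketch.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (all objectives minimized).
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_labels(objectives):
    # Label each evaluated architecture 1 if it is non-dominated in the
    # archive, 0 otherwise.
    return np.array([
        0 if any(dominates(other, obj)
                 for j, other in enumerate(objectives) if j != i) else 1
        for i, obj in enumerate(objectives)
    ])

# Hypothetical archive of evaluated candidates: each row pairs an
# architecture encoding with measured objectives, e.g. (error rate, model size).
encodings = np.random.rand(100, 16)    # placeholder architecture features
objectives = np.random.rand(100, 2)    # placeholder (error, size) pairs

# Train one binary "Pareto-promising?" classifier instead of a separate
# regressor per objective -- the simplification the abstract describes.
clf = RandomForestClassifier(n_estimators=100)
clf.fit(encodings, pareto_labels(objectives))

# During evolution, exact training and evaluation are spent only on
# offspring the classifier predicts to be promising.
offspring = np.random.rand(50, 16)
promising = offspring[clf.predict_proba(offspring)[:, 1] > 0.5]

In a classification-wise setting of this kind, only candidates predicted to lie near the Pareto front are evaluated exactly, which avoids fitting and maintaining a regression surrogate for each objective.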


References

Showing 1-10 of 61 references
One-Shot Neural Architecture Search: Maximising Diversity to Overcome Catastrophic Forgetting
TLDR: The experiments on the common NAS search space demonstrate that NSAS and its variants improve the predictive ability of supernet training in one-shot NAS, with remarkable and efficient performance on the CIFAR-10, CIFAR-100, and ImageNet datasets.
DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures
TLDR: DPP-Net, Device-aware Progressive Search for Pareto-optimal Neural Architectures, is proposed to optimize for both device-related and device-agnostic objectives, achieving better performance: higher accuracy and shorter inference time on various devices.
CARS: Continuous Evolution for Efficient Neural Architecture Search
  • Zhaohui Yang, Yunhe Wang, +5 authors Chang Xu
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR: This work develops an efficient continuous evolutionary approach for searching neural networks that provides a series of networks with parameter counts ranging from 3.7M to 5.1M under mobile settings and surpasses networks produced by state-of-the-art methods on the benchmark ImageNet dataset.
Efficient Neural Architecture Search via Parameter Sharing
TLDR: Efficient Neural Architecture Search (ENAS) is a fast and inexpensive approach for automatic model design that establishes a new state of the art among all methods without post-training processing and delivers strong empirical performance using far fewer GPU-hours.
Efficient Evolutionary Search of Attention Convolutional Networks via Sampled Training and Node Inheritance
TLDR: This article proposes a computationally efficient framework for the evolutionary search of convolutional networks based on a directed acyclic graph, in which parents are randomly sampled and trained on each mini-batch of training data, and a channel attention mechanism is encoded in the search space to enhance the feature-processing capability of the evolved networks.
Regularized Evolution for Image Classifier Architecture Search
TLDR: This work evolves an image classifier, AmoebaNet-A, that surpasses hand-designed architectures for the first time and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.
Progressive Neural Architecture Search
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.
NEMO: Neuro-Evolution with Multiobjective Optimization of Deep Neural Network for Speed and Accuracy
TLDR: An automated machine learning approach, Neuro-Evolution with Multiobjective Optimization (NEMO), is proposed that uses multi-objective evolutionary algorithms (MOEAs) to optimize deep neural networks (DNNs) for accuracy and run-time speed simultaneously.
Deep Neural Architecture Search with Deep Graph Bayesian Optimization
  • Lizheng Ma, Jiaxu Cui, Bo Yang
  • Computer Science, Mathematics
  • 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI)
  • 2019
TLDR: A Bayesian graph neural network is proposed as a new surrogate that can automatically extract features from deep neural architectures and use the learned features to fit and characterize black-box objectives and their uncertainty.
A Classification-Based Surrogate-Assisted Evolutionary Algorithm for Expensive Many-Objective Optimization
TLDR: A surrogate-assisted many-objective evolutionary algorithm is proposed that uses an artificial neural network to predict the dominance relationship between candidate solutions and reference solutions, instead of approximating the objective values separately; a minimal sketch of such a dominance-relation label follows this list.
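The dominance-prediction idea in the last reference, classifying the relation between a candidate and a reference solution rather than regressing each objective, reduces to a three-way label like the one below. The pairing scheme and label encoding here are assumptions for illustration, not the cited paper's exact formulation.

import numpy as np

def dominance_relation(a, b):
    # Three-way Pareto relation under minimization: +1 if a dominates b,
    # -1 if b dominates a, 0 if the two are incomparable. Labels like
    # these can train a relation classifier in place of per-objective
    # surrogate regressors.
    a, b = np.asarray(a), np.asarray(b)
    if np.all(a <= b) and np.any(a < b):
        return 1
    if np.all(b <= a) and np.any(b < a):
        return -1
    return 0

# Hypothetical training pairs: candidate objective vectors compared
# against a small set of reference solutions.
candidates = np.random.rand(20, 2)
references = np.random.rand(5, 2)
pairs = [(np.concatenate([c, r]), dominance_relation(c, r))
         for c in candidates for r in references]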