Corpus ID: 232307907

AutoSpace: Neural Architecture Search with Less Human Interference

@article{Zhou2021AutoSpaceNA,
  title={AutoSpace: Neural Architecture Search with Less Human Interference},
  author={Daquan Zhou and Xiaojie Jin and Xiaochen Lian and Linjie Yang and Yujing Xue and Qibin Hou and Jiashi Feng},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.11833}
}
Current neural architecture search (NAS) algorithms still require expert knowledge and effort to design a search space for network construction. In this paper, we consider automating the search space design to minimize human interference, which, however, faces two challenges: the explosive complexity of the exploration space and the expensive computation cost of evaluating the quality of different search spaces. To solve these, we propose a novel differentiable evolutionary framework named AutoSpace…
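The abstract only names the framework, so the following is a heavily hedged toy sketch of the general idea of evolving a search space (a set of candidate building blocks) rather than a single architecture. Every name here (ALL_OPS, proxy_score, mutate) is a hypothetical placeholder, and proxy_score stands in for the expensive evaluation step the paper aims to make cheaper; none of this is the paper's actual algorithm.

    import random

    ALL_OPS = ["conv3x3", "conv5x5", "dwconv3x3", "dwconv5x5",
               "mbconv_e3", "mbconv_e6", "skip", "avgpool"]

    def proxy_score(space):
        # Stand-in for the expensive step: training and evaluating
        # architectures sampled from `space`. Toy heuristic only.
        return len(set(space)) + random.random()

    def mutate(space):
        child = list(space)
        child[random.randrange(len(child))] = random.choice(ALL_OPS)
        return child

    def evolve_search_space(pop_size=8, space_size=4, generations=20):
        # Each individual is a candidate *search space*: a set of ops
        # from which architectures would later be sampled.
        population = [[random.choice(ALL_OPS) for _ in range(space_size)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=proxy_score, reverse=True)
            survivors = scored[: pop_size // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
        return max(population, key=proxy_score)

    print(evolve_search_space())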


References

Showing 1-10 of 49 references
Evaluating the Search Phase of Neural Architecture Search
This paper finds that, on average, state-of-the-art NAS algorithms perform similarly to a random policy, and that the widely used weight-sharing strategy degrades the ranking of NAS candidates to the point of not reflecting their true performance, reducing the effectiveness of the search process.
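As a concrete illustration of the baseline this paper compares against, here is a minimal random-search sketch: sample architectures uniformly, evaluate each once, keep the best. OPS, sample_architecture, and evaluate are hypothetical placeholders (evaluate stands in for actually training a sampled network).

    import random

    OPS = ["conv3x3", "sep_conv5x5", "maxpool", "skip"]

    def sample_architecture(num_layers=6):
        # The "random policy": pick every layer's op uniformly at random.
        return [random.choice(OPS) for _ in range(num_layers)]

    def evaluate(arch):
        return random.random()  # placeholder for validation accuracy

    def random_search(budget=100):
        best_arch, best_acc = None, -1.0
        for _ in range(budget):
            arch = sample_architecture()
            acc = evaluate(arch)
            if acc > best_acc:
                best_arch, best_acc = arch, acc
        return best_arch, best_acc

    print(random_search(budget=50))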
Efficient Neural Architecture Search via Parameter Sharing
Efficient Neural Architecture Search (ENAS) is a fast and inexpensive approach to automatic model design that establishes a new state of the art among methods without post-training processing and delivers strong empirical performance using far fewer GPU-hours.
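A hedged sketch of the weight-sharing idea, not the ENAS codebase: every sampled child network indexes into one shared pool of parameters instead of being trained from scratch. The class names, layer sizes, and op set are illustrative assumptions.

    import random
    import torch
    import torch.nn as nn

    class SharedLayer(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # All candidate ops for this layer live in one shared ModuleList,
            # so every sampled child reuses the same parameters.
            self.ops = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Conv2d(channels, channels, 5, padding=2),
                nn.Identity(),
            ])

        def forward(self, x, choice):
            return self.ops[choice](x)

    class SuperNet(nn.Module):
        def __init__(self, channels=16, depth=4):
            super().__init__()
            self.layers = nn.ModuleList(SharedLayer(channels) for _ in range(depth))

        def forward(self, x, path):
            for layer, choice in zip(self.layers, path):
                x = layer(x, choice)
            return x

    net = SuperNet()
    path = [random.randrange(3) for _ in range(4)]  # one sampled child network
    out = net(torch.randn(1, 16, 8, 8), path)       # reuses the shared weights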
Regularized Evolution for Image Classifier Architecture Search
This work evolves an image classifier, AmoebaNet-A, that surpasses hand-designed models for the first time, and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.
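Aging evolution, as summarized above, is well defined enough for a small sketch: tournament selection plus removal of the oldest individual at each step, which is the "regularization". The integer genome encoding and fitness() below are toy placeholders for real architectures and their trained accuracy.

    import collections
    import random

    def fitness(arch):
        return sum(arch) + random.random()  # placeholder for accuracy

    def regularized_evolution(cycles=200, pop_size=20, sample_size=5, genes=8):
        population = collections.deque(
            [random.randrange(4) for _ in range(genes)] for _ in range(pop_size)
        )
        history = list(population)
        for _ in range(cycles):
            # Tournament selection: best of a random sample becomes the parent.
            tournament = random.sample(list(population), sample_size)
            parent = max(tournament, key=fitness)
            child = list(parent)
            child[random.randrange(genes)] = random.randrange(4)  # mutation
            population.append(child)
            population.popleft()  # age out the *oldest*, not the worst
            history.append(child)
        return max(history, key=fitness)

    print(regularized_evolution())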
PC-DARTS: Partial Channel Connections for Memory-Efficient Differentiable Architecture Search
This paper presents a novel approach, namely Partially-Connected DARTS, that samples a small part of the super-network to reduce the redundancy in exploring the network space, thereby performing a more efficient search without compromising performance.
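A sketch of the partial-channel idea under stated assumptions: the weighted op mixture is applied to only 1/K of the channels while the rest bypass it, followed by a channel shuffle. The paper samples the channel subset randomly, whereas this toy simply takes the first 1/K; K and the op set are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PartialMixedOp(nn.Module):
        def __init__(self, channels, k=4):
            super().__init__()
            self.k = k
            c = channels // k
            self.ops = nn.ModuleList([
                nn.Conv2d(c, c, 3, padding=1),
                nn.Conv2d(c, c, 5, padding=2),
                nn.Identity(),
            ])
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

        def forward(self, x):
            c = x.size(1) // self.k
            active, bypass = x[:, :c], x[:, c:]   # only 1/K channels are mixed
            weights = F.softmax(self.alpha, dim=0)
            mixed = sum(w * op(active) for w, op in zip(weights, self.ops))
            out = torch.cat([mixed, bypass], dim=1)
            # Channel shuffle so different channels get mixed over time.
            n, ch, h, w = out.shape
            return out.view(n, self.k, ch // self.k, h, w) \
                      .transpose(1, 2).reshape(n, ch, h, w)

    print(PartialMixedOp(16)(torch.randn(2, 16, 8, 8)).shape)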
Neural Architecture Search with Reinforcement Learning
This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
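A compact sketch of the controller idea, not the paper's implementation: a recurrent policy emits one op choice per layer and is updated with REINFORCE, using validation accuracy as the reward. Here reward() is a random placeholder and all sizes are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_OPS, NUM_LAYERS, HIDDEN = 4, 6, 32

    class Controller(nn.Module):
        def __init__(self):
            super().__init__()
            self.cell = nn.LSTMCell(NUM_OPS, HIDDEN)
            self.head = nn.Linear(HIDDEN, NUM_OPS)

        def sample(self):
            h = c = torch.zeros(1, HIDDEN)
            inp = torch.zeros(1, NUM_OPS)
            choices, log_probs = [], []
            for _ in range(NUM_LAYERS):
                h, c = self.cell(inp, (h, c))
                dist = torch.distributions.Categorical(logits=self.head(h))
                a = dist.sample()
                choices.append(a.item())
                log_probs.append(dist.log_prob(a))
                inp = F.one_hot(a, NUM_OPS).float()  # feed choice back in
            return choices, torch.stack(log_probs).sum()

    def reward(arch):
        return torch.rand(())  # placeholder for validation accuracy

    ctrl = Controller()
    opt = torch.optim.Adam(ctrl.parameters(), lr=3e-4)
    for _ in range(10):
        arch, logp = ctrl.sample()
        loss = -logp * reward(arch)  # REINFORCE policy gradient
        opt.zero_grad()
        loss.backward()
        opt.step()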
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
This work proposes a novel framework called training-free neural architecture search (TE-NAS), which ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space and shows that these two measurements imply the trainability and expressivity of a neural network.
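One of the two training-free signals lends itself to a short sketch: estimating expressivity by counting distinct ReLU activation patterns over random inputs, a common proxy for the number of linear regions. The two-layer network and sample counts below are illustrative assumptions, not TE-NAS itself.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(16, 16))

    def activation_pattern(x):
        # Which ReLUs fire determines which linear region x falls in.
        h1 = W1 @ x
        h2 = W2 @ np.maximum(h1, 0)
        return tuple(np.concatenate([h1 > 0, h2 > 0]))

    samples = rng.normal(size=(1000, 8))
    num_regions = len({activation_pattern(x) for x in samples})
    print("distinct activation patterns:", num_regions)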
Progressive Neural Architecture Search
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.
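A minimal sketch of the progressive idea: grow architectures one op at a time and use a cheap learned surrogate to rank candidates so that only a small beam is ever trained. predictor() here is a random stand-in for that learned surrogate, and OPS is hypothetical.

    import random

    OPS = ["conv3x3", "sep_conv5x5", "maxpool", "skip"]

    def predictor(arch):
        return random.random()  # stand-in for the learned accuracy predictor

    def progressive_search(max_depth=5, beam=4):
        beam_set = [[op] for op in OPS]  # start from the shallowest models
        for _ in range(max_depth - 1):
            # Expand every beam member by one op, rank cheaply, keep top-k.
            candidates = [arch + [op] for arch in beam_set for op in OPS]
            candidates.sort(key=predictor, reverse=True)
            beam_set = candidates[:beam]  # only these get (expensively) trained
        return beam_set[0]

    print(progressive_search())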
FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions
This work proposes a memory and computationally efficient DNAS variant, DMaskingNAS, that expands the search space by up to 10^14x over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters.
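A sketch of the channel-masking trick that makes the expanded search space affordable: run one convolution at maximum width and search over channel counts via a softmax-weighted sum of binary masks, so memory does not grow with the number of choices. The class name and channel choices are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MaskedConv(nn.Module):
        def __init__(self, c_in, c_max, choices=(8, 12, 16)):
            super().__init__()
            self.conv = nn.Conv2d(c_in, c_max, 3, padding=1)
            # One binary mask per candidate channel count.
            masks = torch.zeros(len(choices), c_max)
            for i, c in enumerate(choices):
                masks[i, :c] = 1.0
            self.register_buffer("masks", masks)
            self.alpha = nn.Parameter(torch.zeros(len(choices)))

        def forward(self, x):
            w = F.softmax(self.alpha, dim=0)         # architecture weights
            mask = (w[:, None] * self.masks).sum(0)  # soft channel mask
            return self.conv(x) * mask[None, :, None, None]

    print(MaskedConv(3, 16)(torch.randn(1, 3, 8, 8)).shape)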
DARTS: Differentiable Architecture Search
The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
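The continuous relaxation at the heart of DARTS is simple enough to sketch faithfully: each edge computes a softmax-weighted mixture of candidate ops, so the architecture parameters alpha can be optimized by gradient descent alongside the network weights, and after search each edge is discretized to its highest-weight op. The op set below is illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.AvgPool2d(3, stride=1, padding=1),
                nn.Identity(),
            ])
            # Differentiable architecture parameters for this edge.
            self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

        def forward(self, x):
            w = F.softmax(self.alpha, dim=0)
            return sum(wi * op(x) for wi, op in zip(w, self.ops))

    op = MixedOp(16)
    y = op(torch.randn(2, 16, 8, 8))           # soft mixture during search
    best = op.ops[int(op.alpha.argmax())]      # discretized op after search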
Large-Scale Evolution of Image Classifiers
It is shown that it is now possible to evolve models with accuracies within the range of those published in the last year, starting from trivial initial conditions and reaching accuracies of 94.6% on CIFAR-10 and 77.0% on CIFAR-100.