UnrealNAS: Can We Search Neural Architectures with Unreal Data?

@article{Dong2022UnrealNASCW,
  title={UnrealNAS: Can We Search Neural Architectures with Unreal Data?},
  author={Zhen Dong and Kaichen Zhou and G. Li and Qiang Zhou and Mingfei Guo and Bernard Ghanem and Kurt Keutzer and Shanghang Zhang},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.02162}
}
Abstract. Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs). However, the best way to use data to search network architectures is still unclear and under exploration. Previous work ([19,46]) has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest. In this work, we take a further step to question whether real data is necessary for NAS to be effective. The answer to this question is important for…
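As a concrete illustration of what "unreal data" can mean in this context, below is a minimal sketch (not the paper's code) that builds a search dataset of random noise images paired with random labels; the helper name, shapes, and class count are illustrative assumptions.

```python
# Minimal sketch: construct "unreal" search data, i.e. inputs and labels with no
# real-world semantics. make_unreal_dataset and its defaults are illustrative.
import numpy as np

def make_unreal_dataset(num_samples=1000, shape=(3, 32, 32), num_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    # Random noise "images" in place of natural images.
    images = rng.uniform(0.0, 1.0, size=(num_samples, *shape)).astype(np.float32)
    # Random labels with no relation to the inputs.
    labels = rng.integers(0, num_classes, size=num_samples)
    return images, labels

images, labels = make_unreal_dataset()
print(images.shape, labels[:10])
```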

References

Showing 1-10 of 48 references.

NAS evaluation is frustratingly hard

TLDR
This work proposes measuring a method's relative improvement over the average of randomly sampled architectures, which removes advantages arising from expertly engineered search spaces or training protocols and thereby overcomes the hurdle of comparing methods that use different search spaces.
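The proposed metric is simple enough to sketch directly; the snippet below is a hedged illustration (the function name and example numbers are invented, not taken from the paper) of scoring a NAS method against the mean accuracy of randomly sampled architectures from the same search space.

```python
# Hedged sketch of the relative-improvement idea: compare a method's result to
# the average of randomly sampled architectures from the same search space.
def relative_improvement(method_accuracy, random_sample_accuracies):
    baseline = sum(random_sample_accuracies) / len(random_sample_accuracies)
    return (method_accuracy - baseline) / baseline

# Example: a searched architecture at 94.2% vs. randomly sampled ones.
print(relative_improvement(94.2, [91.8, 92.5, 90.9, 92.1]))  # ~0.026, i.e. +2.6%
```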

Neural Architecture Generator Optimization

TLDR
This work is the first to cast NAS as the problem of finding the optimal network generator; it proposes a new hierarchical, graph-based search space capable of representing an extremely large variety of network types while requiring only a few continuous hyper-parameters.
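To make the generator-based view concrete, here is a minimal, hedged sketch: rather than choosing individual operations, the search tunes a handful of parameters of a random-graph generator that produces the network wiring. The Watts-Strogatz generator and its parameter choices are illustrative assumptions, not the paper's exact search space.

```python
# Hedged sketch: a graph generator whose few knobs (mean degree k, rewiring
# probability p) stand in for a low-dimensional, generator-based search space.
import networkx as nx

def generate_wiring(num_nodes=32, k=4, p=0.25, seed=0):
    return nx.connected_watts_strogatz_graph(num_nodes, k, p, seed=seed)

graph = generate_wiring()
print(graph.number_of_nodes(), graph.number_of_edges())
```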

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

TLDR
ProxylessNAS is presented, which directly learns architectures for large-scale target tasks and target hardware platforms; it is applied to specialize neural architectures for hardware using direct hardware metrics (e.g., latency) and provides insights for efficient CNN architecture design.
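A common way such hardware metrics enter the search is as a differentiable penalty on expected latency; the sketch below illustrates that general idea under assumed latency numbers and weighting, and is not the ProxylessNAS implementation.

```python
# Hedged sketch: add the expected latency of candidate operations to the task
# loss, so architecture parameters are pushed toward faster ops.
import torch

op_latency_ms = torch.tensor([1.2, 2.5, 4.0])      # per-op latency lookup (assumed)
arch_logits = torch.zeros(3, requires_grad=True)    # one mixed edge, 3 candidate ops

def latency_penalty(logits):
    probs = torch.softmax(logits, dim=0)
    return (probs * op_latency_ms).sum()             # expected latency of the edge

task_loss = torch.tensor(0.9)                        # stand-in cross-entropy value
loss = task_loss + 0.01 * latency_penalty(arch_logits)
loss.backward()
print(arch_logits.grad)
```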

Neural Architecture Search with Random Labels

TLDR
A novel NAS framework based on the ease-of-convergence hypothesis that requires only random labels during search; it achieves comparable or even better results than state-of-the-art NAS methods such as PC-DARTS and Single Path One-Shot, even though those counterparts use full ground-truth labels for searching.
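The ease-of-convergence signal can be illustrated with a small, hedged sketch: candidate architectures are ranked by how quickly they fit randomly labeled data, with no ground-truth labels involved. The short training loop, toy models, and scoring are assumptions for illustration, not the actual framework.

```python
# Hedged sketch: score candidates by how well they fit random labels after a
# short training run; a lower final loss suggests easier convergence.
import torch
import torch.nn as nn

def convergence_score(model, images, random_labels, steps=50, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(images), random_labels)
        loss.backward()
        opt.step()
    return -loss.item()  # lower final loss => higher score

# Toy usage: rank two hypothetical candidates on noise inputs with random labels.
images = torch.randn(256, 32)
random_labels = torch.randint(0, 10, (256,))
candidates = [nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)),
              nn.Sequential(nn.Linear(32, 10))]
best = max(candidates, key=lambda m: convergence_score(m, images, random_labels))
```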

Evaluating the Search Phase of Neural Architecture Search

TLDR
This paper finds that, on average, state-of-the-art NAS algorithms perform similarly to the random policy, and that the widely used weight-sharing strategy degrades the ranking of NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process.

Are Labels Necessary for Neural Architecture Search?

TLDR
This work reveals the potentially surprising finding that labels are not necessary, and that image statistics alone may be sufficient to identify good neural architectures.

Random Search and Reproducibility for Neural Architecture Search

TLDR
This work proposes new NAS baselines that build on the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization.
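The random-search baseline itself fits in a few lines; the sketch below is a hedged illustration in which sample_architecture and evaluate are placeholders for a real search space and training pipeline.

```python
# Hedged sketch: random search as a NAS baseline, i.e. sample architectures
# uniformly, evaluate each, and keep the best.
import random

def random_search(sample_architecture, evaluate, num_trials=20, seed=0):
    random.seed(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        score = evaluate(arch)  # e.g. validation accuracy after a short training run
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Toy usage with a stand-in search space of (depth, width) choices.
arch_space = lambda: (random.choice([4, 8, 12]), random.choice([32, 64, 128]))
toy_eval = lambda arch: -abs(arch[0] - 8) - abs(arch[1] - 64) / 32  # fake score
print(random_search(arch_space, toy_eval))
```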

Neural Architecture Search with Reinforcement Learning

TLDR
This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
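The controller-plus-REINFORCE loop can be sketched compactly; the snippet below is a hedged, simplified illustration (dimensions, the constant baseline, and the stand-in reward are assumptions, not the original controller).

```python
# Hedged sketch: an LSTM controller samples architecture decisions and is
# updated with REINFORCE, treating validation accuracy as the reward.
import torch
import torch.nn as nn

num_choices, hidden = 4, 32
controller = nn.LSTMCell(num_choices, hidden)
head = nn.Linear(hidden, num_choices)
opt = torch.optim.Adam(list(controller.parameters()) + list(head.parameters()), lr=1e-3)

def sample_architecture(num_decisions=6):
    h = c = torch.zeros(1, hidden)
    x = torch.zeros(1, num_choices)
    log_probs, decisions = [], []
    for _ in range(num_decisions):
        h, c = controller(x, (h, c))
        dist = torch.distributions.Categorical(logits=head(h))
        choice = dist.sample()
        log_probs.append(dist.log_prob(choice))
        decisions.append(choice.item())
        x = nn.functional.one_hot(choice, num_choices).float()
    return decisions, torch.stack(log_probs).sum()

def reinforce_step(reward, log_prob, baseline=0.5):
    loss = -(reward - baseline) * log_prob  # ascend the expected-reward gradient
    opt.zero_grad()
    loss.backward()
    opt.step()

decisions, log_prob = sample_architecture()
reinforce_step(reward=0.8, log_prob=log_prob)  # reward would be validation accuracy
```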

SGAS: Sequential Greedy Architecture Search

Architecture design has become a crucial component of successful deep learning. Recent progress in automatic neural architecture search (NAS) shows a lot of promise. However, discovered architectures…

Progressive Differentiable Architecture Search: Bridging the Depth Gap Between Search and Evaluation

TLDR
This paper presents an efficient algorithm that allows the depth of searched architectures to grow gradually during the training procedure; the two resulting issues, namely heavier computational overheads and weaker search stability, are addressed using search space approximation and regularization.
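The progressive schedule can be pictured as a loop over stages that deepen the supernet while pruning candidate operations; the sketch below is purely illustrative (the stage schedule, operation names, and fake scoring are assumptions, not the paper's settings).

```python
# Hedged sketch: each stage searches with a deeper supernet and keeps only the
# top-scoring operations for the next stage.
ops = ["sep3x3", "sep5x5", "dil3x3", "dil5x5", "max_pool", "avg_pool", "skip", "none"]

def run_stage(depth, ops):
    # Placeholder for training a depth-cell supernet restricted to `ops` and
    # returning a score per operation (e.g. its learned architecture weight).
    return {op: 1.0 / (1 + len(op)) for op in ops}  # fake scores for illustration

for depth, keep in [(5, 5), (11, 3), (17, 2)]:
    scores = run_stage(depth, ops)
    ops = sorted(ops, key=scores.get, reverse=True)[:keep]
    print(f"after the depth-{depth} stage, kept ops: {ops}")
```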