Corpus ID: 244908529

Exploring Complicated Search Spaces with Interleaving-Free Sampling

@article{Tian2021ExploringCS,
  title={Exploring Complicated Search Spaces with Interleaving-Free Sampling},
  author={Yunjie Tian and Lingxi Xie and Jiemin Fang and Jianbin Jiao and Qixiang Ye and Qi Tian},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.02488}
}
The existing neural architecture search algorithms mostly work on search spaces with short-distance connections. We argue that such designs, though safe and stable, obstruct the search algorithms from exploring more complicated scenarios. In this paper, we build the search algorithm upon a complicated search space with long-distance connections, and show that existing weight-sharing search algorithms mostly fail due to the existence of interleaved connections. Based on the observation… 
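To make the setting in the abstract concrete, below is a minimal, hypothetical sketch of the weight-sharing sampling setup it describes: a search space that contains both short-distance (adjacent) edges and long-distance (skip) edges, from which sub-architectures are sampled uniformly. All names (build_search_space, sample_subnetwork, max_distance, keep_prob) are illustrative assumptions; this is a generic illustration of weight-sharing sampling over such a space, not the paper's interleaving-free sampling algorithm.

# Hypothetical sketch: uniform sub-network sampling over a search space
# with short- and long-distance candidate connections.
import random

def build_search_space(num_nodes: int, max_distance: int):
    """Enumerate candidate edges (i, j) with j - i <= max_distance.
    max_distance = 1 gives the usual short-distance space;
    a larger value adds long-distance connections."""
    return [(i, j)
            for j in range(1, num_nodes)
            for i in range(j)
            if j - i <= max_distance]

def sample_subnetwork(edges, keep_prob: float = 0.5, seed=None):
    """Keep each candidate edge with probability keep_prob, then make
    sure every node still receives at least one incoming edge."""
    rng = random.Random(seed)
    kept = [e for e in edges if rng.random() < keep_prob]
    for j in {dst for _, dst in edges}:
        if not any(dst == j for _, dst in kept):
            kept.append(rng.choice([e for e in edges if e[1] == j]))
    return sorted(kept)

if __name__ == "__main__":
    space = build_search_space(num_nodes=6, max_distance=3)  # long-distance edges allowed
    print("candidate edges:", space)
    print("sampled subnet :", sample_subnetwork(space, seed=0))

In weight-sharing NAS, each sampled sub-network would share the super-network's weights during training; the paper's contribution concerns how such sampling interacts with interleaved long-distance connections, which this sketch does not model.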

References

SHOWING 1-10 OF 30 REFERENCES

Densely Connected Search Space for More Flexible Neural Architecture Search

This paper proposes to search block counts and block widths by designing a densely connected search space, DenseNAS, which is represented as a dense super network built upon the designed routing blocks.

PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search

This paper presents a novel approach, namely Partially-Connected DARTS, which samples a small part of the super-network to reduce the redundancy in exploring the network space, thereby performing a more efficient search without compromising performance.

Progressive Differentiable Architecture Search: Bridging the Depth Gap Between Search and Evaluation

This paper presents an efficient algorithm that allows the depth of searched architectures to grow gradually during the training procedure, and addresses the two resulting issues, namely heavier computational overhead and weaker search stability, using search space approximation and regularization.

GOLD-NAS: Gradual, One-Level, Differentiable

This paper first relaxes manually designed constraints and enlarges the search space to contain more than $10^{160}$ candidates, and then proposes a novel algorithm named Gradual One-Level Differentiable Neural Architecture Search (GOLD-NAS), which introduces a variable resource constraint into one-level optimization.

StacNAS: Towards stable and consistent optimization for differentiable Neural Architecture Search

A grouped variable pruning algorithm based on one-level optimization is proposed, leading to a more stable and consistent optimization solution for differentiable NAS that obtains state-of-the-art accuracy and stability.

Efficient Neural Architecture Search via Parameter Sharing

Efficient Neural Architecture Search is a fast and inexpensive approach for automatic model design that establishes a new state-of-the-art among all methods without post-training processing and delivers strong empirical performance using far fewer GPU-hours.

DARTS+: Improved Differentiable Architecture Search with Early Stopping

It is claimed that overfitting exists in the optimization of DARTS, and a simple and effective algorithm, named "DARTS+", is proposed to avoid the collapse and improve the original DARTS by "early stopping" the search procedure when a certain criterion is met.

Discretization-Aware Architecture Search

Understanding and Robustifying Differentiable Architecture Search

It is shown that by adding one of various types of regularization to DARTS, one can robustify DARTS to find solutions with less curvature and better generalization properties; several simple variations of DARTS are also proposed that perform substantially more robustly in practice.

Single Path One-Shot Neural Architecture Search with Uniform Sampling

A Single Path One-Shot model is proposed to construct a simplified supernet in which all architectures are single paths, so that the weight co-adaptation problem is alleviated.