HM-NAS: Efficient Neural Architecture Search via Hierarchical Masking

@article{Yan2019HMNASEN,
  title={HM-NAS: Efficient Neural Architecture Search via Hierarchical Masking},
  author={Shen Yan and Biyi Fang and Faen Zhang and Yu Zheng and Xiao Zeng and Hui Xu and Mi Zhang},
  journal={2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)},
  year={2019},
  pages={1942-1950}
}
  • Shen Yan, Biyi Fang, Faen Zhang, Yu Zheng, Xiao Zeng, Hui Xu, Mi Zhang
  • Published 31 August 2019
  • Computer Science, Mathematics
  • 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
The use of automatic methods, often referred to as Neural Architecture Search (NAS), in designing neural network architectures has recently drawn considerable attention. [...] Key Method: HM-NAS addresses this limitation via two innovations. First, HM-NAS incorporates a multi-level architecture encoding scheme to enable searching for more flexible network architectures.
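To make the masking idea concrete, here is a minimal PyTorch-style sketch, assuming the common reading of hierarchical masking: real-valued masks are attached at several levels of the over-parameterized supernetwork (whole candidate operations and individual weights below; masks over cell edges would work the same way) and binarized with a straight-through estimator, so thresholding them prunes the network during search. The class and function names (MaskedMixedOp, binarize), the candidate operations, the 0.5 threshold, and the mask initialization are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def binarize(mask, threshold=0.5):
        # Hard 0/1 gate in the forward pass, identity gradient in the backward
        # pass (straight-through estimator); the 0.5 threshold is an assumption.
        soft = torch.sigmoid(mask)
        hard = (soft > threshold).float()
        return hard + soft - soft.detach()

    class MaskedMixedOp(nn.Module):
        # Illustrative mixed operation with operation-level and weight-level masks.
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.Conv2d(channels, channels, 5, padding=2, bias=False),
                nn.AvgPool2d(3, stride=1, padding=1),
            ])
            # One real-valued mask per candidate operation (operation level),
            # initialized slightly positive so every operation starts unpruned.
            self.op_mask = nn.Parameter(torch.full((len(self.ops),), 0.1))
            # One real-valued mask per convolution weight tensor (weight level);
            # the placeholder for the pooling op is unused.
            self.weight_masks = nn.ParameterList([
                nn.Parameter(torch.full_like(op.weight, 0.1))
                if isinstance(op, nn.Conv2d) else nn.Parameter(torch.zeros(1))
                for op in self.ops
            ])

        def forward(self, x):
            gates = binarize(self.op_mask)
            out = 0
            for op, gate, w_mask in zip(self.ops, gates, self.weight_masks):
                if isinstance(op, nn.Conv2d):
                    # Weight-level mask prunes individual weights of this op.
                    y = F.conv2d(x, op.weight * binarize(w_mask), padding=op.padding)
                else:
                    y = op(x)
                out = out + gate * y  # operation-level mask keeps or drops whole ops
            return out

    x = torch.randn(2, 16, 8, 8)
    print(MaskedMixedOp(16)(x).shape)  # torch.Size([2, 16, 8, 8])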
DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search
TLDR
DA-NAS is presented, which can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner, and supports an augmented search space to efficiently search for the best-performing architecture.
BETANAS: BalancEd TrAining and selective drop for Neural Architecture Search
TLDR
This work proposes a novel neural architecture search method with a balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths; it outperforms other state-of-the-art methods in both accuracy and efficiency.
Adapting Neural Architectures Between Domains
TLDR
The theoretical analyses lead to AdaptNAS, a novel and principled approach to adapting neural architectures between domains in NAS, which shows that only a small part of ImageNet is sufficient for AdaptNAS to extend its architecture success to the entire ImageNet and outperform state-of-the-art comparison algorithms.
PONAS: Progressive One-shot Neural Architecture Search for Very Efficient Deployment
  • Sian-Yao Huang, W. Chu
  • Computer Science
  • 2021 International Joint Conference on Neural Networks (IJCNN)
  • 2021
TLDR
In PONAS, a two-stage training scheme consisting of a meta-training stage and a fine-tuning stage is proposed; it combines the advantages of progressive NAS and one-shot methods to make the search process efficient and stable.
RoCo-NAS: Robust and Compact Neural Architecture Search
TLDR
This paper proposes using previously generated adversarial examples as an objective for evaluating model robustness, in addition to the number of floating-point operations for assessing model complexity (i.e., compactness), and evolves an architecture that is up to 7% more accurate on adversarial samples than its more complex counterpart.
MiLeNAS: Efficient Neural Architecture Search via Mixed-Level Reformulation
TLDR
It is shown that even when using a simple first-order method on the mixed-level formulation, MiLeNAS can achieve a lower validation error for NAS problems, and architectures obtained by the method achieve consistently higher accuracies than those obtained from bilevel optimization.
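As a rough sketch of what the mixed-level reformulation refers to (a paraphrase of the summary above rather than the paper's exact notation; the trade-off weight \lambda is part of that paraphrase): the bilevel problem solved by DARTS-style methods is collapsed into a single objective optimized jointly over the weights w and the architecture parameters \alpha.

    % Bilevel formulation used by DARTS-style NAS, shown for contrast:
    \min_{\alpha}\ \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\alpha),\alpha\bigr)
        \quad \text{s.t.} \quad
        w^{*}(\alpha) = \arg\min_{w}\ \mathcal{L}_{\mathrm{tr}}(w,\alpha)

    % Mixed-level reformulation: one joint objective over w and \alpha,
    % amenable to simple first-order gradient updates.
    \min_{w,\alpha}\ \mathcal{L}_{\mathrm{tr}}(w,\alpha)
        \;+\; \lambda\,\mathcal{L}_{\mathrm{val}}(w,\alpha),
        \qquad \lambda > 0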
Weight-Sharing Neural Architecture Search: A Battle to Shrink the Optimization Gap
TLDR
A literature review on the application of NAS to computer vision problems is provided, and existing approaches are summarized into several categories according to their efforts in bridging the optimization gap.
A Framework for Exploring and Modelling Neural Architecture Search Methods
TLDR
This paper aims to close this knowledge gap by summarising search decisions and strategies and proposing a schematic framework that applies quantitative and qualitative metrics for prototyping, comparing, and benchmarking NAS methods.
Delve into the Performance Degradation of Differentiable Architecture Search
TLDR
It is conjectured that the performance of DARTS does not depend on the well-trained supernet weights, and it is argued that the architecture parameters should be trained with gradients obtained in the early stage of training rather than the final stage.
Efficient OCT Image Segmentation Using Neural Architecture Search
TLDR
The experimental results demonstrate that the self-adapting NAS-Unet architecture substantially outperformed the competitive human-designed architecture, achieving 95.4% mean Intersection over Union and a 78.7% Dice similarity coefficient.

References

SHOWING 1-10 OF 26 REFERENCES
Evaluating the Search Phase of Neural Architecture Search
TLDR
This paper finds that, on average, state-of-the-art NAS algorithms perform similarly to the random policy, and that the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process.
Efficient Neural Architecture Search via Parameter Sharing
TLDR
Efficient Neural Architecture Search is a fast and inexpensive approach for automatic model design that establishes a new state-of-the-art among all methods without post-training processing and delivers strong empirical performance using far fewer GPU-hours.
Random Search and Reproducibility for Neural Architecture Search
TLDR
This work proposes new NAS baselines that build on the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization.
Neural Architecture Optimization
TLDR
Experiments show that the architectures discovered by this simple and efficient continuous-optimization approach to automatic neural architecture design are very competitive for image classification on CIFAR-10 and language modeling on PTB, outperforming or on par with the best results of previous architecture search methods while using significantly fewer computational resources.
SNAS: Stochastic Neural Architecture Search
TLDR
It is proved that this search gradient optimizes the same objective as reinforcement-learning-based NAS but assigns credits to structural decisions more efficiently, and it is further augmented with a locally decomposable reward to enforce a resource-efficient constraint.
Single Path One-Shot Neural Architecture Search with Uniform Sampling
TLDR
A Single Path One-Shot model is proposed to construct a simplified supernet, where all architectures are single paths, so that the weight co-adaptation problem is alleviated.
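The single-path idea is simple enough to sketch: at every training step one candidate operation is sampled uniformly at random for each layer of the supernet, and only the weights on that sampled path are executed and updated. The snippet below is a minimal illustrative training loop under that reading, not the paper's code; the layer structure, candidate set, and toy data are assumptions.

    import random
    import torch
    import torch.nn as nn

    class SinglePathLayer(nn.Module):
        # One supernet layer holding several candidate ops; one is active per step.
        def __init__(self, channels):
            super().__init__()
            self.choices = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Conv2d(channels, channels, 5, padding=2),
                nn.Identity(),
            ])

        def forward(self, x, choice):
            return self.choices[choice](x)

    layers = nn.ModuleList([SinglePathLayer(8) for _ in range(4)])
    opt = torch.optim.SGD(layers.parameters(), lr=0.01)

    for step in range(10):  # toy data standing in for a real training set
        x, y = torch.randn(4, 8, 16, 16), torch.randn(4, 8, 16, 16)
        path = [random.randrange(len(lyr.choices)) for lyr in layers]  # uniform sampling
        out = x
        for layer, choice in zip(layers, path):
            out = layer(out, choice)  # only the sampled single path is executed
        loss = nn.functional.mse_loss(out, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

In the full method, an evolutionary or random search then ranks candidate paths using the trained shared weights; that evaluation step is omitted from the sketch.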
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
TLDR
ProxylessNAS is presented, which can directly learn the architectures for large-scale target tasks and target hardware platforms; it is applied to specialize neural architectures for hardware using direct hardware metrics (e.g., latency) and provides insights for efficient CNN architecture design.
FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search
TLDR
This work proposes a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods.
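The gradient-based relaxation can be pictured with a small sketch: each layer outputs a weighted sum of its candidate blocks, with the weights drawn from a Gumbel-softmax over learnable architecture parameters, so the architecture distribution is trained by ordinary back-propagation (FBNet additionally adds a latency term to the loss, omitted here). The layer below is an illustrative sketch of such a relaxed choice block, not the FBNet implementation; the candidate set, temperature, and names are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DifferentiableChoice(nn.Module):
        # Relaxed layer: output is a Gumbel-softmax-weighted sum of candidate blocks.
        def __init__(self, channels, temperature=5.0):
            super().__init__()
            self.blocks = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Conv2d(channels, channels, 5, padding=2),
                nn.Identity(),  # "skip" candidate
            ])
            self.theta = nn.Parameter(torch.zeros(len(self.blocks)))  # architecture params
            self.temperature = temperature

        def forward(self, x):
            # Differentiable (soft) sample over candidates; annealing the
            # temperature during search makes the weights increasingly one-hot.
            weights = F.gumbel_softmax(self.theta, tau=self.temperature, hard=False)
            return sum(w * block(x) for w, block in zip(weights, self.blocks))

    layer = DifferentiableChoice(8)
    out = layer(torch.randn(2, 8, 16, 16))
    out.mean().backward()
    print(layer.theta.grad)  # architecture parameters receive gradients too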
SMASH: One-Shot Model Architecture Search through HyperNetworks
TLDR
A technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture is proposed, achieving competitive performance with similarly-sized hand-designed networks.
Regularized Evolution for Image Classifier Architecture Search
TLDR
This work evolves an image classifier, AmoebaNet-A, that surpasses hand-designed models for the first time and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.
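The evolutionary loop behind this result, often called aging or regularized evolution, is short enough to sketch: keep a fixed-size population, pick the fittest member of a small random sample as the parent, mutate it to produce a child, and always discard the oldest member rather than the worst. The snippet below is a schematic version with a toy bit-string genome and a toy fitness function standing in for training and validating an image classifier; all names and sizes are placeholders.

    import collections
    import random

    def mutate(arch):
        # Toy mutation: flip one gene; a real NAS mutation would change an op or edge.
        i = random.randrange(len(arch))
        child = list(arch)
        child[i] = 1 - child[i]
        return tuple(child)

    def fitness(arch):
        # Stand-in for "train this architecture and measure validation accuracy".
        return sum(arch)

    def regularized_evolution(cycles=200, population_size=20, sample_size=5, genes=8):
        population = collections.deque()
        history = []
        while len(population) < population_size:  # random initial population
            arch = tuple(random.randint(0, 1) for _ in range(genes))
            population.append((arch, fitness(arch)))
            history.append(population[-1])
        for _ in range(cycles):
            sample = random.sample(list(population), sample_size)
            parent = max(sample, key=lambda p: p[1])  # tournament selection
            child = mutate(parent[0])
            population.append((child, fitness(child)))
            population.popleft()  # age out the oldest member, not the worst one
            history.append(population[-1])
        return max(history, key=lambda p: p[1])  # best architecture ever evaluated

    print(regularized_evolution())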