HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens

@article{Yang2021HourNASEF,
  title={HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens},
  author={Zhaohui Yang and Yunhe Wang and Dacheng Tao and Xinghao Chen and Jianyuan Guo and Chunjing Xu and Chao Xu and Chang Xu},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={10891-10901}
}
  • Zhaohui Yang, Yunhe Wang, Dacheng Tao, Xinghao Chen, Jianyuan Guo, Chunjing Xu, Chao Xu, Chang Xu
  • Published 29 May 2020
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Neural Architecture Search (NAS) aims to automatically discover optimal architectures. In this paper, we propose an hourglass-inspired approach (HourNAS) for extremely fast NAS. It is motivated by the fact that the effects of the architecture often proceed from the vital few blocks. Acting like the narrow neck of an hourglass, vital blocks in the guaranteed path from the input to the output of a deep neural network restrict the information flow and influence the network accuracy. The other… 
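
The hourglass intuition can be made concrete in a plain residual backbone: a block that cannot be bypassed by an identity shortcut (for example, one that changes the stride or channel count) lies on every path from input to output, so it gates all information flow like the neck of an hourglass. Below is a minimal sketch, assuming a hypothetical list of block shapes, that flags such guaranteed-path blocks; it is an illustration of the idea, not the HourNAS search procedure.

```python
# Rough illustration (not the HourNAS algorithm): mark blocks that lie on the
# guaranteed input-to-output path of a residual network. A block with an
# identity shortcut can be bypassed, so it is not "vital"; a block that changes
# the spatial or channel shape has no identity bypass and gates all information.

# Hypothetical block specs: (in_channels, out_channels, stride)
blocks = [
    (32, 32, 1),   # identity shortcut exists -> non-vital
    (32, 64, 2),   # shape changes, no identity bypass -> vital
    (64, 64, 1),
    (64, 128, 2),  # vital
    (128, 128, 1),
]

def is_vital(in_ch: int, out_ch: int, stride: int) -> bool:
    """A block is on the guaranteed path if it cannot be skipped by an
    identity shortcut, i.e. it changes the spatial or channel shape."""
    return stride != 1 or in_ch != out_ch

vital = [i for i, (ci, co, s) in enumerate(blocks) if is_vital(ci, co, s)]
print("vital (hourglass-neck) blocks:", vital)   # -> [1, 3]
```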

FGA-NAS: Fast Resource-Constrained Architecture Search by Greedy-ADMM Algorithm

This paper proposes a novel condensed search space that merges multiple parallel-placed candidates into a single one, enabling gradient-based optimization for neural architecture search (NAS) under multiple combinatorial constraints, and decomposes the constrained NAS problem into a few simple sub-problems.

Sandwich Batch Normalization: A Drop-In Replacement for Feature Distribution Heterogeneity

This work demonstrates the prevailing effectiveness of SaBN as a drop-in replacement in four tasks: conditional image generation, neural architecture search, adversarial training, and arbitrary style transfer, and provides visualizations and analysis to help understand why SaBN works.

Evaluation Ranking is More Important for NAS

The experimental results demonstrate that ERNAS can be trained effectively with extremely limited training data, and that the accuracy of the architectures it finds exceeds that of state-of-the-art methods.

AutoCoMet: Smart Neural Architecture Search via Co-Regulated Shaping Reinforcement

This work proposes a smart, fast NAS framework that adapts to context via a generalized formalism for any kind of multi-criteria optimization, and can learn the most suitable DNN architecture for varied types of device hardware and task contexts, 3× faster.

Neural Architecture Search for Spiking Neural Networks

This paper introduces a novel Neural Architecture Search (NAS) approach for finding better SNN architectures that can represent diverse spike activation patterns across different data samples without training, and shows that SNASNet achieves state-of-the-art performance with significantly lower timesteps.

Weight-Sharing Neural Architecture Search: A Battle to Shrink the Optimization Gap

A literature review on the application of NAS to computer vision problems is provided and existing approaches are summarized into several categories according to their efforts in bridging the gap.

Automatic Neural Network Pruning that Efficiently Preserves the Model Accuracy

An automatic pruning method is proposed that learns which neurons to preserve in order to maintain the model accuracy while reducing the FLOPs to a predefined target.
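
The paper learns the keep/prune decision; as a rough stand-in only, the sketch below greedily removes low-magnitude channels until an estimated FLOPs budget is met. The layer shapes, the FLOPs model, and the L1-norm importance heuristic are all assumptions, not the paper's learned criterion.

```python
import numpy as np

# Stand-in sketch (not the paper's learned method): greedily drop the
# lowest-L1-norm output channels until the estimated conv FLOPs fall below a
# predefined target. Layer shapes and the FLOPs estimate are hypothetical.
rng = np.random.default_rng(0)
# each layer: conv weight (out_ch, in_ch, kh, kw) and output spatial size
layers = [dict(w=rng.normal(size=(64, 32, 3, 3)), hw=56),
          dict(w=rng.normal(size=(128, 64, 3, 3)), hw=28)]

def flops(layers, keep):
    """Approximate conv FLOPs given the kept output channels per layer."""
    total, in_ch = 0, layers[0]["w"].shape[1]
    for layer, k in zip(layers, keep):
        _, _, kh, kw = layer["w"].shape
        total += len(k) * in_ch * kh * kw * layer["hw"] ** 2
        in_ch = len(k)                      # next layer sees pruned inputs
    return total

keep = [list(range(l["w"].shape[0])) for l in layers]
target = 0.5 * flops(layers, keep)          # e.g. prune to 50% of original FLOPs

while flops(layers, keep) > target:
    # channel importance = L1 norm of its filter (a common heuristic, assumed)
    scores = [(i, c, np.abs(layers[i]["w"][c]).sum())
              for i in range(len(layers)) for c in keep[i] if len(keep[i]) > 1]
    i, c, _ = min(scores, key=lambda t: t[2])
    keep[i].remove(c)

print("kept channels per layer:", [len(k) for k in keep])
```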

Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification

A genetic wavelet channel search (GWCS) based pruning framework is proposed, in which the pruning process is modeled as a multi-stage genetic optimization procedure; experiments demonstrate that GWCS outperforms state-of-the-art pruning algorithms in both accuracy and compression rate.

Profiling Neural Blocks and Design Spaces for Mobile Neural Architecture Search

This paper analyzes the neural blocks used to build the Once-for-All (MobileNetV3), ProxylessNAS and ResNet families in order to understand their predictive power and inference latency on various devices, and shows that searching in the reduced search space generates better accuracy-latency Pareto frontiers than searching in the original search spaces.

Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics

This work presents a unified framework to understand and accelerate NAS, by disentangling “TEG” characteristics of searched networks – Trainability, Expressivity, Generalization – all assessed in a training-free manner, leading to both improved search accuracy and over 2.3× reduction in search time cost.

References

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

A new scaling method is proposed that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient, and its effectiveness is demonstrated by scaling up MobileNets and ResNet.
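
The compound coefficient φ scales depth, width and resolution together as d = α^φ, w = β^φ, r = γ^φ, with the base ratios chosen so that α·β²·γ² ≈ 2 (the paper reports α = 1.2, β = 1.1, γ = 1.15). A minimal sketch; the baseline depth/width/resolution values below are placeholders, not the EfficientNet-B0 configuration.

```python
# Compound scaling as described in the EfficientNet paper: one coefficient phi
# scales depth, width and resolution together, with alpha * beta**2 * gamma**2
# kept close to 2 (paper values: 1.2, 1.1, 1.15).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi: float, base_depth: int, base_width: int, base_res: int):
    depth = round(base_depth * ALPHA ** phi)      # number of layers
    width = round(base_width * BETA ** phi)       # number of channels
    res   = round(base_res * GAMMA ** phi)        # input resolution
    return depth, width, res

# Example: scaling a hypothetical baseline with phi = 1
print(compound_scale(1, base_depth=18, base_width=32, base_res=224))
```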

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search

This work proposes a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods.
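
The key relaxation is to treat the choice of block at each layer as a Gumbel-softmax distribution over candidates, so that the expected latency (read from a lookup table) is differentiable and can be added to the training loss. The sketch below shows that relaxation for a single layer; the candidate ops, latency values and loss weighting are illustrative assumptions, not FBNet's actual configuration.

```python
import torch
import torch.nn.functional as F

# Rough sketch of an FBNet-style differentiable layer choice: architecture
# parameters "theta" define a Gumbel-softmax over candidate blocks, and the
# expected latency from a lookup table is differentiable w.r.t. theta.
candidates = [torch.nn.Conv2d(16, 16, 3, padding=1),
              torch.nn.Conv2d(16, 16, 5, padding=2),
              torch.nn.Identity()]
latency_table = torch.tensor([3.0, 5.0, 0.1])         # ms per op, hypothetical
theta = torch.zeros(len(candidates), requires_grad=True)

x = torch.randn(2, 16, 32, 32)
probs = F.gumbel_softmax(theta, tau=1.0, hard=False)  # soft one-hot over ops
out = sum(p * op(x) for p, op in zip(probs, candidates))
expected_latency = (probs * latency_table).sum()

task_loss = out.mean()                                # placeholder task loss
loss = task_loss + 0.1 * expected_latency             # latency-aware objective
loss.backward()                                       # gradients flow to theta
print(theta.grad)
```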

Progressive Neural Architecture Search

We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.

Regularized Evolution for Image Classifier Architecture Search

This work evolves an image classifier---AmoebaNet-A---that surpasses hand-designs for the first time and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.

MobileNetV2: Inverted Residuals and Linear Bottlenecks

A new mobile architecture, MobileNetV2, is described that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes and allows decoupling of the input/output domains from the expressiveness of the transformation.
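
The building block behind MobileNetV2 is the inverted residual with a linear bottleneck: a 1×1 convolution expands the low-dimensional input, a 3×3 depthwise convolution filters it, and a 1×1 convolution projects back without a nonlinearity, with an identity shortcut when shapes match. A minimal sketch of the stride-1, equal-channel case; the channel count and expansion factor are assumptions.

```python
import torch
from torch import nn

class InvertedResidual(nn.Module):
    """Minimal MobileNetV2-style block: 1x1 expand -> 3x3 depthwise -> 1x1
    linear projection, with a residual connection when shapes match."""
    def __init__(self, channels: int, expansion: int = 6):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),          # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),                # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),          # linear bottleneck
            nn.BatchNorm2d(channels),                            # no activation here
        )

    def forward(self, x):
        return x + self.block(x)                                 # identity shortcut

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32)(x).shape)   # torch.Size([1, 32, 56, 56])
```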

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
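
Instead of learning a mapping H(x) directly, each block learns the residual F(x) = H(x) − x and outputs y = F(x) + x; the identity shortcut gives gradients an unimpeded path through very deep stacks. A minimal sketch of such a block (channel count and layer sizes are placeholders):

```python
import torch
from torch import nn

class BasicBlock(nn.Module):
    """Minimal ResNet-style basic block: y = F(x) + x, where F is two 3x3
    convolutions. The identity shortcut eases optimization of deep networks."""
    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.f(x) + x)   # residual addition, then activation

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)             # torch.Size([1, 64, 32, 32])
```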

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

DARTS: Differentiable Architecture Search

The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
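
DARTS makes the search differentiable by relaxing the categorical choice of operation on each edge into a softmax over architecture parameters α, so the edge output is the mixture ō(x) = Σ_o softmax(α)_o · o(x) and α is optimized by gradient descent alongside the network weights. A minimal sketch of one mixed edge; the candidate operations and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Minimal sketch of a DARTS mixed operation on a single edge: the discrete
# choice among candidate ops is relaxed into a softmax over architecture
# parameters alpha, making the search differentiable. Ops are assumed.
ops = nn.ModuleList([
    nn.Conv2d(16, 16, 3, padding=1),
    nn.MaxPool2d(3, stride=1, padding=1),
    nn.Identity(),
])
alpha = torch.zeros(len(ops), requires_grad=True)    # architecture parameters

def mixed_op(x):
    weights = F.softmax(alpha, dim=0)                 # continuous relaxation
    return sum(w * op(x) for w, op in zip(weights, ops))

x = torch.randn(2, 16, 32, 32)
loss = mixed_op(x).mean()                             # placeholder loss
loss.backward()                                       # gradients reach alpha and op weights
print(alpha.grad)
```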

SCOP: Scientific Control for Reliable Neural Network Pruning

It is theoretically suggested that the knockoff condition can be approximately preserved given the information propagation of network layers; the proposed method reduces 57.8% of the parameters and 60.2% of the FLOPs of ResNet-101 with only a 0.01% top-1 accuracy loss on ImageNet.

Discernible Image Compression

The proposed method, named Discernible Image Compression (DIC), is shown in experiments on benchmarks to produce compressed images that can still be well recognized by subsequent visual recognition and detection models.
...