RelativeNAS: Relative Neural Architecture Search via Slow-Fast Learning

@article{Tan2021RelativeNASRN,
  title={RelativeNAS: Relative Neural Architecture Search via Slow-Fast Learning},
  author={Hao Tan and Ran Cheng and Shihua Huang and Cheng He and Changxiao Qiu and Fan Yang and Ping Luo},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2021},
  volume={PP}
}
  • Published 14 September 2020
  • IEEE Transactions on Neural Networks and Learning Systems
Despite the remarkable successes of convolutional neural networks (CNNs) in computer vision, it is time-consuming and error-prone to design a CNN manually. Among the various neural architecture search (NAS) methods that aim to automate the design of high-performance CNNs, differentiable NAS and population-based NAS are attracting increasing interest due to their unique characteristics. To benefit from the merits while overcoming the deficiencies of both, this work proposes a novel NAS method… 
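
The truncated abstract names the slow-fast learning scheme without detailing it. Below is a minimal, illustrative sketch of a paired slow-fast update over continuous architecture encodings, in which the worse-performing network of each pair (the slow learner) moves toward the better one (the fast learner); the encoding, fitness function, and step rule are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of a paired slow-fast update over continuous
# architecture encodings (NOT the paper's exact formulation): in each
# pair, the worse performer (slow learner) moves toward the better
# performer (fast learner).
import numpy as np

rng = np.random.default_rng(0)

def evaluate(encoding):
    # Placeholder fitness; a real NAS run would decode the vector into a
    # CNN cell and return its (estimated) validation accuracy.
    return -np.sum((encoding - 0.5) ** 2)

population = rng.uniform(0.0, 1.0, size=(10, 16))  # 10 encodings, 16 genes

for generation in range(20):
    rng.shuffle(population)                  # random pairing each generation
    for i in range(0, len(population), 2):
        a, b = population[i], population[i + 1]
        fast, slow = (a, b) if evaluate(a) >= evaluate(b) else (b, a)
        step = rng.uniform(0.0, 1.0, size=slow.shape)
        slow += step * (fast - slow)         # slow learner learns from fast
        np.clip(slow, 0.0, 1.0, out=slow)    # stay inside the encoding box

best = max(population, key=evaluate)
```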

Accelerating Multi-Objective Neural Architecture Search by Random-Weight Evaluation

A new performance estimation metric named random-weight evaluation (RWE) is introduced to quantify the quality of CNNs in a cost-efficient manner, and experiments reveal the effectiveness of the proposed RWE in estimating performance compared to existing methods.
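
Taking the summary at face value, RWE scores a candidate network without fully training it. The sketch below assumes one common realization of that idea, freezing randomly initialized features and fitting only a cheap linear head whose accuracy serves as the quality proxy; the toy backbone and data are illustrative.

```python
# A minimal sketch of random-weight evaluation: freeze randomly
# initialized weights and fit only a cheap linear head on the features
# they produce. The toy random-feature "backbone" and ridge-regression
# head are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rwe_score(X, y, width, n_classes, reg=1e-2):
    # Frozen random backbone: one random projection + ReLU, standing in
    # for a randomly initialized (never trained) CNN feature extractor.
    W = rng.normal(size=(X.shape[1], width)) / np.sqrt(X.shape[1])
    H = np.maximum(X @ W, 0.0)
    # Train only the linear head, in closed form (ridge regression on
    # one-hot targets); its accuracy is the cheap quality proxy.
    Y = np.eye(n_classes)[y]
    A = H.T @ H + reg * np.eye(width)
    B = np.linalg.solve(A, H.T @ Y)
    return float((np.argmax(H @ B, axis=1) == y).mean())

# Toy data; `width` plays the role of the architecture choice being scored.
X = rng.normal(size=(512, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(rwe_score(X, y, width=64, n_classes=2))
```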

CSNAS: Contrastive Self-Supervised Learning Neural Architecture Search Via Sequential Model-Based Optimization

A novel contrastive self-supervised neural architecture search algorithm, which alleviates the expensive cost of data labeling inherent in supervised learning and tackles the discrete search space of the NAS problem by sequential model-based optimization via the Tree-structured Parzen Estimator.
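
As a concrete illustration of sequential model-based optimization with the Tree-structured Parzen Estimator, the sketch below uses hyperopt's TPE implementation over a toy discrete architecture space; the space and the placeholder objective are assumptions, not CSNAS's actual search space or self-supervised objective.

```python
# SMBO with the Tree-structured Parzen Estimator via hyperopt, over a
# toy discrete architecture space (illustrative assumptions throughout).
from hyperopt import fmin, tpe, hp

space = {
    "n_cells": hp.choice("n_cells", [4, 8, 12]),
    "width": hp.choice("width", [16, 32, 64]),
    "op": hp.choice("op", ["sep_conv_3x3", "dil_conv_5x5", "max_pool_3x3"]),
}

def objective(arch):
    # Stand-in cost; a real NAS objective would train the decoded network
    # (e.g., with a contrastive loss) and return its validation error.
    return abs(arch["n_cells"] - 8) + abs(arch["width"] - 32) / 16

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=30)
print(best)
```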

Neural Architecture Search as Multiobjective Optimization Benchmarks: Problem Formulation and Performance Assessment

This work formulates NAS tasks as general multi-objective optimization problems, analyzes their complex characteristics from an optimization point of view, and presents an end-to-end pipeline, dubbed EvoXBench, that generates benchmark test problems for EMO algorithms to run without requiring GPUs or PyTorch/TensorFlow.

Utilizing average symmetrical surface distance in active shape modeling for subcortical surface generation with slow-fast learning

An automatic pipeline for subcortical surface generation is proposed, making use of the average symmetrical surface distance (ASSD) loss in active shape modeling (ASM); the effectiveness of the slow-fast learning method is shown by comparing it with other state-of-the-art derivative-free optimization algorithms.

A Survey on Surrogate-assisted Efficient Neural Architecture Search

This paper begins with a brief introduction to the general framework of NAS, followed by a description of surrogate-assisted NAS, which is divided into three categories: Bayesian optimization for NAS, surrogate-assisted evolutionary algorithms for NAS, and MOP for NAS.

References


Multi-Criterion Evolutionary Design of Deep Convolutional Neural Networks

This work proposes an evolutionary algorithm for searching neural architectures under multiple objectives, such as classification performance and FLOPs, and approximates the entire Pareto frontier by evolving a population of architectures through genetic operations that progressively recombine and modify architectural components.
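
The core selection step behind such Pareto-frontier approximation can be sketched independently of the genetic operators: keep the architectures that are non-dominated under the two objectives. The random (error, FLOPs) population below is an illustrative stand-in for evaluated networks, and crossover/mutation are omitted.

```python
# Minimal multi-objective selection: retain the non-dominated set under
# (classification error, FLOPs), both minimized. Illustrative data only.
import random

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

random.seed(0)
population = [(random.uniform(0.05, 0.3), random.uniform(1e8, 6e8))
              for _ in range(20)]  # (error, FLOPs) per architecture
front = pareto_front(population)
```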

BlockQNN: Efficient Block-Wise Neural Network Architecture Generation

This paper provides a block-wise network generation pipeline, called BlockQNN, which automatically builds high-performance networks using the Q-learning paradigm with an epsilon-greedy exploration strategy, and proposes a distributed asynchronous framework and an early-stop strategy to accelerate the search.
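
A minimal sketch of epsilon-greedy Q-learning for sequential block construction is given below: the state is the current depth, the action picks the next layer type, and the terminal reward stands in for the validation accuracy of the assembled block. The tiny MDP and reward are illustrative assumptions, not BlockQNN's actual state/action encoding.

```python
# Epsilon-greedy tabular Q-learning over a toy layer-by-layer MDP.
import random
from collections import defaultdict

LAYERS = ["conv3x3", "conv5x5", "maxpool"]
MAX_DEPTH = 4
EPS, ALPHA, GAMMA = 0.1, 0.5, 1.0

random.seed(0)
Q = defaultdict(float)  # Q[(depth, layer_type)]

def reward(block):
    # Stand-in for training the assembled block and measuring accuracy.
    return block.count("conv3x3") / MAX_DEPTH

for episode in range(500):
    block = []
    for depth in range(MAX_DEPTH):
        if random.random() < EPS:            # explore
            action = random.choice(LAYERS)
        else:                                # exploit
            action = max(LAYERS, key=lambda a: Q[(depth, a)])
        block.append(action)
    r = reward(block)                        # reward arrives only at the end
    for depth, action in enumerate(block):
        target = r if depth + 1 == MAX_DEPTH else GAMMA * max(
            Q[(depth + 1, a)] for a in LAYERS)
        Q[(depth, action)] += ALPHA * (target - Q[(depth, action)])
```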

Multiobjective Evolutionary Design of Deep Convolutional Neural Networks for Image Classification

This work proposes an evolutionary algorithm for searching neural architectures under multiple objectives, such as classification performance and floating-point operations (FLOPs), and approximates the entire Pareto frontier by evolving a population of architectures through genetic operations that progressively recombine and modify architectural components.

NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection

Neural architecture search is adopted to discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections; the discovered architecture, named NAS-FPN, achieves a better accuracy and latency tradeoff compared to state-of-the-art object detection models.

Efficient Architecture Search by Network Transformation

This paper proposes a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights, and employs a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations.
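
The function-preserving growth mentioned in the summary can be illustrated with a widening transformation in the Net2WiderNet style that this line of work builds on: duplicate random units and split their outgoing weights so the network computes the same function. The dense-layer setting below is a simplified assumption.

```python
# A function-preserving widening of a dense layer: duplicate random
# hidden units and divide the outgoing weights by the replication count,
# leaving the network's outputs unchanged.
import numpy as np

rng = np.random.default_rng(0)

def widen(W1, b1, W2, new_width):
    # W1: (d_in, d_h), b1: (d_h,), W2: (d_h, d_out); new_width > d_h.
    d_h = W1.shape[1]
    mapping = np.concatenate([np.arange(d_h),
                              rng.integers(0, d_h, new_width - d_h)])
    counts = np.bincount(mapping, minlength=d_h)
    W1_new = W1[:, mapping]
    b1_new = b1[mapping]
    # Split each replicated unit's outgoing row by its replication count.
    W2_new = W2[mapping, :] / counts[mapping][:, None]
    return W1_new, b1_new, W2_new

# Check that widening preserves the function on random inputs.
W1, b1, W2 = rng.normal(size=(8, 4)), rng.normal(size=4), rng.normal(size=(4, 3))
x = rng.normal(size=(5, 8))
h = np.maximum(x @ W1 + b1, 0.0)
W1n, b1n, W2n = widen(W1, b1, W2, new_width=6)
hn = np.maximum(x @ W1n + b1n, 0.0)
assert np.allclose(h @ W2, hn @ W2n)
```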

AutoGAN: Neural Architecture Search for Generative Adversarial Networks

This paper presents the first preliminary study on introducing the NAS algorithm to generative adversarial networks (GANs), dubbed AutoGAN, and discovers architectures that achieve highly competitive performance compared to current state-of-the-art hand-crafted GANs.

Progressive Neural Architecture Search

We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.

MnasNet: Platform-Aware Neural Architecture Search for Mobile

An automated mobile neural architecture search (MNAS) approach that explicitly incorporates model latency into the main objective, so that the search can identify a model achieving a good trade-off between accuracy and latency.
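
The trade-off in the summary is made concrete by the weighted-product objective from the MnasNet paper, which maximizes ACC(m) · (LAT(m)/T)^w for target latency T, with w < 0 in the soft-constraint setting the paper uses (w = -0.07); the numeric values below are illustrative.

```python
# MnasNet's soft-constraint objective: maximize ACC(m) * (LAT(m)/T) ** w.
# Target latency and exponent follow the paper; accuracies are made up.
def mnas_objective(acc, latency_ms, target_ms=75.0, w=-0.07):
    return acc * (latency_ms / target_ms) ** w

print(mnas_objective(acc=0.75, latency_ms=90.0))  # slower than target: penalized
print(mnas_objective(acc=0.74, latency_ms=60.0))  # faster: mildly rewarded
```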

Genetic CNN

  • Lingxi Xie, A. Yuille
  • 2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
The core idea is an encoding method that represents each network structure as a fixed-length binary string, enabling efficient exploration of this large search space.
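
To make the encoding concrete, the sketch below decodes a fixed-length binary string into within-stage connections between ordered nodes, one bit per ordered pair, as in the Genetic CNN encoding; defaults for stage inputs/outputs and multi-stage assembly are simplified away.

```python
# Decode a fixed-length binary string into within-stage connections: for
# a stage with K ordered nodes, one bit per pair (i, j), i < j, says
# whether node i feeds node j, giving K*(K-1)/2 bits per stage.
from itertools import combinations

def decode_stage(bits, num_nodes):
    pairs = list(combinations(range(num_nodes), 2))
    assert len(bits) == len(pairs), "fixed-length string must match node count"
    return [pair for pair, bit in zip(pairs, bits) if bit == "1"]

# K = 4 nodes -> 6 bits; "101100" connects (0,1), (0,3), (1,2).
print(decode_stage("101100", num_nodes=4))
```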
...