Corpus ID: 235658277

Poisoning the Search Space in Neural Architecture Search

Robert Wu, Nayan Saxena, Rohan Jain
Deep learning has proven to be a highly effective problem-solving tool for object detection and image segmentation across various domains such as healthcare and autonomous driving. At the heart of this performance lies neural architecture design, which relies heavily on domain knowledge and prior experience on the part of researchers. More recently, the process of finding optimal architectures, given an initial search space of possible operations, was automated by Neural Architecture Search (NAS).
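The idea of poisoning a search space can be illustrated with a toy sketch. The operation names and sampler below are hypothetical, not the paper's actual setup: an attacker dilutes the pool of candidate operations with ineffective "no-op" entries, so that a uniform sampler wastes most of its budget on useless choices.

```python
import random

# Hypothetical operation pool for a cell-based NAS search space.
EFFECTIVE_OPS = ["conv_3x3", "conv_5x5", "sep_conv_3x3", "max_pool_3x3"]

def poison_search_space(ops, n_poison):
    """Dilute the search space with ineffective 'poison' operations
    (here: parameter-free no-ops that add no modeling capacity)."""
    poison_ops = [f"noop_{i}" for i in range(n_poison)]
    return ops + poison_ops

def sample_architecture(ops, n_edges, rng):
    """Sample one architecture by picking an op per edge uniformly."""
    return [rng.choice(ops) for _ in range(n_edges)]

rng = random.Random(0)
poisoned_space = poison_search_space(EFFECTIVE_OPS, 12)
arch = sample_architecture(poisoned_space, 8, rng)

# With 12 no-ops against 4 real ops, a uniform sampler picks a
# useless op with probability 12/16 = 0.75 per edge.
```

This is only the dilution intuition; an actual attack would also have to survive the search algorithm's own selection pressure.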




Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

This work proposes a novel poisoning algorithm based on the idea of back-gradient optimization, able to target a wider class of learning algorithms trained with gradient-based procedures, including neural networks and deep learning architectures, and empirically evaluates its effectiveness on several application examples.

When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks

This work takes an architectural perspective and investigates the patterns of network architectures that are resilient to adversarial attacks, and discovers a family of robust architectures (RobNets) that exhibit superior robustness performance to other widely used architectures.

Generative Poisoning Attack Method Against Neural Networks

This work first examines the possibility of applying traditional gradient-based methods to generate poisoned data against NNs by leveraging the gradient of the target model w.r.t. the normal data, and proposes a generative method to accelerate the generation rate of the poisoned data.
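The gradient-based poison generation described above can be sketched in miniature. This is a minimal illustrative example, not the paper's method: it assumes a white-box logistic-regression target with known weights `w`, and crafts a poisoned sample by taking an FGSM-style step along the sign of the loss gradient w.r.t. the input.

```python
import numpy as np

# Toy white-box target: logistic regression with fixed, attacker-known
# weights (an assumption for illustration, not the paper's setup).
w = np.array([1.0, -2.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logloss(x, y):
    """Binary cross-entropy of the target model on a single point."""
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def loss_grad_x(x, y):
    """Gradient of the log-loss w.r.t. the *input* x (not the weights):
    d/dx [-y log p - (1-y) log(1-p)] = (p - y) * w."""
    p = sigmoid(w @ x)
    return (p - y) * w

def poison(x, y, eps):
    """FGSM-style step: push a clean point in the direction that
    increases the model's loss, yielding a poisoned sample."""
    return x + eps * np.sign(loss_grad_x(x, y))
```

A generative method, as the paper proposes, would amortize this per-sample gradient step into a trained generator to speed up poison production.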

A Survey on Neural Architecture Search

This survey provides a formalism which unifies and categorizes the landscape of existing methods along with a detailed analysis that compares and contrasts the different approaches.

Learning Transferable Architectures for Scalable Image Recognition

This paper proposes to search for an architectural building block on a small dataset and then transfer the block to a larger dataset and introduces a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models.

Towards evolving robust neural architectures to defend from adversarial attacks

The results here demonstrate that more robust architectures exist and open up a new range of possibilities for the development and exploration of neural networks using neural architecture search.

Evaluating the Search Phase of Neural Architecture Search

This paper finds that on average, the state-of-the-art NAS algorithms perform similarly to the random policy; the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process.
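The random policy used as the comparison point above is simple to state. The sketch below is a generic random-search baseline with a toy scoring function (both hypothetical, for illustration only): sample architectures uniformly and keep the best by validation score.

```python
import random

def random_search(evaluate, ops, n_edges, n_samples, rng):
    """Random-search NAS baseline: sample architectures uniformly from
    the search space and keep the one with the best validation score."""
    best_arch, best_score = None, float("-inf")
    for _ in range(n_samples):
        arch = tuple(rng.choice(ops) for _ in range(n_edges))
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Toy proxy score (hypothetical): reward conv ops over the alternatives.
def toy_score(arch):
    return sum(op == "conv_3x3" for op in arch)

rng = random.Random(42)
arch, score = random_search(
    toy_score, ["conv_3x3", "max_pool_3x3", "skip"], 6, 200, rng
)
```

The finding that such a baseline matches learned search policies is what makes it a standard sanity check when evaluating NAS methods.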

Regularized Evolution for Image Classifier Architecture Search

This work evolves an image classifier---AmoebaNet-A---that surpasses hand-designed models for the first time and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.
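The regularization in "regularized evolution" is aging: each cycle removes the oldest population member rather than the worst. The sketch below shows that loop on a toy bit-string problem; the encoding and fitness are hypothetical stand-ins for real architectures and validation accuracy.

```python
import random
from collections import deque

def aging_evolution(evaluate, mutate, random_arch,
                    pop_size, sample_size, cycles, rng):
    """Aging (regularized) evolution: tournament selection where, each
    cycle, the *oldest* member is removed rather than the worst."""
    population = deque()
    for _ in range(pop_size):
        arch = random_arch(rng)
        population.append((arch, evaluate(arch)))
    history = list(population)
    for _ in range(cycles):
        sample = rng.sample(list(population), sample_size)
        parent = max(sample, key=lambda m: m[1])   # tournament winner
        child = mutate(parent[0], rng)
        entry = (child, evaluate(child))
        population.append(entry)
        population.popleft()                       # age out the oldest
        history.append(entry)
    return max(history, key=lambda m: m[1])        # best ever evaluated

# Toy encoding (hypothetical): a bit-string whose fitness is its bit-sum.
def random_arch(rng):
    return tuple(rng.randint(0, 1) for _ in range(10))

def mutate(arch, rng):
    i = rng.randrange(len(arch))
    return arch[:i] + (1 - arch[i],) + arch[i + 1:]

rng = random.Random(0)
best, fitness = aging_evolution(sum, mutate, random_arch, 20, 5, 200, rng)
```

Removing by age rather than fitness keeps selection pressure while preventing a single early lucky evaluation from dominating the population indefinitely.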

Searching for Efficient Multi-Scale Architectures for Dense Image Prediction

This work constructs a recursive search space for meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation and demonstrates that even with efficient random search, this architecture can outperform human-invented architectures.

DARTS: Differentiable Architecture Search

The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
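The differentiability at the core of DARTS comes from a continuous relaxation: each edge's output is a softmax-weighted mixture of all candidate operations, so the architecture parameters can be learned by gradient descent. The sketch below uses toy 1-D stand-ins for the real convolution/pooling operations; the op list and parameters are illustrative, not the paper's actual search space.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on one edge (toy 1-D versions; real DARTS mixes
# convolutions, pooling, skip connections, and a zero op on feature maps).
ops = [
    lambda x: x,                 # identity / skip connection
    lambda x: np.zeros_like(x),  # "zero" op (effectively drops the edge)
    lambda x: np.maximum(x, 0),  # ReLU as a stand-in for a conv block
]

def mixed_op(x, alpha):
    """DARTS continuous relaxation: the edge output is the
    softmax-weighted sum of all candidate op outputs, which makes the
    discrete op choice differentiable w.r.t. alpha."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([-1.0, 2.0])
alpha = np.zeros(3)        # architecture params, learned by gradient descent
y = mixed_op(x, alpha)
# After search, the architecture is discretized by keeping the argmax op.
best_op = ops[int(np.argmax(alpha))]
```

After the bilevel optimization over weights and `alpha` converges, the soft mixture is discretized by retaining only the highest-weighted operation per edge.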