Corpus ID: 239016647

BNAS v2: Learning Architectures for Binary Networks with Empirical Improvements

@article{Kim2021BNASVL,
  title={BNAS v2: Learning Architectures for Binary Networks with Empirical Improvements},
  author={Dahyun Kim and Kunal Pratap Singh and Jonghyun Choi},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.08562}
}
Backbone architectures of most binary networks are well-known floating-point (FP) architectures such as the ResNet family. Questioning whether architectures designed for FP networks are also the best for binary networks, we propose to search architectures for binary networks (BNAS) by defining a new search space for binary architectures and a novel search objective. Specifically, based on the cell-based search method, we define a new search space of binary layer types, design a new cell… 
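The abstract points to a cell-based, differentiable search over binary-friendly layer types. As a rough illustration of that setup (not the paper's actual search space, cell template, or search objective), the sketch below mixes a few candidate ops, including a Zeroise candidate, with softmaxed architecture parameters; the `BinaryConv` op and its straight-through binarization are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv(nn.Module):
    """Illustrative binary-friendly op: 3x3 conv with sign-binarized weights."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.1)

    def forward(self, x):
        w_bin = torch.sign(self.weight)
        # straight-through estimator: forward uses sign(w), backward uses identity
        w = self.weight + (w_bin - self.weight).detach()
        return F.conv2d(x, w, padding=1)

class Zeroise(nn.Module):
    """Outputs zeros; treated as a real candidate op rather than a mere placeholder."""
    def forward(self, x):
        return torch.zeros_like(x)

class MixedBinaryOp(nn.Module):
    """DARTS-style mixture over candidate ops, weighted by softmaxed
    architecture parameters (one scalar per candidate)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([BinaryConv(channels), Zeroise(), nn.Identity()])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```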
Citations

Unsupervised Representation Learning for Binary Networks by Joint Classifier Learning
TLDR: Empirical validations over downstream tasks using seven datasets show that BURN outperforms self-supervised baselines for binary networks and sometimes outperforms supervised pretraining.

References

Showing 1-10 of 72 references
Learning Architectures for Binary Networks
TLDR: This work proposes to search architectures for binary networks (BNAS) by defining a new search space for binary architectures and a novel search objective, designs a new cell template, and proposes to actually use the Zeroise layer in the final architecture rather than treating it as a mere placeholder.
BATS: Binary ArchitecTure Search
TLDR: A novel binary-oriented search space is introduced, a new mechanism for controlling and stabilising the searched topologies is proposed, and new search strategies for binary networks are proposed that lead to faster convergence and lower search times.
ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions
TLDR: This paper proposes to generalize the traditional Sign and PReLU functions to enable explicit learning of activation-distribution reshaping and shifting at near-zero extra cost, and shows that the proposed ReActNet outperforms all state-of-the-art methods by a large margin.
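Concretely, the generalized activations amount to learnable per-channel shifts around Sign and PReLU. Below is a minimal PyTorch sketch of that idea; parameter shapes and zero initialization are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RSign(nn.Module):
    """Sign activation with a learnable per-channel threshold (shift)."""
    def __init__(self, channels):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))  # learnable shift

    def forward(self, x):
        # binarize around a learned threshold; training would pair this with a
        # straight-through estimator since sign() has zero gradient
        return torch.sign(x - self.beta)

class RPReLU(nn.Module):
    """PReLU with learnable per-channel shifts before and after the activation."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, channels, 1, 1))  # input shift
        self.zeta = nn.Parameter(torch.zeros(1, channels, 1, 1))   # output shift
        self.prelu = nn.PReLU(channels)

    def forward(self, x):
        return self.prelu(x - self.gamma) + self.zeta
```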
Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?
Shilin Zhu, Xin Dong, Hao Su. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
TLDR: The Binary Ensemble Neural Network (BENN) is proposed, which leverages ensemble methods to improve the performance of BNNs with limited efficiency cost, and can even surpass the accuracy of the full-precision floating-point network with the same architecture.
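The underlying idea is that several cheap binary networks can be combined at inference time, for example by (optionally weighted) logit averaging. A toy sketch of that combination step follows; `binary_nets` and `weights` are hypothetical inputs standing in for members and coefficients obtained from bagging or boosting, and this is not BENN's actual training procedure.

```python
import torch

def ensemble_predict(binary_nets, x, weights=None):
    """Combine several independently trained binary networks by a weighted
    sum of their logits, then take the arg-max class per example."""
    weights = weights or [1.0] * len(binary_nets)
    logits = sum(w * net(x) for w, net in zip(weights, binary_nets))
    return logits.argmax(dim=1)
```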
Joint Neural Architecture Search and Quantization
TLDR: This paper integrates the tasks of architecture design and model compression into one unified framework, which enables joint architecture search with quantization (compression) policies for neural networks.
Graph HyperNetworks for Neural Architecture Search
TLDR: The Graph HyperNetwork (GHN) is proposed to amortize the search cost: given an architecture, it directly generates the weights by running inference on a graph neural network, and it predicts network performance more accurately than regular hypernetworks or premature early stopping.
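As a rough mental model of the hypernetwork idea (not the paper's architecture): embed each node's operation, propagate embeddings along the architecture graph, and decode each node's final state into that layer's weights. All dimensions, the GRU-based propagation, and the linear decoder below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyGraphHyperNet(nn.Module):
    """Toy graph hypernetwork: op embeddings -> message passing -> per-node weights."""
    def __init__(self, op_vocab, embed_dim, weight_numel):
        super().__init__()
        self.embed = nn.Embedding(op_vocab, embed_dim)
        self.update = nn.GRUCell(embed_dim, embed_dim)    # simple propagation step
        self.decode = nn.Linear(embed_dim, weight_numel)  # per-node weight generator

    def forward(self, op_ids, adjacency, steps=3):
        # op_ids: LongTensor (num_nodes,); adjacency: FloatTensor (num_nodes, num_nodes)
        h = self.embed(op_ids)                  # (num_nodes, embed_dim)
        for _ in range(steps):
            m = adjacency @ h                   # aggregate neighbor states
            h = self.update(m, h)               # update node states
        return self.decode(h)                   # flattened weights per node
```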
Neural Architecture Search with Reinforcement Learning
TLDR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
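A minimal sketch of that controller-plus-REINFORCE loop, assuming a generic token vocabulary for architecture decisions rather than the paper's exact action space; `evaluate_architecture` is a hypothetical helper that trains and scores the sampled child network.

```python
import torch
import torch.nn as nn

class ControllerRNN(nn.Module):
    """Policy that emits a sequence of architecture decisions token by token."""
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTMCell(hidden, hidden)
        self.head = nn.Linear(hidden, vocab_size)

    def sample(self, steps):
        h = torch.zeros(1, self.rnn.hidden_size)
        c = torch.zeros(1, self.rnn.hidden_size)
        tok = torch.zeros(1, dtype=torch.long)      # start token
        log_probs, tokens = [], []
        for _ in range(steps):
            h, c = self.rnn(self.embed(tok), (h, c))
            dist = torch.distributions.Categorical(logits=self.head(h))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens.append(tok.item())
        return tokens, torch.stack(log_probs).sum()

# REINFORCE update: reward is the sampled architecture's validation accuracy.
# tokens, log_prob = controller.sample(steps=10)
# reward = evaluate_architecture(tokens)          # hypothetical helper
# loss = -(reward - baseline) * log_prob
# loss.backward(); optimizer.step()
```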
Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search
TLDR: A novel differentiable neural architecture search (DNAS) framework is proposed to efficiently explore the exponential search space of mixed-precision assignments with gradient-based optimization, surpassing state-of-the-art compression of ResNet on CIFAR-10 and ImageNet.
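The differentiable relaxation can be pictured as a soft mixture over candidate bitwidths, sampled with Gumbel-softmax so the choice receives gradients. The sketch below is an assumption-laden illustration of that mechanism (uniform fake quantization, weight-only quantization, arbitrary bit choices), not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(x, bits):
    """Uniform symmetric fake quantization with a straight-through estimator."""
    scale = x.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
    q = torch.round(x / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
    return x + (q - x).detach()   # forward: quantized value; backward: identity

class MixedPrecisionConv(nn.Module):
    """Soft mixture over candidate weight bitwidths, relaxed with Gumbel-softmax."""
    def __init__(self, in_ch, out_ch, bit_choices=(2, 4, 8)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bit_choices = bit_choices
        self.theta = nn.Parameter(torch.zeros(len(bit_choices)))  # arch parameters

    def forward(self, x, tau=1.0):
        probs = F.gumbel_softmax(self.theta, tau=tau)  # relaxed one-hot over bitwidths
        w = sum(p * fake_quantize(self.conv.weight, b)
                for p, b in zip(probs, self.bit_choices))
        return F.conv2d(x, w, padding=1)
```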
Binarized Neural Architecture Search
TLDR: This work introduces channel sampling and operation-space reduction into differentiable NAS to significantly reduce the cost of searching, and achieves performance comparable to NAS on both the CIFAR and ImageNet datasets.
Efficient Neural Architecture Search via Parameter Sharing
TLDR: Efficient Neural Architecture Search is a fast and inexpensive approach for automatic model design that establishes a new state of the art among methods without post-training processing and delivers strong empirical performance using far fewer GPU-hours.
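The cost saving comes from weight sharing: every sampled child architecture reuses one shared copy of each candidate operation's parameters instead of being trained from scratch. A toy sketch of that sharing structure follows; the op set, layer count, and selection-by-name interface are assumptions for illustration, and the ENAS controller itself is omitted.

```python
import torch.nn as nn

class SharedOps(nn.Module):
    """One shared instance of each candidate op per layer; all sampled child
    architectures index into (and update) these same parameters."""
    def __init__(self, channels, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.ModuleDict({
                "conv3": nn.Conv2d(channels, channels, 3, padding=1),
                "conv5": nn.Conv2d(channels, channels, 5, padding=2),
                "identity": nn.Identity(),
            }) for _ in range(num_layers)
        ])

    def forward(self, x, arch):
        # arch is a list of op names, one per layer, e.g. sampled by a controller:
        # arch = ["conv3", "identity", "conv5", "conv3"]
        for layer, op_name in zip(self.layers, arch):
            x = layer[op_name](x)
        return x
```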