Fine-Tuning DARTS for Image Classification

@inproceedings{Tanveer2021FineTuningDF,
  title={Fine-Tuning DARTS for Image Classification},
  author={Muhammad Tanveer and Muhammad Umar Karim Khan and C. M. Kyung},
  booktitle={2020 25th International Conference on Pattern Recognition (ICPR)},
  year={2021},
  pages={4789-4796}
}
Neural Architecture Search (NAS) has gained traction due to its superior classification performance. Differentiable Architecture Search (DARTS) is a computationally light NAS method. To limit computational resources, DARTS makes numerous approximations, and these approximations result in inferior performance. We propose to fine-tune DARTS using fixed operations, as they are independent of these approximations. Our method offers a good trade-off between the number of parameters and classification accuracy…
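For intuition, here is a minimal sketch of the idea, assuming a PyTorch-style DARTS implementation (the class, the candidate operations, and the fix() helper are illustrative, not the authors' code): during search, each edge computes a softmax-weighted mixture of candidate operations; for fine-tuning, the mixture is replaced by the single highest-weighted operation, which no longer depends on the search-phase approximations.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax-weighted sum of candidate ops."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # One architecture parameter (alpha) per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def fix(self):
        """Discretize: keep only the highest-weighted op as a fixed operation.

        Fine-tuning then trains this fixed op directly, independent of the
        softmax approximation used during the search phase.
        """
        return self.ops[int(self.alpha.argmax())]

# Illustrative candidate set (a subset of a DARTS-like search space).
candidates = [nn.Conv2d(16, 16, 3, padding=1),
              nn.Conv2d(16, 16, 5, padding=2),
              nn.Identity()]
mixed = MixedOp(candidates)
fixed = mixed.fix()  # fixed operation used for fine-tuning
```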
A Novel Evolutionary Algorithm for Hierarchical Neural Architecture Search
TLDR
This work proposes a novel evolutionary algorithm for neural architecture search, applicable to global search spaces, that organizes the topology into multiple hierarchical modules; the design process exploits this representation to explore the search space.
Evaluating State-of-the-Art Classification Models Against Bayes Optimality
TLDR
This work shows that it is possible to compute the exact Bayes error of generative models learned using normalizing flows by computing it for their Gaussian base distributions, which can be done efficiently using Holmes-Diaconis-Ross integration.
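As background for the summary above (a standard identity, not a formula taken from the paper): the Bayes error rate of a K-class problem with priors \pi_k and class-conditional densities p_k(x) is the expected error of the classifier that always picks the class with the largest posterior.

```latex
% Bayes error rate; the Bayes-optimal classifier picks the class
% with the largest posterior probability at each input x.
\mathrm{BER}
  = 1 - \mathbb{E}_{x}\!\left[\max_{k} \Pr(Y = k \mid X = x)\right]
  = 1 - \int \max_{k}\, \pi_k\, p_k(x)\, \mathrm{d}x
```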
How to Simplify Search: Classification-wise Pareto Evolution for One-shot Neural Architecture Search
  • Lianbo Ma, Nan Li, Guo Yu, Xiao Geng, Min Huang, Xingwei Wang
  • Computer Science
  • ArXiv
  • 2021
TLDR
This study proposes a classification-wise Pareto evolution approach for one-shot NAS, where an online classifier is trained to predict the dominance relationship between the candidate and constructed reference architectures, instead of using surrogates to fit the objective functions.
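For reference, the Pareto-dominance relationship mentioned above can be stated in a few lines of Python (a generic sketch with an illustrative function name; the paper's classifier learns to predict this relation rather than compute it from known objective values):

```python
def dominates(a, b):
    """Return True if objective vector `a` Pareto-dominates `b`.

    Assumes minimization: `a` dominates `b` when it is no worse in
    every objective and strictly better in at least one.
    """
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# e.g. (error, latency): lower is better in both objectives
print(dominates((0.08, 12.0), (0.09, 15.0)))  # True
print(dominates((0.08, 16.0), (0.09, 15.0)))  # False (trade-off)
```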
Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences
TLDR
A new method is proposed that avoids predicting U and instead directly learns Y = f(X) by training f(X) with S_X to predict h(U), which is itself trained with S_Y to approximate Y.
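A schematic sketch of the two-step training implied by that summary, under the assumption (ours, for illustration; see the paper for the exact setup) that S_X contains (x, u) pairs and S_Y contains (u, y) pairs, with X and Y never observed jointly:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed data layout (see lead-in): S_Y pairs mediator U with target Y,
# S_X pairs input X with mediator U.  The data here is random noise,
# purely to make the sketch runnable.
rng = np.random.default_rng(0)
U_y, Y = rng.normal(size=(200, 3)), rng.normal(size=200)
X, U_x = rng.normal(size=(200, 5)), rng.normal(size=(200, 3))

# Step 1: train h on S_Y so that h(U) approximates Y.
h = LinearRegression().fit(U_y, Y)

# Step 2: train f on S_X to predict h(U) from X, so that f(X) tracks Y
# indirectly, without ever predicting U itself.
f = LinearRegression().fit(X, h.predict(U_x))

y_hat = f.predict(X)  # direct X -> Y predictions
```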
Shapley Explanation Networks
TLDR
This work proposes to incorporate Shapley values themselves as latent representations in deep models, thereby making Shapley explanations first-class citizens in the modeling paradigm, and demonstrates on synthetic and real-world datasets that these ShapNets enable layer-wise Shapley explanations, novel Shapley regularizations during training, and fast computation while maintaining reasonable performance.
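For reference, the classical Shapley value that these networks embed as latent representations can be computed exactly for small player (feature) sets; this is the generic game-theoretic definition, not the paper's network architecture:

```python
from itertools import combinations
from math import factorial

def shapley_value(n, v, i):
    """Exact Shapley value of player `i` under value function `v`.

    `v` maps a frozenset of players in {0, ..., n-1} to a real payoff.
    Cost is exponential in `n`, so this only suits small feature sets.
    """
    players = set(range(n)) - {i}
    phi = 0.0
    for size in range(n):
        for subset in combinations(players, size):
            s = frozenset(subset)
            # Classical Shapley weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            phi += weight * (v(s | {i}) - v(s))
    return phi

# Example: additive game v(S) = |S|; every player's value is 1.
print(shapley_value(3, lambda s: len(s), 0))  # 1.0
```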

References

Showing 1-10 of 36 references
Regularized Evolution for Image Classifier Architecture Search
TLDR
This work evolves an image classifier, AmoebaNet-A, that surpasses hand-designed architectures for the first time, and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.
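A minimal sketch of the aging-evolution loop that paper describes (the integer "architectures", mutation, and fitness functions below are illustrative stand-ins for real architecture encodings and validation accuracy):

```python
import random
from collections import deque

def regularized_evolution(init_pop, mutate, fitness, cycles, sample_size):
    """Aging evolution: tournament selection plus removal of the oldest.

    The population is a queue; each cycle mutates the best of a random
    sample and discards the oldest member, which regularizes the search
    toward architectures that retrain well.
    """
    population = deque(init_pop)
    history = list(init_pop)
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=fitness)
        child = mutate(parent)
        population.append(child)
        history.append(child)
        population.popleft()  # aging: remove the oldest, not the worst
    return max(history, key=fitness)

# Toy usage: "architectures" are integers, fitness peaks at 42.
best = regularized_evolution(
    init_pop=[random.randint(0, 100) for _ in range(20)],
    mutate=lambda a: a + random.choice([-3, -2, -1, 1, 2, 3]),
    fitness=lambda a: -abs(a - 42),
    cycles=200, sample_size=5)
```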
Learning Transferable Architectures for Scalable Image Recognition
TLDR
This paper proposes to search for an architectural building block on a small dataset and then transfer the block to a larger dataset, and introduces a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models.
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes, employing a recently developed regularization method called "dropout" that proved to be very effective.
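A minimal sketch of dropout in its modern "inverted" form (AlexNet's original formulation instead rescaled activations at test time, so this is the common equivalent rather than the paper's exact recipe):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=np.random.default_rng()):
    """Inverted dropout: zero each activation with probability p.

    Scaling the survivors by 1/(1-p) keeps the expected activation
    unchanged, so no rescaling is needed at test time.
    """
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```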
Improved Regularization of Convolutional Neural Networks with Cutout
TLDR
This paper shows that the simple regularization technique of randomly masking out square regions of the input during training, called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
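A minimal sketch of cutout for a single HxWxC image (parameter names are illustrative; the square's center may fall near a border, in which case the mask is clipped, matching the common implementation):

```python
import numpy as np

def cutout(image, size=16, rng=np.random.default_rng()):
    """Zero out one random size x size square of an HxWxC image."""
    h, w = image.shape[:2]
    cy, cx = rng.integers(h), rng.integers(w)
    y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = image.copy()
    out[y0:y1, x0:x1] = 0  # mask the square region
    return out
```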
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
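A minimal PyTorch-style sketch of the basic residual block (channel counts are assumed to match so the identity shortcut applies; this is the standard construction, simplified from the paper's full architecture):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x with an identity shortcut.

    Learning the residual F(x) = H(x) - x is easier to optimize than
    learning the full mapping H(x) directly, especially in deep nets.
    """
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        return torch.relu(self.body(x) + x)
```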
EDeN: Ensemble of Deep Networks for Vehicle Classification
TLDR
Experimental results show that an ensemble of networks gives better performance than individual networks and is robust to noise.
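The simplest form of such an ensemble is probability averaging (a generic sketch, assuming each model is a callable returning class probabilities; this is not necessarily the paper's exact combination rule):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the class-probability outputs of several networks.

    Averaging reduces variance, which is one reason an ensemble is
    more robust to noise than any individual network.
    """
    probs = np.mean([m(x) for m in models], axis=0)
    return probs.argmax(axis=-1)
```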
Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters
TLDR
The proposed amended estimation method bridges the gap in two ways, namely by amending the estimation of the architectural gradients and by unifying the hyper-parameter settings in the search and re-training stages; this enables DARTS-based approaches to explore much larger search spaces than have been investigated before.
DARTS+: Improved Differentiable Architecture Search with Early Stopping
TLDR
It is claimed that overfitting exists in the optimization of DARTS, and a simple and effective algorithm, named "DARTS+", is proposed to avoid the collapse and improve the original DARTS by "early stopping" the search procedure when a certain criterion is met.
Progressive Differentiable Architecture Search: Bridging the Depth Gap Between Search and Evaluation
TLDR
This paper presents an efficient algorithm that allows the depth of searched architectures to grow gradually during the training procedure, addressing two issues, namely heavier computational overheads and weaker search stability, using search space approximation and regularization.
DARTS: Differentiable Architecture Search
TLDR
The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
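For reference, the core continuous relaxation from the DARTS paper: each edge's operation is a softmax mixture over a candidate set O, and the architecture parameters alpha are optimized jointly with the network weights w in a bilevel problem.

```latex
% Continuous relaxation: mixed operation on edge (i, j)
\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}}
    \frac{\exp\bigl(\alpha_o^{(i,j)}\bigr)}
         {\sum_{o' \in \mathcal{O}} \exp\bigl(\alpha_{o'}^{(i,j)}\bigr)}\, o(x)

% Bilevel optimization over architecture \alpha and weights w
\min_{\alpha} \; \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\alpha), \alpha\bigr)
\quad \text{s.t.} \quad
w^{*}(\alpha) = \arg\min_{w} \; \mathcal{L}_{\mathrm{train}}(w, \alpha)
```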