L*ReLU: Piece-wise Linear Activation Functions for Deep Fine-grained Visual Categorization

@article{Basirat2020LReLUPL,
  title={L*ReLU: Piece-wise Linear Activation Functions for Deep Fine-grained Visual Categorization},
  author={Mina Basirat and Peter M. Roth},
  journal={2020 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2020},
  pages={1207-1216}
}
  • Mina Basirat, Peter M. Roth
  • Published 27 October 2019
  • Computer Science, Mathematics
  • 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
Deep neural networks have paved the way for significant improvements in visual categorization in recent years. However, even though the tasks vary widely in complexity and difficulty, existing solutions mostly build on the same architectural decisions. This also applies to the selection of activation functions (AFs), where most approaches build on Rectified Linear Units (ReLUs). In this paper, however, we show that the choice of a proper AF has a significant impact on…
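The paper's central idea is a piece-wise linear AF whose negative-part slope is matched to the task rather than fixed at zero as in ReLU. A minimal NumPy sketch of such a function, assuming a single hand-set slope hyperparameter (the value below is illustrative, not the paper's tuned choice):

import numpy as np

def piecewise_linear_relu(x, negative_slope=0.25):
    # Identity for positive inputs; a linear function with a
    # task-dependent slope for negative inputs (L*ReLU-style).
    return np.where(x >= 0, x, negative_slope * x)

# The slope controls how much of the negative signal is kept.
x = np.linspace(-2.0, 2.0, 5)
print(piecewise_linear_relu(x, negative_slope=0.1))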
2 Citations
A Comprehensive Survey and Performance Analysis of Activation Functions in Deep Learning
TLDR
A comprehensive overview and survey of AFs in deep neural networks is presented, covering different classes of AFs such as Logistic Sigmoid and Tanh based, ReLU based, ELU based, and learning based.
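For reference, the standard textbook forms of the AF classes named in that survey; these are minimal NumPy definitions, not code from the survey itself:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))             # logistic sigmoid, range (0, 1)

def tanh(x):
    return np.tanh(x)                           # zero-centred, range (-1, 1)

def relu(x):
    return np.maximum(0.0, x)                   # rectifier, max(0, x)

def elu(x, alpha=1.0):
    # Smooth exponential negative part, linear positive part.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

Learning-based AFs such as PReLU and PELU, which make the negative part trainable, appear among the references below.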
Deep learning in electron microscopy
  • Jeffrey M. Ede
  • Computer Science, Physics
    Mach. Learn. Sci. Technol.
  • 2021
TLDR
This review paper offers a practical perspective aimed at developers with limited familiarity with deep learning in electron microscopy, discussing the hardware and software needed to get started with deep learning and to interface with electron microscopes.

References

SHOWING 1-10 OF 59 REFERENCES
Part-Stacked CNN for Fine-Grained Visual Categorization
TLDR
A novel Part-Stacked CNN architecture is proposed that explicitly explains the fine-grained recognition process by modeling subtle differences between object parts, evaluated from the perspectives of classification accuracy, model interpretability, and efficiency.
Bilinear CNN Models for Fine-Grained Visual Recognition
We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor.
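A compact sketch of that pooling step, assuming two already-extracted local feature maps; the signed-square-root and L2 normalization are common practice in this line of work and an assumption here, not spelled out in the blurb above:

import numpy as np

def bilinear_pool(fa, fb):
    # fa, fb: local feature maps of shape (locations, channels).
    # Outer product at every location, sum-pooled into one descriptor.
    desc = np.einsum('lc,ld->cd', fa, fb).ravel()
    # Signed sqrt + L2 normalization (assumed post-processing).
    desc = np.sign(desc) * np.sqrt(np.abs(desc))
    return desc / (np.linalg.norm(desc) + 1e-12)

fa = np.random.randn(49, 64)   # e.g. a 7x7 spatial grid, 64 channels
fb = np.random.randn(49, 32)
print(bilinear_pool(fa, fb).shape)   # (2048,)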
Kernel Pooling for Convolutional Neural Networks
TLDR
This work demonstrates how to approximate kernels such as the Gaussian RBF up to a given order using compact explicit feature maps in a parameter-free manner and proposes a general pooling framework that captures higher-order interactions of features in the form of kernels.
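Such compact explicit feature maps are typically realized with the Tensor Sketch of Pham and Pagh: the count sketch of a p-fold tensor product can be computed as an elementwise product in the FFT domain. A minimal one-order sketch, with the sketch dimension and hash tables chosen arbitrarily (an assumption, not the cited work's exact configuration):

import numpy as np

def count_sketch(x, h, s, d):
    # Project x into d dims using hash indices h and random signs s.
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def tensor_sketch(x, hashes, signs, d):
    # Approximates the order-p feature map of x: multiply the FFTs
    # of p independent count sketches, then invert the FFT.
    prod = np.ones(d, dtype=complex)
    for h, s in zip(hashes, signs):
        prod *= np.fft.fft(count_sketch(x, h, s, d))
    return np.real(np.fft.ifft(prod))

rng = np.random.default_rng(0)
c, d, p = 64, 512, 2                  # input dim, sketch dim, order
hashes = [rng.integers(0, d, size=c) for _ in range(p)]
signs = [rng.choice([-1.0, 1.0], size=c) for _ in range(p)]
print(tensor_sketch(rng.standard_normal(c), hashes, signs, d).shape)  # (512,)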
Higher-Order Integration of Hierarchical Convolutional Activations for Fine-Grained Visual Categorization
TLDR
This work proposes an end-to-end framework based on higher-order integration of hierarchical convolutional activations for FGVC that yields more discriminative representations and achieves competitive results on the widely used FGVC datasets.
Parametric Exponential Linear Unit for Deep Convolutional Neural Networks
TLDR
Results on the MNIST, CIFAR-10/100, and ImageNet datasets using the NiN, Overfeat, All-CNN, and ResNet networks indicate that the proposed Parametric ELU (PELU) outperforms the non-parametric ELU.
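The PELU form is short enough to state directly; a and b are learned per layer in the cited paper, but appear as plain arguments in this illustrative sketch:

import numpy as np

def pelu(x, a=1.0, b=1.0):
    # a scales the negative saturation, b the exponential decay; the
    # positive slope a/b keeps the function differentiable at zero.
    return np.where(x >= 0, (a / b) * x, a * (np.exp(x / b) - 1.0))

x = np.linspace(-3.0, 3.0, 7)
print(pelu(x, a=2.0, b=1.5))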
Neural Activation Constellations: Unsupervised Part Model Discovery with Convolutional Networks
TLDR
An approach is presented that learns part models in a completely unsupervised manner, without part annotations and even without bounding boxes during learning, by finding constellations of neural activation patterns computed using convolutional neural networks.
Fisher Vectors for Fine-Grained Visual Categorization
TLDR
It is shown that the Fisher Vector (FV), which describes an image by its deviation from an "average" model, is an excellent alternative to the bag-of-visual-words (BOV) for the FGVC problem.
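A simplified Fisher Vector sketch, keeping only the gradient with respect to the GMM means and omitting the weight and variance terms as well as the usual power and L2 normalization:

import numpy as np

def fisher_vector_means(X, weights, means, sigmas):
    # X: local descriptors (n, d); weights (k,), means (k, d),
    # sigmas (k, d): a diagonal-covariance GMM ("average" model).
    n, d = X.shape
    k = len(weights)
    # Posterior (soft assignment) of each descriptor to each component.
    log_p = np.stack([
        np.log(weights[j]) - 0.5 * np.sum(
            ((X - means[j]) / sigmas[j]) ** 2
            + np.log(2 * np.pi * sigmas[j] ** 2), axis=1)
        for j in range(k)
    ], axis=1)
    post = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # Normalised gradient of the log-likelihood w.r.t. each mean.
    fv = [
        (post[:, j, None] * (X - means[j]) / sigmas[j]).sum(axis=0)
        / (n * np.sqrt(weights[j]))
        for j in range(k)
    ]
    return np.concatenate(fv)          # shape (k * d,)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
fv = fisher_vector_means(X, np.full(4, 0.25),
                         rng.standard_normal((4, 8)), np.ones((4, 8)))
print(fv.shape)   # (32,)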
Empirical Evaluation of Rectified Activations in Convolutional Network
TLDR
The experiments suggest that incorporating a non-zero slope for the negative part of rectified activation units consistently improves results, and they cast doubt on the common belief that sparsity is the key to good performance in ReLU.
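The two variants that this evaluation compares against plain ReLU, a fixed-slope Leaky ReLU and the randomized RReLU, reduce to a few lines; the slope bounds below follow commonly reported values and should be treated as assumptions:

import numpy as np

def leaky_relu(x, slope=0.01):
    # Fixed non-zero slope on the negative part.
    return np.where(x >= 0, x, slope * x)

def rrelu_train(x, lower=1/8, upper=1/3, rng=np.random.default_rng()):
    # RReLU: at training time the negative slope is sampled uniformly
    # per activation; at test time the fixed mean (lower + upper) / 2
    # is used instead.
    slope = rng.uniform(lower, upper, size=np.shape(x))
    return np.where(x >= 0, x, slope * x)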
Stacked Semantics-Guided Attention Model for Fine-Grained Zero-Shot Learning
TLDR
A novel stacked semantics-guided attention (S2GA) model is proposed that obtains semantically relevant features by using individual class semantic features to progressively guide the visual features, generating an attention map that weights the importance of different local regions.
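A single guided-attention step in this spirit might look as follows; the bilinear scoring matrix and the one-step form are assumptions, since the cited model stacks several such layers:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def semantic_attention(regions, class_semantics, W):
    # Score each local region against a class-semantic feature,
    # then reweight the regions by the resulting attention map.
    scores = regions @ W @ class_semantics   # relevance per region
    attn = softmax(scores)                   # attention map over regions
    return attn @ regions                    # attended visual feature

regions = np.random.randn(49, 512)           # local region features
semantics = np.random.randn(300)             # class semantic vector
W = np.random.randn(512, 300) * 0.01         # hypothetical scoring matrix
print(semantic_attention(regions, semantics, W).shape)   # (512,)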
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
TLDR
This work proposes a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit and derives a robust initialization method that particularly considers the rectifier nonlinearities.
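Both contributions fit in a few lines; per-channel slopes and the derivation details are omitted, so this is an illustrative sketch rather than the paper's exact formulation:

import numpy as np

def prelu(x, a):
    # Like Leaky ReLU, but the negative slope a is a learned
    # parameter (per channel in the cited paper).
    return np.where(x >= 0, x, a * x)

def he_init(fan_in, fan_out, rng=np.random.default_rng()):
    # "He" initialization: Gaussian weights with std sqrt(2 / fan_in),
    # which preserves activation variance through rectifier layers.
    return rng.standard_normal((fan_out, fan_in)) * np.sqrt(2.0 / fan_in)

W = he_init(256, 128)
x = np.random.randn(256)
print(prelu(W @ x, a=0.25).shape)   # (128,)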