NL-CNN: A Resources-Constrained Deep Learning Model based on Nonlinear Convolution

@article{Dogaru2021NLCNNAR,
  title={NL-CNN: A Resources-Constrained Deep Learning Model based on Nonlinear Convolution},
  author={Radu Dogaru and Ioana Dogaru},
  journal={2021 12th International Symposium on Advanced Topics in Electrical Engineering (ATEE)},
  year={2021},
  pages={1-4}
}
  • Published 30 January 2021
  • Computer Science
A novel convolution neural network model, abbreviated NL-CNN, is proposed, in which nonlinear convolution is emulated by a cascade of convolution + nonlinearity layers. The code for its implementation and some trained models are made publicly available. Performance evaluation on several widely known datasets is provided, showing several relevant features: i) for small / medium input image sizes the proposed network gives very good testing accuracy, given a low implementation complexity and model… 
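The paper's central idea, a cascade of convolution + nonlinearity stages emulating one nonlinear convolution, can be illustrated with a minimal NumPy sketch. The stage count, the `tanh` nonlinearity, the averaging kernels, and the 2x2 max-pooling stage below are illustrative assumptions, not the exact NL-CNN configuration from the paper.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def nl_macro_layer(x, kernels, nl=np.tanh):
    """One NL-CNN-style macro-layer: several conv + nonlinearity stages in
    cascade (emulating a nonlinear convolution), then 2x2 max-pooling.
    The nonlinearity and pooling choices here are assumptions."""
    for k in kernels:
        x = nl(conv2d(x, k))
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]  # crop to even size for pooling
    h2, w2 = x.shape
    return x.reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))

# A 6x6 input through two cascaded 3x3 stages: 6 -> 4 -> 2, pooled to 1x1.
img = np.arange(36, dtype=float).reshape(6, 6) / 36.0
out = nl_macro_layer(img, [np.ones((3, 3)) / 9.0] * 2)
```

Chaining several such macro-layers with increasing channel counts, then a small classifier on top, gives the general shape of the architecture the abstract describes.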


Fast Training of Light Binary Convolutional Neural Networks using Chainer and Cupy

  • R. Dogaru, I. Dogaru
  • Computer Science
    2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI)
  • 2020
TLDR
This paper shows that a significant speedup of more than 60 times can be achieved by employing the Chainer environment instead of the more traditional Keras/Tensorflow, showing a very good compromise between accuracy and complexity.

References

Showing 1-10 of 21 references

RD-CNN: A Compact and Efficient Convolutional Neural Net for Sound Classification

  • R. Dogaru, I. Dogaru
  • Computer Science
    2020 International Symposium on Electronics and Telecommunications (ISETC)
  • 2020
TLDR
Preliminary results for a novel recognition system in which a compact and fast transform, the reaction-diffusion transform (RDT), is used to generate spectral images that are then processed by a novel type of compact convolutional neural network (called NL-CNN) in which nonlinear convolution is emulated.

BCONV - ELM: Binary Weights Convolutional Neural Network Simulator based on Keras/Tensorflow, for Low Complexity Implementations

  • R. Dogaru, I. Dogaru
  • Computer Science
    2019 6th International Symposium on Electrical and Electronics Engineering (ISEEE)
  • 2019
TLDR
Performance evaluation on several benchmark datasets shows that, with proper tuning of the convolutional layer structure, good accuracies are achieved without additional dense hidden layers, thus minimizing the required computational resources.

Non-linear Convolution Filters for CNN-Based Learning

TLDR
This work addresses the issue of developing a convolution method in the context of a computational model of the visual cortex, exploring quadratic forms through the Volterra kernels, and shows that a network which combines linear and non-linear filters in its convolutional layers, can outperform networks that use standard linear filters with the same architecture.
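The quadratic forms via Volterra kernels that this reference explores can be sketched in a few lines: a second-order Volterra filter adds a quadratic term p^T W2 p to the usual linear response over each patch p. The patch size and the particular weights below are illustrative assumptions.

```python
import numpy as np

def volterra2_filter(x, w1, w2, k):
    """Second-order Volterra filter over k x k patches: each output is the
    linear term w1 . p plus the quadratic form p^T W2 p, where p is the
    flattened patch. With W2 = 0 this reduces to ordinary linear convolution."""
    h, w = x.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            p = x[i:i + k, j:j + k].ravel()
            out[i, j] = w1 @ p + p @ w2 @ p
    return out

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
w1 = np.ones(4)   # linear part: sums the patch
w2 = np.eye(4)    # quadratic part: sums the squared patch values
y = volterra2_filter(x, w1, w2, k=2)
# linear term 1+2+3+4 = 10, quadratic term 1+4+9+16 = 30, so y[0, 0] = 40
```

In NL-CNN the same effect is approximated more cheaply by cascading linear convolutions with pointwise nonlinearities rather than learning a dense W2 per filter.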

Effnet: An Efficient Structure for Convolutional Neural Networks

TLDR
EffNet is a novel convolution block which significantly reduces the computational burden while surpassing the current state-of-the-art and is created to tackle issues in existing models such as MobileNet and ShuffleNet.

A Survey of Handwritten Character Recognition with MNIST and EMNIST

TLDR
This paper summarizes the top state-of-the-art contributions reported on the MNIST dataset for handwritten digit recognition, and makes a distinction between works using some kind of data augmentation and works using the original dataset out of the box.

MobiExpressNet: A Deep Learning Network for Face Expression Recognition on Smart Phones

  • S. Cotter
  • Computer Science
    2020 IEEE International Conference on Consumer Electronics (ICCE)
  • 2020
TLDR
A new lightweight Deep Learning model, MobiExpressNet, is introduced for FER which relies on depthwise separable convolutions to limit the complexity, and adopts a fast downsampling approach together with few layers in the architecture to keep the model size very small.
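The complexity saving from the depthwise separable convolutions that MobiExpressNet relies on is easy to quantify with a parameter count; the 3x3 kernel and 128-channel example below are illustrative choices (biases ignored).

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution layer (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depthwise k x k convolution plus a pointwise 1x1
    convolution, the factorization used by depthwise separable layers."""
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 128, 128)        # 147456
sep = depthwise_separable_params(3, 128, 128)  # 17536
# the separable factorization needs roughly 8.4x fewer weights here
```

The same factorization underlies the MobileNet family discussed in the next reference.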

MobileNetV2: Inverted Residuals and Linear Bottlenecks

TLDR
A new mobile architecture, MobileNetV2, is described that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes, and allows decoupling of the input/output domains from the expressiveness of the transformation.

Facial Expression Recognition using Convolutional Neural Networks: State of the Art

TLDR
This paper reviews the state of the art in image-based facial expression recognition using CNNs and highlights algorithmic differences and their performance impact and demonstrates that overcoming one of these bottlenecks - the comparatively basic architectures of the CNNs utilized in this field - leads to a substantial performance increase.

Reading Digits in Natural Images with Unsupervised Feature Learning

TLDR
A new benchmark dataset for research use is introduced containing over 600,000 labeled digits cropped from Street View images, and variants of two recently proposed unsupervised feature learning methods are employed, finding that they are convincingly superior on benchmarks.

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

TLDR
Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits.