Self-Organized Operational Neural Networks with Generative Neurons

@article{Kiranyaz2020SelfOrganizedON,
  title={Self-Organized Operational Neural Networks with Generative Neurons},
  author={Serkan Kiranyaz and Junaid Malik and Habib Ben Abdallah and Turker Ince and Alexandros Iosifidis and M. Gabbouj},
  journal={Neural Networks: the official journal of the International Neural Network Society},
  year={2020},
  volume={140},
  pages={294-308}
}
  • S. Kiranyaz, Junaid Malik, M. Gabbouj
  • Published 24 April 2020
  • Computer Science
  • Neural Networks: the official journal of the International Neural Network Society

Super Neurons

Self-Organized ONNs (Self-ONNs) composed of generative neurons can achieve an utmost level of diversity even with a compact configuration, but they still suffer from the last property inherited from CNNs: localized kernel operations, which impose a severe limitation on the information flow between layers.
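
At the core of this diversity is the generative neuron, which replaces the fixed multiply-and-accumulate of a convolutional kernel with a nodal function approximated by a truncated Maclaurin series: the layer convolves the first Q element-wise powers of its (bounded) input with separate learnable kernels. A minimal PyTorch sketch of that idea, with illustrative names and defaults rather than the authors' implementation:

```python
# Minimal sketch of a generative-neuron (Self-ONN) layer: concatenate the
# element-wise powers x, x^2, ..., x^Q along the channel axis and convolve
# them jointly, which is equivalent to learning one kernel per power.
import torch
import torch.nn as nn

class SelfONNConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, q_order=3, padding=0):
        super().__init__()
        self.q_order = q_order
        self.conv = nn.Conv2d(in_ch * q_order, out_ch, kernel_size, padding=padding)

    def forward(self, x):
        powers = torch.cat([x ** q for q in range(1, self.q_order + 1)], dim=1)
        return self.conv(powers)

layer = SelfONNConv2d(1, 8, kernel_size=3, q_order=3, padding=1)
y = torch.tanh(layer(torch.randn(4, 1, 32, 32)))  # tanh keeps the powers bounded
print(y.shape)  # torch.Size([4, 8, 32, 32])
```

With q_order=1 the layer collapses to an ordinary convolution, which is why a Self-ONN is a strict superset of the corresponding CNN.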

Convolutional versus Self-Organized Operational Neural Networks for Real-World Blind Image Denoising

Extensive quantitative and qualitative evaluations spanning multiple metrics and four high-resolution real-world noisy image datasets against the state-of-the-art deep CNN, DnCNN, reveal that deep Self-ONNs consistently achieve superior results, with performance gains of up to 1.76 dB in PSNR.
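
For context, the PSNR figures quoted above follow the standard definition, 10·log10(MAX²/MSE); a minimal sketch for images scaled to [0, 1]:

```python
# Peak signal-to-noise ratio; pass max_val=255.0 for 8-bit image arrays.
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    diff = np.asarray(reference, np.float64) - np.asarray(estimate, np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```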

BM3D vs. 2-Layer ONN

This study aims to find out whether compact neural networks can learn to produce results competitive with BM3D for AWGN image denoising, and shows that the recently proposed self-organized variant of operational neural networks based on a generative neuron model (Self-ONNs) is not only a better choice than CNNs but also provides results competitive with BM3D.
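
AWGN denoising means recovering a clean image from a copy corrupted by additive white Gaussian noise; a minimal sketch of that degradation model, with sigma=25/255 as an assumed (though common) noise level rather than a setting taken from the study:

```python
# Corrupt a clean image in [0, 1] with zero-mean white Gaussian noise.
import numpy as np

def add_awgn(clean, sigma=25 / 255.0, seed=0):
    rng = np.random.default_rng(seed)
    noisy = clean + rng.normal(0.0, sigma, size=clean.shape)
    return np.clip(noisy, 0.0, 1.0)
```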

Image denoising by Super Neurons: Why go deep?

This study focuses on Self-Organized Operational Neural Networks empowered by a novel neuron model that can achieve similar or better denoising performance with a compact, shallow model, and it proposes a trade-off between the heterogeneity of non-localized operations and computational complexity.
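
The non-localized operations referred to here are the defining trait of super neurons: each kernel reads from a spatially shifted neighborhood instead of the pixel-aligned one. A deliberately simplified sketch using fixed random integer shifts (the paper learns the shifts, which this sketch does not attempt):

```python
# Per-kernel spatial shifts emulated with torch.roll before an ordinary
# convolution; one shifted single-channel conv per output neuron.
import torch
import torch.nn as nn

class ShiftedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, max_shift=5, padding=1):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, 1, kernel_size, padding=padding) for _ in range(out_ch)
        )
        # One fixed (dy, dx) shift per output neuron, drawn at construction.
        self.shifts = torch.randint(-max_shift, max_shift + 1, (out_ch, 2)).tolist()

    def forward(self, x):
        outs = [conv(torch.roll(x, shifts=(dy, dx), dims=(2, 3)))
                for conv, (dy, dx) in zip(self.convs, self.shifts)]
        return torch.cat(outs, dim=1)

print(ShiftedConv2d(1, 4, 3)(torch.randn(2, 1, 32, 32)).shape)  # [2, 4, 32, 32]
```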

Self-Organized Variational Autoencoders (Self-VAE) for Learned Image Compression

This paper proposes to replace the convolutional and GDN layers in the variational autoencoder with self-organized operational layers, yielding a novel Self-VAE architecture that benefits from stronger non-linearity.
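
A sketch of that layer swap under stated assumptions: the encoder stage below replaces each Conv2d+GDN pair with a generative-neuron layer (re-defined minimally here) followed by a plain tanh; the block layout and tanh as the GDN stand-in are illustrative, not the paper's exact architecture:

```python
# Down-sampling encoder stage built from self-organized operational layers.
import torch
import torch.nn as nn

class SelfONN2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, q_order=3, stride=2, padding=1):
        super().__init__()
        self.q_order = q_order
        self.conv = nn.Conv2d(in_ch * q_order, out_ch, k, stride=stride, padding=padding)

    def forward(self, x):
        return self.conv(torch.cat([x ** q for q in range(1, self.q_order + 1)], dim=1))

encoder = nn.Sequential(
    SelfONN2d(3, 64), nn.Tanh(),    # was: Conv2d + GDN
    SelfONN2d(64, 128), nn.Tanh(),  # was: Conv2d + GDN
)
print(encoder(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 128, 16, 16])
```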

Robust Peak Detection for Holter ECGs by Self-Organized Operational Neural Networks

The experimental results over the China Physiological Signal Challenge-2020 dataset show that the proposed 1-D Self-Organized ONNs (Self-ONNs) can significantly surpass the state-of-the-art deep CNN with less computational complexity.
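
The building block of such models is the 1-D analogue of the generative-neuron layer; a minimal sketch (the windowing and R-peak-picking logic of the challenge pipeline is not reproduced):

```python
# 1-D generative-neuron layer: powers along the channel axis, then Conv1d.
import torch
import torch.nn as nn

class SelfONNConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, q_order=3, padding=0):
        super().__init__()
        self.q_order = q_order
        self.conv = nn.Conv1d(in_ch * q_order, out_ch, kernel_size, padding=padding)

    def forward(self, x):
        powers = torch.cat([x ** q for q in range(1, self.q_order + 1)], dim=1)
        return self.conv(powers)

beats = torch.randn(8, 1, 360)  # e.g. one-second ECG windows at 360 Hz
feats = torch.tanh(SelfONNConv1d(1, 16, 9, padding=4)(beats))
print(feats.shape)  # torch.Size([8, 16, 360])
```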

Real-Time Patient-Specific ECG Classification by 1D Self-Operational Neural Networks

The results over the MIT-BIH arrhythmia benchmark database demonstrate that 1D Self-ONNs can surpass 1D CNNs by a significant margin while having similar computational complexity.

References


Operational neural networks

This study proposes a novel network model, called operational neural networks (ONNs), which can be heterogeneous and encapsulate neurons with any set of operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data.
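
The operational neuron generalizes convolution's fixed multiply-and-sum into a chosen nodal operator and a chosen pool operator. A toy fully-connected sketch with an illustrative two-entry operator library:

```python
# y = activation(pool(nodal(w, x))); "mul" + "sum" recovers the perceptron.
import torch

def operational_neuron(x, w, nodal="mul", pool="sum"):
    nodal_ops = {"mul": lambda w, x: w * x,
                 "sin": lambda w, x: torch.sin(w * x)}
    pool_ops = {"sum": lambda t: t.sum(dim=-1),
                "median": lambda t: t.median(dim=-1).values}
    return torch.tanh(pool_ops[pool](nodal_ops[nodal](w, x)))

x, w = torch.randn(5), torch.randn(5)
print(operational_neuron(x, w))                              # classical neuron
print(operational_neuron(x, w, nodal="sin", pool="median"))  # one alternative pairing
```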

Generalized model of biological neural networks: Progressive operational perceptrons

This work introduces Generalized Operational Perceptrons (GOPs), a novel feed-forward ANN model consisting of neurons with distinct (non-)linear operators, to achieve a generalized model of biological neurons and ultimately a superior diversity.

Progressive Operational Perceptron with Memory

This work proposes major modifications that accelerate and augment the progressive learning procedure of POP by incorporating an information-preserving linear projection path from the input to the output layer at each progressive step.
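
A minimal sketch of that information-preserving path, assuming a plain linear projection summed with the nonlinear block's output (dimensions are placeholders):

```python
# Nonlinear block plus a linear projection carrying the input to the output.
import torch
import torch.nn as nn

class MemoryBlock(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.nonlinear = nn.Sequential(nn.Linear(d_in, d_out), nn.Tanh())
        self.project = nn.Linear(d_in, d_out, bias=False)  # memory path

    def forward(self, x):
        return self.nonlinear(x) + self.project(x)

print(MemoryBlock(16, 8)(torch.randn(4, 16)).shape)  # torch.Size([4, 8])
```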

Greedy Layer-Wise Training of Deep Networks

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
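
A compact sketch of the strategy: each layer is trained as a small autoencoder on the frozen features of the layers below, then the stack is ready for supervised fine-tuning (sizes and epoch counts are placeholders):

```python
# Greedy layer-wise unsupervised pretraining with per-layer autoencoders.
import torch
import torch.nn as nn

def pretrain_stack(dims, data, epochs=5):
    layers, feats = [], data
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
        for _ in range(epochs):
            loss = ((dec(torch.tanh(enc(feats))) - feats) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        feats = torch.tanh(enc(feats)).detach()  # freeze below, move up a layer
        layers += [enc, nn.Tanh()]
    return nn.Sequential(*layers)

net = pretrain_stack([32, 16, 8], torch.randn(256, 32))
```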

Non-local Color Image Denoising with Convolutional Neural Networks

  • Stamatios Lefkimmiatis
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
This work proposes a novel deep network architecture for grayscale and color image denoising that is based on a non-local image model and highlights a direct link of the proposed non-local models to convolutional neural networks.

Heterogeneous Multilayer Generalized Operational Perceptron

An efficient algorithm to learn a compact, fully heterogeneous multilayer network that allows each individual neuron, regardless of the layer, to have distinct characteristics is proposed.

U-Net: Convolutional Networks for Biomedical Image Segmentation

It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.

Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion

This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
...