Corpus ID: 232335696

W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality

@article{Ponzio2021W2WNetAT,
  title={W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality},
  author={Francesco Ponzio and Enrico Macii and Elisa Ficarra and Santa Di Cataldo},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.13107}
}
Convolutional Neural Networks (CNNs) are supposed to be fed with only high-quality annotated datasets. Nonetheless, in many real-world scenarios such high quality is very hard to obtain, and datasets may be affected by all sorts of image degradation and mislabelling issues. This negatively impacts the performance of standard CNNs, both during training and inference. To address this issue we propose Wise2WipedNet (W2WNet), a new two-module Convolutional Neural Network, where a Wise…
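The abstract is truncated, so the exact W2WNet design is not reproduced here; the following is a minimal, purely illustrative sketch of one way a two-module pipeline of this kind could be wired: a probabilistic first module scores each training image with a predictive-uncertainty estimate (here via Monte Carlo dropout), and a standard second module is trained only on the samples that pass the uncertainty gate. All names and the 0.5 threshold are hypothetical.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_entropy(model, x, passes=20):
    """Predictive entropy from `passes` stochastic forward passes."""
    model.train()  # keep dropout layers active at inference time
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(passes)])
    mean_p = probs.mean(dim=0)                          # (batch, classes)
    return -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)

def clean_dataset(scoring_model, loader, threshold=0.5):
    """Module 1: keep only samples whose predictive entropy is below `threshold`."""
    kept = []
    for x, y in loader:
        entropy = mc_dropout_entropy(scoring_model, x)
        mask = entropy < threshold
        kept.append((x[mask], y[mask]))
    return kept  # Module 2: train a standard classifier on these batches
```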
1 Citation


Exploiting generative self-supervised learning for the assessment of biological images with lack of annotations: a COVID-19 case-study
TLDR: GAN-DL, a Discriminator Learner based on the StyleGAN2 architecture, is presented for self-supervised image representation learning on fluorescent biological images; Wasserstein Generative Adversarial Networks combined with linear Support Vector Machines are shown to enable high-throughput compound screening from raw images.
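A minimal sketch of the "frozen self-supervised features plus linear SVM" idea this summary describes: embeddings are extracted with a pretrained encoder and a linear SVM is fitted on top. The `encoder` and data arrays are placeholders; the GAN training itself is out of scope here.

```python
import torch
from sklearn.svm import LinearSVC

@torch.no_grad()
def embed(encoder, images):
    encoder.eval()
    return encoder(images).cpu().numpy()  # (n_samples, feature_dim)

def screen_compounds(encoder, train_imgs, train_labels, test_imgs):
    clf = LinearSVC(C=1.0)
    clf.fit(embed(encoder, train_imgs), train_labels)
    return clf.predict(embed(encoder, test_imgs))
```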

References

Showing 1-10 of 35 references
Dealing with Lack of Training Data for Convolutional Neural Networks: The Case of Digital Pathology
TLDR: The findings demonstrate that transfer learning enables the automated classification of histological samples and solves the problem of designing accurate, computationally efficient CAD systems with limited training data.
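A short sketch of the transfer-learning recipe the summary refers to, under the usual assumptions: reuse an ImageNet-pretrained backbone, freeze its weights, and retrain only a small head on the limited target data. Layer names follow torchvision's ResNet; the exact setup of the cited paper may differ.

```python
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes):
    backbone = models.resnet50(weights="IMAGENET1K_V2")
    for p in backbone.parameters():
        p.requires_grad = False          # freeze pretrained features
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone                      # only `fc` receives gradients
```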
Effects of Degradations on Deep Neural Network Architectures
TLDR: A first study of the performance of CapsuleNet (CapsNet) and other state-of-the-art CNN architectures under different types of image degradation, together with a proposed network setup that can enhance the robustness of any CNN architecture against certain degradations.
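The following is a minimal sketch of a degradation-robustness probe in the spirit of that study: apply controlled corruptions of increasing severity and record the accuracy drop. The corruption set (Gaussian blur, additive noise) and severity scaling are illustrative, not the paper's exact protocol.

```python
import torch
import torchvision.transforms.functional as TF

def degrade(x, kind, severity):
    if kind == "blur":
        k = 2 * severity + 1                      # odd kernel size
        return TF.gaussian_blur(x, kernel_size=k)
    if kind == "noise":
        return (x + 0.05 * severity * torch.randn_like(x)).clamp(0, 1)
    raise ValueError(kind)

@torch.no_grad()
def accuracy_under(model, loader, kind, severity):
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(degrade(x, kind, severity)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```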
Self-Error-Correcting Convolutional Neural Network for Learning with Noisy Labels
TLDR: A self-error-correcting CNN (SECCNN) is proposed to deal with the noisy-label problem by simultaneously correcting improbable labels and optimizing the deep model.
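One simple form the self-correction idea can take, sketched here as an assumption rather than the paper's exact rule: during training, replace a given label with the model's own prediction when the model is far more confident in a different class. The 0.9 threshold is an illustrative hyperparameter.

```python
import torch
import torch.nn.functional as F

def corrected_labels(logits, labels, confidence=0.9):
    probs = F.softmax(logits.detach(), dim=1)
    conf, pred = probs.max(dim=1)
    flip = (conf > confidence) & (pred != labels)
    return torch.where(flip, pred, labels)
```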
Learning from massive noisy labeled data for image classification
TLDR: A general framework is introduced to train CNNs with only a limited number of clean labels and millions of easily obtained noisy labels; the relationships between images, class labels and label noise are modelled with a probabilistic graphical model that is integrated into an end-to-end deep learning system.
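One common way to fold a label-noise model into a network end-to-end, sketched below in spirit only (the cited paper's graphical model is richer): append a learnable class-transition matrix T, so the loss is computed on noisy-label probabilities p(noisy) = p(clean) T, where T[i, j] approximates p(noisy = j | clean = i).

```python
import torch
import torch.nn as nn

class NoiseAdaptation(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # initialise near the identity: labels are assumed mostly correct
        self.T = nn.Parameter(torch.eye(num_classes) * 5.0)

    def forward(self, clean_probs):
        T = self.T.softmax(dim=1)         # rows are valid distributions
        return clean_probs @ T            # probabilities over noisy labels
```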
Exploiting “Uncertain” Deep Networks for Data Cleaning in Digital Pathology
TLDR: This study exploits the inherent capability of Bayesian Convolutional Neural Networks to automate the data-cleaning phase of histopathological image assessment, boosting the accuracy of downstream classification by at least 15%.
Deep learning with noisy labels: exploring techniques and remedies in medical image analysis
TLDR: The state of the art in handling label noise in deep learning is reviewed, with recommendations on methods that can alleviate the effects of different types of label noise on deep models trained for medical image analysis.
ImageNet classification with deep convolutional neural networks
TLDR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes, employing a recently developed regularization method called "dropout" that proved very effective.
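The "dropout" regularizer mentioned above, in its usual PyTorch form: units are randomly zeroed during training only, discouraging co-adaptation of features. The layer sizes below loosely echo the AlexNet-style classifier head and are illustrative.

```python
import torch.nn as nn

classifier_head = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),      # active in .train() mode, identity in .eval()
    nn.Linear(4096, 1000),  # 1000 ImageNet classes
)
```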
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
TLDR: The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks, thereby reliably quantifying classifier robustness; it outperforms recent methods at computing adversarial perturbations and at making classifiers more robust.
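A minimal sketch of the DeepFool step for the binary case, assuming a differentiable scalar score f(x) for a single example (positive sign = class 1): the smallest L2 perturbation to the linearised decision boundary is r = -f(x) grad / ||grad||^2, applied iteratively with a small overshoot.

```python
import torch

def deepfool_binary(f, x, max_iter=50, overshoot=0.02):
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(max_iter):
        score = f(x_adv)
        if score.sign() != f(x).sign():   # decision already flipped
            break
        grad, = torch.autograd.grad(score, x_adv)
        r = -score.detach() * grad / grad.norm() ** 2
        x_adv = (x_adv + (1 + overshoot) * r).detach().requires_grad_(True)
    return x_adv.detach()
```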
Xception: Deep Learning with Depthwise Separable Convolutions (François Chollet, 2017 IEEE Conference on Computer Vision and Pattern Recognition)
TLDR: This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules are replaced with depthwise separable convolutions; the resulting architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset and significantly outperforms it on a larger image-classification dataset.
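The core Xception building block, sketched in PyTorch: a depthwise separable convolution, i.e. a per-channel spatial convolution followed by a 1x1 (pointwise) convolution that mixes channels. This is the standard construction; Xception additionally interleaves it with residual connections and normalization, omitted here.

```python
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```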
A Comprehensive guide to Bayesian Convolutional Neural Network with Variational Inference
TLDR: This paper predicts how certain a model's prediction is based on its epistemic and aleatoric uncertainties, and empirically shows how the uncertainty decreases, allowing the network's decisions to become more deterministic as training accuracy increases.
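A sketch of the uncertainty decomposition the summary refers to, assuming T stochastic forward passes through a network with dropout or variational layers: total predictive entropy splits into expected per-pass entropy (aleatoric) plus the mutual-information remainder (epistemic). This is the standard MC-sampling decomposition, not necessarily the cited paper's exact estimator.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def uncertainty_decomposition(model, x, passes=30):
    model.train()  # keep stochastic (dropout/variational) layers sampling
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(passes)])
    mean_p = probs.mean(dim=0)                                      # (B, C)
    total = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=2).mean(dim=0)
    epistemic = total - aleatoric          # mutual information
    return total, aleatoric, epistemic
```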