• Corpus ID: 236965703

The information of attribute uncertainties: what convolutional neural networks can learn about errors in input data

Natália V. N. Rodrigues, L. Raul Abramo, Nina Sumiko Tomita Hirata
Errors in measurements are key to weighting the value of data, but are often neglected in Machine Learning (ML). We show how Convolutional Neural Networks (CNNs) are able to learn about the context and patterns of signal and noise, leading to improvements in the performance of classification methods. We construct a model whereby two classes of objects follow an underlying Gaussian distribution, and where the features (the input data) have varying, but known, levels of noise. This model mimics… 
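As a hedged illustration of the kind of toy model described (the sample size, feature length, and noise ranges below are arbitrary assumptions, not the paper's actual configuration), one can draw two classes from underlying Gaussians, add per-feature noise of known amplitude, and stack those uncertainties as a second input channel so a CNN can learn about signal and noise jointly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16  # hypothetical sample count and feature length

# Two classes drawn from underlying Gaussians with different means.
y = rng.integers(0, 2, size=n)
signal = rng.normal(loc=np.where(y[:, None] == 0, -0.5, 0.5), size=(n, d))

# Each feature gets its own, known, noise level; the observed data are
# signal plus noise, and the noise amplitudes are known to the classifier.
sigma = rng.uniform(0.1, 2.0, size=(n, d))
observed = signal + rng.normal(scale=sigma)

# Stack observations and their uncertainties as two input channels.
x = np.stack([observed, sigma], axis=-1)   # shape (n, d, 2)
```

Feeding `sigma` as an extra channel is one simple way to expose the error information to the network; the paper's actual architecture may differ.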
1 Citation

A self-supervised learning approach for astronomical images

A self-supervised learning approach is proposed that uses astronomical properties of the objects (specifically, their magnitudes) to pretrain deep neural networks with unlabeled data; it is empirically demonstrated that this approach yields results better than, or at least comparable to, a benchmark RGB model pretrained on ImageNet.

References



Probabilistic Random Forest: A Machine Learning Algorithm for Noisy Data Sets

Apart from improving prediction accuracy on noisy data sets, the Probabilistic Random Forest (PRF) naturally copes with missing values in the data, and it outperforms the standard Random Forest (RF) when the training and test sets have different noise characteristics, suggesting that the PRF can be used for transfer learning.
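The core mechanism can be sketched in a few lines: instead of a hard comparison at a decision-tree split, each measurement is treated as a Gaussian, and the split yields the probability mass falling on each side. This is a hedged sketch of the idea, not the reference implementation:

```python
from math import erf, sqrt

def p_left(value, sigma, threshold):
    """Probability that a measurement distributed as N(value, sigma^2)
    falls below the split threshold; a hard split is the sigma -> 0 limit."""
    if sigma == 0:
        return float(value < threshold)
    z = (threshold - value) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))
```

A data point then propagates down *both* branches of a split with weights `p_left` and `1 - p_left`, which is also how a missing value (effectively infinite sigma) can be handled.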

Machine Learning with Known Input Data Uncertainty Measure

It is proved that classical training with jitter for Artificial Neural Networks (ANNs) is approximately equivalent to generalised Tikhonov regularisation learning.
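The equivalence can be seen numerically in the linear least-squares case: injecting fresh Gaussian noise ("jitter") into the inputs at every training step is, in expectation, the same as adding a Tikhonov (ridge) penalty proportional to sigma^2 * ||w||^2. A sketch under assumed toy data, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
sigma = 0.3  # assumed known input uncertainty

w = np.zeros(d)
lr = 0.01
for _ in range(2000):
    Xj = X + sigma * rng.normal(size=X.shape)  # jitter: fresh noise each step
    grad = Xj.T @ (Xj @ w - y) / n             # gradient of (1/2n)||Xj w - y||^2
    w -= lr * grad

# In expectation the jittered loss equals the clean least-squares loss plus a
# Tikhonov penalty (sigma^2 / 2) ||w||^2, whose minimizer is ridge regression:
w_ridge = np.linalg.solve(X.T @ X + n * sigma**2 * np.eye(d), X.T @ y)
```

The two solutions agree up to small stochastic fluctuations, which is the linear-model analogue of the ANN result summarised above.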

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
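The update rule is compact enough to sketch directly; below is a minimal numpy rendering of one Adam step with the paper's default hyperparameters, applied to a toy quadratic:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: adaptive estimates of first (m) and second (v) moments."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)          # bias correction for the first moment
    v_hat = v / (1 - beta2**t)          # bias correction for the second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
```

The per-coordinate step size adapts to the gradient's running second moment, which is what makes Adam robust across very different gradient scales.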

Deeply uncertain: comparing methods of uncertainty quantification in deep learning algorithms

A comparison of methods for uncertainty quantification in deep learning algorithms is presented in the context of a simple physical system: simulated experimental measurements of a single pendulum, a prototypical system for studying measurement and analysis techniques.
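One commonly compared method, Monte Carlo dropout, can be sketched with a toy network: dropout is left on at prediction time, and the spread of repeated stochastic forward passes serves as the uncertainty estimate. The weights below are random placeholders, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(1, 8))  # placeholder weights

def forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON at prediction time."""
    h = np.tanh(W1 @ x)
    mask = rng.random(h.shape) > p_drop      # drop each hidden unit w.p. p_drop
    h = h * mask / (1.0 - p_drop)            # inverted-dropout rescaling
    return (W2 @ h)[0]

x = np.array([0.5, -1.0, 0.3, 2.0])
samples = np.array([forward(x) for _ in range(500)])
mean, std = samples.mean(), samples.std()    # prediction and its uncertainty
```

The sample standard deviation plays the role of a model (epistemic) uncertainty; the paper compares this against alternatives such as deep ensembles.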

Machine learning - a probabilistic perspective

  • K. Murphy
  • Computer Science
    Adaptive computation and machine learning series
  • 2012
This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

Support Vector Classification with Input Data Uncertainty

A novel formulation of support vector classification that allows for uncertainty in the input data is proposed, an intuitive geometric interpretation of the formulation is derived, and algorithms to solve it efficiently are developed.
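The geometric picture can be sketched as a robust hinge loss: each training point is replaced by a ball of radius r_i (its uncertainty), and the margin constraint must hold for the worst-case point in that ball, which tightens it by r_i * ||w||. A subgradient-descent sketch on assumed toy data, not the authors' solver:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
y = np.where(rng.random(n) < 0.5, -1.0, 1.0)
X = y[:, None] * 2.0 + rng.normal(size=(n, 2))   # well-separated toy classes
r = rng.uniform(0.0, 0.3, size=n)                # assumed per-point uncertainty radii

w, b, lr, lam = np.zeros(2), 0.0, 0.05, 0.01
for _ in range(1000):
    norm_w = np.linalg.norm(w) + 1e-12
    # Worst-case margin over the uncertainty ball: y_i (w.x_i + b) - r_i ||w||
    margins = y * (X @ w + b) - r * norm_w
    active = (margins < 1.0)[:, None]            # points violating the robust margin
    gw = (active * (-y[:, None] * X + r[:, None] * w / norm_w)).mean(axis=0) + 2 * lam * w
    gb = (active[:, 0] * -y).mean()
    w, b = w - lr * gw, b - lr * gb
```

Points with larger measurement uncertainty are pushed further from the decision boundary, which is the intuitive geometric interpretation mentioned in the summary.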

Morphological Classification of galaxies by Artificial Neural Networks

It is shown that the neural network behaves as a Bayesian classifier in this problem, i.e. it assigns the a posteriori probability for each of the five classes considered in the catalogue, and the network's highest-probability choice agrees with the catalogue classification.
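This Bayesian-classifier behaviour is easy to reproduce in miniature: a network trained with cross-entropy converges toward the a posteriori class probabilities. For two 1-D Gaussian classes the posterior is known in closed form, so a single logistic unit (the data and learning rate here are illustrative assumptions) should recover it:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
y = rng.integers(0, 2, size=n).astype(float)
x = rng.normal(loc=2.0 * y - 1.0)     # class 0: N(-1, 1), class 1: N(+1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Full-batch gradient descent on the cross-entropy loss drives the
# output sigmoid(a*x + b) toward P(y = 1 | x).
a, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = sigmoid(a * x + b)
    a -= lr * ((p - y) * x).mean()
    b -= lr * (p - y).mean()

# Analytic Bayes posterior for equal priors and unit variances:
# P(y = 1 | x) = sigmoid(2x), so we expect a close to 2 and b close to 0.
```

The fitted slope and intercept match the analytic posterior, which is the same mechanism by which the galaxy-classification network ends up assigning a posteriori probabilities.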

Estimating photometric redshifts with artificial neural networks

A new approach to estimating photometric redshifts, using artificial neural networks (ANNs), is investigated. Unlike the standard template-fitting photometric redshift technique, a large…