An unbiased estimator of the full-sky CMB angular power spectrum at large scales using neural networks

@article{chanda2021unbiased,
  title={An unbiased estimator of the full-sky CMB angular power spectrum at large scales using neural networks},
  author={Pallav Chanda and Rajib Saha},
  journal={Monthly Notices of the Royal Astronomical Society},
  year={2021}
}
  • Published 8 February 2021
  • Physics, Computer Science
Accurate estimation of the Cosmic Microwave Background (CMB) angular power spectrum is enticing due to the prospect for precision cosmology it presents. Galactic foreground emissions, however, contaminate the CMB signal and need to be subtracted reliably in order to lessen systematic errors on the CMB temperature estimates. Typically, bright foregrounds in a region lead to further uncertainty in temperature estimates in the area even after some foreground removal technique is performed, and…


What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
A Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty is presented, which makes the loss more robust to noisy data while also giving new state-of-the-art results on segmentation and depth regression benchmarks.
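The input-dependent (heteroscedastic) aleatoric loss described there can be sketched in a few lines: the network predicts a per-sample log-variance alongside the target, and large predicted variance down-weights that sample's residual. This is a minimal NumPy sketch; the function name and the use of `log_var` (rather than raw variance, for numerical stability) are illustrative choices, not code from the cited paper.

```python
import numpy as np

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Regression loss with learned aleatoric (data) uncertainty.

    With s = log(sigma^2) predicted per sample, the loss is
        0.5 * exp(-s) * (y - y_pred)^2 + 0.5 * s.
    The exp(-s) factor down-weights residuals the model declares
    noisy; the 0.5 * s term penalises claiming high noise everywhere.
    """
    residual_sq = (y_true - y_pred) ** 2
    return np.mean(0.5 * np.exp(-log_var) * residual_sq + 0.5 * log_var)
```

With `log_var = 0` everywhere this reduces to half the ordinary mean squared error, which is a quick sanity check on an implementation.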
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
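The "adaptive estimates of lower-order moments" that Adam maintains are a running mean and a running uncentered variance of the gradient, each bias-corrected for its zero initialization. A minimal single-step sketch in NumPy (helper name and default hyper-parameters follow the paper's commonly quoted values, but the function itself is illustrative):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (t is the 1-based step count)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for zero init
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Iterating this on a simple quadratic such as f(x) = x^2 (gradient 2x) drives the parameter toward the minimum, which makes a convenient smoke test.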
Neural Networks for Pattern Recognition
The chapter discusses two important directions of research to improve learning algorithms: dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
An overview of gradient descent optimization algorithms
This article looks at different variants of gradient descent, summarizes challenges, introduces the most common optimization algorithms, reviews architectures in a parallel and distributed setting, and investigates additional strategies for optimizing gradient descent.
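Among the variants that survey covers, classical momentum is the simplest extension of plain gradient descent: a velocity accumulates an exponentially decaying sum of past gradients and the parameters move along the velocity. A minimal sketch (the helper name and defaults are illustrative, not from the cited article):

```python
import numpy as np

def sgd_momentum(grad_fn, theta, lr=0.01, momentum=0.9, steps=100):
    """Gradient descent with classical momentum:
    v <- mu * v - lr * grad(theta);  theta <- theta + v."""
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = momentum * v - lr * grad_fn(theta)
        theta = theta + v
    return theta
```

Setting `momentum=0` recovers vanilla gradient descent; the same loop serves for batch, mini-batch, or stochastic updates depending on what `grad_fn` computes.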
Concrete Dropout
  • 2017
TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems
  • 2015
rmsprop: Divide the gradient by a running average of its recent magnitude
  • Neural Networks for Machine Learning, lecture notes
  • 2012
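The RMSProp entry's title is essentially the whole algorithm: keep an exponentially decaying average of the squared gradient and divide the gradient by its root before stepping. A minimal one-step sketch (function name and defaults are illustrative):

```python
import numpy as np

def rmsprop_step(theta, grad, avg_sq, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update: normalise the gradient by the root of a
    running average of its recent squared magnitude."""
    avg_sq = decay * avg_sq + (1 - decay) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(avg_sq) + eps)
    return theta, avg_sq
```

Because the normalised gradient has magnitude near one, the effective step size is roughly `lr` regardless of the raw gradient scale, which is the property the lecture notes emphasise.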