Corpus ID: 218674231

Deep Generative Modeling of Periodic Variable Stars Using Physical Parameters

@article{MartinezPalomera2020DeepGM,
  title={Deep Generative Modeling of Periodic Variable Stars Using Physical Parameters},
  author={Jorge Martínez-Palomera and Joshua S. Bloom and Ellianna S. Abrahams},
  journal={arXiv: Instrumentation and Methods for Astrophysics},
  year={2020}
}
The ability to generate physically plausible ensembles of variable sources is critical to the optimization of time-domain survey cadences and the training of classification models on datasets with few to no labels. Traditional data augmentation techniques expand training sets by reenvisioning observed exemplars, seeking to simulate observations of specific training sources under different (exogenous) conditions. Unlike fully theory-driven models, these approaches do not typically allow… 
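
At a high level, the method described is a conditional generative model: a latent-variable network that takes a star's physical parameters as input and emits synthetic light curves. Below is a minimal PyTorch sketch of that idea, assuming a simple fully connected conditional VAE and hypothetical conditioning parameters (period, amplitude, effective temperature); the paper's actual architecture and parameter set differ.

# Minimal sketch of a physics-conditioned generative model for light curves.
# The architecture, dimensions, and conditioning parameters are illustrative
# assumptions, not the paper's actual model.
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, seq_len=100, n_phys=3, latent_dim=8):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = nn.Sequential(
            nn.Linear(seq_len + n_phys, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),             # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_phys, 128), nn.ReLU(),
            nn.Linear(128, seq_len),                    # reconstructed light curve
        )

    def forward(self, lc, phys):
        mu, logvar = self.encoder(torch.cat([lc, phys], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.decoder(torch.cat([z, phys], dim=-1)), mu, logvar

    def sample(self, phys):
        # Draw a new light curve for the given physical parameters.
        z = torch.randn(phys.shape[0], self.latent_dim)
        return self.decoder(torch.cat([z, phys], dim=-1))

model = ConditionalVAE()
phys = torch.tensor([[0.5, 1.2, 5800.0]])   # hypothetical: period, amplitude, T_eff
fake_lc = model.sample(phys)                # one synthetic 100-point light curve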

Citations

Machine-learning Kondo physics using variational autoencoders

Machine learning of Kondo physics using variational autoencoders and symbolic regression

References

Galaxy Image Simulation Using Progressive GANs

TLDR
The proposed solution generates naturalistic images of galaxies that show complex structures and high diversity, which suggests that data-driven simulations using machine learning can replace many of the expensive model-driven methods used in astronomical data processing.

Understanding disentangling in $\beta$-VAE

TLDR
A modification to the training regime of $\beta$-VAE is proposed that progressively increases the information capacity of the latent code during training, facilitating the robust learning of disentangled representations in $\beta$-VAE without the previous trade-off in reconstruction accuracy.
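
The modification described replaces the fixed $\beta$-weighted KL penalty with a target capacity $C$ that is annealed upward during training. A minimal sketch of that objective in PyTorch; the weight gamma, the capacity ceiling, and the linear schedule are assumed values, not necessarily those of the paper.

# Capacity-annealed beta-VAE objective:
#   loss = reconstruction + gamma * |KL(q(z|x) || p(z)) - C|,
# with C grown from 0 toward c_max so the information capacity of the
# latent code increases gradually over training.
import torch
import torch.nn.functional as F

def capacity_vae_loss(recon, x, mu, logvar, step, gamma=100.0,
                      c_max=25.0, anneal_steps=10000):
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    # KL divergence of a diagonal Gaussian posterior from a standard normal prior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    c = min(c_max, c_max * step / anneal_steps)   # linearly increasing capacity
    return recon_loss + gamma * (kl - c).abs()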

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

TLDR
A systematic evaluation of generic convolutional and recurrent architectures for sequence modeling concludes that the common association between sequence modeling and recurrent networks should be reconsidered, and that convolutional networks should be regarded as a natural starting point for sequence modeling tasks.
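
The convolutional architectures evaluated are built from causal, dilated 1-D convolutions, which see only past time steps and whose receptive field grows exponentially with depth. A minimal PyTorch sketch of such a block; channel sizes and depth here are illustrative, not the paper's exact TCN.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    # A 1-D convolution that never looks at future time steps.
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation        # left-padding only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                              # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))      # pad the past, not the future

# Stacking blocks with doubling dilation gives an exponentially growing
# receptive field, the key property behind the paper's conclusion.
tcn = nn.Sequential(
    CausalConv1d(1, 16, dilation=1), nn.ReLU(),
    CausalConv1d(16, 16, dilation=2), nn.ReLU(),
    CausalConv1d(16, 16, dilation=4),
)
out = tcn(torch.randn(8, 1, 100))                      # shape: (8, 16, 100)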

Uncertainty Quantification with Generative Models

TLDR
A generative-model-based approach to Bayesian inverse problems, such as image reconstruction from noisy and incomplete images, enables computationally tractable uncertainty quantification in the form of posterior analysis in latent and data space.
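
The core idea is to carry out inference in the latent space of a pretrained generator: latent codes consistent with a noisy, incomplete observation are sampled, and the spread of their decodings quantifies uncertainty. A minimal sketch under a Gaussian noise model, using Langevin dynamics as one tractable sampler; the toy generator, mask, and step sizes are placeholders, not the paper's specific method.

import torch

def langevin_posterior(G, y, mask, n_steps=500, step=1e-3,
                       noise_sigma=0.1, n_chains=16, latent_dim=8):
    # Sample z ~ p(z | y) for observation y = mask * G(z) + Gaussian noise,
    # with a standard normal prior on z, via unadjusted Langevin dynamics.
    z = torch.randn(n_chains, latent_dim, requires_grad=True)
    for _ in range(n_steps):
        residual = (mask * (G(z) - y)) / noise_sigma
        logp = -0.5 * residual.pow(2).sum() - 0.5 * z.pow(2).sum()
        grad, = torch.autograd.grad(logp, z)
        with torch.no_grad():                          # gradient step plus noise
            z += 0.5 * step * grad + (step ** 0.5) * torch.randn_like(z)
    samples = G(z).detach()                            # posterior samples in data space
    return samples.mean(0), samples.std(0)             # reconstruction and uncertainty

G = torch.nn.Linear(8, 64)                 # toy stand-in for a trained generator
y = torch.randn(64)                        # noisy observation
mask = (torch.rand(64) > 0.5).float()      # half the pixels are missing
mean, std = langevin_posterior(G, y, mask)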

Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation

TLDR
Qualitatively, the proposed RNN Encoder–Decoder model learns a semantically and syntactically meaningful representation of linguistic phrases.
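
Concretely, one GRU compresses the source phrase into a fixed-length hidden vector, and a second GRU, initialized with that vector, emits the target sequence. A minimal PyTorch sketch with placeholder vocabulary and layer sizes.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))      # h: fixed-length phrase summary
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)                  # next-token logits

model = EncoderDecoder()
src = torch.randint(0, 1000, (4, 12))             # batch of source token ids
tgt = torch.randint(0, 1000, (4, 10))             # shifted target token ids
logits = model(src, tgt)                          # shape: (4, 10, 1000)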

Pulsating Stars

  • 2014