Smallest neural network to learn the Ising criticality.

@article{Kim2018SmallestNN,
  title={Smallest neural network to learn the Ising criticality},
  author={Dongkyu Kim and Dong-Hee Kim},
  journal={Physical Review E},
  volume={98},
  pages={022138},
  year={2018}
}
Learning with an artificial neural network encodes the system behavior in a feed-forward function whose parameters are optimized by data-driven training. An open question is whether one can minimize the network complexity without loss of performance, to reveal how and why it works. Here we investigate the learning of the phase transition in the Ising model and find that two hidden neurons can be enough for an accurate prediction of the critical temperature. We show that the networks…
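For scale, a minimal sketch of such a network follows (an illustration only, not the authors' code; the lattice size, optimizer, and training details are assumptions):

import torch
import torch.nn as nn

L = 16                 # linear lattice size (illustrative assumption)
N = L * L              # number of spins fed to the network

# The entire hidden layer is just two neurons, per the paper's claim.
model = nn.Sequential(
    nn.Linear(N, 2),
    nn.Sigmoid(),
    nn.Linear(2, 2),   # class logits: T below vs. above Tc
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(configs, labels):
    # configs: (batch, N) float tensor of +/-1 spins
    # labels:  (batch,) long tensor, 0 = ordered, 1 = disordered
    opt.zero_grad()
    loss = loss_fn(model(configs), labels)
    loss.backward()
    opt.step()
    return loss.item()

The critical temperature is then typically read off from where the two output probabilities cross as a function of temperature.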

Citations

Phase transition encoded in neural network

This study examines how temperature-supervised neural networks capture information about the phase transition by asking which quantities they learn, and reveals that they encode different physical quantities depending on how well they are trained.

Emergence of a finite-size-scaling function in the supervised learning of the Ising phase transition

It is shown that just one free parameter suffices to describe the data-driven emergence of the universal finite-size-scaling function observed in the output of a large neural network, theoretically validating its critical-point prediction for unseen test data from different underlying lattices within the same Ising universality class.
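In standard finite-size-scaling language (a textbook ansatz, not an equation quoted from this paper), the claim is that the network output y for lattice size L collapses onto one universal curve,

  y(T, L) \simeq f\big((T - T_c)\, L^{1/\nu}\big), \qquad \nu = 1 \text{ for the 2D Ising universality class},

so that outputs for different L cross at T_c.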

On the generalizability of artificial neural networks in spin models

It is suggested that nontrivial information of multiple-state systems can be encoded in a representation of far fewer states, and that the number of ANNs required in the study of spin models can potentially be reduced.

On the generalizability of artificial neural networks in spin models

By using the task of identifying phase transitions in spin models, this work establishes a systematic generalizability such that simple ANNs trained with the two-dimensional ferromagnetic Ising model can be applied to the ferromagnetic q-state Potts model in different dimensions.

Deep learning on the 2-dimensional Ising model to extract the crossover region with a variational autoencoder

The 2-dimensional Ising model on a square lattice is investigated with a variational autoencoder in the non-vanishing field case for the purpose of extracting the crossover region between the ferromagnetic and paramagnetic phases.

Short sighted deep learning

In contrast to results for the nearest-neighbor Ising model, the RBM flow for the long-ranged model does not converge to the correct values for the spin and energy scaling dimensions, and correlation functions between visible and hidden nodes exhibit key differences between the stacked-RBM and RG flows.

Learning the Ising Model with Generative Neural Networks

The results suggest that the considered RBMs and convolutional VAEs are able to capture the temperature dependence of magnetization, energy, and spin-spin correlations, and the samples generated by RBMs are more evenly distributed across temperature than those generated by VAEs.
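For reference, the compared observables can be evaluated directly on sampled configurations; a minimal sketch follows (the samples array is a placeholder for RBM- or VAE-generated spins, not data from the paper):

import numpy as np

def magnetization(samples):
    # |m| per configuration; samples has shape (n_samples, L, L)
    return np.abs(samples.mean(axis=(1, 2)))

def spin_spin_correlation(samples, r):
    # <s(x) s(x+r)> along one lattice axis, averaged over sites and samples
    return np.mean(samples * np.roll(samples, shift=r, axis=2))

samples = np.random.default_rng(3).choice([-1.0, 1.0], size=(200, 16, 16))
print(magnetization(samples).mean(), spin_spin_correlation(samples, 1))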

A cautionary tale for machine learning generated configurations in presence of a conserved quantity

This work investigates the performance of machine learning algorithms trained exclusively with configurations obtained from importance sampling Monte Carlo simulations of the two-dimensional Ising model with conserved magnetization and finds that restricted Boltzmann machines generate configurations with magnetizations and energies forbidden in the original physical system.

Machine learning generated configurations in presence of a conserved quantity: a cautionary tale

It is found that the RBM is incapable of recognizing the conserved quantity and generates configurations with magnetizations and energies forbidden in the original physical system; shortcomings are also encountered when training the RBM with configurations obtained from the non-conserved Ising model.
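To make the constraint concrete: fixed-magnetization configurations are typically generated with spin-exchange (Kawasaki) dynamics. The sketch below illustrates the conservation law the RBM samples are reported to violate, under assumed conventions (J = 1, periodic boundaries, Metropolis acceptance); it does not reproduce the RBM itself.

import numpy as np

rng = np.random.default_rng(0)

def local_energy(s, i, j):
    # nearest-neighbor energy of site (i, j) on a periodic lattice, J = 1
    L = s.shape[0]
    return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                       + s[i, (j + 1) % L] + s[i, (j - 1) % L])

def kawasaki_step(s, beta):
    # propose swapping two opposite spins, which conserves magnetization
    L = s.shape[0]
    i1, j1, i2, j2 = rng.integers(0, L, size=4)
    if s[i1, j1] == s[i2, j2]:
        return s                                      # swap changes nothing
    e_old = local_energy(s, i1, j1) + local_energy(s, i2, j2)
    s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]
    e_new = local_energy(s, i1, j1) + local_energy(s, i2, j2)
    if rng.random() >= np.exp(-beta * (e_new - e_old)):
        s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]   # reject: swap back
    return s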

The critical temperature of the 2D-Ising model through deep learning autoencoders

It is demonstrated that Tc(L) extrapolates to the known theoretical value as L → ∞, suggesting that the autoencoder can also be used to extract the critical temperature of the phase transition to adequate precision.
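The extrapolation follows the standard finite-size-scaling form Tc(L) = Tc + a L^(-1/nu); a hedged sketch of the fit, with placeholder Tc(L) values rather than the paper's data:

import numpy as np
from scipy.optimize import curve_fit

def tc_of_L(L, Tc, a, inv_nu):
    return Tc + a * L**(-inv_nu)

Ls = np.array([16.0, 32.0, 64.0, 128.0])
tc_estimates = np.array([2.33, 2.30, 2.285, 2.276])   # illustrative only

params, _ = curve_fit(tc_of_L, Ls, tc_estimates, p0=[2.27, 1.0, 1.0])
print("extrapolated Tc =", params[0])   # exact value: 2/ln(1+sqrt(2)) = 2.269...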

References


Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models.

This work demonstrates the capability of a convolutional deep neural network to predict the nearest-neighbor energy of the 4×4 Ising model, and its ability to recover the phase transition with accuracy equivalent to the numerically exact method.
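The regression target here is easy to state exactly; a minimal sketch of the nearest-neighbor energy on a periodic 4x4 lattice (J = 1 assumed):

import numpy as np

def ising_energy(s):
    # E = -sum over nearest-neighbor pairs s_i s_j, each bond counted once
    return -(np.sum(s * np.roll(s, 1, axis=0))
             + np.sum(s * np.roll(s, 1, axis=1)))

config = np.random.default_rng(1).choice([-1, 1], size=(4, 4))
print(ising_energy(config))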

Machine Learning of Explicit Order Parameters: From the Ising Model to SU(2) Lattice Gauge Theory

A procedure is presented for reconstructing the decision function of an artificial neural network as a simple function of the input, provided the decision function is sufficiently symmetric.

Machine Learning Topological Invariants with Neural Networks

After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data.
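For comparison with what the network learns, the winding number can be computed directly; the sketch below uses an SSH-like Bloch Hamiltonian h(k) = t1 + t2 exp(ik) as an illustrative assumption (the reference treats general chiral-symmetric models):

import numpy as np

def winding_number(t1, t2, n_k=2001):
    # total phase accumulated by h(k) as k sweeps the Brillouin zone
    k = np.linspace(-np.pi, np.pi, n_k)
    h = t1 + t2 * np.exp(1j * k)
    phase = np.unwrap(np.angle(h))
    return round((phase[-1] - phase[0]) / (2 * np.pi))

print(winding_number(0.5, 1.0))   # 1: topological phase
print(winding_number(1.0, 0.5))   # 0: trivial phase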

Learning phase transitions by confusion

This work proposes a neural-network approach to finding phase transitions, based on the performance of a neural network after it is trained with data that are deliberately labelled incorrectly, and paves the way to the development of a generic tool for identifying unexplored phase transitions.
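Schematically, the confusion scheme sweeps a trial transition point, relabels the data accordingly, and retrains; the helper train_and_score below is hypothetical, and the whole snippet is a sketch of the idea rather than the authors' code:

import numpy as np

def confusion_curve(temps, configs, trial_points, train_and_score):
    # accuracy vs. trial point traces a W shape whose central peak
    # marks the true transition temperature
    scores = []
    for t_star in trial_points:
        labels = (temps > t_star).astype(int)   # deliberate (mis)labelling
        scores.append(train_and_score(configs, labels))
    return np.array(scores)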

Machine Learning Phases of Strongly Correlated Fermions

This work shows that a three-dimensional convolutional network trained on auxiliary-field configurations produced by quantum Monte Carlo simulations of the Hubbard model can correctly predict the magnetic phase diagram of the model at the average density of one (half filling).

Unsupervised machine learning account of magnetic transitions in the Hubbard model.

We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of…
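A minimal sketch of the t-SNE step using scikit-learn, run on placeholder configurations rather than the paper's Hubbard-model data:

import numpy as np
from sklearn.manifold import TSNE

X = np.random.default_rng(2).choice([-1.0, 1.0], size=(500, 100))
embedding = TSNE(n_components=2, perplexity=30.0).fit_transform(X)
print(embedding.shape)   # (500, 2): one 2D point per configuration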

Parameter diagnostics of phases and phase transition learning by neural networks

This work analyzes neural-network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on networks with a single hidden layer, and demonstrates how the learning-by-confusion scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks.

Quantum Loop Topography for Machine Learning.

This work introduces quantum loop topography (QLT): a procedure for constructing a multidimensional image from the "sample" Hamiltonian or wave function by evaluating two-point operators that form loops at independent Monte Carlo steps, and establishes the first case of obtaining a phase diagram featuring a topological quantum phase transition via machine learning.

Probing many-body localization with neural networks

We show that a simple artificial neural network trained on entanglement spectra of individual states of a many-body quantum system can be used to determine the transition between a many-body localized and a thermalizing regime.

Approximating quantum many-body wave functions using artificial neural networks

In this paper, we demonstrate the expressibility of artificial neural networks (ANNs) in quantum many-body physics by showing that a feed-forward neural network with a small number of hidden layers can be trained to approximate the ground states of notable quantum many-body systems.
...