Corpus ID: 237572077

Improving the Deconvolution of Spectrum at Finite Temperature via Neural Network

@inproceedings{Xie2021ImprovingTD,
  title={Improving the Deconvolution of Spectrum at Finite Temperature via Neural Network},
  author={Haidong Xie and Xueshuang Xiang},
  year={2021}
}
  • Published 18 September 2021
  • Physics
In the study of condensed matter physics, spectral information plays an important role in understanding the mechanisms of materials. However, it is difficult to obtain the spectrum directly through experiments or simulations. For example, the spectral information deconvolved from scanning tunneling spectroscopy suffers from the temperature broadening effect; the problem is ill-posed, which makes the deconvolution result unstable. To solve this problem, the core idea of existing methods, such as the maximum…
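The instability described above can be illustrated with a minimal NumPy sketch: a sharp test spectrum is smeared by a thermal broadening kernel (the derivative of the Fermi function, of width ~3.5 kT), and a naive deconvolution by spectral division amplifies even tiny measurement noise enormously. All numerical values here are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

kT = 0.05                      # temperature in energy units (assumed value)
w = np.linspace(-2, 2, 401)    # energy grid
dw = w[1] - w[0]

# Thermal broadening kernel: minus the derivative of the Fermi function,
# a bell-shaped peak of width ~3.5*kT that smears the true spectrum.
kernel = 1.0 / (4 * kT * np.cosh(w / (2 * kT)) ** 2)
kernel /= kernel.sum() * dw    # normalize to unit area

# A "true" spectrum with two sharp peaks (hypothetical test signal).
spectrum = np.exp(-((w - 0.3) / 0.05) ** 2) + np.exp(-((w + 0.5) / 0.05) ** 2)

# Forward problem: convolution with the kernel broadens the peaks.
measured = np.convolve(spectrum, kernel, mode="same") * dw
measured_noisy = measured + 1e-4 * np.random.default_rng(0).standard_normal(w.size)

# Naive deconvolution by spectral division: the kernel's Fourier transform
# decays exponentially, so dividing by it blows up high-frequency noise.
K = np.fft.fft(np.fft.ifftshift(kernel)) * dw
naive = np.real(np.fft.ifft(np.fft.fft(measured_noisy) / K))

# The naive result is dominated by amplified noise, orders of magnitude
# larger than the true spectrum -- this is what "ill-posed" means here.
print(np.abs(naive).max() / spectrum.max())
```

Regularized approaches such as the maximum entropy method (and, in this paper, neural networks) stabilize exactly this inversion by penalizing such wild oscillations.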


References

Showing 1-10 of 40 references
Artificial Neural Network Approach to the Analytic Continuation Problem.
This work presents a general framework for building an artificial neural network (ANN) that solves the analytic continuation problem with a supervised learning approach, and shows that the method reaches the same accuracy as conventional methods for low-noise input data while performing significantly better as the noise strength increases.
Analytic continuation via domain knowledge free machine learning
The machine-learning-based approach to analytic continuation not only provides more accurate spectra than conventional methods in terms of peak positions and heights, but is also more robust against noise, a key requirement for any continuation technique to be successful.
Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
A very universal Frequency Principle (F-Principle), namely that DNNs often fit target functions from low to high frequencies, is demonstrated on high-dimensional benchmark datasets such as MNIST/CIFAR10 and on deep neural networks such as VGG16.
Understanding training and generalization in deep learning by Fourier analysis
This work studies DNN training by Fourier analysis to explain why deep neural networks often achieve remarkably low generalization error, and suggests that small initialization leads to good generalization ability while preserving the DNN's ability to fit any function.
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
This work investigates the cause of the generalization drop in the large-batch regime and presents numerical evidence supporting the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions; as is well known, sharp minima lead to poorer generalization.
Implementation of the maximum entropy method for analytic continuation
Maxent is a tool for performing analytic continuation of spectral functions using the maximum entropy method; it implements a range of bosonic, fermionic and generalized kernels for normal and anomalous Green's functions, self-energies, and two-particle response functions.
Frequency Principle in Deep Learning Beyond Gradient-descent-based Training
Empirical studies show the universality of the F-Principle in the training process of DNNs with non-gradient-descent-based training, including algorithms that use no gradient information, such as Powell's method and Particle Swarm Optimization.
A new approach to solve inverse problems: Combination of model-based solving and example-based learning
Inverse problems, one of the basic forms of mathematical problems, exist extensively in science, engineering and technology. Traditional inverse problems are resolved through solving…
Theory of the Frequency Principle for General Deep Neural Networks
This work rigorously investigates the F-Principle for the training dynamics of a general DNN at three stages: the initial, intermediate, and final stage. The results are general in the sense that they hold for multilayer networks with general activation functions, population densities of data, and a large class of loss functions.
Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions
We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability…