Corpus ID: 236469109

Learned Optimizers for Analytic Continuation

  • Dongchen Huang, Yi-feng Yang
  • Published 2021
  • Computer Science, Physics
  • ArXiv
Traditional maximum entropy and sparsity-based algorithms for analytic continuation often suffer from the ill-posed kernel matrix or demand tremendous computation time for parameter tuning. Here we propose a neural-network method based on convex optimization that replaces the ill-posed inverse problem with a sequence of well-conditioned surrogate problems. After training, the learned optimizers are able to give a solution of high quality with low time cost and achieve higher parameter efficiency than…
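The ill-posedness referred to here can be seen directly in the singular-value spectrum of the analytic-continuation kernel. A minimal numerical sketch, where the fermionic kernel form, grid sizes, and β are illustrative choices rather than values from the paper:

```python
import numpy as np

# Hypothetical discretization of the fermionic analytic-continuation kernel
# K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega)); grids and beta
# are illustrative choices, not taken from the paper.
beta = 10.0                        # inverse temperature
tau = np.linspace(0, beta, 64)     # imaginary-time grid
omega = np.linspace(-5, 5, 128)    # real-frequency grid

K = np.exp(-tau[:, None] * omega[None, :]) / (1.0 + np.exp(-beta * omega[None, :]))

# The singular values of K decay roughly exponentially, so the effective
# condition number is enormous: tiny noise in G(tau) is hugely amplified
# when inverting for the spectral function A(omega).
s = np.linalg.svd(K, compute_uv=False)
print(f"condition number ~ {s[0] / s[-1]:.3e}")
```

This is what makes naive least-squares inversion hopeless and motivates both the regularized methods below and the surrogate-problem reformulation of the paper.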


Artificial Neural Network Approach to the Analytic Continuation Problem.
This work presents a general framework for building an artificial neural network (ANN) that solves the analytic continuation problem with a supervised learning approach and shows that the method can reach the same level of accuracy for low-noise input data, while performing significantly better when the noise strength increases.
Analytic continuation via domain knowledge free machine learning
The machine-learning-based approach to analytic continuation not only provides a more accurate spectrum than conventional methods in terms of peak positions and heights, but is also more robust against noise, a key feature required for any continuation technique to be successful.
Fast Solution of ℓ1-Norm Minimization Problems When the Solution May Be Sparse
The minimum ℓ1-norm solution to an underdetermined system of linear equations is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal…
An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint
We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing…
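The algorithm in question is, in its simplest form, iterative soft-thresholding (ISTA) for the ℓ1-regularized least-squares problem min ½‖Ax − y‖² + λ‖x‖₁. A minimal sketch, where the matrix sizes, λ, and the test signal are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=2000):
    """Iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)       # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Recover a sparse vector from noiseless underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true))
```

Each iteration is one gradient step on the smooth data term followed by the closed-form proximal step of the ℓ1 penalty, which is what makes the scheme so simple to implement.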
Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.
This work analyzes all aspects of the implementation that are critical for accuracy and speed and presents a highly optimized approach to maximum entropy, which covers most typical cases for fermions, bosons, and response functions.
A numerical algorithm for the solution of the Classic Maximum Entropy problem is presented, for use when the data are considerably oversampled, so that the amount of independent information they…
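As a toy illustration of the maximum-entropy objective these works optimize — a χ² fit to the data balanced against an entropic prior relative to a default model — the following sketch uses wholly illustrative grids, noise level, and regularization weight α, and a generic quasi-Newton solver rather than the optimized algorithms of the papers:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative fermionic kernel and synthetic data (not from the papers).
beta, n_tau, n_w = 5.0, 16, 32
tau = np.linspace(0, beta, n_tau)
w = np.linspace(-4, 4, n_w)
dw = w[1] - w[0]
K = np.exp(-tau[:, None] * w[None, :]) / (1 + np.exp(-beta * w[None, :]))

A_true = np.exp(-((w - 1.0) ** 2))           # single Gaussian peak
A_true /= A_true.sum() * dw                  # unit normalization
rng = np.random.default_rng(1)
sigma = 1e-3                                 # noise level of G(tau)
G = K @ A_true * dw + sigma * rng.standard_normal(n_tau)

m = np.full(n_w, 1.0 / (n_w * dw))           # flat default model
alpha = 0.1                                  # entropy weight (illustrative)

def obj_and_grad(u):
    """MaxEnt objective 0.5*chi^2 - alpha*S over A = exp(u) (positivity)."""
    A = np.exp(u)
    r = K @ A * dw - G
    chi2 = np.sum(r ** 2) / sigma ** 2
    S = -np.sum(A * np.log(A / m)) * dw      # entropy relative to m
    grad_A = (dw / sigma ** 2) * (K.T @ r) + alpha * dw * (np.log(A / m) + 1)
    return 0.5 * chi2 - alpha * S, A * grad_A

u0 = np.log(m)                               # start from the default model
res = minimize(obj_and_grad, u0, jac=True, method="L-BFGS-B")
A_fit = np.exp(res.x)
```

The exponential parameterization enforces positivity of the spectrum; the entropy term pulls the solution toward the default model wherever the data leave it unconstrained.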
Sparse modeling approach to analytical continuation of imaginary-time quantum Monte Carlo data.
A data-science approach to solving the ill-conditioned inverse problem for analytical continuation by means of a modern regularization technique, which eliminates redundant degrees of freedom that essentially carry the noise, leaving only relevant information unaffected by the noise.
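The noise-elimination idea can be caricatured with a truncated SVD of the kernel: directions with tiny singular values carry essentially no signal about G(τ), only amplified noise. The paper's intermediate-representation basis is more refined; the kernel, grids, and threshold below are illustrative:

```python
import numpy as np

# Illustrative analytic-continuation kernel mapping spectrum -> G(tau).
beta = 10.0
tau = np.linspace(0, beta, 64)
w = np.linspace(-5, 5, 100)
K = np.exp(-tau[:, None] * w[None, :]) / (1 + np.exp(-beta * w[None, :]))

U, s, Vt = np.linalg.svd(K, full_matrices=False)

# Keep only singular values above a relative noise floor; the discarded
# directions contribute almost nothing to G(tau), so fitting them would
# only fit noise.
keep = s > 1e-10 * s[0]
print(f"kept {keep.sum()} of {s.size} singular values")

# Noise-robust pseudo-inverse restricted to the retained subspace.
K_pinv = Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T
```

Applying `K_pinv` to noisy data inverts only the well-conditioned part of the kernel, which is the essence of "eliminating the redundant degrees of freedom that carry the noise."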
Visualizing the Loss Landscape of Neural Nets
This paper introduces a simple "filter normalization" method that helps to visualize loss function curvature and make meaningful side-by-side comparisons between loss functions, and explores how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
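Filter normalization itself is simple to state: sample a random direction in weight space, then rescale each filter of the direction to match the norm of the corresponding filter of the trained weights, so that plots along the direction are comparable across networks. A sketch with illustrative array shapes:

```python
import numpy as np

def filter_normalize(direction, weights):
    """Rescale each filter (first-axis slice) of a random direction so its
    norm matches that of the corresponding filter in the trained weights."""
    d = direction.copy()
    for i in range(weights.shape[0]):          # one filter per output channel
        d_norm = np.linalg.norm(d[i])
        w_norm = np.linalg.norm(weights[i])
        d[i] *= w_norm / (d_norm + 1e-10)      # avoid division by zero
    return d

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3, 3, 3))          # e.g. a conv layer: 8 filters
D = filter_normalize(rng.standard_normal(W.shape), W)

# Each filter of D now has (approximately) the same norm as that of W.
print([round(np.linalg.norm(D[i]) / np.linalg.norm(W[i]), 6) for i in range(8)])
```

Evaluating the loss at `W + a*D` for a range of `a` then gives a 1-D slice of the landscape whose scale is not distorted by scale-invariance of the filters.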
Constrained sampling method for analytic continuation.
  • A. Sandvik
  • Mathematics, Physics
  • Physical review. E
  • 2016
A method for analytic continuation of imaginary-time correlation functions to real-frequency spectral functions is proposed, and very good agreement is found with Bethe ansatz results in the ground state and with exact diagonalization of small systems at elevated temperatures.
Learning Fast Approximations of Sparse Coding
Two versions of a very fast algorithm are proposed that produce approximate estimates of the sparse code, which can be used to compute good visual features or to initialize exact iterative algorithms.
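The fast approximation in question (learned ISTA, or LISTA) unrolls a fixed number of soft-thresholding steps and treats the transform matrices and threshold as learnable parameters. Below is a sketch of just the untrained forward pass, initialized so it reproduces plain ISTA; names and sizes are illustrative and no training loop is shown:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

class LISTA:
    """Unrolled ISTA with matrices W_e, S and threshold theta as parameters.
    In the actual method these are trained by backpropagation; here they are
    simply initialized to the values that reproduce plain ISTA."""
    def __init__(self, A, lam, n_layers=64):
        L = np.linalg.norm(A, 2) ** 2
        self.W_e = A.T / L                           # input transform
        self.S = np.eye(A.shape[1]) - (A.T @ A) / L  # recurrent transform
        self.theta = lam / L                         # shrinkage threshold
        self.n_layers = n_layers

    def forward(self, y):
        b = self.W_e @ y
        x = soft_threshold(b, self.theta)
        for _ in range(self.n_layers - 1):
            x = soft_threshold(b + self.S @ x, self.theta)
        return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = LISTA(A, lam=0.05).forward(y)
```

Because the layer count is fixed, inference cost is constant; training the parameters lets a shallow network match what plain ISTA needs many more iterations to achieve — the same amortization idea behind the learned optimizers of the main paper.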