Corpus ID: 244908389

On the Numerical Approximation of the Karhunen-Loève Expansion for Random Fields with Random Discrete Data

@article{Griebel2021OnTN,
  title={On the Numerical Approximation of the Karhunen-Lo{\`e}ve Expansion for Random Fields with Random Discrete Data},
  author={Michael Griebel and Guanglian Li and Christian Rieger},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.02526}
}
Many physical and mathematical models involve random fields in their input data. Examples are ordinary differential equations, partial differential equations and integro-differential equations with uncertainties in the coefficient functions described by random fields. Random fields also play a dominant role in problems in machine learning. In this article, we do not assume knowledge of the moments or expansion terms of the random fields; instead, we are only given discretized samples of them…
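The setting described in the abstract — recovering a Karhunen-Loève expansion when only discretized samples of the random field are available — can be sketched numerically as an eigendecomposition of the empirical covariance matrix. The synthetic sine-mode data, the grid, and all names below are illustrative assumptions, not the paper's method or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_grid = 500, 64
x = np.linspace(0.0, 1.0, n_grid)

# Synthetic discretized samples of a random field: a random
# superposition of a few sine modes (an assumption for illustration).
samples = sum(
    rng.standard_normal((n_samples, 1)) / (k + 1) * np.sin((k + 1) * np.pi * x)
    for k in range(8)
)

mean = samples.mean(axis=0)
centered = samples - mean

# Empirical covariance matrix on the grid (n_grid x n_grid).
cov = centered.T @ centered / (n_samples - 1)

# Its eigendecomposition yields the discrete KL modes and variances;
# sort eigenpairs in decreasing order of variance.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncated empirical KL expansion using the m leading modes.
m = 8
scores = centered @ eigvecs[:, :m]              # KL coefficients per sample
reconstruction = mean + scores @ eigvecs[:, :m].T

rel_err = np.linalg.norm(reconstruction - samples) / np.linalg.norm(samples)
```

Since the synthetic field here is built from exactly eight modes, the rank-8 truncation reconstructs the samples up to floating-point error; for genuinely infinite-dimensional fields the truncation error is governed by the eigenvalue decay.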

A Dimension-adaptive Combination Technique for Uncertainty Quantification

An adaptive algorithm is presented for computing quantities of interest that involve the solution of a stochastic elliptic PDE whose diffusion coefficient is parametrized by a Karhunen-Loève expansion, based on a dimension-adaptive combination technique.

References

Showing 1-10 of 21 references

Fully Discrete Approximation of Parametric and Stochastic Elliptic PDEs

This work studies the combined spatial and parametric approximability for elliptic PDEs with affine or lognormal parametrizations of the diffusion coefficients and corresponding Taylor, Jacobi, and Hermite expansions, to obtain fully discrete approximations.

Convergence Types and Rates in Generic Karhunen-Loève Expansions with Applications to Sample Path Properties

We establish a Karhunen-Loève expansion for generic centered, second order stochastic processes, which does not rely on topological assumptions. We further investigate in which norms the expansion…

Regularized estimation of large covariance matrices

If the population covariance is embeddable in that model and well-conditioned then the banded approximations produce consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix.
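The banding idea in this snippet — keep only entries near the diagonal of the sample covariance when the population covariance is bandable — admits a compact sketch. The AR(1)-style test covariance, the bandwidth `k=5`, and the function name `band` are assumptions for illustration, not the paper's estimator:

```python
import numpy as np

def band(matrix, k):
    """Zero out all entries more than k positions off the diagonal."""
    p = matrix.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, matrix, 0.0)

rng = np.random.default_rng(1)
p, n = 30, 2000

# AR(1) population covariance: entries decay geometrically off the
# diagonal, so it is well approximated by a banded matrix.
rho = 0.5
idx = np.arange(p)
sigma = rho ** np.abs(idx[:, None] - idx[None, :])

# Draw samples and form the usual sample covariance.
samples = rng.multivariate_normal(np.zeros(p), sigma, size=n)
sample_cov = np.cov(samples, rowvar=False)

banded = band(sample_cov, k=5)

# Compare estimation errors in spectral norm; banding trades a small
# truncation bias for much less sampling noise in the far-off entries.
err_sample = np.linalg.norm(sample_cov - sigma, 2)
err_banded = np.linalg.norm(banded - sigma, 2)
```

When the population covariance really is bandable, `err_banded` is typically smaller than `err_sample`, which is the consistency phenomenon the snippet describes.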

Robust Eigenvalue Computation for Smoothing Operators

R. Todor · SIAM J. Numer. Anal. · 2006
Robust quasi-relative Galerkin discretization error estimates are derived for the eigenvalue problem associated to a nonnegative compact operator $\cal K$ acting in a Hilbert space and applied to the case of an integral operator with (piecewise) smooth kernel $K$ on a bounded domain and in the context of the $h$ finite element method.

Singular value decomposition versus sparse grids: refined complexity estimates

It turns out that, in this situation, the sparse grid approximation is always equal or superior to the singular value decomposition approximation for functions from generalized isotropic and anisotropic Sobolev spaces.

Optimal rates of convergence for covariance matrix estimation

Covariance matrices play a central role in multivariate statistical analysis. Significant advances have been made recently in developing both theory and methodology for estimating large covariance matrices…

Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation

Minimax rates of convergence are given, under the spectral norm loss, for estimating several classes of structured covariance and precision matrices, including bandable, Toeplitz, and sparse covariance matrices as well as sparse precision matrices.

Random perturbation of low rank matrices: Improving classical bounds

A useful variant of the Davis--Kahan theorem for statisticians

The Davis–Kahan theorem is used in the analysis of many statistical procedures to bound the distance between the subspaces spanned by population eigenvectors and their sample versions. It relies on an…
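The kind of bound this snippet refers to can be checked numerically: the sine of the angle between a population eigenvector and its perturbed version is controlled by the perturbation size divided by the eigengap. The diagonal test matrix, the noise level, and the factor-2 form of the bound (the Yu–Wang–Samworth variant) are assumptions for this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 10

# Symmetric "population" matrix with a clear eigengap between the
# two largest eigenvalues (10 and 9).
A = np.diag(np.arange(p, 0, -1.0))
E = rng.standard_normal((p, p)) * 0.05
E = (E + E.T) / 2                       # symmetric perturbation
A_hat = A + E

# Leading eigenvectors of the population and perturbed matrices
# (eigh returns eigenvalues in ascending order, so take the last column).
v = np.linalg.eigh(A)[1][:, -1]
v_hat = np.linalg.eigh(A_hat)[1][:, -1]

# sin of the principal angle, sign-invariant via the absolute inner product.
sin_theta = np.sqrt(1.0 - min(1.0, abs(v @ v_hat)) ** 2)

# Davis--Kahan-style bound: sin(theta) <= 2 * ||E||_op / eigengap.
gap = 10.0 - 9.0
bound = 2.0 * np.linalg.norm(E, 2) / gap
```

Running this, `sin_theta` stays below `bound`, illustrating how the eigengap in the denominator governs the stability of sample eigenvectors.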

Computing Invariant Subspaces of a General Matrix when the Eigensystem is Poorly Conditioned

This paper defines a class of matrices for which this is true, proposes a technique for calculating bases for these invariant subspaces, and shows that for this class the technique provides basis vectors that are accurate and span the subspaces well.