Effectively Subsampled Quadratures for Least Squares Polynomial Approximations

Pranay Seshadri, Akil C. Narayan, Sankaran Mahadevan. SIAM/ASA Journal on Uncertainty Quantification.
This paper proposes a new deterministic sampling strategy for constructing polynomial chaos approximations of expensive physics simulation models. The proposed approach, effectively subsampled quadratures, sparsely subsamples an existing tensor grid using QR column pivoting. For polynomial interpolation using hyperbolic or total order index sets, we then solve a square least squares problem. For polynomial approximation, we use a column pruning heuristic that removes columns…
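The subsampling idea in the abstract can be illustrated in one dimension: build an orthonormal Legendre design matrix on a larger Gauss quadrature rule, then use QR with column pivoting on its transpose to select the best-conditioned subset of nodes. This is only a minimal sketch of the general tensor-grid procedure; the function names and the degree-5/20-point setup are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import qr

def legendre_vandermonde(points, degree):
    """Design matrix of orthonormal Legendre polynomials on [-1, 1]."""
    V = np.polynomial.legendre.legvander(points, degree)
    return V * np.sqrt((2 * np.arange(degree + 1) + 1) / 2.0)

def subsample_nodes(points, degree):
    """Pick degree+1 nodes from a larger rule via QR with column pivoting.

    Pivoted columns of A^T correspond to rows of A, i.e. quadrature nodes.
    """
    A = legendre_vandermonde(points, degree)   # shape (m, n), m >= n
    _, _, piv = qr(A.T, pivoting=True)
    keep = np.sort(piv[: degree + 1])          # the n best-conditioned nodes
    return keep, A[keep, :]

# Usage: prune a 20-point Gauss-Legendre rule down to a degree-5 basis,
# then solve the resulting square system for the Legendre coefficients.
x, _ = np.polynomial.legendre.leggauss(20)
idx, A_sq = subsample_nodes(x, 5)
coeffs = np.linalg.solve(A_sq, np.cos(x[idx]))
```

Because the pivoting step greedily maximizes conditioning, the resulting square system is well posed even though most of the original grid is discarded.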


Stabilizing discrete empirical interpolation via randomized and deterministic oversampling
The numerical results on synthetic and diffusion-reaction problems demonstrate that randomized and deterministic oversampling with this approach stabilizes empirical interpolation in the presence of noise.
Compressive Hermite Interpolation: Sparse, High-Dimensional Approximation from Gradient-Augmented Measurements
  • B. Adcock, Yi Sui. Constructive Approximation, 2019.
This work considers the sparse polynomial approximation of a multivariate function on a tensor product domain from samples of both the function and its gradient, and shows that for the same asymptotic sample complexity, gradient-augmented measurements achieve an approximation error bound in a stronger Sobolev norm, as opposed to the L^2-norm in the unaugmented case.
On efficient algorithms for computing near-best polynomial approximations to high-dimensional, Hilbert-valued functions from limited samples
This work introduces a novel restarted version of the primal-dual iteration for solving weighted ℓ1-minimization problems in Hilbert spaces, and establishes error bounds for these algorithms which provably achieve the same algebraic or exponential rates as those of the best s-term approximation.
L1-based reduced over collocation and hyper reduction for steady state and time-dependent nonlinear equations
This paper augments and extends the EIM approach as a direct solver, as opposed to an assistant, for solving nonlinear pPDEs at the reduced level; the resulting method, called the Reduced Over-Collocation method (ROC), is stable and avoids the efficiency degradation inherent in a traditional application of EIM.
Extremum Sensitivity Analysis with Least Squares Polynomials and their Ridges
This paper discusses two heuristics for evaluating input sensitivities when constrained near output extrema: skewness-based sensitivity indices and variance reduction indices based on Monte Carlo filtering, and provides algorithms that implement the ideas discussed.
Sparse Polynomial Chaos Expansions: Literature Survey and Benchmark
It is found that the choice of sparse regression solver and sampling scheme for the computation of a sparse PCE surrogate can make a significant difference of up to several orders of magnitude in the resulting mean-square error.
Sparse polynomial chaos expansions via compressed sensing and D-optimal design
Construction and application of provable positive and exact cubature formulas
This work shows how the method of least squares can be used to derive provably positive and exact cubature formulas in a general multi-dimensional setting, and proves that the resulting least squares cubature formulas are guaranteed to be positive and exact if a sufficiently large number of equidistributed data points is used.
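The positivity-plus-exactness idea above can be sketched in one dimension with non-negative least squares: pick many equidistributed candidate nodes, impose the Legendre moment conditions as linear constraints, and let the non-negativity of the solver enforce positive weights. This is a hedged illustration of the general principle, not the paper's algorithm; the 400-point, degree-8 setup is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import nnls

# Candidate nodes: many equidistributed points on [-1, 1]
m, deg = 400, 8
x = np.linspace(-1.0, 1.0, m)

# Exactness conditions: sum_i w_i * P_k(x_i) = integral of P_k over [-1, 1],
# which is 2 for k = 0 and 0 for k >= 1.
V = np.polynomial.legendre.legvander(x, deg)   # shape (m, deg + 1)
moments = np.zeros(deg + 1)
moments[0] = 2.0

# Non-negative least squares enforces positivity of the quadrature weights.
w, residual = nnls(V.T, moments)
```

With enough equidistributed candidates the moment vector lies inside the cone spanned by the node columns, so the residual vanishes and the resulting rule integrates all polynomials up to the target degree exactly with non-negative weights.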
Polynomial chaos expansions for dependent random variables


A Christoffel function weighted least squares algorithm for collocation approximations
This work proposes an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework, presents theoretical analysis to motivate the algorithm, and gives numerical results showing the method is superior to standard Monte Carlo methods in many situations of interest.
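The weighted-sampling idea described above can be sketched for a Legendre basis on [-1, 1]: draw Monte Carlo samples from the Chebyshev (arcsine) density and weight each row of the least-squares system by the inverse of the (scaled) Christoffel function. This is a minimal one-dimensional sketch under those assumptions, with illustrative parameter choices, not the paper's full algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 200                       # basis size, number of Monte Carlo samples

# Sample from the Chebyshev (arcsine) density on [-1, 1].
x = np.cos(np.pi * rng.random(m))

# Orthonormal Legendre design matrix.
V = np.polynomial.legendre.legvander(x, n - 1)
V *= np.sqrt((2 * np.arange(n) + 1) / 2.0)

# Christoffel-function weights: w_i = n / K_n(x_i), where
# K_n(x) = sum_k p_k(x)^2 is the inverse Christoffel function.
K = np.sum(V**2, axis=1)
w = n / K

# Weighted least-squares fit of f(x) = x^3 (degree < n, so recovery is exact).
f = x**3
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f, rcond=None)
```

The weighting equilibrates the rows of the design matrix, which is what yields the improved stability over unweighted Monte Carlo sampling.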
Weighted discrete least-squares polynomial approximation using randomized quadratures
Sparse Pseudospectral Approximation Method
Subsampled Gauss Quadrature Nodes for Estimating Polynomial Chaos Expansions
This paper describes a new estimation procedure for the Legendre polynomial expansion, within the compressed sensing formalism, by randomly sampling from the set of points defined by Gauss quadrature rules, and compares its real-world performance to other sampling schemes in the literature.
A Cardinal Function Algorithm for Computing Multivariate Quadrature Points
A new algorithm is presented for numerically computing quadrature formulas for arbitrary domains which exactly integrate a given polynomial space; it relies on the construction of cardinal functions and thus requires that the number of quadrature points equal the dimension of a prescribed lower-dimensional polynomial space.
Numerical integration using sparse grids
The usage of extended Gauss (Patterson) quadrature formulas as the one-dimensional basis of the construction is suggested, and their superiority over previously used sparse grid approaches based on the trapezoidal, Clenshaw–Curtis and Gauss rules is shown.
A weighted l1-minimization approach for sparse polynomial chaos expansions
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
Approximation of Quantities of Interest in Stochastic PDEs by the Random Discrete L2 Projection on Polynomial Spaces
This work considers the random discrete L^2 projection on polynomial spaces (hereafter RDP) for the approximation of scalar quantities of interest (QOIs) related to the solution of a partial differential equation model with random input parameters and shows that the RDP technique is well suited to QOIs that depend smoothly on a moderate number of random parameters.
On the Stability and Accuracy of Least Squares Approximations
This work provides a criterion on m that describes the amount of regularization needed to ensure that the least squares method is stable and that its accuracy, measured in L^2(X, ρ_X), is comparable to the best approximation error of f by elements of Vm.