Corpus ID: 182953054

Bayesian experimental design using regularized determinantal point processes

@article{Derezinski2020BayesianED,
  title={Bayesian experimental design using regularized determinantal point processes},
  author={Michal Derezinski and Feynman T. Liang and Michael W. Mahoney},
  journal={ArXiv},
  year={2020},
  volume={abs/1906.04133}
}
In experimental design, we are given $n$ vectors in $d$ dimensions, and our goal is to select $k\ll n$ of them to perform expensive measurements, e.g., to obtain labels/responses, for a linear regression task. Many statistical criteria have been proposed for choosing the optimal design, with popular choices including A- and D-optimality. If prior knowledge is given, typically in the form of a $d\times d$ precision matrix $\mathbf A$, then all of the criteria can be extended to incorporate that…
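For context, the standard Bayesian forms of the two criteria named above are as follows (textbook definitions, stated here for concreteness rather than quoted from the paper): a subset $S$ of size $k$ with design matrix $\mathbf X_S$ is chosen to optimize

$$ \text{A-optimality:} \quad \min_{|S|=k}\ \operatorname{tr}\!\big((\mathbf X_S^\top \mathbf X_S + \mathbf A)^{-1}\big), \qquad \text{D-optimality:} \quad \max_{|S|=k}\ \det\!\big(\mathbf X_S^\top \mathbf X_S + \mathbf A\big), $$

where setting $\mathbf A = \mathbf 0$ recovers the classical (non-Bayesian) criteria.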
High-Dimensional Experimental Design and Kernel Bandits
TLDR
This work proposes a rounding procedure that frees $N$ of any dependence on the dimension $d$, while achieving nearly the same performance guarantees as existing rounding procedures.
Sampling from a k-DPP without looking at all items
TLDR
An algorithm which adaptively builds a sufficiently large uniform sample of data that is then used to efficiently generate a smaller set of items, while ensuring that this set is drawn exactly from the target distribution defined on all $n$ items.
Determinantal Point Processes in Randomized Numerical Linear Algebra
TLDR
An overview of this exciting new line of research is provided, including brief introductions to RandNLA and DPPs, as well as applications of DPPs to classical linear algebra tasks such as least squares regression, low-rank approximation, and the Nyström method.
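To make the Nyström connection concrete (a standard formulation, not specific to this survey): given a kernel matrix $\mathbf K$ and a sampled landmark set $S$, the Nyström approximation is

$$ \hat{\mathbf K} = \mathbf K_{:,S}\, \mathbf K_{S,S}^{+}\, \mathbf K_{S,:}, $$

where $\mathbf K_{S,S}^{+}$ is the Moore–Penrose pseudoinverse of the submatrix indexed by $S$; choosing $S$ via a DPP yields the expected-error guarantees discussed in this line of work.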
Optimal Batch Variance with Second-Order Marginals
Obtaining unbiased, low-variance estimates of the mean of a ground set of points by sampling a small subset of points is a crucial machine learning task, arising in stochastic learning procedures, …
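The role of second-order marginals can be sketched with the classical Horvitz–Thompson estimator (a standard construction which we assume is the relevant setup here): for a random subset $S$ with inclusion probabilities $\pi_i = \Pr[i \in S]$,

$$ \hat{\boldsymbol\mu} = \frac{1}{n} \sum_{i \in S} \frac{\mathbf x_i}{\pi_i}, \qquad \mathbb E[\hat{\boldsymbol\mu}] = \frac{1}{n} \sum_{i=1}^{n} \mathbf x_i, $$

and the variance of $\hat{\boldsymbol\mu}$ is governed by the second-order marginals $\pi_{ij} = \Pr[i, j \in S]$.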
Improved Guarantees and a Multiple-descent Curve for Column Subset Selection and the Nystrom Method (Extended Abstract)
The Column Subset Selection Problem (CSSP) and the Nyström method are among the leading tools for constructing interpretable low-rank approximations of large datasets by selecting a small but …
Nonparametric estimation of continuous DPPs with kernel methods
TLDR
It is shown that a restricted version of this maximum likelihood estimation (MLE) problem falls within the scope of a recent representer theorem for nonnegative functions in an RKHS, which leads to a finite-dimensional problem with strong statistical ties to the original MLE.
Adaptive Sampling for Fast Constrained Maximization of Submodular Function
TLDR
This paper develops an algorithm with poly-logarithmic adaptivity for non-monotone submodular maximization under general side constraints, which yields an exponential speedup in adaptivity over any other known constant-factor approximation algorithm for this problem.
Submodular + Concave
TLDR
This work provides a suite of Frank-Wolfe style algorithms which, depending on the nature of the objective function, provide $1-1/e$, $1/e$, or $1/2$ approximation guarantees; these algorithms are applied to various functions in the above class (DR-submodular + concave) in both constrained and unconstrained settings, and consistently outperform natural baselines.
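For reference, the diminishing-returns (DR) property underlying this class can be stated as follows (standard definition for differentiable functions; notation ours): $g$ is DR-submodular if

$$ \nabla g(\mathbf x) \ \ge\ \nabla g(\mathbf y) \quad \text{whenever } \mathbf x \le \mathbf y \text{ coordinate-wise}, $$

so the objectives covered above take the form $F = g + c$ with $g$ DR-submodular and $c$ concave.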
Improved guarantees and a multiple-descent curve for the Column Subset Selection Problem and the Nyström method
TLDR
Techniques are developed which exploit spectral properties of the data matrix to obtain improved approximation guarantees which go beyond the standard worst-case analysis and reveal an intriguing phenomenon: the approximation factor as a function of $k$ may exhibit multiple peaks and valleys, which is called a multiple-descent curve.
Determinantal Point Processes Implicitly Regularize Semi-parametric Regression Problems
TLDR
A novel projected Nyström approximation is defined and used to derive a bound on the expected risk for the corresponding approximation of semi-parametric regression, which naturally extends similar results obtained for kernel ridge regression.

References

Showing 1–10 of 47 references
Subsampling for Ridge Regression via Regularized Volume Sampling
TLDR
This work proposes a new procedure for selecting the subset of vectors, such that the ridge estimator obtained from that subset offers strong statistical guarantees in terms of the mean squared prediction error over the entire dataset of labeled vectors.
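Concretely, the subset ridge estimator in question is standard ridge regression restricted to the sampled rows (notation ours; the sampling distribution is our understanding of this line of work, not quoted from the abstract):

$$ \hat{\mathbf w}_S = \big(\mathbf X_S^\top \mathbf X_S + \lambda \mathbf I\big)^{-1} \mathbf X_S^\top \mathbf y_S, $$

with regularized volume sampling drawing $S$ with probability proportional to $\det(\mathbf X_S^\top \mathbf X_S + \lambda \mathbf I)$, and the guarantees bounding the mean squared prediction error of $\hat{\mathbf w}_S$ over all $n$ labeled vectors.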
Reverse iterative volume sampling for linear regression
TLDR
It is shown that a good approximate solution can be obtained from just $d$ (the dimension) responses by using a joint sampling technique called volume sampling, and that the least squares solution obtained for the volume-sampled subproblem is an unbiased estimator of the optimal solution based on all $n$ responses.
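The unbiasedness claim has a precise form in the volume sampling literature (stated here with our notation): if $S$ of size $d$ is drawn with probability proportional to the squared volume spanned by the corresponding rows,

$$ \Pr(S) \propto \det(\mathbf X_S)^2, \qquad \text{then} \qquad \mathbb E\big[\mathbf X_S^{-1} \mathbf y_S\big] = \mathbf X^{+} \mathbf y, $$

i.e., the least-squares solution of the size-$d$ subproblem is an unbiased estimator of the least-squares solution over all $n$ responses.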
Leveraged volume sampling for linear regression
TLDR
A new rescaled variant of volume sampling is developed which is based on a "determinantal rejection sampling" technique with potentially broader applications to determinantal point processes, and improves on the best previously known sample size for an unbiased estimator, $k = O(d^2/\epsilon)$.
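A sketch of the rescaling idea (our paraphrase via standard importance-weighted least squares, with $\pi_i$ denoting the probability that row $i$ is sampled): the estimator solves

$$ \hat{\mathbf w} = \operatorname*{argmin}_{\mathbf w} \sum_{i \in S} \frac{1}{\pi_i} \big(\mathbf x_i^\top \mathbf w - y_i\big)^2, $$

so that high-leverage rows, which are sampled more often, are down-weighted exactly enough to keep the estimator unbiased.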
Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression
TLDR
This work motivates a new minimax-optimality criterion for experimental design which can be viewed as an extension of both A-optimal design and sampling for worst-case regression, and develops a new algorithm for a joint sampling distribution called volume sampling.
Unbiased estimates for linear regression via volume sampling
TLDR
The methods are used to obtain an algorithm for volume sampling that is faster than the state of the art, and to obtain bounds for the total loss of the estimated least-squares solution on all labeled columns.
Proportional Volume Sampling and Approximation Algorithms for A-Optimal Design
TLDR
Proportional volume sampling is introduced to obtain improved approximation algorithms for the A-optimal design problem, and the problem is shown to be NP-hard to approximate within a fixed constant when $k=d$.
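The distribution named in the title can be written down directly (following this work's terminology, with $\mu$ a base measure over size-$k$ subsets):

$$ \Pr(S) \propto \mu(S)\, \det\!\big(\mathbf X_S^\top \mathbf X_S\big), $$

which recovers plain volume sampling when $\mu$ is uniform.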
Optimal Bayesian experimental design for models with intractable likelihoods using indirect inference applied to biological process models
TLDR
This paper proposes a novel solution using indirect inference (II), a well-established method in the literature, together with the Markov chain Monte Carlo algorithm of Müller et al. (2004), to handle complex design problems for models with intractable likelihoods on a continuous design space.
Fast determinantal point processes via distortion-free intermediate sampling
TLDR
A new regularized determinantal point process (R-DPP) is introduced, which serves as an intermediate distribution in the sampling procedure by reducing the number of rows from $n$ to $\text{poly}(d)$, and does not distort the probabilities of the target sample.
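Our reading of this abstract, with the exact normalization left to the paper: the regularized DPP assigns each subset a probability proportional to a regularized determinant,

$$ \Pr(S) \propto \det\!\big(\mathbf X_S^\top \mathbf X_S + \mathbf A\big) \quad \text{(up to a size-dependent weighting),} $$

which is the same regularized determinant appearing in the Bayesian criteria above.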
Faster Subset Selection for Matrices and Applications
TLDR
It is shown that the combinatorial problem of finding a low-stretch spanning tree in an undirected graph corresponds to subset selection, and the various implications of this reduction are discussed.
Bayesian optimization for materials design
TLDR
Two Bayesian optimization methods are introduced: expected improvement, for design problems with noise-free evaluations; and the knowledge-gradient method, which generalizes expected improvement and may be used in design problems with noisy evaluations. Both enjoy one-step Bayes-optimality.
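For reference, expected improvement has a well-known closed form under a Gaussian posterior with mean $\mu(x)$ and standard deviation $\sigma(x)$ (a standard formula, not specific to this paper); for minimization with incumbent best value $f^*$,

$$ \mathrm{EI}(x) = \big(f^* - \mu(x)\big)\,\Phi(z) + \sigma(x)\,\phi(z), \qquad z = \frac{f^* - \mu(x)}{\sigma(x)}, $$

where $\Phi$ and $\phi$ are the standard normal CDF and PDF.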