Corpus ID: 222142174

Improving Nonparametric Density Estimation with Tensor Decompositions

@article{Vandermeulen2020ImprovingND,
  title={Improving Nonparametric Density Estimation with Tensor Decompositions},
  author={Robert A. Vandermeulen},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.02425}
}
While nonparametric density estimators often perform well on low-dimensional data, their performance can suffer when applied to higher-dimensional data, owing presumably to the curse of dimensionality. One technique for avoiding this is to assume that there is no dependence between features, i.e., that the data are sampled from a separable density. This allows one to estimate each marginal distribution independently, thereby avoiding the slow rates associated with estimating the full joint density. This is a…
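Below is a minimal sketch of the separable baseline the abstract describes: fit one 1-D kernel density estimate per feature and take the product. It assumes Gaussian KDE marginals via SciPy; the helper names are illustrative, not from the paper.

```python
# Separable-density baseline: assume independent features, estimate each
# marginal with a 1-D KDE, and multiply. Illustrative sketch only.
import numpy as np
from scipy.stats import gaussian_kde

def fit_separable_kde(X):
    """Fit one 1-D Gaussian KDE per column of the (n, d) sample X."""
    return [gaussian_kde(X[:, j]) for j in range(X.shape[1])]

def separable_density(marginals, x):
    """Evaluate the product of the marginal estimates at a point x."""
    return float(np.prod([kde(np.atleast_1d(xj))[0]
                          for kde, xj in zip(marginals, x)]))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))            # 500 samples in 5 dimensions
marginals = fit_separable_kde(X)
print(separable_density(marginals, np.zeros(5)))
```

Each marginal is a one-dimensional estimation problem, so the usual 1-D rates apply; the cost is the often unrealistic independence assumption, which the paper's tensor-decomposition approach is designed to relax.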

Citations

Consistent Estimation of Identifiable Nonparametric Mixture Models from Grouped Observations
TLDR
This work proposes an algorithm that consistently estimates any identifiable mixture model from grouped observations, and the approach is shown to outperform existing methods, especially when mixture components overlap significantly.

References

Showing 1-10 of 39 references
Tensor decompositions for learning latent variable models
TLDR
A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices; this implies a robust and computationally tractable estimation approach for several popular latent variable models.
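The tensor power method analyzed in this reference generalizes matrix power iteration to symmetric third-order tensors. A minimal sketch of the basic (non-robust) iteration, assuming NumPy; the robust variant in the paper adds random restarts and deflation with perturbation guarantees, and the names here are illustrative.

```python
# Basic symmetric tensor power iteration: v <- T(I, v, v) / ||T(I, v, v)||.
import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Approximate one eigenpair of a symmetric 3-way tensor T of shape (k, k, k)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', T, v, v)   # contract T against v twice
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue T(v, v, v)
    return lam, v

# Orthogonally decomposable example: T = 2*e1(x)e1(x)e1 + e2(x)e2(x)e2.
e1, e2 = np.eye(3)[0], np.eye(3)[1]
T = 2 * np.einsum('i,j,k->ijk', e1, e1, e1) + np.einsum('i,j,k->ijk', e2, e2, e2)
print(tensor_power_iteration(T))  # converges to one of the two eigenpairs
```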
Nonparametric Estimation of Multi-View Latent Variable Models
TLDR
A kernel method for learning multi-view latent variable models is proposed, allowing each mixture component to be nonparametric; the latent parameters are then recovered using a robust tensor power method.
Robust Kernel Density Estimation by Scaling and Projection in Hilbert Space
TLDR
This paper presents a robust version of the popular kernel density estimator (KDE), the scaled projection KDE (SPKDE), and demonstrates its robustness with numerical experiments and a consistency result showing that the SPKDE asymptotically recovers the uncontaminated density under sufficient conditions on the contamination.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
TLDR
Simulations show excellent agreement with the high-dimensional scaling of the error predicted by the theory and illustrate its consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections.
Uniform Convergence Rates for Kernel Density Estimation
TLDR
Finite-sample, high-probability density estimation bounds for multivariate KDE are derived under mild density assumptions which hold uniformly in x ∈ R^d and in the bandwidth matrix, and uniform convergence results for local intrinsic dimension estimation are given.
Forest Density Estimation
TLDR
It is proved that finding a maximum weight spanning forest with restricted tree size is NP-hard, and an approximation algorithm is developed for this problem.
Uniform Convergence Rate of the Kernel Density Estimator Adaptive to Intrinsic Volume Dimension
TLDR
A new notion of the intrinsic dimension of the support of a probability distribution, called the volume dimension, is proposed based on the rates of decay of the probability of vanishing Euclidean balls; it is useful for problems in geometric inference and topological data analysis.
Efficient projections onto the l1-ball for learning in high dimensions
TLDR
Efficient algorithms for projecting a vector onto the l1-ball are described, and variants of stochastic gradient projection methods augmented with these efficient projection procedures are shown to outperform interior-point methods, which are considered state-of-the-art optimization techniques.
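The l1-ball projection in this reference has a compact sort-based form: soft-threshold the magnitudes at a level chosen so the result lands exactly on the ball. A minimal sketch of the O(n log n) variant, assuming NumPy; the paper also gives an expected linear-time algorithm, and the names here are illustrative.

```python
# Euclidean projection onto {w : ||w||_1 <= radius} by sorting and
# soft-thresholding. Illustrative O(n log n) sketch.
import numpy as np

def project_l1_ball(v, radius=1.0):
    if np.abs(v).sum() <= radius:
        return v.copy()                        # already inside the ball
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    # Largest index rho with u[rho] > (css[rho] - radius) / (rho + 1).
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([0.8, -0.6, 0.4]), radius=1.0)
print(w, np.abs(w).sum())                      # l1 norm equals the radius
```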
Consistency of Robust Kernel Density Estimators
TLDR
This paper establishes asymptotic L1 consistency of the robust kernel density estimator (RKDE) for a class of losses, shows that the RKDE converges at the same rate in the bandwidth required for the traditional KDE, and presents a novel proof of consistency.
Identifiability of parameters in latent structure models with many observed variables
While hidden class models of various types arise in many statistical applications, it is often difficult to establish the identifiability of their parameters. Focusing on models in which there is…