Corpus ID: 88513233

Semiparametric Gaussian copula classification

@article{Zhao2014SemiparametricGC,
  title={Semiparametric Gaussian copula classification},
  author={Yue Zhao and Marten H. Wegkamp},
  journal={arXiv: Statistics Theory},
  year={2014}
}
This paper studies the binary classification of two distributions with the same Gaussian copula in high dimensions. Under this semiparametric Gaussian copula setting, we derive an accurate semiparametric estimator of the log density ratio, which leads to our empirical decision rule and a bound on its associated excess risk. Our estimation procedure takes advantage of the potential sparsity as well as the low noise condition in the problem, which allows us to achieve faster convergence rate of…
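A recipe common to several of the papers listed below (e.g. CODA and semiparametric sparse discriminant analysis) is to estimate the unknown monotone marginal transformations by empirical normal scores and then apply a Gaussian discriminant rule to the transformed data. A minimal, non-sparse sketch of that idea on toy data (the function names and data are illustrative, not the paper's actual estimator):

```python
import numpy as np
from scipy.stats import norm

def normal_scores(X):
    """Empirical normal-score transform: replace each column by
    Phi^{-1}(rank / (n + 1)), which undoes any monotone marginal distortion."""
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0) + 1.0
    return norm.ppf(ranks / (n + 1))

def lda_rule(Z0, Z1):
    """Fisher linear rule with pooled covariance on the transformed scores."""
    m0, m1 = Z0.mean(axis=0), Z1.mean(axis=0)
    S = (np.cov(Z0, rowvar=False) + np.cov(Z1, rowvar=False)) / 2
    w = np.linalg.solve(S, m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return lambda z: (z @ w + b > 0).astype(int)  # 1 = classify as class 1

# Toy data: latent Gaussians with shifted means, observed through a
# monotone (lognormal) marginal transformation.
rng = np.random.default_rng(1)
n = 500
X0 = np.exp(rng.normal(0.0, 1.0, size=(n, 2)))   # class 0
X1 = np.exp(rng.normal(2.0, 1.0, size=(n, 2)))   # class 1, latent mean shift
Z = normal_scores(np.vstack([X0, X1]))            # pooled marginal estimate
predict = lda_rule(Z[:n], Z[n:])
labels = predict(Z)
accuracy = 0.5 * labels[n:].mean() + 0.5 * (1 - labels[:n]).mean()
```

Because Kendall-type ranks and normal scores are invariant to the unknown monotone transforms, the linear rule on the scores recovers most of the latent Gaussian separation despite the heavily skewed observed margins.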
High-Dimensional Gaussian Copula Regression: Adaptive Estimation and Statistical Inference
We develop adaptive estimation and inference methods for high-dimensional Gaussian copula regression that achieve the same performance without the knowledge of the marginal transformations as that…
A convex optimization approach to high-dimensional sparse quadratic discriminant analysis
In this paper, we study high-dimensional sparse Quadratic Discriminant Analysis (QDA) and aim to establish the optimal convergence rates for the classification error. Minimax lower bounds are…

References

SHOWING 1-10 OF 30 REFERENCES
Semiparametric Sparse Discriminant Analysis in Ultra-High Dimensions
In recent years, a considerable amount of work has been devoted to generalizing linear discriminant analysis to overcome its incompetence for high-dimensional classification (Witten & Tibshirani…
High Dimensional Semiparametric Gaussian Copula Graphical Models
It is proved that the nonparanormal skeptic achieves the optimal parametric rates of convergence for both graph recovery and parameter estimation. This result suggests that nonparanormal graphical models can be used as a safe replacement for the popular Gaussian graphical models, even when the data are truly Gaussian.
Discriminant analysis through a semiparametric model
We consider a semiparametric generalisation of normal-theory discriminant analysis. The semiparametric model assumes that, after unspecified univariate monotone transformations, the class…
Asymptotic normality and optimalities in estimation of large Gaussian graphical models
The Gaussian graphical model, a popular paradigm for studying relationship among variables in a wide range of applications, has attracted great attention in recent years. This paper considers a…
CODA: high dimensional copula discriminant analysis
In high-dimensional settings, it is proved that the sparsity pattern of the discriminant features can be consistently recovered at the parametric rate, and that the expected misclassification error converges to the Bayes risk.
Multivariate Analysis of Nonparametric Estimates of Large Correlation Matrices
We study concentration in spectral norm of nonparametric estimates of correlation matrices. We work within the confines of a Gaussian copula model. Two nonparametric estimators of the correlation…
High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence
  • Pradeep Ravikumar
Given i.i.d. observations of a random vector X ∈ ℝ^p, we study the problem of estimating both its covariance matrix Σ* and its inverse covariance or concentration matrix Θ* = (Σ*)^{-1}. We estimate Θ* by…
Adaptive estimation of the copula correlation matrix for semiparametric elliptical copulas
We study the adaptive estimation of the copula correlation matrix $\Sigma$ for the semiparametric elliptical copula model. In this context, the correlations are connected to Kendall's tau through a sine transformation…
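The sine relation referenced in this abstract is the classical identity for elliptical copulas, Σ_jk = sin(π τ_jk / 2), where τ_jk is Kendall's tau between coordinates j and k. A minimal plug-in sketch of the basic rank estimator (this is the textbook version, not the adaptive procedure the paper develops; the names are ours):

```python
import numpy as np
from scipy.stats import kendalltau

def copula_correlation(X):
    """Estimate the copula correlation matrix via the sine transform of
    pairwise Kendall's tau: Sigma_jk = sin(pi/2 * tau_jk)."""
    n, p = X.shape
    tau = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            t, _ = kendalltau(X[:, j], X[:, k])
            tau[j, k] = tau[k, j] = t
    return np.sin(np.pi / 2 * tau)

# Gaussian data with latent correlation 0.6, observed through monotone
# marginal transforms (which leave Kendall's tau unchanged).
rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
X = np.column_stack([np.exp(Z[:, 0]), Z[:, 1] ** 3])
Sigma_hat = copula_correlation(X)
```

Because Kendall's tau is rank-based, the estimate is invariant to the unknown marginal transformations, which is what makes it natural in the semiparametric copula setting.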
Optimal Feature Selection in High-Dimensional Discriminant Analysis
  • M. Kolar, Han Liu
  • IEEE Transactions on Information Theory
  • 2015
This paper establishes rates of convergence that are significantly faster than the best known results and admit an optimal scaling of the sample size n, dimensionality p, and sparsity level s in the high-dimensional setting.
Fast learning rates for plug-in classifiers
It has recently been shown that, under the margin (or low noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, that is, rates faster than n^{-1/2}.