Sure independence screening for ultrahigh dimensional feature space
A useful variant of the Davis–Kahan theorem for statisticians
The Davis–Kahan theorem is used in the analysis of many statistical procedures to bound the distance between subspaces spanned by population eigenvectors and their sample versions. It relies on an…
Ultrahigh Dimensional Feature Selection: Beyond The Linear Model
- Jianqing Fan, R. Samworth, Yichao Wu
- Computer Science, Journal of Machine Learning Research
- 1 December 2009
This paper extends ISIS, without explicit definition of residuals, to a general pseudo-likelihood framework, which includes generalized linear models as a special case, and improves ISIS by allowing feature deletion in the iterative process.
Variable selection with error control: another look at stability selection
Summary. Stability selection was recently introduced by Meinshausen and Bühlmann as a very general technique designed to improve the performance of a variable selection algorithm. It is based on…
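The core idea of stability selection, as described in the summary above, can be sketched generically: run a base variable-selection procedure on many random half-sized subsamples and retain the variables selected sufficiently often. This is a minimal illustration, not the paper's method; the function names, the simple correlation-based base selector in the usage example, and the default threshold are all illustrative assumptions.

```python
import numpy as np

def stability_selection(X, y, base_select, B=50, threshold=0.6, rng=None):
    """Generic stability-selection wrapper (illustrative sketch).

    Runs `base_select(X_sub, y_sub)` -- any procedure returning a set of
    selected column indices -- on B random subsamples of size n // 2, and
    keeps the variables whose selection frequency is at least `threshold`.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)
        for j in base_select(X[idx], y[idx]):
            counts[j] += 1
    freq = counts / B
    return np.nonzero(freq >= threshold)[0], freq
```

A usage example with a toy base selector that picks the single column most correlated with the response: on data where `y` depends only on column 0, that column is selected on essentially every subsample, so it alone survives the frequency threshold.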
Maximum likelihood estimation of a multi‐dimensional log‐concave density
Although the existence proof is non-constructive, computation of the estimator can be reformulated as a non-differentiable convex optimization problem, so techniques from computational geometry can be combined with Shor's r-algorithm to produce a sequence that converges to the estimator.
High dimensional change point estimation via sparse projection
A two‐stage procedure called inspect is proposed for estimation of the change points, arguing that a good projection direction can be obtained as the leading left singular vector of the matrix that solves a convex optimization problem derived from the cumulative sum transformation of the time series.
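The cumulative sum transformation and projection step described above can be sketched as follows. This is a simplified illustration, not the inspect procedure itself: the function names are assumptions, and the convex optimization that inspect solves to obtain a sparse direction is replaced here by a plain SVD of the CUSUM matrix.

```python
import numpy as np

def cusum_transform(X):
    """CUSUM transformation of a p x n data matrix X.

    Entry (j, t-1) measures the evidence for a mean change in
    coordinate j between times t and t+1, using the standard
    sqrt(t(n-t)/n)-scaled difference of left and right means.
    """
    p, n = X.shape
    T = np.empty((p, n - 1))
    cs = np.cumsum(X, axis=1)
    total = cs[:, -1]
    for t in range(1, n):
        scale = np.sqrt(t * (n - t) / n)
        T[:, t - 1] = scale * (cs[:, t - 1] / t - (total - cs[:, t - 1]) / (n - t))
    return T

def projection_direction(X):
    """Leading left singular vector of the CUSUM matrix.

    Simplification: inspect derives a sparse direction from a convex
    relaxation; here the unpenalised singular vector is used instead.
    """
    U, _, _ = np.linalg.svd(cusum_transform(X))
    return U[:, 0]
```

On noiseless data with a mean shift in a single coordinate, the returned direction concentrates on that coordinate, which is the behaviour the projection step is designed to exploit.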
Efficient multivariate entropy estimation via $k$-nearest neighbour distances
This paper seeks entropy estimators that are efficient, achieving the local asymptotic minimax lower bound with respect to squared error loss, and proposes new weighted averages of the estimators originally proposed by Kozachenko and Leonenko (1987).
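The unweighted Kozachenko–Leonenko estimator that the paper builds on can be sketched directly: it turns each point's distance to its k-th nearest neighbour into a local density estimate and averages the resulting log-density corrections. This is a minimal sketch of the classical estimator, not the paper's weighted version; the function names are illustrative, and the digamma function is evaluated at integer arguments via the harmonic-sum identity to avoid external dependencies.

```python
import math
import numpy as np

EULER_GAMMA = 0.5772156649015329

def digamma_int(m):
    """digamma at a positive integer: psi(m) = -gamma + sum_{j<m} 1/j."""
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

def kl_entropy(X, k=1):
    """Kozachenko-Leonenko k-nearest-neighbour entropy estimate, in nats.

    X is an (n, d) array of i.i.d. samples; uses the brute-force
    pairwise distance matrix, so it is only suitable for small n.
    """
    n, d = X.shape
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    rho = np.sort(dists, axis=1)[:, k - 1]  # k-th NN distance per point
    log_vd = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1)
    return (d * np.mean(np.log(rho)) + log_vd
            + digamma_int(n) - digamma_int(k))
```

The paper's weighted estimators replace the single choice of k by a weighted average over several values of k, chosen so that the leading bias terms cancel in higher dimensions.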
Theoretical properties of the log-concave maximum likelihood estimator of a multidimensional density
We present theoretical properties of the log-concave maximum likelihood estimator of a density based on an independent and identically distributed sample in R^d. Our study covers both the case where…
Optimal weighted nearest neighbour classifiers
- R. Samworth
- Computer Science, Mathematics
- 30 January 2011
An asymptotic expansion for the excess risk (regret) of a weighted nearest-neighbour classifier is derived, and it is argued that improvements in the rate of convergence are possible under stronger smoothness assumptions, provided negative weights are allowed.
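A weighted nearest-neighbour classifier of the kind analysed above can be sketched in a few lines: the i-th nearest training point's label receives weight w_i, and the weighted vote decides the class. This is an illustrative sketch with assumed function names, not the paper's optimal weighting scheme; note that the framework permits some weights to be negative.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, weights):
    """Classify a single point x by a weighted nearest-neighbour vote.

    `weights` is a length-k vector (w_1, ..., w_k) summing to 1, applied
    to the labels of the k nearest training points in order of distance.
    Labels are assumed to be 0/1; predicts 1 when the weighted vote for
    class 1 reaches 1/2. Uniform weights recover the classical k-NN rule.
    """
    k = len(weights)
    d = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(d)[:k]  # indices of the k nearest neighbours
    score = sum(w for w, i in zip(weights, order) if y_train[i] == 1)
    return 1 if score >= 0.5 else 0
```

With uniform weights this is ordinary k-NN; the paper's contribution is to choose non-uniform (possibly negative) weights that minimise the leading terms of the regret expansion.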
APPROXIMATION BY LOG-CONCAVE DISTRIBUTIONS, WITH APPLICATIONS TO REGRESSION
It is shown that an approximation of arbitrary distributions P on d-dimensional space by distributions with log-concave density exists if and only if P has finite first moments and is not supported by some hyperplane, and that this approximation depends continuously on P with respect to the Mallows distance D_1.