A new scope of penalized empirical likelihood with high-dimensional estimating equations

@article{Chang2018ANS,
  title={A new scope of penalized empirical likelihood with high-dimensional estimating equations},
  author={Jinyuan Chang and Cheng Yong Tang and Tong Tong Wu},
  journal={The Annals of Statistics},
  year={2018}
}
Statistical methods with empirical likelihood (EL) are appealing and effective, especially in conjunction with estimating equations, through which useful data information can be adaptively and flexibly incorporated. It is also known in the literature that EL approaches encounter difficulties when dealing with problems having high-dimensional model parameters and estimating equations. To overcome these challenges, we begin our study with a careful investigation of high-dimensional EL from a new…
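As a rough illustration only (not the authors' actual algorithm or penalty), the penalized-EL criterion alluded to in the abstract can be sketched in a few lines: the inner maximization over Owen's Lagrange multiplier yields the profile log-EL ratio at a candidate parameter, and a penalty on the parameter is added on the outside. The sketch below uses a plain ℓ1 penalty and a generic numerical optimizer; all names (`neg_log_el_ratio`, `penalized_el`, `tau`) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_el_ratio(theta, x, g):
    # Profile (negative) log-EL ratio at theta via the convex dual:
    # max over the Lagrange multiplier lam of sum_i log(1 + lam' g(x_i, theta)).
    G = np.array([g(xi, theta) for xi in x])   # n x r matrix of estimating functions

    def neg_dual(lam):
        w = 1.0 + G @ lam
        if np.any(w <= 1e-10):                 # keep 1 + lam'g_i inside the log's domain
            return np.inf
        return -np.sum(np.log(w))              # concave dual, so minimize its negative

    r = G.shape[1]
    res = minimize(neg_dual, np.zeros(r), method="Nelder-Mead")
    return -res.fun                            # = max_lam sum_i log(1 + lam' g_i)

def penalized_el(theta, x, g, tau):
    # Penalized EL criterion: (negative) log-EL ratio plus an l1 penalty on theta.
    return neg_log_el_ratio(theta, x, g) + len(x) * tau * np.sum(np.abs(theta))

# Toy use: EL for a univariate mean, with g(x, theta) = x - theta.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, size=200)
g = lambda xi, theta: np.array([xi - theta[0]])
grid = np.linspace(0.6, 1.4, 41)
vals = [penalized_el(np.array([t]), x, g, tau=0.01) for t in grid]
print("penalized-EL minimizer on grid:", round(grid[int(np.argmin(vals))], 3))
```

The paper itself studies folded-concave penalties and penalization of both the parameter and the Lagrange multiplier in high dimensions; the ℓ1 penalty and grid search here stand in purely for readability.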

Citations

A Robust Consistent Information Criterion for Model Selection based on Empirical Likelihood
Proposes a robust, consistent, and data-driven model selection criterion based on the empirical likelihood function; the criterion avoids potential computational convergence issues and applies broadly, e.g. to generalized linear models, generalized estimating equations, and penalized regressions.
Penalized Jackknife Empirical Likelihood in High Dimensions
Proposes a penalized jackknife empirical likelihood (JEL) method that preserves the main advantages of JEL and yields reliable variable selection for estimating equations with U-statistic structure in the high-dimensional setting; the asymptotic theory and an oracle property are established.
Penalized generalized empirical likelihood with a diverging number of general estimating equations for censored data
This article considers simultaneous variable selection and parameter estimation, as well as hypothesis testing, in censored regression models with unspecified parametric likelihood. For the problem, we…
Tuning parameter selection for penalised empirical likelihood with a diverging number of parameters
Proposes a generalised information criterion (GIC) for the penalised empirical likelihood in the linear regression case and shows that the tuning parameter selected by the GIC yields the true model consistently, even when the number of predictors diverges to infinity with the sample size.
High-dimensional statistical inferences with over-identification: confidence set estimation and specification test
Constructs a new set of estimating functions such that the impact of estimating the nuisance parameters becomes asymptotically negligible, and proposes a test statistic defined as the maximum of the marginal EL ratios calculated from the individual components of the high-dimensional moment conditions.
Regularization Parameter Selection for Penalized Empirical Likelihood Estimator
Penalized estimation is a useful technique for variable selection when the number of candidate variables is large. A crucial issue in penalized estimation is the selection of the regularization…
Penalized empirical likelihood for partially linear errors-in-variables models
In this paper, we study penalized empirical likelihood for parameter estimation and variable selection in partially linear models with measurement errors in possibly all the variables. By using…
On the Convergence Rate of the SCAD-Penalized Empirical Likelihood Estimator
Shows that the SCAD-penalized empirical likelihood estimator is consistent under a reasonable condition on the regularization parameter, with a consistency rate better than the existing ones.

References

Penalized high-dimensional empirical likelihood
We propose penalized empirical likelihood for parameter estimation and variable selection for problems with diverging numbers of parameters. Our results are demonstrated for estimating the mean…
Econometric Estimation with High-Dimensional Moment Equalities
Shrinkage tuning parameter selection with a diverging number of parameters
Contemporary statistical research frequently deals with problems involving a diverging number of parameters. For those problems, various shrinkage methods (e.g. the lasso and smoothly…
Nested coordinate descent algorithms for empirical likelihood
Tackles the computational problems of EL, which practitioners consider difficult, by introducing a nested coordinate descent algorithm and a modified version of it, and shows that the nested coordinate descent algorithms can be conveniently and stably applied to general EL problems.
Coordinate descent algorithms for lasso penalized regression
Tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty, and proves that a greedy form of the ℓ2 algorithm converges to the minimum value of the objective function.
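For orientation on the algorithm class this reference covers, here is a minimal sketch of cyclic coordinate descent for lasso-penalized least squares with the standard soft-thresholding update. It is illustrative only, not the cited paper's exact greedy variant, and the function names are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    # Cyclic coordinate descent for (1/2n)||y - X beta||^2 + lam * ||beta||_1.
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - X @ beta
    col_sq = (X ** 2).sum(axis=0) / n              # (1/n) ||x_j||^2 per column
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]             # remove j's contribution
            rho = X[:, j] @ resid / n              # partial residual correlation
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
            resid -= X[:, j] * beta[j]             # restore with updated beta_j
    return beta

# Toy check on random standardized data:
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5)); X -= X.mean(0); X /= X.std(0)
y = X[:, 0] * 2.0 + rng.normal(size=50)
print(lasso_cd(X, y, lam=0.1).round(2))
```

With standardized columns, col_sq[j] ≈ 1 and the inner step reduces to the familiar update beta_j ← S(rho_j, lam).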
Empirical likelihood on the full parameter space
We extend the empirical likelihood of Owen [Ann. Statist. 18 (1990) 90-120] by partitioning its domain into the collection of its contours and mapping the contours through a continuous sequence of
On Model Selection Consistency of Lasso
Proves that a single condition, called the irrepresentable condition, is almost necessary and sufficient for the lasso to select the true model, both in the classical fixed-p setting and in the large-p setting as the sample size n grows.
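For reference, the strong irrepresentable condition can be stated in its standard form, with C = XᵀX/n partitioned according to the true support S (reproduced here for convenience, not quoted from the paper):

```latex
\exists\,\eta > 0:\qquad
\bigl\| C_{S^{c} S}\,\bigl(C_{S S}\bigr)^{-1} \operatorname{sign}(\beta_S) \bigr\|_{\infty} \;\le\; 1 - \eta .
```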
Nearly unbiased variable selection under minimax concave penalty
Proves that at a universal penalty level the MC+ matches the signs of the unknowns with high probability, and thus achieves correct selection, without assuming the strong irrepresentable condition required by the lasso.
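Similarly for reference, the minimax concave penalty (MCP) underlying MC+ has the standard closed form, for penalty level λ and concavity parameter γ > 1 (standard definition, stated here for convenience):

```latex
P_{\lambda,\gamma}(t)
= \lambda \int_{0}^{|t|} \Bigl(1 - \frac{x}{\gamma\lambda}\Bigr)_{+} dx
= \begin{cases}
\lambda |t| - \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda, \\[4pt]
\dfrac{\gamma\lambda^{2}}{2}, & |t| > \gamma\lambda .
\end{cases}
```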
Extended empirical likelihood for estimating equations
We derive an extended empirical likelihood for parameters defined by estimating equations which generalizes the original empirical likelihood to the full parameter space. Under mild conditions, the…
Self-normalized Cramér-type large deviations for independent random variables
Let X₁, X₂, … be independent random variables with zero means and finite variances. It is well known that a finite exponential moment assumption is necessary for a Cramér-type large deviation…