# A new scope of penalized empirical likelihood with high-dimensional estimating equations

@article{Chang2018ANS, title={A new scope of penalized empirical likelihood with high-dimensional estimating equations}, author={Jinyuan Chang and Cheng Yong Tang and Tong Tong Wu}, journal={The Annals of Statistics}, year={2018} }

Statistical methods with empirical likelihood (EL) are appealing and effective, especially in conjunction with estimating equations, through which useful data information can be adaptively and flexibly incorporated. It is also known in the literature that EL approaches encounter difficulties when dealing with problems having high-dimensional model parameters and estimating equations. To overcome these challenges, we begin our study with a careful investigation of high-dimensional EL from a new…
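For context, the profile empirical likelihood for a parameter $\theta$ defined by estimating equations, together with a generic penalized variant of the kind studied in this line of work, can be sketched as follows (a standard textbook formulation; the paper's exact penalty structure may differ):

```latex
% Profile EL for a parameter theta satisfying E[g(X; \theta)] = 0:
L(\theta) = \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, g(X_i; \theta) = 0 \Big\}.

% A penalized EL estimator adds a sparsity penalty P_\tau on the components of theta:
\hat{\theta} = \arg\max_{\theta} \Big\{ \log L(\theta) - n \sum_{j=1}^{p} P_\tau(|\theta_j|) \Big\}.
```

The first display is Owen-style EL profiled over the multinomial weights $p_i$; the second adds a folded-concave or $\ell_1$-type penalty $P_\tau$ to induce sparsity when the dimension of $\theta$ is large.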

## 29 Citations

A Robust Consistent Information Criterion for Model Selection based on Empirical Likelihood

- Computer Science
- 2020

A robust, consistent, and data-driven model selection criterion based on the empirical likelihood function is proposed; it avoids potential computational convergence issues and admits versatile applications, such as generalized linear models, generalized estimating equations, and penalized regressions.

Penalized Jackknife Empirical Likelihood in High Dimensions

- Mathematics, Business
- Statistica Sinica, 2021

A penalized JEL method is proposed that preserves the main advantages of JEL and leads to reliable variable selection based on estimating equations with U-statistic structure in the high-dimensional setting; the asymptotic theory and oracle property of the penalized JEL are established.

Penalized generalized empirical likelihood with a diverging number of general estimating equations for censored data

- Mathematics
- 2020

This article considers simultaneous variable selection and parameter estimation as well as hypothesis testing in censored regression models with unspecified parametric likelihood. For the problem, we…

Tuning parameter selection for penalised empirical likelihood with a diverging number of parameters

- Computer Science, Mathematics
- 2020

A generalised information criterion (GIC) for the penalised empirical likelihood in the linear regression case is proposed and it is shown that the tuning parameter selected by the GIC yields the true model consistently even when the number of predictors diverges to infinity with the sample size.

Penalized generalized empirical likelihood in high-dimensional weakly dependent data

- Mathematics, Computer Science
- J. Multivar. Anal., 2019

High-dimensional statistical inferences with over-identification: confidence set estimation and specification test

- Computer Science, Mathematics
- 2018

This paper proposes constructing a new set of estimating functions such that the impact of estimating the nuisance parameters becomes asymptotically negligible, and proposes a test statistic defined as the maximum of the marginal EL ratios calculated from the individual components of the high-dimensional moment conditions.

Regularization Parameter Selection for Penalized Empirical Likelihood Estimator

- Mathematics
- 2018

Penalized estimation is a useful technique for variable selection when the number of candidate variables is large. A crucial issue in penalized estimation is the selection of the regularization…

Penalized empirical likelihood for partially linear errors-in-variables models

- Mathematics
- 2020

In this paper, we study penalized empirical likelihood for parameter estimation and variable selection in partially linear models with measurement errors in possibly all the variables. By using…

Regularization parameter selection for penalized empirical likelihood estimator

- Mathematics
- Economics Letters, 2019

On the Convergence Rate of the SCAD-Penalized Empirical Likelihood Estimator

- Mathematics, Computer Science
- 2017

The main result is that the SCAD-penalized empirical likelihood estimator is consistent under a reasonable condition on the regularization parameter, with a consistency rate that improves on existing results.

## References

Showing 1-10 of 14 references

Penalized high-dimensional empirical likelihood

- Mathematics
- 2010

We propose penalized empirical likelihood for parameter estimation and variable selection for problems with diverging numbers of parameters. Our results are demonstrated for estimating the mean…

Econometric Estimation with High-Dimensional Moment Equalities

- Mathematics, Economics
- 2016

Shrinkage tuning parameter selection with a diverging number of parameters

- Mathematics
- 2008

Summary. Contemporary statistical research frequently deals with problems involving a diverging number of parameters. For those problems, various shrinkage methods (e.g. the lasso and smoothly…

Nested coordinate descent algorithms for empirical likelihood

- Computer Science
- 2014

This paper tackles the computational problems of EL, which practitioners consider difficult, by introducing a nested coordinate descent algorithm and a modified version of it, and shows that these nested coordinate descent algorithms can be applied conveniently and stably to general EL problems.

Coordinate descent algorithms for lasso penalized regression

- Computer Science
- 2008

This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty and proves that a greedy form of the ℓ2 algorithm converges to the minimum value of the objective function.
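The core idea of coordinate descent for the lasso can be illustrated with the standard cyclic soft-thresholding update. The sketch below is a minimal generic implementation of that textbook update, not a reproduction of the two algorithms tested in the cited paper:

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/(2n)) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature x_j'x_j / n
    resid = y - X @ beta
    for _ in range(n_iter):
        for j in range(p):
            # rho_j = (1/n) x_j'(residual with coordinate j added back)
            rho = X[:, j] @ resid / n + col_sq[j] * beta[j]
            new_bj = soft_threshold(rho, lam) / col_sq[j]
            resid += X[:, j] * (beta[j] - new_bj)  # keep residual in sync
            beta[j] = new_bj
    return beta
```

With an orthogonal design the coordinates decouple and a single pass recovers the exact lasso solution; in general, the cyclic passes converge because each update exactly minimizes the objective in one coordinate.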

Empirical likelihood on the full parameter space

- Mathematics
- 2013

We extend the empirical likelihood of Owen [Ann. Statist. 18 (1990) 90-120] by partitioning its domain into the collection of its contours and mapping the contours through a continuous sequence of…

On Model Selection Consistency of Lasso

- Computer Science
- J. Mach. Learn. Res., 2006

A single condition, called the Irrepresentable Condition, is proved to be almost necessary and sufficient for the lasso to select the true model, both in the classical fixed-p setting and in the large-p setting as the sample size n grows.

Nearly unbiased variable selection under minimax concave penalty

- Computer Science, Mathematics
- 2010

It is proved that at a universal penalty level, the MC+ has a high probability of matching the signs of the unknowns, and thus of correct selection, without assuming the strong irrepresentable condition required by the lasso.

Extended empirical likelihood for estimating equations

- Mathematics, Economics
- 2014

We derive an extended empirical likelihood for parameters defined by estimating equations which generalizes the original empirical likelihood to the full parameter space. Under mild conditions, the…

Self-normalized Cramér-type large deviations for independent random variables

- Mathematics
- 2003

Let X₁, X₂, … be independent random variables with zero means and finite variances. It is well known that a finite exponential moment assumption is necessary for a Cramér-type large deviation…