Corpus ID: 240420230

Flexible Regularized Estimating Equations: Some New Perspectives

@inproceedings{Yang2021FlexibleRE,
  title={Flexible Regularized Estimating Equations: Some New Perspectives},
  author={Yi Yang and Yuwen Gu and Yue Zhao and Jun Fan},
  year={2021}
}
  • Yi Yang, Yuwen Gu, Yue Zhao, Jun Fan
  • Published 21 October 2021
  • Mathematics
In this note, we make some observations about the equivalences between regularized estimating equations, fixed-point problems and variational inequalities. A summary of our findings is given below.

• A regularized estimating equation is equivalent to a fixed-point problem, specified by the proximal operator of the corresponding penalty.
• A regularized estimating equation is equivalent to a generalized variational inequality.
• Both equivalences extend to any estimating equations and any…
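To make the two equivalences concrete, here is a minimal sketch in our own notation, not necessarily the paper's: assume an estimating function g and, for simplicity, a convex penalty P with subdifferential ∂P (the summary above advertises extensions beyond this setting).

% Sketch under the convexity assumption above; the symbols g, P, and
% \beta^\star are illustrative choices, not taken from the paper.
\[
  0 \in g(\beta^\star) + \partial P(\beta^\star)
  \iff
  \beta^\star = \operatorname{prox}_{\gamma P}\bigl(\beta^\star - \gamma\, g(\beta^\star)\bigr)
  \ \text{ for any } \gamma > 0,
  \quad\text{where }
  \operatorname{prox}_{\gamma P}(z) = \arg\min_{\beta}\Bigl\{\tfrac{1}{2}\|\beta - z\|_2^2 + \gamma P(\beta)\Bigr\},
\]
\[
  0 \in g(\beta^\star) + \partial P(\beta^\star)
  \iff
  \langle g(\beta^\star),\, \beta - \beta^\star\rangle + P(\beta) - P(\beta^\star) \ge 0
  \ \text{ for all } \beta.
\]

For convex P these are the standard optimality characterizations from monotone operator theory (see the Ryu–Boyd primer among the references below).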
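As a runnable toy illustration of the fixed-point view (a sketch under the assumptions above, not the paper's algorithm): with the least-squares estimating function g(β) = Xᵀ(Xβ − y)/n and the lasso penalty λ‖β‖₁, the proximal operator is soft thresholding, and the fixed-point iteration becomes the classical ISTA scheme. The helper names soft_threshold and prox_fixed_point are ours.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_fixed_point(X, y, lam, gamma=None, n_iter=500):
    # Iterate beta <- prox_{gamma*lam*||.||_1}(beta - gamma * g(beta))
    # for the estimating function g(b) = X'(Xb - y)/n (this is ISTA).
    n, p = X.shape
    if gamma is None:
        # gamma = 1/L, where L = ||X||_2^2 / n is the Lipschitz constant of g.
        gamma = n / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - gamma * g, gamma * lam)
    return beta

# Tiny usage example with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(np.round(prox_fixed_point(X, y, lam=0.1), 2))

In the convex case, a fixed point of this iteration solves the lasso-penalized estimating equation exactly; the note's summary indicates that the same fixed-point reading extends to more general estimating equations and penalties.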

References

Showing 1–10 of 21 references
Golden ratio algorithms for variational inequalities
A fully explicit algorithm for monotone variational inequalities that uses variable stepsizes computed from two previous iterates to approximate the local Lipschitz constant, without running a linesearch.
A Primer on Monotone Operator Methods
This tutorial paper presents the basic notation and results of monotone operators and operator splitting methods, with a focus on convex optimization. A very wide variety of algorithms, ranging from…
Proximal Algorithms
The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally…
A Sparse-Group Lasso
For high-dimensional supervised learning problems, often using problem-specific assumptions can lead to greater accuracy. For problems with grouped covariates, which are believed to have sparse…
Penalized estimating equations.
It is demonstrated, with a simulation and an application, that the penalized GEE potentially improves the performance of the GEE estimator, and enjoys the same properties as linear penalty models.
Regression Shrinkage and Selection via the Lasso
SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant.
Mean value methods in iteration
Due largely to the works of Cesaro, Fejer, and Toeplitz, mean value methods have become famous in the summation of divergent series. The purpose of this paper is to show that the same methods can…
Regularization and variable selection via the elastic net
Summary. We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation.
Penalized Estimating Functions and Variable Selection in Semiparametric Regression Models
A general strategy for variable selection in semiparametric regression models is proposed by penalizing appropriate estimating functions; a general asymptotic theory for penalized estimating functions is established, and suitable numerical algorithms are presented to implement the proposed estimators.