# Penalized polytomous ordinal logistic regression using cumulative logits. Application to network inference of zero-inflated variables

```bibtex
@article{Deveau2018PenalizedPO,
  title   = {Penalized polytomous ordinal logistic regression using cumulative logits. Application to network inference of zero-inflated variables},
  author  = {Aurélie Deveau and Anne Gégout-Petit and Clémence Karmann},
  journal = {arXiv: Applications},
  year    = {2018}
}
```

We consider the problem of variable selection when the response is ordinal, that is, an ordered categorical variable. In particular, we are interested in selecting the quantitative explanatory variables linked with the ordinal response and in determining which predictors are relevant. In this framework, we use the polytomous ordinal logistic regression model with cumulative logits, which generalizes logistic regression. We then introduce the Lasso estimation of the…
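The model described above can be illustrated with a small sketch. This is not the paper's implementation: the L1 penalty is handled here with a simple smooth surrogate so a generic optimizer applies, and the data, cut-points, and penalty level `lam` are illustrative assumptions.

```python
# Sketch of an L1-penalized cumulative-logit (proportional odds) model.
# NOT the paper's code: the |beta| term is smoothed so L-BFGS-B applies,
# and all data-generating choices below are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n, p = 300, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [1.5, -1.0]              # only two relevant predictors
z = X @ beta_true + rng.logistic(size=n)  # latent continuous score
y = np.digitize(z, [-1.0, 1.0])           # ordinal response in {0, 1, 2}

def neg_log_lik(params, lam=5.0, eps=1e-8):
    beta, t0, s = params[:p], params[p], params[p + 1]
    thetas = np.array([t0, t0 + np.exp(s)])             # ordered cut-points
    cum = expit(thetas[None, :] - (X @ beta)[:, None])  # P(Y <= k | x)
    probs = np.column_stack(
        [cum[:, 0], cum[:, 1] - cum[:, 0], 1.0 - cum[:, 1]]
    )
    ll = np.log(np.clip(probs[np.arange(n), y], 1e-12, None)).sum()
    # smooth surrogate of the Lasso penalty lam * ||beta||_1
    return -ll + lam * np.sqrt(beta ** 2 + eps).sum()

x0 = np.zeros(p + 2)
fit = minimize(neg_log_lik, x0, method="L-BFGS-B")
beta_hat = fit.x[:p]
print(np.round(beta_hat, 2))  # relevant coefficients stay large, rest shrink
```

In practice one would use proximal or coordinate-descent methods to obtain exact zeros; the smoothed penalty only shrinks irrelevant coefficients toward zero.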

## References

*Showing 1–10 of 30 references.*

### Genome-wide association analysis by lasso penalized logistic regression

- Computer Science, Bioinformatics
- 2009

The performance of lasso penalized logistic regression is evaluated for case-control disease gene mapping with a large number of SNP (single nucleotide polymorphism) predictors; the coeliac disease results replicate previous SNP findings and shed light on possible interactions among the SNPs.

### The analysis of ordered categorical data: An overview and a survey of recent developments

- Mathematics
- 2005

This article reviews methodologies used for analyzing ordered categorical (ordinal) response variables. We begin by surveying models for data with a single ordinal response variable. We also survey…

### Regression Models for Ordinal Data

- Mathematics
- 1980

A general class of regression models for ordinal data is developed and discussed. These models utilize the ordinal nature of the data by describing various modes of stochastic ordering and…

### Regression Shrinkage and Selection via the Lasso

- Computer Science
- 1996

A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
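The lasso objective described above, minimizing (1/2n)‖y − Xb‖² + λ‖b‖₁, can be sketched with plain coordinate descent and soft-thresholding; the data and penalty value below are illustrative choices, not from the referenced paper.

```python
# Minimal coordinate-descent sketch of the lasso:
#   minimize (1/(2n)) * ||y - Xb||^2 + lam * ||b||_1
# Data and lam are illustrative.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual without j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

rng = np.random.default_rng(1)
n, p = 500, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=n)   # only feature 0 matters
b = lasso_cd(X, y, lam=0.3)
print(np.round(b, 2))  # large first coefficient, the rest thresholded to ~0
```

The soft-thresholding step is what produces exact zeros, which is why the lasso performs variable selection rather than mere shrinkage.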

### HIGH DIMENSIONAL VARIABLE SELECTION.

- Economics, Annals of Statistics
- 2009

This paper looks at the error rates and power of some multi-stage regression methods and considers three screening methods: the lasso, marginal regression, and forward stepwise regression.

### A knockoff filter for high-dimensional selective inference

- Computer Science, Mathematics, The Annals of Statistics
- 2019

It is proved that the high-dimensional knockoff procedure 'discovers' important variables as well as the directions (signs) of their effects, in such a way that the expected proportion of wrongly chosen signs is below the user-specified level.

### Ordinal Graphical Models: A Tale of Two Approaches

- Mathematics, Computer Science, ICML
- 2017

The theoretical developments yield two corresponding classes of estimators that are not only computationally efficient but also carry strong statistical guarantees.

### High-dimensional graphs and variable selection with the Lasso

- Computer Science
- 2006

It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs; each neighborhood estimate reduces to variable selection in a Gaussian linear model.
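Neighborhood selection can be sketched as follows: regress each node on all other nodes with the lasso and read edges off the nonzero coefficients. The three-node chain graph, the `alpha` value, and the symmetrization threshold below are illustrative assumptions.

```python
# Hedged sketch of lasso neighborhood selection on a chain x0 -> x1 -> x2.
# alpha and the edge threshold are illustrative, not tuned values.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n = 2000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + 0.6 * rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

p = X.shape[1]
coefs = np.zeros((p, p))
for j in range(p):
    others = [k for k in range(p) if k != j]
    fit = Lasso(alpha=0.05).fit(X[:, others], X[:, j])
    coefs[j, others] = fit.coef_

# symmetrize with the OR rule: an edge if either regression selects it
adj = (np.abs(coefs) > 0.05) | (np.abs(coefs.T) > 0.05)
print(adj.astype(int))
```

The key point is that x0 and x2 are marginally correlated but conditionally independent given x1, so the per-node regressions recover only the edges (0,1) and (1,2).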

### Regression, Discrimination and Measurement Models for Ordered Categorical Variables

- Mathematics
- 1981

Regression models for the analysis of ordered categorical variables are discussed with particular reference to the logistic model. Maximum likelihood estimation procedures are established for three…

### Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models

- Computer Science, NIPS
- 2010

The method has a clear interpretation: choose the least amount of regularization that simultaneously makes the graph sparse and replicable under random sampling, a criterion that requires essentially no conditions.
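A StARS-style selection rule can be sketched by scanning a regularization grid from strong to weak, measuring how much the estimated edge set varies across subsamples, and keeping the least regularization whose instability stays below a threshold. The grid, the instability threshold, and the simple lasso-neighborhood graph estimator below are all illustrative assumptions (the published procedure also monotonizes the instability path).

```python
# Illustrative StARS-style stability selection over a lasso graph estimator.
# Grid, threshold, subsample scheme, and estimator are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def graph_edges(X, alpha):
    """Adjacency (OR rule) from per-node lasso regressions."""
    n, p = X.shape
    C = np.zeros((p, p))
    for j in range(p):
        others = [k for k in range(p) if k != j]
        C[j, others] = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
    return (np.abs(C) > 1e-8) | (np.abs(C.T) > 1e-8)

rng = np.random.default_rng(3)
n, p = 600, 4
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + 0.6 * rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
x3 = rng.normal(size=n)                  # isolated node
X = np.column_stack([x0, x1, x2, x3])

alphas = [0.5, 0.2, 0.1, 0.05, 0.02]     # strong -> weak regularization
b, B, beta_thr = n // 2, 20, 0.05        # subsample size, count, threshold

chosen = alphas[0]
for alpha in alphas:                     # scan toward less regularization
    freq = np.zeros((p, p))
    for _ in range(B):
        idx = rng.choice(n, size=b, replace=False)
        freq += graph_edges(X[idx], alpha)
    theta = freq / B                     # edge selection frequencies
    instability = (2 * theta * (1 - theta)).sum() / (p * (p - 1))
    if instability <= beta_thr:
        chosen = alpha                   # still stable: accept weaker penalty
    else:
        break
print(chosen)
```

Each edge's contribution 2θ(1−θ) is largest when the edge is selected in about half the subsamples, i.e. when the graph estimate is least replicable.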