
Neural Networks for Partially Linear Quantile Regression

@article{Zhong2021NeuralNF,
  title={Neural Networks for Partially Linear Quantile Regression},
  author={Qixian Zhong and Jane-Ling Wang},
  journal={arXiv: Statistics Theory},
  year={2021}
}
Deep learning has enjoyed tremendous success in a variety of applications, but its application to quantile regression remains scarce. A major advantage of the deep learning approach is its flexibility to model complex data more parsimoniously than nonparametric smoothing methods. However, while deep learning has brought breakthroughs in prediction, it often lacks interpretability due to the black-box nature of multilayer structures with millions of parameters, and hence it is not well suited for…
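
The abstract points to a partially linear quantile model: an interpretable linear term for some covariates plus a neural network for the remaining nonparametric part, fit under the quantile (check) loss. Below is a minimal PyTorch sketch of that idea; the class name, architecture sizes, and synthetic data are illustrative assumptions, not the paper's actual method or tuning.

```python
# Illustrative sketch (assumptions, not the authors' code): a partially linear
# quantile regression network. The conditional tau-quantile is modeled as
#   Q_{Y|X,Z}(tau) = X^T beta + g(Z),
# with beta a plain linear layer (interpretable) and g a small ReLU network.
import torch
import torch.nn as nn


class PartiallyLinearQuantileNet(nn.Module):  # hypothetical name
    def __init__(self, p_linear, p_nonlin, width=32, depth=2):
        super().__init__()
        self.linear = nn.Linear(p_linear, 1, bias=False)  # X^T beta
        layers, d = [], p_nonlin
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 1))
        self.g = nn.Sequential(*layers)  # nonparametric part g(Z)

    def forward(self, x, z):
        return self.linear(x) + self.g(z)


def check_loss(residual, tau):
    # Check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})
    return torch.mean(residual * (tau - (residual < 0).float()))


# Synthetic usage example (made-up data; tau = 0.5 targets the conditional median)
torch.manual_seed(0)
n, tau = 512, 0.5
x = torch.randn(n, 3)
z = torch.rand(n, 2)
y = (x @ torch.tensor([1.0, -2.0, 0.5]) + torch.sin(3 * z[:, 0])
     + 0.3 * torch.randn(n)).unsqueeze(1)

model = PartiallyLinearQuantileNet(p_linear=3, p_nonlin=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    optimizer.zero_grad()
    loss = check_loss(y - model(x, z), tau)
    loss.backward()
    optimizer.step()

print(model.linear.weight.data)  # estimated linear coefficients beta(tau)
```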

Citations

A unifying partially-interpretable framework for neural network-based extreme quantile regression

TLDR
A new methodological framework for performing extreme quantile regression using artificial neural networks, which can capture complex non-linear relationships and scale well to high-dimensional data, together with a novel point process model for extreme values that overcomes the finite lower-endpoint problem associated with the generalised extreme value class of distributions.

References

Showing 1-10 of 66 references

Deep Neural Networks for Estimation and Inference

TLDR
This work studies deep neural networks and their use in semiparametric inference, and establishes novel nonasymptotic high probability bounds for deep feedforward neural nets for a general class of nonparametric regression‐type loss functions.

Partially linear additive quantile regression in ultra-high dimension

We consider a flexible semiparametric quantile regression model for analyzing high dimensional heterogeneous data. This model has several appealing features: (1) By considering different conditional…

Wild residual bootstrap inference for penalized quantile regression with heteroscedastic errors

We consider a heteroscedastic regression model in which some of the regression coefficients are zero but it is not known which ones. Penalized quantile regression is a useful approach for…

Quantile Regression Neural Networks: A Bayesian Approach

TLDR
It is shown that the posterior distribution for feedforward neural network quantile regression is asymptotically consistent under a misspecified asymmetric Laplace (ALD) working likelihood; the consistency proof embeds the problem in the density-estimation domain and uses bounds on the bracketing entropy to derive posterior consistency over Hellinger neighborhoods.
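
For context on the ALD connection mentioned above: working with an asymmetric Laplace likelihood is equivalent to minimizing the check loss. In standard notation (not quoted from the paper), the ALD density at quantile level τ is

```latex
\[
f_{\tau}(y \mid \mu, \sigma)
  = \frac{\tau(1-\tau)}{\sigma}
    \exp\!\Bigl(-\rho_\tau\!\bigl(\tfrac{y-\mu}{\sigma}\bigr)\Bigr),
\qquad
\rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
\]
```

so, for fixed σ, maximizing this (possibly misspecified) likelihood in μ is the same as minimizing the check loss ρ_τ.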

Quantile regression with ReLU Networks: Estimators and minimax rates

TLDR
An upper bound on the expected mean squared error of a ReLU network used to estimate any quantile conditional on a set of covariates is derived, which implies that ReLU networks with quantile regression achieve minimax rates for broad collections of function types.
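
The estimator analyzed in this entry can be written, in generic notation (an assumption about notation, not text taken from the paper), as empirical check-loss minimization over a class of ReLU networks:

```latex
\[
\hat{f}_\tau \in \operatorname*{arg\,min}_{f \in \mathcal{F}_{\mathrm{ReLU}}}
\frac{1}{n}\sum_{i=1}^{n} \rho_\tau\!\bigl(Y_i - f(X_i)\bigr),
\]
```

where \(\mathcal{F}_{\mathrm{ReLU}}\) denotes feedforward ReLU networks of a given depth and width, and \(\rho_\tau\) is the check loss defined above.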

Conditional Quantile Processes Based on Series or Many Regressors

Quantile regression (QR) is a principal regression method for analyzing the impact of covariates on outcomes. The impact is described by the conditional quantile function and its functionals. In this…

Nonparametric regression using deep neural networks with ReLU activation function

TLDR
The theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural and the analysis gives some insights into why multilayer feedforward neural networks perform well in practice.

On deep learning as a remedy for the curse of dimensionality in nonparametric regression

TLDR
It is shown that least squares estimates based on multilayer feedforward neural networks are able to circumvent the curse of dimensionality in nonparametric regression.

Quantile Processes for Semi and Nonparametric Regression

A collection of quantile curves provides a complete picture of conditional distributions. Properly centered and scaled versions of estimated curves at various quantile levels give rise to the…

Adapting Neural Networks for the Estimation of Treatment Effects

TLDR
A new architecture, the Dragonnet, is proposed that exploits the sufficiency of the propensity score for estimation adjustment, along with a regularization procedure that induces a bias towards models with non-parametrically optimal asymptotic properties "out of the box".
...