Sharper Sub-Weibull Concentrations

@article{Zhang2021SharperSC,
  title={Sharper Sub-Weibull Concentrations},
  author={Huiming Zhang and Haoyu Wei},
  journal={Mathematics},
  year={2021}
}
Constant-specified and exponential concentration inequalities play an essential role in the finite-sample theory of machine learning and high-dimensional statistics. We obtain sharper concentration inequalities, with specified constants, for sums of independent sub-Weibull random variables; these exhibit a mixture of two tails: sub-Gaussian for small deviations and sub-Weibull for large deviations from the mean. These bounds are new and improve on existing bounds with sharper constants. In…
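To make the advertised tail mixture concrete, here is the shape such bounds take, written with the standard sub-Weibull Orlicz-type norm; the constant c and the scales v, K below are schematic placeholders driven by the individual norms \|X_i\|_{\psi_\theta}, not the sharpened constants derived in the paper:

\[
\|X\|_{\psi_\theta} := \inf\bigl\{ t > 0 : \mathbb{E}\exp\bigl(|X|^{\theta}/t^{\theta}\bigr) \le 2 \bigr\},
\]
\[
\mathbb{P}\Bigl(\Bigl|\sum_{i=1}^{n}(X_i - \mathbb{E}X_i)\Bigr| \ge t\Bigr) \le 2\exp\Bigl(-c\,\min\Bigl\{\frac{t^2}{n v^2},\ (t/K)^{\theta}\Bigr\}\Bigr).
\]

The first term in the minimum (sub-Gaussian) dominates for small t, the second (sub-Weibull) for large t.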

Citations

Asymptotic in a class of network models with an increasing sub-Gamma degree sequence

The degree sequences of binary networks under a general noisy mechanism, with the discrete Laplace mechanism as a special case, are released, and an asymptotic result, covering both consistency and asymptotic normality of the parameter estimator as the number of parameters goes to infinity, is established for a class of network models.

Asymptotic normality and confidence region for Catoni's Z estimator

This paper studies the asymptotic properties and confidence region of Catoni's Z estimator under the finite variance assumption. First, we investigate the CLT for Catoni's Z estimator with the…
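For context, a minimal sketch of the classical Catoni (2012) construction on which such results rest, assuming the standard influence function \psi and a tuning scale \alpha > 0 (both generic, not specific to this paper):

\[
\psi(x) = \begin{cases} \log\bigl(1 + x + x^2/2\bigr), & x \ge 0, \\ -\log\bigl(1 - x + x^2/2\bigr), & x < 0, \end{cases}
\]

with the Z estimator \hat\theta defined as the root of the estimating equation \sum_{i=1}^{n} \psi\bigl(\alpha(X_i - \hat\theta)\bigr) = 0.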

Can Direct Latent Model Learning Solve Linear Quadratic Gaussian Control?

This work focuses on an intuitive cost-driven state representation learning method for solving Linear Quadratic Gaussian (LQG) control, one of the most fundamental partially observable control problems.

Concentration inequalities of MLE and robust MLE

The Maximum Likelihood Estimator (MLE) plays an important role in statistics and machine learning. In this article, for i.i.d.…

On Unifying Randomized Methods For Inverse Problems

  • Jonathan Wittmer, C. G. Krishnanunni, Hai V. Nguyen, Tan Bui-Thanh
  • Mathematics, Computer Science
  • 2023
This work unifies the analysis of various randomized methods for solving linear and nonlinear inverse problems by framing the problem in a stochastic optimization setting, shows that many randomized methods are variants of a sample average approximation, and proves a single theoretical result that guarantees asymptotic convergence for a variety of randomized methods.

Asymptotics of Subsampling for Generalized Linear Regression Models under Unbounded Design

Optimal subsampling is a statistical methodology for generalized linear models (GLMs) that enables fast inference about parameter estimates in massive-data regression. The existing literature only…
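As a rough illustration of the kind of pipeline involved (a hypothetical sketch, not the estimator, weights, or asymptotics from this paper), the following fits an inverse-probability-weighted logistic MLE on a subsample; the function name and the default uniform sampling probabilities are assumptions made for the example:

```python
import numpy as np

def subsampled_logistic_mle(X, y, m, probs=None, rng=None, iters=25):
    """Hypothetical sketch: weighted MLE for logistic regression on a
    size-m subsample drawn with probabilities `probs` (uniform if None)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    if probs is None:
        probs = np.full(n, 1.0 / n)      # assumption: uniform subsampling
    idx = rng.choice(n, size=m, replace=True, p=probs)
    Xs, ys = X[idx], y[idx]
    w = 1.0 / (n * probs[idx])           # inverse-probability weights de-bias the subsample
    beta = np.zeros(d)
    for _ in range(iters):               # Newton-Raphson on the weighted log-likelihood
        p = 1.0 / (1.0 + np.exp(-Xs @ beta))
        grad = Xs.T @ (w * (ys - p))     # weighted score
        hess = (Xs * (w * p * (1.0 - p))[:, None]).T @ Xs  # weighted information
        beta += np.linalg.solve(hess, grad)
    return beta

# e.g. beta_hat = subsampled_logistic_mle(X, y, m=1_000)
```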

References

Moving Beyond Sub-Gaussianity in High-Dimensional Statistics: Applications in Covariance Estimation and Linear Regression

These results extract a partial sub-Gaussian tail behavior in finite samples, matching the asymptotics governed by the central limit theorem, and are compactly represented in terms of a new Orlicz quasi-norm, the Generalized Bernstein-Orlicz norm, that typifies such tail behaviors.
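For reference, the Generalized Bernstein-Orlicz norm is (up to that paper's exact conventions) the Orlicz norm induced by a function \Psi_{\alpha,L} specified through its inverse:

\[
\Psi_{\alpha,L}^{-1}(t) := \sqrt{\log(1+t)} + L\bigl(\log(1+t)\bigr)^{1/\alpha}, \qquad t \ge 0,
\]

so a finite \|X\|_{\Psi_{\alpha,L}} packages a sub-Gaussian small-deviation term and a sub-Weibull large-deviation term in a single quantity.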

On the non‐asymptotic and sharp lower tail bounds of random variables

This paper introduces systematic and user‐friendly schemes for developing non‐asymptotic lower bounds of tail probabilities and establishes matching upper and lower bounds for the extreme value expectation of the sum of independent sub‐Gaussian and sub‐exponential random variables.
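A classical example of such a non-asymptotic lower tail bound (stated here only as a familiar instance; the paper's schemes are more general) is the Paley-Zygmund inequality: for X \ge 0 with \mathbb{E}X^2 < \infty and \theta \in (0,1),

\[
\mathbb{P}\bigl(X > \theta\,\mathbb{E}X\bigr) \ge (1-\theta)^2\,\frac{(\mathbb{E}X)^2}{\mathbb{E}X^2}.
\]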

Concentration Inequalities for Statistical Inference

This paper gives a review of concentration inequalities, which are widely employed in analyses in mathematical statistics in a wide range of settings, from distribution-free to distribution-dependent,…

Concentration inequalities for polynomials in α-sub-exponential random variables

We derive multi-level concentration inequalities for polynomials in independent random variables with an α-sub-exponential tail decay. A particularly interesting case is given by quadratic forms…

Deterministic Inequalities for Smooth M-estimators

Ever since the proof of asymptotic normality of the maximum likelihood estimator by Cramér (1946), it has been understood that the basic technique of the Taylor series expansion suffices for the asymptotics of…
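Schematically, the expansion in question: if \hat\theta solves the smooth estimating equation \nabla L_n(\hat\theta) = 0, expanding around the target \theta_0 gives, for some intermediate point \bar\theta,

\[
0 = \nabla L_n(\theta_0) + \nabla^2 L_n(\bar\theta)\,(\hat\theta - \theta_0)
\quad\Longrightarrow\quad
\hat\theta - \theta_0 = -\bigl[\nabla^2 L_n(\bar\theta)\bigr]^{-1}\,\nabla L_n(\theta_0),
\]

so asymptotic normality follows once the score obeys a CLT and the Hessian concentrates.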

Lasso Guarantees for β-Mixing Heavy-Tailed Time Series

This work derives non-asymptotic inequalities for the estimation error and prediction error of the lasso estimate of the best linear predictor, without assuming any special parametric form of the DGM, relying only on (strict) stationarity and geometrically decaying β-mixing coefficients to establish error bounds for the lasso for sub-Weibull random vectors.
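The bounds concern the standard lasso program, stated here generically (the time-series structure enters only through the assumptions, not the optimization):

\[
\hat\beta \in \arg\min_{\beta}\; \frac{1}{n}\sum_{t=1}^{n}\bigl(y_t - x_t^{\top}\beta\bigr)^2 + \lambda\,\|\beta\|_1.
\]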

Elastic-net Regularized High-dimensional Negative Binomial Regression: Consistency and Weak Signals Detection

We study sparse negative binomial regression (NBR) for count data by showing non-asymptotic merits of the Elastic-net estimator. Two types of oracle inequalities are derived for the Elastic-net…
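Generically, the Elastic-net estimator here minimizes a penalized negative log-likelihood of the form below, with \ell_i the negative binomial log-likelihood of observation i (a schematic statement, not the paper's exact tuning):

\[
\hat\beta \in \arg\min_{\beta}\; -\frac{1}{n}\sum_{i=1}^{n}\ell_i(\beta) + \lambda_1\|\beta\|_1 + \lambda_2\|\beta\|_2^2.
\]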

Sub‐Weibull distributions: Generalizing sub‐Gaussian and sub‐Exponential properties to heavier tailed distributions

We propose the notion of sub‐Weibull distributions, which are characterized by tails lighter than (or equally light as) the right tail of a Weibull distribution. This novel class generalizes the…
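Concretely, X is sub-Weibull with parameter \theta > 0 when, for some K > 0,

\[
\mathbb{P}\bigl(|X| \ge t\bigr) \le 2\exp\bigl(-(t/K)^{\theta}\bigr) \quad \text{for all } t \ge 0,
\]

equivalently when the Orlicz-type norm \|X\|_{\psi_\theta} displayed after the abstract above is finite; \theta = 2 recovers sub-Gaussian and \theta = 1 sub-exponential tails.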

Non-Asymptotic Guarantees for Robust Statistical Learning under (1+ε)-th Moment Assumption

A log-truncated M-estimator is proposed for a large family of statistical regressions, and its excess risk bound is established under the condition that the data have a (1+ε)-th moment with ε ∈ (0, 1].

COM-negative binomial distribution: modeling overdispersion and ultrahigh zero-inflated count data

We focus on the COM-type negative binomial distribution with three parameters, which belongs to the COM-type (a, b, 0) class of distributions and to the family of equilibrium distributions of arbitrary birth-death…