# Convex Influences

```bibtex
@article{De2022ConvexI,
  title   = {Convex Influences},
  author  = {Anindya De and Shivam Nadimpalli and Rocco A. Servedio},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2109.03107}
}
```

We introduce a new notion of influence for symmetric convex sets over Gaussian space, which we term "convex influence". We show that this new notion of influence shares many of the familiar properties of influences of variables for monotone Boolean functions f: {±1}^n → {±1}. Our main results for convex influences give Gaussian-space analogues of many important results on influences for monotone Boolean functions. These include (robust) characterizations of extremal functions, the Poincaré …
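For context, the two classical objects the abstract refers to are standard (these definitions are background facts, not taken from the paper): the influence of a variable on a Boolean function, and the Poincaré inequality over Gaussian space.

```latex
% Influence of coordinate i on f : \{\pm 1\}^n \to \{\pm 1\}:
% the probability that flipping x_i changes the output.
\mathrm{Inf}_i[f] \;=\; \Pr_{x \sim \{\pm 1\}^n}\!\left[\, f(x) \neq f(x^{\oplus i}) \,\right]

% Gaussian Poincar\'e inequality: for smooth f : \mathbb{R}^n \to \mathbb{R}
% and \gamma the standard Gaussian measure,
\operatorname{Var}_{\gamma}[f] \;\le\; \mathbb{E}_{\gamma}\!\left[\, \|\nabla f\|^2 \,\right]
```

The paper's contribution is a notion of "influence" for a symmetric convex set K ⊆ ℝⁿ that plays the role Inf_i plays for monotone Boolean functions, making Gaussian-space analogues of inequalities like the one above possible.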

## References

Showing 1–10 of 62 references.

### Concentration on the Boolean hypercube via pathwise stochastic analysis

- Mathematics, STOC
- 2020

We develop a new technique for proving concentration inequalities which relate between the variance and influences of Boolean functions. Second, we strengthen several classical inequalities…

### Quantitative Correlation Inequalities via Semigroup Interpolation

- Mathematics, ITCS
- 2021

A general approach is given that can be used to bootstrap many qualitative correlation inequalities for functions over product spaces into quantitative statements and it is shown that the quantitative version of Royen’s theorem is within a logarithmic factor of being optimal.

### Weak learning convex sets under normal distributions

- Computer Science, Mathematics, COLT
- 2021

This paper gives a poly(n)-time algorithm that can weakly learn the class of convex sets to advantage Ω(1/√n) using only random examples drawn from the background Gaussian distribution, and gives an information-theoretic lower bound showing that O(log(n)/√n) advantage is best possible even for algorithms that are allowed to make poly(n) many membership queries.

### On logarithmic concave measures and functions

- Mathematics
- 1973

The purpose of the present paper is to give a new proof for the main theorem proved in [3] and develop further properties of logarithmic concave measures and functions. Having in mind the…

### On the Fourier spectrum of monotone functions

- Computer Science, Mathematics, JACM
- 1996

It is shown that this is tight in the sense that for any subexponential-time algorithm there is a monotone Boolean function which this algorithm cannot approximate with error better than O(1/√n).
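The connection between monotone functions and their Fourier spectrum exploited in this line of work rests on a standard fact (background, not stated in this abstract): for monotone f, each degree-1 Fourier coefficient equals the corresponding influence.

```latex
% For monotone f : \{\pm 1\}^n \to \{\pm 1\}, the degree-1 Fourier
% coefficient on coordinate i coincides with its influence:
\widehat{f}(\{i\}) \;=\; \mathrm{Inf}_i[f],
\qquad\text{hence}\qquad
\mathbf{I}[f] \;=\; \sum_{i=1}^{n} \widehat{f}(\{i\}).
```

This is why the first Fourier level carries so much information about monotone functions, and it is the property the convex-influence framework seeks to mirror in Gaussian space.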

### Every monotone graph property has a sharp threshold

- Mathematics
- 1996

In their seminal work, which initiated random graph theory, Erdős and Rényi discovered that many graph properties have sharp thresholds as the number of vertices tends to infinity. We prove a…

### Beyond Talagrand functions: new lower bounds for testing monotonicity and unateness

- Computer Science, Mathematics, STOC
- 2017

A lower bound of Ω(n^(1/3)) is proved for the query complexity of any two-sided, adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is monotone versus far from monotone, as well as for testing unateness, a natural generalization of monotonicity.

### The accuracy of the Gaussian approximation to the sum of independent variates

- Mathematics
- 1941

The sum of finitely many variates possesses, under familiar conditions, an almost Gaussian probability distribution. This already much discussed "central limit theorem" in the theory of…

### Theorems of KKL, Friedgut, and Talagrand via Random Restrictions and Log-Sobolev Inequality

- Mathematics, Electron. Colloquium Comput. Complex.
- 2020

This work follows a new approach: looking at the first Fourier level of the function after a suitable random restriction and applying the log-Sobolev inequality appropriately. It thereby avoids the hypercontractive inequality that is common to the original proofs.

### On learning monotone Boolean functions

- Computer Science, Mathematics, Proceedings 39th Annual Symposium on Foundations of Computer Science (Cat. No.98CB36280)
- 1998

A simple algorithm is described that achieves error at most 1/2 − Ω(1/√n), improving on the previous best bound of O(log n), and it is proved that no algorithm, given a polynomial number of samples, can guarantee error…