Corpus ID: 1954239

A Nearly-Linear Time Framework for Graph-Structured Sparsity

@inproceedings{Hegde2015ANT,
  title={A Nearly-Linear Time Framework for Graph-Structured Sparsity},
  author={Chinmay Hegde and Piotr Indyk and Ludwig Schmidt},
  booktitle={ICML},
  year={2015}
}
We introduce a framework for sparsity structures defined via graphs. Our approach is flexible and generalizes several previously studied sparsity models. Moreover, we provide efficient projection algorithms for our sparsity model that run in nearly-linear time. In the context of sparse recovery, we show that our framework achieves an information-theoretically optimal sample complexity for a wide range of parameters. We complement our theoretical analysis with experiments demonstrating that our… 
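
The heart of the framework is model-based iterative hard thresholding driven by two approximate projection oracles: a "head" approximation applied to the gradient and a "tail" approximation applied to the iterate. The following is a minimal sketch of that loop, assuming plain top-k projections as stand-ins for both oracles (exact only for unstructured sparsity); the paper's nearly-linear-time oracles for graph-structured models such as the weighted graph model are not implemented here, and all function names and problem sizes are illustrative.

```python
# Minimal sketch of model-based iterative hard thresholding with plug-in
# head/tail projection oracles. The top-k projections below are stand-ins;
# the paper's contribution is nearly-linear-time approximate oracles for
# graph-structured sparsity models, not shown here.
import numpy as np

def top_k_projection(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def model_iht(y, A, k, head=top_k_projection, tail=top_k_projection,
              n_iter=50):
    """Recover a (model-)k-sparse x from y ~ A @ x.

    head/tail are the approximate projection oracles of the framework;
    swap in structured projections to enforce a graph-sparsity model.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = y - A @ x                     # current measurement error
        gradient_step = head(A.T @ residual, k)  # head-approximate the update
        x = tail(x + gradient_step, k)           # tail-project back to model
    return x

# Tiny usage example on synthetic data (sizes are hypothetical).
rng = np.random.default_rng(0)
n, d, k = 80, 200, 5
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
x_hat = model_iht(A @ x_true, A, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```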

Citations

Information theoretic limits for linear prediction with graph-structured sparsity
TLDR
It is proved that the number of samples sufficient for the weighted graph model proposed by Hegde et al. is also necessary; Fano's inequality on well-constructed ensembles is used as the main tool in establishing the information-theoretic lower bounds.
A Fast Algorithm for Separated Sparsity via Perturbed Lagrangians
TLDR
A perturbed Lagrangian relaxation approach is provided that computes provably exact projections in only nearly-linear time for separated sparsity, a fundamental sparsity notion that captures exclusion constraints in linearly ordered data such as time series.
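
To make the separated-sparsity notion concrete: the projection asks for k indices, pairwise at least delta apart, that capture the most signal energy. Below is a minimal exact dynamic-programming sketch of that projection, written purely for illustration; it runs in O(nk) time, whereas the cited paper reaches nearly-linear time via a perturbed Lagrangian relaxation, which this sketch does not implement. Function and variable names are my own.

```python
# Exact projection for separated sparsity: choose k indices, any two at
# least delta apart, maximizing the captured energy sum(x[i]**2).
import numpy as np

def separated_sparsity_project(x, k, delta):
    x = np.asarray(x, dtype=float)
    n, w = len(x), x ** 2                      # energy of each coordinate
    NEG = -np.inf
    # dp[i][j]: best energy using positions 0..i with j indices selected.
    dp = np.full((n, k + 1), NEG)
    dp[:, 0] = 0.0
    take = np.zeros((n, k + 1), dtype=bool)    # backtracking table
    for i in range(n):
        for j in range(1, k + 1):
            skip = dp[i - 1][j] if i > 0 else NEG
            prev = dp[i - delta][j - 1] if i - delta >= 0 else (0.0 if j == 1 else NEG)
            if w[i] + prev > skip:
                dp[i][j], take[i][j] = w[i] + prev, True
            else:
                dp[i][j] = skip
    # Backtrack to recover the selected support.
    support, i, j = [], n - 1, k
    while j > 0 and i >= 0:
        if take[i][j]:
            support.append(i)
            i, j = i - delta, j - 1
        else:
            i -= 1
    out = np.zeros_like(x)
    out[support] = x[support]
    return out

print(separated_sparsity_project([3.0, 2.9, 0.1, 0.2, 2.5], k=2, delta=2))
# Keeps indices 0 and 4 (0 and 1 are too close): [3. 0. 0. 0. 2.5]
```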
A Generalized Matching Pursuit Approach for Graph-Structured Sparsity
TLDR
This paper focuses on sparsity-constrained optimization in cases where the cost function is a general nonlinear function and the sparsity constraint is defined by a graph-structured sparsity model, and presents the first efficient approximation algorithm for this setting, namely Graph-Structured Matching Pursuit (Graph-MP).
Stochastic Iterative Hard Thresholding for Graph-structured Sparsity Optimization
TLDR
This paper proposes a stochastic gradient-based method for solving graph-structured sparsity-constrained problems that is not restricted to the least-squares loss, and proves that the algorithm enjoys linear convergence up to a constant error, which is competitive with its counterparts in the batch learning setting.
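
A minimal sketch of the stochastic iterative hard thresholding loop that summary describes, assuming a least-squares loss and using plain top-k hard thresholding as a stand-in for a graph-structured projection; batch size, step size, and iteration count are illustrative choices, not the paper's settings.

```python
# Stochastic IHT: minibatch gradient step followed by hard thresholding.
import numpy as np

def stochastic_iht(y, A, k, step=0.5, batch=16, n_iter=300, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iter):
        idx = rng.choice(n, size=batch, replace=False)   # sample a minibatch
        grad = A[idx].T @ (A[idx] @ x - y[idx]) / batch  # stochastic gradient
        z = x - step * grad
        out = np.zeros(d)                                # hard threshold:
        keep = np.argpartition(np.abs(z), -k)[-k:]       # keep top-k entries
        out[keep] = z[keep]
        x = out
    return x
```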
Fast Algorithms for Structured Sparsity (ICALP 2015 Invited Tutorial)
TLDR
The concept of structured sparsity is introduced, the relevant algorithmic challenges are explained, and the best known algorithms for two sparsity models are described.
Better Approximations for Tree Sparsity in Nearly-Linear Time
TLDR
This work designs (1+ε)-approximation algorithms for the Tree Sparsity problem that run in nearly-linear time, and shows that if the exact version of the Tree Sparsity problem can be solved in strongly subquadratic time, then the (min, +)-convolution problem can be solved in strongly subquadratic time as well.
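
For intuition about the underlying problem: exact tree sparsity asks for the best root-containing subtree of k nodes in a node-weighted tree. The sketch below is a straightforward O(nk²) dynamic program for a binary tree in heap layout, included only to make the problem concrete; the cited paper's point is that (1+ε)-approximations are achievable in nearly-linear time, which this sketch does not attempt.

```python
# Exact tree sparsity on a complete binary tree stored in heap order
# (children of node i are 2*i+1 and 2*i+2): best root-containing subtree
# with at most k nodes.
import numpy as np

def tree_sparsity_value(w, k):
    n = len(w)

    def solve(v):
        # best[j] = max weight of a subtree rooted at v using exactly j
        # nodes (j = 0 means v itself is not selected).
        best = [0.0] + [-np.inf] * k
        kids = [c for c in (2 * v + 1, 2 * v + 2) if c < n]
        tables = [solve(c) for c in kids]
        if not tables:
            best[1] = w[v]
            return best
        # Merge child tables: distribute j-1 nodes among the children.
        merged = tables[0]
        for t in tables[1:]:
            new = [-np.inf] * (k + 1)
            for a in range(k + 1):
                for b in range(k + 1 - a):
                    new[a + b] = max(new[a + b], merged[a] + t[b])
            merged = new
        for j in range(1, k + 1):
            best[j] = w[v] + merged[j - 1]
        return best

    return max(solve(0)[1:k + 1])

# Example with weights in heap order; the root is always selected.
print(tree_sparsity_value([1.0, 5.0, 0.5, 2.0, 0.1, 0.1, 0.1], k=3))  # 8.0
```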
Faster Training Algorithms for Structured Sparsity-Inducing Norm
TLDR
A more efficient solution for group lasso with arbitrary group overlap is developed using an inexact proximal-gradient method; it is much more efficient than the network-flow algorithm while retaining similar generalization performance.
Improved Algorithms For Structured Sparse Recovery
TLDR
This paper considers two structured sparsity models and obtains the first single-criterion constant-factor approximation algorithm for the head-approximation projection; the previous best known algorithm was a bicriterion approximation.
Graph-Structured Sparse Optimization for Connected Subgraph Detection
  • Baojian Zhou, F. Chen
  • 2016 IEEE 16th International Conference on Data Mining (ICDM), 2016
TLDR
This paper explores efficient approximate projection oracles for connected subgraphs and proposes two new efficient algorithms, Graph-IHT and Graph-GHTP, to optimize a generic nonlinear objective function subject to a connectivity constraint on the support of the variables.

References

SHOWING 1-10 OF 72 REFERENCES
A totally unimodular view of structured sparsity
TLDR
This paper describes a simple framework for structured sparse recovery based on convex optimization that unifies the prevalent structured sparsity norms in the literature, introduces new interesting ones, and renders their tightness and tractability arguments transparent.
Learning with structured sparsity
This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing…
Structured sparsity through convex optimization
TLDR
It is shown that the $\ell_1$-norm can be extended to structured norms built on either disjoint or overlapping groups of variables, leading to a flexible framework that can deal with various structures.
Recovery of Clustered Sparse Signals from Compressive Measurements
TLDR
It is proved that O(K + C log(N/C)) random projections are sufficient for (K,C)-model sparse signal recovery based on subspace enumeration, and a robust polynomial-time recovery algorithm for (K,C)-model sparse signals with provable estimation guarantees is provided.
A Sparse-Group Lasso
TLDR
A regularized model for linear regression with ℓ1 and ℓ2 penalties is introduced, and it is shown to have the desired effect of group-wise and within-group sparsity.
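
For non-overlapping groups, the sparse-group penalty lam1·‖x‖₁ + lam2·Σ_g ‖x_g‖₂ has a closed-form proximal operator: elementwise soft-thresholding followed by groupwise shrinkage. The sketch below illustrates it; the group boundaries, weights, and function names are illustrative assumptions, not taken from the paper.

```python
# Proximal operator of the sparse-group lasso penalty for disjoint groups:
# soft-threshold each entry (l1 part), then shrink each group (group-l2 part).
import numpy as np

def sparse_group_prox(v, groups, lam1, lam2):
    # Step 1: elementwise soft-threshold.
    z = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)
    # Step 2: shrink each group toward zero; a group whose norm is below
    # lam2 is zeroed out entirely, giving group-wise sparsity.
    out = np.zeros_like(z)
    for g in groups:                       # g is an index array for one group
        norm = np.linalg.norm(z[g])
        if norm > lam2:
            out[g] = (1.0 - lam2 / norm) * z[g]
    return out

v = np.array([3.0, -0.2, 0.1, 2.0, -2.5, 0.05])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(sparse_group_prox(v, groups, lam1=0.1, lam2=0.5))
# Entries below lam1 vanish (within-group sparsity); a whole group vanishes
# if its thresholded norm is below lam2 (group-wise sparsity).
```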
Optimization with Sparsity-Inducing Penalties
TLDR
This monograph covers proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provides an extensive set of experiments comparing various algorithms from a computational point of view.
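
As a small taste of the proximal-methods material the monograph covers, here is a minimal ISTA (proximal gradient) sketch for the lasso; the step-size choice 1/‖A‖² is a standard safe default for the smooth part, and all names and parameters are illustrative.

```python
# ISTA for the lasso: min_x 0.5*||A @ x - y||^2 + lam*||x||_1.
import numpy as np

def ista(y, A, lam, n_iter=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                   # gradient of the smooth term
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
    return x
```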
A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
TLDR
A unified framework for establishing consistency and convergence rates for regularized M-estimators under high-dimensional scaling is provided; one main theorem is stated, and it is shown how it can be used both to re-derive several existing results and to obtain several new ones.
Approximation Algorithms for Model-Based Compressive Sensing
TLDR
A new approximation-tolerant model-CS framework is proposed that includes a range of algorithms for sparse recovery requiring only approximate solutions to the model-projection problem; the framework is instantiated for a new signal model that is particularly useful for signal ensembles, where the positions of the nonzero coefficients do not change significantly as a function of spatial location.
Tree-Guided Group Lasso for Multi-Task Regression with Structured Sparsity
TLDR
This work considers the problem of learning a sparse multi-task regression, where the structure in the outputs can be represented as a tree with leaf nodes as outputs and internal nodes as clusters of the outputs at multiple granularities, and proposes a structured regularization based on a group-lasso penalty.
Universal Measurement Bounds for Structured Sparse Signal Recovery
TLDR
It is shown that exploiting knowledge of groups can further reduce the number of measurements required for exact signal recovery, and universal bounds for the number of measurements needed are derived.