Corpus ID: 220623774

Reflection methods for user-friendly submodular optimization

@inproceedings{Jegelka2013ReflectionMF,
  title={Reflection methods for user-friendly submodular optimization},
  author={Stefanie Jegelka and Francis R. Bach and Suvrit Sra},
  booktitle={NIPS},
  year={2013}
}
Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate… 
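
In outline, the setting is a decomposable function F = F_1 + … + F_r with each F_j "simple" (its projection/proximal step is cheap); minimizing F then reduces to a best-approximation problem between convex polyhedra, which reflection methods such as Douglas-Rachford solve using projections alone. As a minimal sketch of the reflection iteration itself, on two generic convex sets in the plane rather than the paper's base-polytope products, the Python below finds a point in the intersection of a ball and a halfspace:

import numpy as np

def project_ball(z, center, radius):
    # Euclidean projection onto a closed ball.
    d = z - center
    nrm = np.linalg.norm(d)
    return z if nrm <= radius else center + radius * d / nrm

def project_halfspace(z, a, b):
    # Euclidean projection onto the halfspace {x : <a, x> <= b}.
    viol = a @ z - b
    return z if viol <= 0 else z - viol * a / (a @ a)

def reflect(proj, z):
    # Reflection through a convex set: R = 2P - I.
    return 2.0 * proj(z) - z

# Douglas-Rachford iteration z <- (z + R_A(R_B(z))) / 2;
# the shadow sequence P_B(z) converges to a point in A ∩ B.
P_A = lambda z: project_ball(z, np.zeros(2), 1.0)
P_B = lambda z: project_halfspace(z, np.array([1.0, 1.0]), 1.0)

z = np.array([3.0, -2.0])
for _ in range(200):
    z = 0.5 * (z + reflect(P_A, reflect(P_B, z)))
print(P_B(z))  # approximately a point in the intersection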

Convex Optimization for Parallel Energy Minimization

This work reformulates the quadratic energy minimization problem as a total variation denoising problem which, viewed geometrically, enables the use of projection- and reflection-based convex methods, and performs an extensive empirical evaluation comparing state-of-the-art combinatorial algorithms with convex optimization techniques.

Active-set Methods for Submodular Minimization Problems

A new active-set algorithm for total variation denoising is proposed, under the assumption of an oracle that solves the corresponding SFM problem; it performs local descent over ordered partitions, and its ability to warm-start considerably improves the algorithm's performance.

Provable Submodular Minimization using Wolfe's Algorithm

A maiden convergence analysis of Wolfe's algorithm is given, and a robust version of Fujishige's theorem is proved which shows that an O(1/n²)-approximate solution to the min-norm point on the base polytope implies exact submodular minimization.

Provable Submodular Minimization via Fujishige-Wolfe's Algorithm

This paper proves that in t iterations, Wolfe's algorithm returns an O(1/t)-approximate solution to the min-norm point on any polytope, which yields the first pseudopolynomial-time guarantee for the Fujishige-Wolfe minimum-norm algorithm for unconstrained submodular function minimization.
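
Both entries rest on Fujishige's reduction: the minimizer of a submodular F can be read off the minimum-norm point of its base polytope B(F) by thresholding at zero, and linear optimization over B(F) is cheap via Edmonds' greedy algorithm. A minimal sketch follows, using plain Frank-Wolfe as a stand-in for Wolfe's full min-norm-point routine, with a hypothetical toy F; it is illustrative only, not the analyzed algorithm:

import numpy as np

def greedy_base_vertex(F, w):
    # Edmonds' greedy algorithm: returns argmin over B(F) of <w, x>.
    # Sort coordinates by increasing w and take marginal gains of F.
    n = len(w)
    x = np.zeros(n)
    S, prev = [], F([])
    for i in np.argsort(w):
        S = S + [int(i)]
        cur = F(S)
        x[i] = cur - prev
        prev = cur
    return x

def min_norm_point_fw(F, n, iters=2000):
    # Plain Frank-Wolfe on min 0.5*||x||^2 over B(F) -- a simple
    # stand-in for Wolfe's min-norm-point algorithm, which is faster.
    x = greedy_base_vertex(F, np.zeros(n))
    for t in range(iters):
        s = greedy_base_vertex(F, x)       # linear minimization oracle
        x = x + 2.0 / (t + 2.0) * (s - x)  # standard FW step size
    return x

# Hypothetical toy submodular function: concave of cardinality minus modular.
bonus = {0: 1.2, 1: 0.1, 2: 0.3}
def F(S):
    return np.sqrt(len(S)) - sum(bonus[i] for i in S)

x = min_norm_point_fw(F, 3)
S_star = {i for i in range(3) if x[i] < 0}  # Fujishige's thresholding
print(S_star)  # the minimizer of this toy F is {0}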

Learning with Submodular Functions: A Convex Optimization Perspective

  • F. Bach
  • Found. Trends Mach. Learn., 2013
In Learning with Submodular Functions: A Convex Optimization Perspective, the theory of submodular functions is presented in a self-contained way from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems.
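
The pivotal object in this perspective is the Lovász extension; for completeness, a compact statement of the exact relaxation it provides (standard material, restated here): for w with coordinates sorted so that

f(w) = \sum_{k=1}^{n} w_{j_k}\,\bigl[F(\{j_1,\dots,j_k\}) - F(\{j_1,\dots,j_{k-1}\})\bigr],
\qquad w_{j_1} \ge w_{j_2} \ge \dots \ge w_{j_n},

the extension f is convex if and only if F is submodular, and submodular minimization becomes a convex problem over the unit cube:

\min_{S \subseteq V} F(S) \;=\; \min_{w \in [0,1]^n} f(w).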

Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

The benefits that can be drawn from primal-dual algorithms, both for solving large-scale convex optimization problems and discrete ones, are shown, together with an overview of the numerical approaches that have been proposed in different contexts.

Exploiting Sum of Submodular Structure for Inference in Very High Order MRF-MAP Problems

This paper adapts two SFM algorithms to exploit the sum-of-submodular structure, thereby helping them scale to a large number of pixels while maintaining scalability with large clique sizes.

Approximate Decomposable Submodular Function Minimization for Cardinality-Based Components

This work develops the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem, with graph sparsity controlled by the desired approximation factor.

Quadratic Decomposable Submodular Function Minimization: Theory and Practice

  • Pan Li
  • 2020
A new convex optimization problem, termed quadratic decomposable submodular function minimization (QDSFM), is introduced, which makes it possible to model a number of learning tasks on graphs and hypergraphs; two new applications of QDSFM are described: hypergraph-adapted PageRank and semi-supervised learning.
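
For orientation, the QDSFM objective has roughly the following form in that line of work (notation paraphrased here; each f_r is the Lovász extension of a submodular F_r, and a is the given input vector):

\min_{x \in \mathbb{R}^n} \;\; \|x - a\|_2^2 \;+\; \sum_{r=1}^{R} \bigl[f_r(x)\bigr]^2 .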
...

References

Showing 1-10 of 47 references

Convex Analysis for Minimizing and Learning Submodular Set Functions

A novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions, is developed, and an explicit connection is demonstrated between the problem of learning set functions from random evaluations and that of recovering sparse signals.
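
The function class in question can be written compactly; this is a standard construction, since any concave function of a nonnegative modular function is submodular and sums preserve submodularity:

F(S) \;=\; \sum_{j=1}^{m} g_j\!\Bigl(\sum_{i \in S} w_{ji}\Bigr),
\qquad g_j \text{ concave}, \;\; w_{ji} \ge 0 .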

Learning with Submodular Functions: A Convex Optimization Perspective

  • F. Bach
  • Found. Trends Mach. Learn., 2013
In Learning with Submodular Functions: A Convex Optimization Perspective, the theory of submodular functions is presented in a self-contained way from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems.

Efficient Minimization of Decomposable Submodular Functions

This paper develops an algorithm, SLG, that can efficiently minimize decomposable submodular functions with tens of thousands of variables, applies it to synthetic benchmarks and a joint classification-and-segmentation task, and shows that it outperforms the state-of-the-art general-purpose submodular minimization algorithms by several orders of magnitude.

A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling

This paper focuses specifically on Nesterov's optimal first-order optimization scheme for non-smooth convex programs, and shows that in order to obtain an efficiently convergent iteration, this approach should be augmented with a dynamic estimation of a corresponding Lipschitz constant.

Proximal Methods for Hierarchical Sparse Coding

The procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the l1-norm.

Proximal Splitting Methods in Signal Processing

The basic properties of proximity operators which are relevant to signal processing and optimization methods based on these operators are reviewed and proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework.
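
The building block throughout is the proximity operator; its definition and the canonical closed-form instance (soft-thresholding, the prox of the ℓ1-norm) are:

\operatorname{prox}_g(v) \;=\; \arg\min_{x}\; g(x) + \tfrac{1}{2}\|x - v\|_2^2,
\qquad
\operatorname{prox}_{\lambda\|\cdot\|_1}(v)_i \;=\; \operatorname{sign}(v_i)\,\max(|v_i| - \lambda,\, 0).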

Convergence Rate Analysis of MAP Coordinate Minimization Algorithms

A thorough rate analysis of linear programming relaxation schemes is performed and a simple dual-to-primal mapping is provided that yields feasible primal solutions with a guaranteed rate of convergence.

Efficient solutions to relaxations of combinatorial problems with submodular penalties via the Lovász extension and non-smooth convex optimization

This work designs an FPTAS for the proposed relaxation, which can be used to obtain approximation algorithms for the original problem in the metric case, and proposes the use of simple, recent algorithms for non-smooth convex optimization due to Nesterov to approximately solve the relaxations.

Fast Newton-type Methods for Total Variation Regularization

This work studies anisotropic (l1-based) TV and also a related l2-norm variant and develops Newton-type methods that outperform the state-of-the-art algorithms for solving the harder task of computing 2- (and higher)-dimensional TV proximity.

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

A new fast iterative shrinkage-thresholding algorithm (FISTA) is presented which preserves the computational simplicity of ISTA but has a global rate of convergence that is proven to be significantly better, both theoretically and practically.
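
Since FISTA is fully specified by its prox-plus-momentum recursion, a short self-contained sketch is easy to give; the ℓ1-regularized least-squares instance and the synthetic data below are illustrative choices, not taken from the paper:

import numpy as np

def soft_threshold(v, tau):
    # Proximity operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, iters=300):
    # FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 (Beck & Teboulle).
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Tiny usage example with synthetic sparse data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 12, 40]] = [1.0, -2.0, 1.5]
b = A @ x_true
print(np.round(fista(A, b, lam=0.1), 2))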