Corpus ID: 8255139

Recursive Decomposition for Nonconvex Optimization

@article{Friesen2016RecursiveDF,
  title={Recursive Decomposition for Nonconvex Optimization},
  author={Abram L. Friesen and Pedro M. Domingos},
  journal={ArXiv},
  year={2016},
  volume={abs/1611.02755}
}
Continuous optimization is an important problem in many areas of AI, including vision, robotics, probabilistic inference, and machine learning. Unfortunately, most real-world optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the objective function have combinatorial structure, and thus ideas from combinatorial optimization can be… 
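
The abstract only gestures at the algorithmic idea, so the following is a minimal, hypothetical Python sketch of recursive decomposition for a nonconvex objective, not the authors' published RDIS implementation. It assumes the objective is a sum of local terms, each represented as a (function, variable-scope) pair, and that the caller supplies two stand-in heuristics, choose_block and local_optimize (for example, pick a few tightly coupled variables and run gradient descent on them).

def independent_groups(terms, free_vars):
    """Group terms into subproblems that share no free (unassigned) variables."""
    groups, seen = [], set()
    for i in range(len(terms)):
        if i in seen:
            continue
        seen.add(i)
        stack, group = [i], []
        while stack:
            j = stack.pop()
            group.append(terms[j])
            scope_j = terms[j][1] & free_vars
            for k, (_, scope_k) in enumerate(terms):
                if k not in seen and scope_j & scope_k:
                    seen.add(k)
                    stack.append(k)
        groups.append(group)
    return groups

def recursively_decompose(terms, free_vars, assignment, choose_block, local_optimize):
    """Fix a block of variables with a local optimizer, then split the remaining
    terms into independent subproblems and recurse on each one separately."""
    if not free_vars:
        return assignment, sum(f(assignment) for f, _ in terms)
    # choose_block is assumed to return a non-empty subset of free_vars;
    # local_optimize is assumed to return an updated assignment for that block.
    block = choose_block(terms, free_vars)
    assignment = local_optimize(terms, block, assignment)
    remaining = free_vars - block
    total = 0.0
    for group in independent_groups(terms, remaining):
        group_vars = set().union(*(scope for _, scope in group)) & remaining
        assignment, value = recursively_decompose(group, group_vars, assignment,
                                                  choose_block, local_optimize)
        total += value
    return assignment, total

The point of the sketch is only the control flow: once a block of variables is fixed, the remaining terms frequently split into groups that share no unassigned variables, and each group can then be optimized independently and recursively.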

Citations

Deep Learning as a Mixed Convex-Combinatorial Optimization Problem
TLDR
A recursive mini-batch algorithm for learning deep hard-threshold networks, which includes the popular but poorly justified straight-through estimator as a special case, is developed and shown to improve classification accuracy in a number of settings.
Multiple Start Branch and Prune Filtering Algorithm for Nonconvex Optimization
TLDR
This work introduces the multiple start branch and prune filtering algorithm (MSBP), a Kalman filtering-based method for solving nonconvex optimization problems, and shows that it offers a better success rate at finding the optimal solution with less computation time.
The Sum-Product Theorem: A Foundation for Learning Tractable Models
TLDR
This paper generalizes the principle of summation to a much broader set of learning problems: all those where inference consists of summing a function over a semiring, and shows empirically that this greatly outperforms the standard approach of learning without regard to the cost of optimization.
Derivative-Free Optimization of High-Dimensional Non-Convex Functions by Sequential Random Embeddings
TLDR
This paper describes the properties of random embedding for high-dimensional problems with low optimal ε-effective dimensions, and proposes sequential random embeddings (SRE) to reduce the embedding gap while running optimization algorithms in the low-dimensional spaces.
Turning High-Dimensional Optimization Into Computationally Expensive Optimization
TLDR
It is suggested that searching for a good solution to a subproblem can be viewed as a computationally expensive problem and addressed with the aid of meta-models, and a novel approach, namely self-evaluation evolution (SEE), is proposed.
Probabilistic Approaches for Pose Estimation
TLDR
A surgical system that is capable of performing real-time tumor localization, hand-eye calibration, registration of preoperative models to the anatomy, and augmented reality is demonstrated.
A Novel Divide and Conquer Approach for Large-scale Optimization Problems
TLDR
An approximation approach named Divide and Approximate Conquer (DAC) is proposed, which reduces the cost of partial solution evaluation from exponential time to polynomial time while still guaranteeing convergence to the global optimum.
High-dimensional Black-box Optimization via Divide and Approximate Conquer
TLDR
An approximation approach named Divide and Approximate Conquer (DAC) is proposed, which reduces the cost of partial solution evaluation from exponential time to polynomial time while still guaranteeing convergence to the global optimum.
The Symbolic Interior Point Method
TLDR
A rich modeling language is introduced, for which an interior-point method computes approximate solutions in a generic way, and the flexibility of the resulting symbolic-numeric optimizer is demonstrated on decision-making and compressed-sensing tasks with millions of non-zero entries.

References

SHOWING 1-10 OF 55 REFERENCES
A coordinate gradient descent method for nonsmooth separable minimization
TLDR
A (block) coordinate gradient descent method is proposed for solving this class of nonsmooth separable problems; global convergence is established, along with linear convergence under a local Lipschitzian error bound assumption.
Nonlinear Optimization
TLDR
This book will help readers to understand the mathematical foundations of the modern theory and methods of nonlinear optimization and to analyze new problems, develop optimality theory for them, and choose or construct numerical solution methods.
Linear Programming Relaxations and Belief Propagation - An Empirical Study
TLDR
This paper compares tree-reweighted belief propagation (TRBP) and powerful general-purpose LP solvers (CPLEX) on relaxations of real-world graphical models from the fields of computer vision and computational biology, and finds that TRBP almost always finds the solution significantly faster than all the solvers in CPLEX and, more importantly, that TRBP can be applied to large-scale problems to which the CPLEX solvers cannot be applied.
Inexact block coordinate descent methods with application to non-negative matrix factorization
TLDR
A general method allowing an approximate solution of each block minimization subproblem is devised and the related convergence analysis is developed, showing that the proposed inexact method has the same convergence properties as the standard nonlinear Gauss-Seidel method.
Globally convergent block-coordinate techniques for unconstrained optimization
TLDR
New classes of globally convergent block-coordinate techniques for the unconstrained minimization of a continuously differentiable function are defined, along with line-search-based schemes that may also include partial global minimizations with respect to some components.
Solving #SAT and Bayesian Inference with Backtracking Search
TLDR
It is shown that standard backtracking search, when augmented with a simple memoization scheme (caching), can solve any sum-of-products problem with time complexity that is at least as good as any other state-of-the-art exact algorithm, and that it can also achieve the best known time-space tradeoff.
Performing Bayesian Inference by Weighted Model Counting
TLDR
An efficient translation from Bayesian networks to weighted model counting is presented, the best model-counting algorithms are extended to weighted model counting, an efficient method for computing all marginals in a single counting pass is developed, and the approach is evaluated on computationally challenging reasoning problems.
Bundle Adjustment in the Large
TLDR
The experiments show that truncated Newton methods, when paired with relatively simple preconditioners, offer state-of-the-art performance for large-scale bundle adjustment.
Combining Component Caching and Clause Learning for Effective Model Counting
TLDR
A model-counting program is presented that combines component caching with clause learning, one of the most important ideas used in modern SAT solvers, and significant evidence is provided that it can outperform existing algorithms for #SAT by orders of magnitude.