# Recursive Decomposition for Nonconvex Optimization

@article{Friesen2016RecursiveDF, title={Recursive Decomposition for Nonconvex Optimization}, author={Abram L. Friesen and Pedro M. Domingos}, journal={ArXiv}, year={2016}, volume={abs/1611.02755} }

Continuous optimization is an important problem in many areas of AI, including vision, robotics, probabilistic inference, and machine learning. Unfortunately, most real-world optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the objective function have combinatorial structure, and thus ideas from combinatorial optimization can be…
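As a rough illustration of the paper's core idea (exploiting decomposability so that independent sub-objectives can be optimized separately), here is a minimal sketch, not the paper's algorithm: the objective is assumed to be a sum of terms, each touching a few variables over a small finite domain, and blocks of variables that share no term are minimized independently. All names (`decompose`, `minimize`) are illustrative.

```python
import itertools

def decompose(terms):
    """Union-find over variables: two variables share a block iff some
    term depends on both, so blocks can be optimized independently."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for vs, _ in terms:
        for v in vs:
            find(v)  # register every variable, even in unary terms
        for a, b in zip(vs, vs[1:]):
            parent[find(a)] = find(b)
    blocks = {}
    for v in parent:
        blocks.setdefault(find(v), set()).add(v)
    return list(blocks.values())

def minimize(terms, domain):
    """Brute-force each independent block; the sum of block minima is the
    global minimum because blocks share no variables."""
    total, assignment = 0.0, {}
    for block in decompose(terms):
        block_terms = [(vs, f) for vs, f in terms if set(vs) <= block]
        order = sorted(block)
        best = None
        for vals in itertools.product(domain, repeat=len(order)):
            env = dict(zip(order, vals))
            cost = sum(f(*(env[v] for v in vs)) for vs, f in block_terms)
            if best is None or cost < best[0]:
                best = (cost, env)
        total += best[0]
        assignment.update(best[1])
    return total, assignment
```

Here `terms` pairs a tuple of variable names with a callable; a full recursive method would re-run the decomposition inside each block after fixing some variables, which this sketch omits.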

## 32 Citations

Deep Learning as a Mixed Convex-Combinatorial Optimization Problem

- Computer Science, ICLR

- 2018

A recursive mini-batch algorithm for learning deep hard-threshold networks, which includes the popular but poorly justified straight-through estimator as a special case, is developed and shown to improve classification accuracy in a number of settings.

Multiple Start Branch and Prune Filtering Algorithm for Nonconvex Optimization

- Computer Science, WAFR
- 2016

This work introduces the multiple start branch and prune filtering algorithm (MSBP), a Kalman filtering-based method for solving nonconvex optimization problems, and shows that it offers a better success rate at finding the optimal solution with less computation time.

The Sum-Product Theorem: A Foundation for Learning Tractable Models

- Computer Science, ICML
- 2016

This paper generalizes the principle of summation to a much broader set of learning problems: all those where inference consists of summing a function over a semiring, and shows empirically that this greatly outperforms the standard approach of learning without regard to the cost of optimization.

Semiring programming: A semantic framework for generalized sum product problems

- Computer Science, Int. J. Approx. Reason.
- 2020

Derivative-Free Optimization of High-Dimensional Non-Convex Functions by Sequential Random Embeddings

- Computer Science, IJCAI
- 2016

This paper describes the properties of random embedding for high-dimensional problems with low optimal ε-effective dimensions, and proposes sequential random embeddings (SRE) to reduce the embedding gap while running optimization algorithms in the low-dimensional spaces.
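As a rough illustration of the random-embedding idea (a single embedding, not the full sequential SRE method), one can search for low objective values inside a random low-dimensional subspace of a high-dimensional domain. All names here are illustrative assumptions, not from the paper.

```python
import numpy as np

def random_embedding_minimize(f, D, d, trials=50, rng=None):
    """Minimize f: R^D -> R by random search over a random d-dimensional
    subspace x = A @ y, with d << D (single random-embedding sketch)."""
    rng = np.random.default_rng(rng)
    A = rng.standard_normal((D, d))  # fixed random embedding matrix
    best_x, best_v = None, np.inf
    for _ in range(trials):
        y = rng.standard_normal(d)   # candidate in the low-dim space
        x = A @ y                    # lift to the original space
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v
```

The key property this relies on is that when the objective has low effective dimension, a random subspace of comparable dimension is likely to intersect a region with near-optimal values; SRE chains several such embeddings to shrink the residual gap.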

Turning High-Dimensional Optimization Into Computationally Expensive Optimization

- Computer Science, IEEE Transactions on Evolutionary Computation
- 2018

It is suggested that searching for a good solution to a subproblem can be viewed as a computationally expensive problem and can be addressed with the aid of meta-models, and a novel approach, self-evaluation evolution (SEE), is proposed.

Probabilistic Approaches for Pose Estimation

- Computer Science
- 2018

A surgical system that is capable of performing real-time tumor localization, hand-eye calibration, registration of preoperative models to the anatomy, and augmented reality is demonstrated.

A Novel Divide and Conquer Approach for Large-scale Optimization Problems

- Computer Science
- 2016

An approximation approach named Divide and Approximate Conquer (DAC) is proposed, which reduces the cost of partial solution evaluation from exponential to polynomial time while still guaranteeing convergence to the global optimum.

High-dimensional Black-box Optimization via Divide and Approximate Conquer

- Computer Science, ArXiv
- 2016

An approximation approach named Divide and Approximate Conquer (DAC) is proposed, which reduces the cost of partial solution evaluation from exponential to polynomial time while still guaranteeing convergence to the global optimum.

The Symbolic Interior Point Method

- Computer Science, AAAI
- 2017

A rich modeling language is introduced, for which an interior-point method computes approximate solutions in a generic way, and the flexibility of the resulting symbolic-numeric optimizer is demonstrated on decision-making and compressed-sensing tasks with millions of non-zero entries.

## References

SHOWING 1-10 OF 55 REFERENCES.

A coordinate gradient descent method for nonsmooth separable minimization

- Mathematics, Math. Program.
- 2009

A (block) coordinate gradient descent method for solving this class of nonsmooth separable problems is proposed, and global convergence, as well as linear convergence under a local Lipschitzian error bound assumption, is established for this method.
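A minimal sketch of coordinate gradient descent on a nonsmooth separable objective, assuming the lasso problem 0.5·||Ax − b||² + λ·||x||₁ as the concrete instance (the paper's setting is more general); `block_cd_lasso` and `soft_threshold` are illustrative names.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox operator of t*|.| (handles the nonsmooth L1 part)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def block_cd_lasso(A, b, lam, steps=200):
    """Cyclic coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1,
    a canonical nonsmooth separable problem."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)  # per-coordinate curvature ||A_j||^2
    for _ in range(steps):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue  # coordinate does not affect the objective
            r = b - A @ x + A[:, j] * x[j]  # residual excluding coord j
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
    return x
```

Each inner step minimizes the objective exactly in one coordinate while the others are held fixed; the separability of the L1 term is what makes this one-dimensional subproblem solvable in closed form.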

Nonlinear Optimization

- Computer Science
- 2006

This book will help readers to understand the mathematical foundations of the modern theory and methods of nonlinear optimization and to analyze new problems, develop optimality theory for them, and choose or construct numerical solution methods.

Linear Programming Relaxations and Belief Propagation - An Empirical Study

- Computer Science, J. Mach. Learn. Res.
- 2006

This paper compares tree-reweighted belief propagation (TRBP) with powerful general-purpose LP solvers (CPLEX) on relaxations of real-world graphical models from computer vision and computational biology, finding that TRBP almost always finds the solution significantly faster than all the solvers in CPLEX and, more importantly, that TRBP can be applied to large-scale problems for which the CPLEX solvers cannot.

Inexact block coordinate descent methods with application to non-negative matrix factorization

- Computer Science, Mathematics
- 2011

A general method allowing an approximate solution of each block minimization subproblem is devised and the related convergence analysis is developed, showing that the proposed inexact method has the same convergence properties as the standard nonlinear Gauss-Seidel method.

On the convergence of inexact block coordinate descent methods for constrained optimization

- Computer Science, Mathematics, Eur. J. Oper. Res.
- 2013

Globally convergent block-coordinate techniques for unconstrained optimization

- Mathematics, Computer Science
- 1999

New classes of globally convergent block-coordinate techniques for the unconstrained minimization of a continuously differentiable function are defined, including line-search-based schemes that may also incorporate partial global minimizations with respect to some components.

Solving #SAT and Bayesian Inference with Backtracking Search

- Computer Science, J. Artif. Intell. Res.
- 2009

It is shown that standard backtracking search, when augmented with a simple memoization scheme (caching), can solve any sum-of-products problem with time complexity at least as good as that of any other state-of-the-art exact algorithm, and that it can also achieve the best known time-space tradeoff.
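A minimal sketch of the idea behind this entry: backtracking search for #SAT where identical residual subproblems are counted once via memoization, assuming clauses as lists of signed integer literals. This is a toy cache on whole residual formulas, not the component-caching scheme of the paper.

```python
from functools import lru_cache

def count_models(clauses, n_vars):
    """Backtracking #SAT with memoization: identical residual
    subformulas (with the same number of free vars) are counted once."""
    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue  # clause satisfied, drop it
            reduced = tuple(l for l in c if l != -lit)
            if not reduced:
                return None  # empty clause: contradiction
            out.append(reduced)
        return frozenset(out)

    @lru_cache(maxsize=None)
    def count(cls, free):
        if cls is None:
            return 0          # contradiction on this branch
        if not cls:
            return 2 ** free  # remaining free variables unconstrained
        v = abs(next(iter(next(iter(cls)))))  # branch on some clause var
        return count(simplify(cls, v), free - 1) + \
               count(simplify(cls, -v), free - 1)

    init = frozenset(tuple(c) for c in clauses)
    return count(init, n_vars)
```

The cache key pairs the residual clause set with the number of unassigned variables, so two search paths that reach the same subformula contribute a single recursive count; component caching refines this by splitting the residual formula into disconnected components.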

Performing Bayesian Inference by Weighted Model Counting

- Computer Science, AAAI
- 2005

This work presents an efficient translation from Bayesian networks to weighted model counting, extends the best model-counting algorithms to weighted model counting, develops an efficient method for computing all marginals in a single counting pass, and evaluates the approach on computationally challenging reasoning problems.

Bundle Adjustment in the Large

- Computer Science, ECCV
- 2010

The experiments show that truncated Newton methods, when paired with relatively simple preconditioners, offer state-of-the-art performance for large-scale bundle adjustment.

Combining Component Caching and Clause Learning for Effective Model Counting

- Computer Science, SAT
- 2004

A model-counting program that combines component caching with clause learning, one of the most important ideas used in modern SAT solvers, and provides significant evidence that it can outperform existing algorithms for #SAT by orders of magnitude.