Deterministic consensus maximization with biconvex programming

@article{Cai2018DeterministicCM,
  title={Deterministic consensus maximization with biconvex programming},
  author={Zhipeng Cai and Tat-Jun Chin and Huu Le and David Suter},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.09436}
}
Consensus maximization is one of the most widely used robust fitting paradigms in computer vision, and the development of algorithms for consensus maximization is an active research topic. In this paper, we propose an efficient deterministic optimization algorithm for consensus maximization. Given an initial solution, our method conducts a deterministic search that forcibly increases the consensus of the initial solution. We show how each iteration of the update can be formulated as an instance… 
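
To make the objective concrete: consensus maximization seeks the model parameters that maximize the number of points whose residuals fall within an inlier threshold, and a deterministic refinement step tries to increase exactly this count starting from an initial estimate. The sketch below only computes that consensus score for a linear model on hypothetical data; it is not the biconvex update proposed in the paper.

```python
import numpy as np

def consensus(X, y, theta, eps):
    """Count inliers: points whose absolute residual under the linear
    model y ~ X @ theta is at most the threshold eps."""
    residuals = np.abs(X @ theta - y)
    return int(np.sum(residuals <= eps))

# Toy usage with hypothetical data: a noisy line plus a few gross outliers.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 100), np.ones(100)])
theta_true = np.array([2.0, -0.5])
y = X @ theta_true + rng.normal(0, 0.01, 100)
y[:10] += rng.uniform(2, 5, 10)                    # gross outliers

theta_init = theta_true + rng.normal(0, 0.2, 2)    # perturbed initial solution
print("consensus of initial solution:", consensus(X, y, theta_init, eps=0.05))
```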

Citations

Consensus Maximization Tree Search Revisited
TLDR
It is shown that the consensus maximization tree structure used previously actually contains paths that connect nodes at both adjacent and non-adjacent levels, and a new acceleration strategy is proposed that avoids such redundant paths.
Consensus Maximisation Using Influences of Monotone Boolean Functions
TLDR
The connection between the MaxCon problem and the abstract problem of finding the maximum upper zero of a monotone Boolean function defined over the Boolean cube is outlined, and it is shown that the influences of points belonging to the largest structure in the data tend to be smaller under certain conditions.
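
The Boolean-function view can be made concrete: feasibility of a subset of points (whether some model fits all of them within the inlier threshold) is a monotone Boolean function over subsets, and the influence of a point is the probability that toggling its membership flips feasibility for a random subset. The sketch below is a naive Monte Carlo estimator built around a hypothetical `is_feasible` oracle; it is not the estimator used in the paper.

```python
import numpy as np

def estimate_influences(n_points, is_feasible, n_samples=2000, seed=0):
    """Monte Carlo estimate of each point's influence on the (monotone)
    feasibility function defined over subsets of the data.

    is_feasible(mask) -> bool is a user-supplied (hypothetical) oracle that
    reports whether some model fits every point selected by the boolean
    mask to within the inlier threshold.
    """
    rng = np.random.default_rng(seed)
    influences = np.zeros(n_points)
    for _ in range(n_samples):
        mask = rng.random(n_points) < 0.5          # uniform random subset
        base = is_feasible(mask)
        for i in range(n_points):
            flipped = mask.copy()
            flipped[i] = not flipped[i]
            if is_feasible(flipped) != base:       # toggling point i flips feasibility
                influences[i] += 1
    return influences / n_samples
```

Per the summary above, points belonging to the largest structure should then tend to receive smaller influence estimates than gross outliers under the stated conditions.
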
Convex Relaxations for Consensus and Non-Minimal Problems in 3D Vision
TLDR
The main contribution of this paper is the claim that a good approximate solution for many polynomial problems involved in 3D vision can be obtained using the existing theory of numerical computational algebra.
Efficient Deterministic Search with Robust Loss Functions for Geometric Model Fitting.
TLDR
This prospective study aims to design efficient algorithms that benefit from a general optimization-based view, and a class of algorithms is introduced that performs a deterministic search for the inliers or the geometric model.
A Graduated Filter Method for Large Scale Robust Estimation
  • Huu Le, C. Zach
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
This paper introduces a novel solver for robust estimation that possesses a strong ability to escape poor local minima and adapts a filter framework from non-linear constrained optimization to automatically choose the level of relaxation in the original robust problem.
Robust Fitting in Computer Vision: Easy or Hard?
TLDR
This work presents several computational hardness results for consensus maximisation that underline the fundamental intractability of the problem and resolve several ambiguities in the literature.
Globally Optimal Consensus Maximization for Relative Pose Estimation With Known Gravity Direction
TLDR
The proposed method employs the branch-and-bound algorithm to solve a consensus maximization problem and is thus able to obtain the global solution with a provable guarantee; experiments show that it performs more robustly than existing methods.
A Hybrid Quantum-Classical Algorithm for Robust Fitting
TLDR
This work proposes a hybrid quantum-classical algorithm for robust fitting that solves a sequence of integer programs and terminates with either a global solution or an error bound, representing a concrete application of quantum computing in computer vision.
...

References

Showing 1-10 of 32 references
An Exact Penalty Method for Locally Convergent Maximum Consensus
  • Huu Le, Tat-Jun Chin, D. Suter
  • Computer Science, Mathematics
    2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
This paper develops a Frank-Wolfe-based algorithm that deterministically refines a maximum consensus solution and is much more practical on realistic input sizes.
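
As a reminder of the optimization building block named in this summary, the snippet below is a generic Frank-Wolfe (conditional gradient) loop demonstrated on a toy quadratic over the probability simplex; the exact-penalty consensus objective used in the paper is not reproduced here.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iters=100):
    """Generic Frank-Wolfe: repeatedly minimize the linearized objective over
    the feasible set (via the linear minimization oracle) and step toward it."""
    x = x0.copy()
    for k in range(n_iters):
        s = lmo(grad(x))                  # vertex minimizing the linearized objective
        gamma = 2.0 / (k + 2.0)           # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy usage: minimize ||x - c||^2 over the probability simplex (c lies inside it).
c = np.array([0.1, 0.7, 0.2])
grad = lambda x: 2 * (x - c)

def lmo(g):                               # simplex vertex with the smallest gradient entry
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

print(frank_wolfe(grad, lmo, x0=np.ones(3) / 3))   # approaches c
```
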
Efficient globally optimal consensus maximisation with tree search
TLDR
It is shown how consensus maximisation for a wide variety of vision tasks can be posed as a tree search problem, which leads to a novel algorithm based on A* search that is several orders of magnitude faster than previous exact methods.
Maximum Consensus Parameter Estimation by Reweighted ℓ1 Methods
TLDR
This work proposes a smooth surrogate function, the minimization of which leads to an extremely simple iteratively reweighted algorithm for MaxCon that is very efficient and, in many cases, yields the global solution.
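
To convey the flavor of such schemes, here is a generic iteratively reweighted ℓ1 heuristic for a linear model with inlier threshold eps, written with SciPy's LP solver; the particular surrogate and weight update of the paper are not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1_maxcon(A, b, eps, n_rounds=10, delta=1e-3):
    """Illustrative reweighted-l1 heuristic for maximum consensus on A @ theta ~ b.

    Each round solves the LP  min sum_i w_i * s_i  s.t. |A theta - b| <= eps + s, s >= 0,
    then down-weights points with large slack (suspected outliers), which pushes the
    slack vector toward sparsity, i.e. toward a large consensus set.
    """
    n, d = A.shape
    I = np.eye(n)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b + eps, -b + eps])
    bounds = [(None, None)] * d + [(0, None)] * n   # theta free, slacks nonnegative
    w = np.ones(n)
    theta = np.zeros(d)
    for _ in range(n_rounds):
        c = np.concatenate([np.zeros(d), w])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        theta, s = res.x[:d], res.x[d:]
        w = 1.0 / (s + delta)                        # reweighting step
    inliers = np.abs(A @ theta - b) <= eps
    return theta, inliers
```
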
Consensus Maximization with Linear Matrix Inequality Constraints
TLDR
This paper shows that the solution space can be reduced by introducing Linear Matrix Inequality (LMI) constraints, which leads to significant speed ups of the optimization time even for large amounts of outliers, while maintaining global optimality.
Guaranteed Outlier Removal with Mixed Integer Linear Programs
TLDR
This work proposes an algorithm based on mixed integer linear programming to perform guaranteed outlier removal as a technique to reduce the runtime of exact algorithms, and demonstrates that overall speedups of up to 80% can be achieved on common vision problems.
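
For context, consensus maximization with a linear residual is commonly written as the following big-M mixed integer program (a standard formulation from the literature, not necessarily the exact program used in this work):

```latex
\max_{\theta,\; z \in \{0,1\}^n} \; \sum_{i=1}^{n} z_i
\qquad \text{s.t.} \qquad
\lvert a_i^\top \theta - b_i \rvert \le \epsilon + M (1 - z_i), \quad i = 1, \dots, n,
```

where z_i = 1 certifies point i as an inlier at threshold ε and M is a sufficiently large constant. Roughly speaking, a point can be removed with a guarantee when the optimum of this program with its z_i fixed to 1 falls below an already known lower bound on the maximum consensus.
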
Consensus set maximization with guaranteed global optimality for robust geometry estimation
  • Hongdong Li
  • Computer Science
    2009 IEEE 12th International Conference on Computer Vision
  • 2009
TLDR
A new algorithm is presented that solves the largest consensus set maximization problem as a mixed integer program (MIP) via a tailored branch-and-bound method, where the bounds are computed from the MIP's convex under-estimators.
Deterministically maximizing feasible subsystem for robust model fitting with unit norm constraint
TLDR
This work proposes a deterministic branch-and-bound method to solve the MaxFS problem with guaranteed global optimality, introducing a piecewise linear relaxation that builds very tight under- and over-estimators for square terms by partitioning variable bounds into smaller segments.
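
The under- and over-estimators mentioned in this summary can be written out for a single square term (a standard construction, shown here only for illustration): on an interval, tangent lines stay below the convex function x² while the chord stays above it,

```latex
x \in [l, u]: \qquad
\underbrace{2 x_k x - x_k^2}_{\text{tangent at } x_k \in [l, u]} \;\le\; x^2 \;\le\; \underbrace{(l + u)\, x - l u}_{\text{chord over } [l, u]},
```

and the worst-case gap of the chord bound on a segment of width u - l is (u - l)²/4, so partitioning the variable bounds into smaller segments makes the piecewise-linear envelope as tight as desired.
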
Quasiconvex Optimization for Robust Geometric Reconstruction
  • Qifa Ke, T. Kanade
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2007
TLDR
This paper presents a novel quasiconvex optimization framework in which the geometric reconstruction problems are formulated as a small number of small-scale convex programs that are readily solvable and provides an intuitive method to handle directional uncertainties and outliers in measurements.
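
The structural fact exploited by this line of work (stated here in its standard form from the general literature, not quoted from the paper) is that the reprojection error of a 3D point X with homogeneous coordinates X̃, observed at m = (m₁, m₂) by a camera with rows p₁, p₂, p₃, is a norm of affine functions divided by a positive affine function, which is quasiconvex:

```latex
e(X) \;=\; \frac{\bigl\lVert \bigl( p_1^\top \tilde{X} - m_1\, p_3^\top \tilde{X},\;\; p_2^\top \tilde{X} - m_2\, p_3^\top \tilde{X} \bigr) \bigr\rVert_2}{p_3^\top \tilde{X}},
\qquad p_3^\top \tilde{X} > 0.
```

Hence e(X) ≤ γ is a second-order cone constraint for any fixed level γ, and the smallest achievable γ can be located by bisection, solving a small convex feasibility program at each step.
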
USAC: A Universal Framework for Random Sample Consensus
TLDR
A comprehensive overview of recent research in RANSAC-based robust estimation is presented by analyzing and comparing various approaches explored over the years, and a new framework for robust estimation, called Universal RANSAC (USAC), is introduced.
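
Since USAC is organized around the classical RANSAC hypothesize-and-verify loop, a minimal version of that loop for 2D line fitting is sketched below as a point of reference; it is purely illustrative and omits the sampling, degeneracy, and verification modules that USAC adds.

```python
import numpy as np

def ransac_line(points, eps, n_iters=500, seed=0):
    """Minimal RANSAC for a 2D line a*x + b*y + c = 0 fitted to `points` (N, 2):
    repeatedly fit a line to a random minimal sample (2 points) and keep the
    hypothesis with the largest consensus set at threshold eps."""
    rng = np.random.default_rng(seed)
    best_inliers, best_line = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        normal = np.array([-d[1], d[0]])           # normal of the candidate line
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                           # degenerate sample, skip
            continue
        normal /= norm
        c = -normal @ p
        dist = np.abs(points @ normal + c)         # point-to-line distances
        inliers = dist <= eps
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_line = inliers, (normal[0], normal[1], c)
    return best_line, best_inliers
```
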
...