# Approximate Douglas–Rachford algorithm for two-sets convex feasibility problems

```bibtex
@article{DazMilln2021ApproximateDA,
  title={Approximate Douglas–Rachford algorithm for two-sets convex feasibility problems},
  author={R. D{\'i}az Mill{\'a}n and Orizon Pereira Ferreira and Julien Ugon},
  journal={Journal of Global Optimization},
  year={2021},
  pages={1--16}
}
```
• Published 27 May 2021
• Computer Science
• Journal of Global Optimization
In this paper, we propose a new algorithm combining the Douglas–Rachford (DR) algorithm and the Frank–Wolfe algorithm, also known as the conditional gradient (CondG) method, for solving the classic convex feasibility problem. Within the algorithm, which we name the Approximate Douglas–Rachford (ApDR) algorithm, the CondG method is used as a subroutine to compute feasible inexact projections on the sets under consideration, and the ApDR iteration is defined based on the DR iteration. The ApDR…
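The combination described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's exact scheme or stopping rules: it assumes two Euclidean balls (whose linear minimization oracles have closed forms) and uses a plain Frank–Wolfe gap criterion inside the inexact projection.

```python
import numpy as np

def condg_projection(y, lin_oracle, z0, tol=1e-10, max_iter=1000):
    """Approximate projection of y onto a convex set C via the
    conditional-gradient (Frank-Wolfe) method, using only a linear
    minimization oracle over C. (Sketch; the paper's criterion differs.)"""
    z = z0.astype(float).copy()
    for _ in range(max_iter):
        grad = z - y                          # gradient of (1/2)||z - y||^2
        s = lin_oracle(grad)                  # argmin_{s in C} <grad, s>
        gap = grad @ (z - s)                  # Frank-Wolfe duality gap
        if gap <= tol:
            break
        step = min(1.0, gap / np.dot(z - s, z - s))  # exact line search
        z = z + step * (s - z)
    return z

def ball_oracle(center, radius):
    """Closed-form linear oracle for a Euclidean ball (assumed example)."""
    def oracle(g):
        n = np.linalg.norm(g)
        return center - radius * g / n if n > 0 else center.copy()
    return oracle

A = ball_oracle(np.zeros(2), 1.0)              # unit ball at the origin
B = ball_oracle(np.array([1.5, 0.0]), 1.0)     # overlapping second ball

x = np.array([3.0, 2.0])                       # DR governing sequence
for _ in range(200):                           # DR step with inexact projections
    pa = condg_projection(x, A, np.zeros(2))
    pb = condg_projection(2 * pa - x, B, np.array([1.5, 0.0]))
    x = x + pb - pa

shadow = condg_projection(x, A, np.zeros(2))   # shadow sequence P_A(x_k)
print(np.round(shadow, 3))
```

Here the shadow point `P_A(x_k)` approaches the intersection of the two balls; the `ball_oracle` helper and the tolerances are illustrative assumptions, not part of the paper.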
1 Citation
Mathematics
• 2022
In 1933 von Neumann proved a beautiful result: one can approximate a point in the intersection of two convex sets by alternating projections, i.e., by successively projecting onto one set and then the other.
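Von Neumann's alternating projection scheme is easy to sketch. The example below is a minimal illustration under assumed sets (a line and a disk, both with closed-form projections), not taken from the cited work:

```python
import numpy as np

def proj_line(x):
    """Projection onto the line y = x (assumed example set)."""
    t = (x[0] + x[1]) / 2.0
    return np.array([t, t])

def proj_disk(x, c=np.array([2.0, 2.0]), r=1.0):
    """Projection onto a closed disk centered at c with radius r."""
    d = x - c
    n = np.linalg.norm(d)
    return x.copy() if n <= r else c + r * d / n

# Alternate projections: the iterates approach the intersection.
x = np.array([5.0, -3.0])
for _ in range(30):
    x = proj_disk(proj_line(x))
```

Since the line passes through the disk's center, the two sets intersect and the iterates settle on a point lying in both.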
Mathematics
Math. Methods Oper. Res.
• 2020
This self-contained tutorial develops the convergence theory of projection algorithms within the framework of fixed point iterations, explains how to devise useful feasibility-problem formulations, and demonstrates the application of the Douglas–Rachford method to these formulations.
Mathematics
Numerical Algorithms
• 2018
This paper proposes and studies the iteration complexity of an inexact DRS method and a Douglas-Rachford-Tseng’s forward-backward (F-B) splitting method for solving two-operator and four-operator monotone inclusions, respectively.
Computer Science
IEEE Transactions on Signal Processing
• 2014
This work considers elementary methods based on projections for solving a sparse feasibility problem without employing convex heuristics, and applies different analytical tools that allow us to show global linear convergence of alternating projections under familiar constraint qualifications.
Mathematics
Computational Optimization and Applications
• 2021
The classical convex feasibility problem in a finite-dimensional Euclidean space is studied, and the alternating projection method is combined with the CondG method to design a new method, which can be seen as an inexact feasible version of the alternating projection method.
Computer Science
Math. Program.
• 2018
A new approximate version of the alternating direction method of multipliers (ADMM) which uses a relative error criterion is computationally evaluated on several classes of test problems and found to significantly outperform several alternatives on one problem class.
Computer Science
ICML
• 2015
This paper proves that the vanilla FW method converges at a rate of 1/t², and shows that various balls induced by lp norms, Schatten norms, and group norms are strongly convex on one hand, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution.
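To illustrate why closed-form linear optimization matters for Frank–Wolfe, here is a standard textbook sketch over the l1 ball (an assumed example, not from the cited paper): the linear oracle simply returns a signed vertex along the largest gradient coordinate.

```python
import numpy as np

def fw_l1(grad_f, x0, radius=1.0, iters=2000):
    """Frank-Wolfe over the l1 ball with the classic 2/(k+2) step size.
    The linear oracle is closed-form: pick the vertex -radius*sign(g_i)*e_i
    for the coordinate i with the largest |gradient| entry."""
    x = x0.astype(float).copy()
    for k in range(iters):
        g = grad_f(x)
        i = int(np.argmax(np.abs(g)))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])      # closed-form linear oracle
        x = x + (2.0 / (k + 2)) * (s - x)   # convex combination: stays feasible
    return x

# Minimize ||x - b||^2 over ||x||_1 <= 1, with b outside the ball.
b = np.array([0.8, 0.6])
x = fw_l1(lambda x: 2 * (x - b), np.zeros(2))
```

Because every update is a convex combination of feasible points, the iterates never leave the l1 ball, and no projection is ever needed.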
Computer Science
Optim. Methods Softw.
• 2022
Under reasonable assumptions, it is shown that the proposed method obtains an ϵ-approximate solution within a bounded number of gradient evaluations and linear oracle calls.
A new general framework for convex optimization over matrix factorizations, where every Frank-Wolfe iteration will consist of a low-rank update, is presented, and the broad application areas of this approach are discussed.
Mathematics
Comput. Optim. Appl.
• 2019
A new version of Newton’s method for solving constrained generalized equations, with a procedure to obtain a feasible inexact projection, is presented; the contraction mapping principle is used to establish a local analysis of the proposed method under appropriate assumptions.
Computer Science
• 2020
The SOCGS algorithm is presented, which uses a projection-free algorithm to solve the constrained quadratic subproblems inexactly and is useful when the feasible region can only be accessed efficiently through a linear optimization oracle, and computing first-order information of the function, although possible, is costly.