Constrained Monotone $k$-Submodular Function Maximization Using Multiobjective Evolutionary Algorithms With Theoretical Guarantee

@article{Qian2018ConstrainedM,
  title={Constrained Monotone $k$-Submodular Function Maximization Using Multiobjective Evolutionary Algorithms With Theoretical Guarantee},
  author={Chao Qian and Jing-Cheng Shi and Ke Tang and Zhi-Hua Zhou},
  journal={IEEE Transactions on Evolutionary Computation},
  year={2018},
  volume={22},
  pages={595-608}
}
The problem of maximizing monotone $k$-submodular functions under a size constraint arises in many applications, and it is NP-hard. In this paper, we propose a new approach which employs a multiobjective evolutionary algorithm to maximize the given objective and minimize the size simultaneously. For general cases, we prove that the proposed method can obtain the asymptotically tight approximation guarantee, which was also achieved by the greedy algorithm. Moreover, we further give instances…
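Below is a minimal sketch of the bi-objective scheme the abstract describes, in the style of the GSEMO algorithm used throughout this line of work: a solution assigns each of n elements one of k types (or none), and the objective value and the solution size are optimized simultaneously over a Pareto archive. The function name gsemo_k_submodular, the uniform resampling mutation, and the outright rejection of oversized offspring are illustrative assumptions, not details taken from the paper.

```python
import random

def gsemo_k_submodular(f, n, k, B, iters=10000, seed=0):
    """Bi-objective Pareto optimization sketch for monotone k-submodular
    maximization under a size constraint B (illustrative, not the
    paper's exact procedure).

    A solution is a length-n tuple x with x[i] in {0, 1, ..., k}:
    x[i] = 0 means element i is unselected, otherwise x[i] is its type.
    f(x) is maximized and |x| = #{i : x[i] != 0} is minimized
    simultaneously over an archive of mutually non-dominated solutions.
    """
    rng = random.Random(seed)
    size = lambda x: sum(1 for v in x if v != 0)

    empty = (0,) * n
    archive = {empty: (f(empty), 0)}  # solution -> (f-value, size)

    for _ in range(iters):
        # Pick a random archived solution; mutate each position
        # independently with probability 1/n to a uniform value in {0..k}.
        parent = rng.choice(list(archive))
        child = list(parent)
        for i in range(n):
            if rng.random() < 1.0 / n:
                child[i] = rng.randrange(k + 1)
        child = tuple(child)

        fc, sc = f(child), size(child)
        if sc > B:  # simplification: discard oversized offspring outright
            continue
        # Keep the child unless some archived solution weakly dominates it.
        if any(fo >= fc and so <= sc for fo, so in archive.values()):
            continue
        archive = {x: (fo, so) for x, (fo, so) in archive.items()
                   if not (fc >= fo and sc <= so)}
        archive[child] = (fc, sc)

    # Return the archived solution with the largest f-value (all feasible).
    return max(archive, key=lambda x: archive[x][0])
```

Treating the size as a second objective keeps small, high-quality partial solutions alive in the archive as stepping stones, which is the mechanism the theoretical analyses of such Pareto-optimization methods exploit.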
Multi-Objective Submodular Maximization by Regret Ratio Minimization with Theoretical Guarantee
TLDR
It is proved that the regret ratio of the output of RRMS is upper bounded by $1-\alpha+O\big(\sqrt{d-1}\cdot(\tfrac{d}{k-d})^{\frac{1}{d-1}}\big)$, where $d$ is the number of objectives, and this is the first theoretical guarantee for the situation with more than two objectives.
On Multiset Selection With Size Constraints
TLDR
This paper proposes an anytime randomized iterative approach POMS, which maximizes the given objective f and minimizes the multiset size simultaneously, and gives lower bounds on the submodularity ratio for the objectives of budget allocation.
Multiobjective Evolutionary Algorithms Are Still Good: Maximizing Monotone Approximately Submodular Minus Modular Functions
  • Chao Qian
  • Mathematics, Computer Science
    Evolutionary Computation
  • 2021
TLDR
It is proved that by optimizing the original objective function (g-c) and the size simultaneously, the GSEMO fails to achieve a good polynomial-time approximation guarantee, but it is also proved that by optimizing a distorted objective function and the size simultaneously, the GSEMO can still achieve the best-known polynomial-time approximation guarantee.
Multi-objective Evolutionary Algorithms are Generally Good: Maximizing Monotone Submodular Functions over Sequences
TLDR
It is proved that for each kind of previously studied monotone submodular objective functions over sequences, a simple multi-objective EA, i.e., GSEMO, can always reach or improve the best known approximation guarantee after running polynomial time in expectation.
Streaming algorithms for Budgeted k-Submodular Maximization problem
TLDR
This paper proposes two streaming algorithms with approximation guarantees for the Budgeted k-Submodular Maximization problem, one of which is a deterministic streaming algorithm with a provable approximation ratio.
Minimizing Ratio of Monotone Non-submodular Functions
TLDR
This paper takes advantage of the greedy technique and obtains a performance guarantee depending on the generalized curvature and the inverse generalized curvature of f, as well as the submodularity ratio of g.
Streaming k-Submodular Maximization under Noise subject to Size Constraint
TLDR
This paper investigates a more realistic scenario of this problem in which (1) obtaining an exact evaluation of the objective function is impractical and only a noisy version is acquired, and (2) algorithms are required to take only a single pass over the dataset, producing solutions in a timely manner.
Sequence Selection by Pareto Optimization
TLDR
It is proved that for any previously studied objective function, POSeqSel using a reasonable time can always reach or improve the best known approximation guarantee.
Distributed Pareto Optimization for Large-Scale Noisy Subset Selection
  • Chao Qian
  • Computer Science
    IEEE Transactions on Evolutionary Computation
  • 2020
TLDR
This paper proposes a new distributed multiobjective evolutionary algorithm called DPONSS for large-scale noisy subset selection and proves its approximation guarantee under two common noise models, i.e., multiplicative noise and additive noise, which is significantly better than that of DPOSS.
Evolutionary Algorithms and Submodular Functions: Benefits of Heavy-Tailed Mutations
TLDR
The proposed mutation operator is competitive with pre-existing ones when used by the (1+1) EA on classes of problems for which results on the other mutation operators are available, and achieves a (1/3)-approximation on any non-negative submodular function in polynomial time.
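As a companion sketch, here is one common form of such a heavy-tailed ("fast") mutation operator on bit strings: the per-bit mutation rate r/n is chosen by drawing r from a power-law distribution, so large jumps are rare but not exponentially so. The exponent beta and the function name are illustrative assumptions, not taken from the cited paper.

```python
import random

def heavy_tailed_mutation(x, beta=1.5, rng=random):
    """Heavy-tailed ('fast') mutation sketch for bit strings: draw a rate
    parameter r from a power-law distribution on {1, ..., n//2} with
    exponent beta, then flip each bit independently with probability r/n.
    Small r is most likely, but large jumps occur often enough to help
    escape local optima.
    """
    n = len(x)
    rates = list(range(1, n // 2 + 1))
    weights = [r ** (-beta) for r in rates]
    r = rng.choices(rates, weights=weights, k=1)[0]
    return [1 - b if rng.random() < r / n else b for b in x]
```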

References

SHOWING 1-10 OF 37 REFERENCES
On Subset Selection with General Cost Constraints
TLDR
POMC is proposed, an anytime randomized iterative approach that achieves the same general approximation guarantee as the generalized greedy algorithm, but can utilize more time to find better solutions in applications.
Monotone k-Submodular Function Maximization with Size Constraints
TLDR
This paper gives constant-factor approximation algorithms for maximizing monotone k-submodular functions subject to several size constraints and experimentally demonstrates that these algorithms outperform baseline algorithms in terms of the solution quality.
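For reference, a minimal sketch of the total-size-constrained greedy rule from this line of work: in each round, make the single (element, type) assignment with the largest marginal gain. The oracle signature and function name are illustrative assumptions, and the individual-size-constraint variant is not shown.

```python
def greedy_k_submodular(f, n, k, B):
    """Greedy sketch for monotone k-submodular maximization under a
    total size constraint B: in each of B rounds, make the single
    (element, type) assignment with the largest marginal gain.
    """
    x = [0] * n  # 0 = unselected, 1..k = assigned type
    for _ in range(B):
        base, best, best_gain = f(tuple(x)), None, 0.0
        for e in range(n):
            if x[e] != 0:
                continue
            for t in range(1, k + 1):
                x[e] = t                      # tentatively assign type t
                gain = f(tuple(x)) - base     # marginal gain of (e, t)
                x[e] = 0                      # undo the assignment
                if best is None or gain > best_gain:
                    best, best_gain = (e, t), gain
        if best is None:  # every element already assigned
            break
        x[best[0]] = best[1]
    return x
```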
Improved Approximation Algorithms for k-Submodular Function Maximization
TLDR
The hardness result implies that the algorithms are asymptotically tight, and the approach is extended to provide constant-factor approximation algorithms for maximizing skew-bisubmodular functions, which were recently introduced as generalizations of bisubmodular functions.
On Greedy Maximization of Entropy
TLDR
The main goal of this paper is to explore and answer why greedy selection does significantly better than its theoretical guarantee of a (1 - 1/e) approximation ratio.
On the approximation ability of evolutionary optimization with application to minimum set cover
Towards Minimizing k-Submodular Functions
TLDR
A k-submodular polyhedron is defined, a min-max theorem is proved, a greedy algorithm is given to construct the vertices of the polyhedron, and the known min-max theorems for submodular and bisubmodular functions are generalized.
Approximating covering problems by randomized search heuristics using multi-objective models
TLDR
It is shown that optimal solutions can be approximated within a factor of log n using the multi-objective approach, while the approximation quality obtainable by the single-objective approach in expected polynomial time may be arbitrarily bad.
Maximizing Bisubmodular and k-Submodular Functions
TLDR
This paper provides the first approximation guarantees for maximizing a general bisubmodular or k-submodular function and provides further intuition for the algorithm of Buchbinder et al. [FOCS'12] in the submodular case.
Computing Minimum Cuts by Randomized Search Heuristics
TLDR
It is proved that there exist instances of the minimum s-t-cut problem that cannot be solved by standard single-objective evolutionary algorithms in reasonable time, and a bi-criteria approach based on the famous maximum-flow/minimum-cut theorem is developed that enables evolutionary algorithms to find an optimal solution in expected polynomial time.
Lazier Than Lazy Greedy
TLDR
The first linear-time algorithm for maximizing a general monotone submodular function subject to a cardinality constraint is developed, and it is shown that the randomized algorithm, STOCHASTIC-GREEDY, can achieve a (1 - 1/e - ε) approximation guarantee, in expectation, to the optimum solution in time linear in the size of the data.
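A minimal sketch of the STOCHASTIC-GREEDY idea behind that guarantee, assuming a set-valued oracle f over a ground set of hashable items: each round samples roughly (n/c)·ln(1/ε) of the remaining elements and adds the sampled element with the largest marginal gain. Parameter names and defaults are illustrative.

```python
import math
import random

def stochastic_greedy(f, ground, c, eps=0.1, seed=0):
    """STOCHASTIC-GREEDY sketch for monotone submodular maximization
    under a cardinality constraint c: each round samples about
    (n/c) * ln(1/eps) of the remaining elements and adds the sampled
    element with the largest marginal gain, giving a (1 - 1/e - eps)
    guarantee in expectation with only O(n * log(1/eps)) evaluations.
    """
    rng = random.Random(seed)
    remaining = list(ground)
    n = len(remaining)
    s = max(1, math.ceil((n / c) * math.log(1.0 / eps)))
    S = set()
    for _ in range(c):
        sample = rng.sample(remaining, min(s, len(remaining)))
        base = f(S)
        best = max(sample, key=lambda e: f(S | {e}) - base)
        S.add(best)
        remaining.remove(best)
    return S
```

Sampling only s elements per round, instead of scanning all remaining ones as lazy greedy does, is what reduces the evaluation count from O(n·c) to linear in n.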