The query complexity of certification

@article{Blanc2022TheQC,
  title={The query complexity of certification},
  author={Guy Blanc and Caleb M. Koch and Jane Lange and Li-Yang Tan},
  journal={Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing},
  year={2022}
}
  • Guy Blanc, Caleb M. Koch, Jane Lange, Li-Yang Tan
  • Published 19 January 2022
  • Computer Science, Mathematics
  • Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing
We study the problem of certification: given queries to a function f : {0,1}^n → {0,1} with certificate complexity ≤ k and an input x⋆, output a size-k certificate for f's value on x⋆. For monotone functions, a classic local search algorithm of Angluin accomplishes this task with n queries, which we show is optimal for local search algorithms. Our main result is a new algorithm for certifying monotone functions with O(k^8 log n) queries, which comes close to matching the information-theoretic… 
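
The local search routine alluded to above is simple enough to sketch. The Python snippet below is an illustrative reconstruction, not code from the paper: it assumes query access to a monotone f given as a callable on 0/1 tuples, tentatively flips each coordinate of x⋆ away from the value being certified, and keeps only the flips that do not change f, for roughly n queries in total. The coordinates that cannot be flipped form a certificate of size at most the certificate complexity of f.

# Illustrative sketch (not the paper's code): Angluin-style local search
# certification for a monotone function, given query access to f.
def certify_monotone(f, x_star):
    """Return a set S of coordinates of x_star that certifies f(x_star).

    Assumes f : {0,1}^n -> {0,1} is monotone and given as a callable on
    tuples of 0/1 values. Uses at most n queries beyond the initial one.
    """
    b = f(x_star)                 # the value to certify
    y = list(x_star)
    n = len(y)
    for i in range(n):
        if y[i] == b:             # only coordinates set "towards" b matter
            y[i] = 1 - y[i]       # tentatively flip coordinate i away from b
            if f(tuple(y)) != b:  # the flip changed f's value, so coordinate i
                y[i] = b          # is essential: restore it
    # For monotone f, the coordinates still equal to b force f's value on x_star.
    return {i for i in range(n) if y[i] == b}

# Example: f is the AND of the first two bits; the certificate for (1, 1, 0, 1) is {0, 1}.
f = lambda z: z[0] & z[1]
print(certify_monotone(f, (1, 1, 0, 1)))  # -> {0, 1}
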
4 Citations

Certification with an NP Oracle

This work considers certification with stricter instance-wise guarantees, and obtains an optimal inapproximability ratio, adding to a small handful of problems in the higher levels of the polynomial hierarchy for which optimal inapproximability is known.

Logic-Based Explainability in Machine Learning

This paper overviews ongoing research efforts on computing rigorous, model-based explanations of ML models, including the definitions of explanations, the characterization of the complexity of computing explanations, and how to make explanations interpretable for human decision makers, among other topics.

A query-optimal algorithm for finding counterfactuals

A lower bound of S(f)^{Ω(∆_f(x⋆))} + Ω(log d) is proved on the query complexity of any algorithm, thereby showing that the guarantees of the algorithm are essentially optimal.

An Optimal Algorithm for Certifying Monotone Functions

The algorithm makes O(C(f) · log n) queries to f, which matches the information-theoretic lower bound for this problem and resolves the concrete open question posed in the STOC ’22 paper of Blanc, Koch, Lange, and Tan.
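
For intuition on the information-theoretic lower bound referenced in this entry, a standard counting sketch (not taken from the cited paper) goes as follows: over suitably hard functions, a certifying algorithm must be able to output essentially any of the \binom{n}{k} size-k subsets of coordinates, while each query reveals at most one bit, so the number of queries must be at least

\log_2 \binom{n}{k} \;\ge\; k \log_2 \frac{n}{k},

which is \Omega(k \log n) whenever, say, k \le \sqrt{n}.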

References

Showing 1–10 of 37 references

Anchors: High-Precision Model-Agnostic Explanations

We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions.
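
As a rough illustration of the anchor idea, and not the authors' algorithm, the precision of a candidate anchor can be estimated by sampling perturbations that hold the anchored features of an input fixed and measuring how often the model's prediction survives; the function name and sampling interface below are hypothetical.

import numpy as np

def anchor_precision(predict, x, anchor_idx, sample_fn, n_samples=1000, rng=None):
    """Estimate the precision of a candidate anchor: the fraction of perturbed
    inputs that keep the anchored features of x fixed yet receive the same
    prediction as x. `sample_fn(x, rng)` is an assumed, user-supplied
    perturbation distribution; nothing here is taken from the Anchors paper."""
    if rng is None:
        rng = np.random.default_rng()
    target = predict(x[None, :])[0]                   # prediction being explained
    samples = np.stack([sample_fn(x, rng) for _ in range(n_samples)])
    samples[:, anchor_idx] = x[anchor_idx]            # hold anchored features fixed
    return float(np.mean(predict(samples) == target))

An anchor would then be accepted when this estimated precision clears a user-chosen threshold (for example 0.95), matching the "high-precision" framing above.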

Queries and concept learning

We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries.

Every decision tree has an influential variable

A very easy proof that the randomized query complexity of nontrivial monotone graph properties is at least Ω(v^{4/3}/p^{1/3}), where v is the number of vertices and p ≤ 1/2 is the critical threshold probability.

Provably efficient, succinct, and precise explanations

The approach of “explaining by implicit learning” shares elements of two previously disparate methods for post-hoc explanations, global and local explanations, and the case is made that it enjoys advantages of both.

Alibi Explain: Algorithms for Explaining Machine Learning Models

Alibi Explain, an open-source Python library for explaining predictions of machine learning models, is introduced with integrations into machine learning deployment platforms such as Seldon Core and KFServing, and distributed explanation capabilities using Ray.

Explanations for Monotonic Classifiers

Novel algorithms are described for computing one formal explanation of a (black-box) monotonic classifier, with run time polynomial in the run time of the classifier and in the number of features.

Towards Trustable Explainable AI

This paper overviews advances in the rigorous, logic-based approach to XAI, argues that it is indispensable whenever trustable XAI is of concern, and shows that it is useful not only for computing trustable explanations but also for validating explanations computed heuristically.

On The Reasons Behind Decisions

A theory for unveiling the reasons behind the decisions made by Boolean classifiers is presented and notions such as sufficient, necessary and complete reasons behind decisions are defined, in addition to classifier and decision bias.

On Validating, Repairing and Refining Heuristic ML Explanations

Earlier work is extended to the case of boosted trees, and the quality of explanations obtained with state-of-the-art heuristic approaches is assessed.