Corpus ID: 233864822

Scaling up Memory-Efficient Formal Verification Tools for Tree Ensembles

John Törnblom and Simin Nadjm-Tehrani
To guarantee that machine learning models yield outputs that are not only accurate but also robust, recent works propose formally verifying robustness properties of machine learning models. To be applicable to realistic safety-critical systems, the verification algorithms used need to manage the combinatorial explosion resulting from vast variations in the input domain, and be able to verify correctness properties derived from versatile and domain-specific requirements. In this paper, we… 
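To make the combinatorial explosion mentioned above concrete, here is a minimal sketch (not the paper's algorithm) of exhaustive range analysis for a toy tree ensemble. Trees are hypothetical nested tuples `(feature, threshold, left, right)` with numeric leaves; the ensemble output is the sum of per-tree leaf values, and the analysis enumerates every combination of reachable leaves over an axis-aligned input box:

```python
from itertools import product

def reachable_leaves(tree, box):
    """Leaf values reachable when the input lies in `box` (list of (lo, hi))."""
    if not isinstance(tree, tuple):
        return {tree}
    feat, thr, left, right = tree
    lo, hi = box[feat]
    out = set()
    if lo <= thr:  # left branch (x[feat] <= thr) is feasible in this box
        lbox = list(box); lbox[feat] = (lo, min(hi, thr))
        out |= reachable_leaves(left, lbox)
    if hi > thr:   # right branch (x[feat] > thr) is feasible in this box
        rbox = list(box); rbox[feat] = (max(lo, thr), hi)
        out |= reachable_leaves(right, rbox)
    return out

def ensemble_output_range(trees, box):
    """Conservative (min, max) of the ensemble sum over the box.

    Enumerating one leaf set per tree and taking the cross product grows
    multiplicatively with the number of trees -- the combinatorial
    explosion that scalable verification tools must avoid."""
    per_tree = [reachable_leaves(t, box) for t in trees]
    sums = [sum(combo) for combo in product(*per_tree)]
    return min(sums), max(sums)

# Two hypothetical decision stumps over a single feature.
trees = [(0, 0.5, -1.0, 1.0), (0, 0.3, 0.0, 2.0)]
lo, hi = ensemble_output_range(trees, [(0.0, 1.0)])  # -> (-1.0, 3.0)
```

Because leaf combinations are treated independently across trees, the result over-approximates the true output range; real verifiers refine or prune this search rather than enumerate it.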
1 Citation


Versatile Verification of Tree Ensembles

This paper introduces Veritas, a generic algorithm that tackles multiple different verification tasks for tree ensemble models such as random forests (RFs) and gradient-boosted decision trees (GBDTs), together with a novel search-space representation.

Abstract Interpretation of Decision Tree Ensemble Classifiers

Francesco Ranzato and Marco Zanella, Dipartimento di Matematica, University of Padova, Italy ({ranzato, mzanella}@math.unipd.it)

CatBoost: unbiased boosting with categorical features

This paper presents the key algorithmic techniques behind CatBoost, a new gradient boosting toolkit, provides a detailed analysis of the prediction-shift problem it addresses, and demonstrates that the proposed algorithms solve it effectively, leading to excellent empirical results.

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

An Abstraction-Refinement Approach to Formal Verification of Tree Ensembles

Recent advances in machine learning are now being considered for integration in safety-critical systems such as vehicles, medical equipment and critical infrastructure. However, organizations in…

Verifying Tree Ensembles by Reasoning about Potential Instances

A strategy is presented that prunes part of the input space, given the question asked, to simplify the problem; it follows an incremental divide-and-conquer approach that can always return some answers and indicates which parts of the input domain are still uncertain.
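The divide-and-conquer idea summarized above can be sketched as follows. This is an illustrative anytime procedure, not the cited paper's method: `sum_bounds` is a hypothetical bounding oracle standing in for the model, the query asks whether any input in a box can push the output above a threshold, and regions still undecided at the depth limit are reported as uncertain:

```python
def query(box, bound_fn, threshold, depth=0, max_depth=8):
    """Ask: can any input in `box` make the output exceed `threshold`?

    Returns ('yes' | 'no' | 'unknown', list_of_uncertain_boxes)."""
    lo, hi = bound_fn(box)
    if hi <= threshold:
        return "no", []          # whole region is safely below the threshold
    if lo > threshold:
        return "yes", []         # every point in the region exceeds it
    if depth == max_depth:
        return "unknown", [box]  # give up here; report the region as uncertain
    # Split the widest dimension and recurse on both halves.
    d = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    a, b = box[d]
    uncertain = []
    for seg in ((a, (a + b) / 2), ((a + b) / 2, b)):
        half = list(box)
        half[d] = seg
        res, u = query(half, bound_fn, threshold, depth + 1, max_depth)
        if res == "yes":
            return "yes", []
        uncertain += u
    return ("unknown", uncertain) if uncertain else ("no", [])

def sum_bounds(box):
    """Hypothetical bounding oracle: exact interval bounds of f(x) = x0 + x1."""
    return (box[0][0] + box[1][0], box[0][1] + box[1][1])

# Can x0 + x1 exceed 2.5 on the unit square? No; the bound settles it at once.
answer, open_regions = query([(0.0, 1.0), (0.0, 1.0)], sum_bounds, 2.5)
```

Cheap bounds let large sub-boxes be discarded without enumeration, and the list of open regions is exactly the "still uncertain" part of the input domain the summary describes.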

Guaranteeing Safety for Neural Network-Based Aircraft Collision Avoidance Systems

A method to provide safety guarantees when using a neural network collision avoidance system is proposed and experiments with systems inspired by ACAS X show that neural networks giving either horizontal or vertical maneuvers can be proven safe.

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

This work reviews and categorizes the interpretability approaches suggested by different research works, hoping that insight into interpretability will develop alongside greater consideration for medical practice, and encouraging initiatives to push forward data-based, mathematically grounded, and technically grounded medical education.

Formal Verification of Input-Output Mappings of Tree Ensembles

TOOLympics 2019: An Overview of Competitions in Formal Methods

Competitions are a suitable way to bring together the community and to compare the state of the art, in order to identify progress and new challenges in the research area.