Corpus ID: 232417721

Strong Optimal Classification Trees

Sina Aghaei, Andrés Gómez, Phebe Vayanos
Decision trees are among the most popular machine learning (ML) models and are used routinely in applications ranging from revenue management and medicine to bioinformatics. In this paper, we consider the problem of learning optimal binary classification trees. Literature on the topic has burgeoned in recent years, motivated both by the empirical suboptimality of heuristic approaches and the tremendous improvements in mixed-integer optimization (MIO) technology. Yet, existing MIO-based… 
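The paper formulates tree learning as a mixed-integer optimization problem; as a hypothetical, solver-free illustration of what "optimal" means in this setting, the sketch below exhaustively searches all axis-aligned depth-1 trees (stumps) on a toy binary dataset and returns the one that minimizes training misclassifications. The function name and data are invented for illustration; the actual MIO formulations in the paper handle deeper trees and scale via solver technology.

```python
from itertools import product

def optimal_stump(X, y):
    """Exhaustive search over all depth-1 axis-aligned trees.

    X: list of binary feature vectors, y: list of 0/1 labels.
    Returns (feature index, left label, right label, training errors).
    """
    n_features = len(X[0])
    best = None
    for f, left, right in product(range(n_features), (0, 1), (0, 1)):
        errors = sum(
            (left if x[f] == 0 else right) != label
            for x, label in zip(X, y)
        )
        if best is None or errors < best[3]:
            best = (f, left, right, errors)
    return best

# Toy data: the label equals feature 1; feature 0 is noise.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 0, 1]
print(optimal_stump(X, y))  # (1, 0, 1, 0): split on feature 1, zero errors
```

Unlike a greedy heuristic, which commits to one split criterion locally, this search certifies global optimality over the (tiny) hypothesis class; MIO methods achieve the same guarantee for much larger classes.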


Mixed integer linear optimization formulations for learning optimal binary classification trees
This paper proposes four mixed integer linear optimization (MILO) formulations for designing optimal binary classification trees and provides theoretical comparisons between these formulations and the strongest flow-based MILO formulation of Aghaei et al. (2021).
Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features
This paper presents a new discrete optimization method based on branch-and-bound (BnB) to obtain optimal decision trees and shows significant speedups compared to existing approaches for shallow optimal trees on various real datasets.
Fast Sparse Decision Tree Optimization via Reference Ensembles
This work addresses the problem of sparse decision tree optimization via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm, and shows that in many cases it can rapidly construct sparse decision trees that match the accuracy of black box models.
A cautionary tale on fitting decision trees to data from additive models: generalization lower bounds
A sharp squared-error generalization lower bound is proved for a large class of decision tree algorithms fitted to sparse additive models with C¹ component functions, and a novel connection is established between decision tree estimation and rate-distortion theory, a subfield of information theory.
Learning Optimal Fair Classification Trees
A mixed integer optimization (MIO) framework for learning optimal classification trees of fixed depth that can be conveniently augmented with arbitrary domain specific fairness constraints is proposed, showcasing its versatile modeling power that allows decision makers to fine-tune the trade-off between accuracy and fairness.
Learning Optimal Prescriptive Trees from Observational Data
It is shown that under mild conditions the proposed method for learning optimal prescriptive trees using mixed-integer optimization (MIO) technology is asymptotically exact in the sense that it converges to an optimal out-of-sample treatment assignment policy as the number of historical data samples tends to infinity.
Optimal Robust Classification Trees
This paper proposes a mixed-integer optimization formulation and a tailored solution algorithm for learning optimal classification trees that are robust to adversarial perturbations in the data features, and evaluates the performance on numerous publicly available datasets and compares it to a regularized, nonrobust optimal tree.
Optimal classification trees
Optimal classification trees are presented: a novel formulation of the decision tree problem using modern MIO techniques that yields the optimal decision tree for axis-aligned splits. Synthetic tests demonstrate that these methods recover the true decision tree more closely than heuristics, refuting the notion that optimal methods overfit the training data.
Generalized and Scalable Optimal Sparse Decision Trees
The contribution in this work is to provide a general framework for decision tree optimization that addresses the two significant open problems in the area: treatment of imbalanced data and fully optimizing over continuous variables.
Sparsity in Optimal Randomized Classification Trees
MurTree: Optimal Classification Trees via Dynamic Programming and Search
This work provides a novel algorithm for learning optimal classification trees based on dynamic programming and search that supports constraints on the depth of the tree and number of nodes and it is argued it can be extended with other requirements.
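The dynamic-programming idea behind approaches such as MurTree can be stated recursively: the best tree of depth d for a dataset decomposes into the best split plus the best subtrees of depth d-1 on each side. The sketch below is a minimal, hypothetical illustration on binary features; the memoization, caching, and specialized bounds that make the real algorithms fast are omitted.

```python
def best_tree_errors(rows, depth):
    """Minimum training misclassifications of any tree of the given depth.

    rows: list of (binary feature tuple, 0/1 label) pairs.
    A leaf predicts the majority label; an internal node splits on one
    feature and recurses on each side with depth - 1.
    """
    ones = sum(label for _, label in rows)
    leaf_errors = min(ones, len(rows) - ones)  # majority-vote leaf
    if depth == 0 or leaf_errors == 0:
        return leaf_errors
    best = leaf_errors
    n_features = len(rows[0][0]) if rows else 0
    for f in range(n_features):
        left = [r for r in rows if r[0][f] == 0]
        right = [r for r in rows if r[0][f] == 1]
        if not left or not right:
            continue  # split separates nothing; skip it
        best = min(best,
                   best_tree_errors(left, depth - 1)
                   + best_tree_errors(right, depth - 1))
    return best

rows = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR labels
print(best_tree_errors(rows, 1), best_tree_errors(rows, 2))  # 2 0
```

The XOR example shows why optimality matters: no depth-1 tree can do better than 2 errors, but the recursion finds a depth-2 tree with zero errors, where a greedy split criterion may see no gain at the root.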
Learning Optimal Decision Trees Using Constraint Programming
This paper proposes a new, more scalable approach based on constraint programming for learning decision trees of fixed maximum depth that minimize classification error, and presents an efficient method for constructing an optimal decision tree of limited depth.
Optimal randomized classification trees
Optimal Decision Trees for Nonlinear Metrics
This work proposes a novel algorithm based on bi-objective optimisation, which treats misclassifications of each binary class as a separate objective, and shows that, for a large class of metrics, the optimal tree lies on the Pareto frontier.
Optimal decision trees for categorical data via integer programming
A mixed integer programming formulation is proposed to construct optimal decision trees of a prespecified size; it takes the special structure of categorical features into account and allows combinatorial decisions (based on subsets of feature values) at each node.
Optimal Sparse Decision Trees
This work introduces the first practical algorithm for optimal decision trees for binary variables, a co-design of analytical bounds that reduce the search space and modern systems techniques, including data structures and a custom bit-vector library.
The voice of optimization
This work redefines optimization as a multiclass classification problem in which the predictor gives insights into the logic behind the optimal solution, and OCTs and OCT-Hs give optimization a voice.