• Corpus ID: 215814317

Beyond Trees: Classification with Sparse Pairwise Dependencies

@article{Tenzer2020BeyondTC,
  title={Beyond Trees: Classification with Sparse Pairwise Dependencies},
  author={Yaniv Tenzer and Amit Moscovich and Mary Frances Dorn and Boaz Nadler and Clifford H. Spiegelman},
  journal={J. Mach. Learn. Res.},
  year={2020},
  volume={21},
  pages={189:1-189:33}
}
Several classification methods assume that the underlying distributions follow tree-structured graphical models. Indeed, trees capture statistical dependencies between pairs of variables, which may be crucial to attain low classification errors. The resulting classifier is linear in the log-transformed univariate and bivariate densities that correspond to the tree edges. In practice, however, observed data may not be well approximated by trees. Yet, motivated by the importance of pairwise… 
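To make the tree-structured baseline concrete, the sketch below illustrates the kind of classifier the abstract refers to: a generative model whose log-density decomposes into univariate terms plus bivariate correction terms attached to the edges of a tree, with the predicted class obtained by comparing the resulting log-likelihoods. This is a minimal illustration under stated assumptions, using kernel density estimates and a user-supplied edge set; TreeDensityClassifier and its edges argument are hypothetical names, not the paper's implementation, and the paper itself moves beyond trees to sparse pairwise dependencies.

# Minimal sketch of a tree-structured generative classifier: the decision score is
# linear in log-transformed univariate and bivariate densities attached to tree edges
# (Chow-Liu style factorization). Illustration only, not the paper's algorithm.
import numpy as np
from scipy.stats import gaussian_kde

class TreeDensityClassifier:
    def __init__(self, edges):
        self.edges = edges  # list of (u, v) feature-index pairs forming a tree

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            # Per-class univariate KDEs and bivariate KDEs on the tree edges.
            uni = [gaussian_kde(Xc[:, j]) for j in range(X.shape[1])]
            biv = {(u, v): gaussian_kde(Xc[:, [u, v]].T) for u, v in self.edges}
            self.models_[c] = (uni, biv)
        return self

    def _log_density(self, x, c):
        uni, biv = self.models_[c]
        # Tree factorization: p(x) = prod_j p(x_j) * prod_(u,v) p(x_u, x_v) / (p(x_u) p(x_v))
        ll = sum(np.log(uni[j](x[j])[0]) for j in range(len(uni)))
        for (u, v), kde in biv.items():
            ll += (np.log(kde(x[[u, v]])[0])
                   - np.log(uni[u](x[u])[0]) - np.log(uni[v](x[v])[0]))
        return ll

    def predict(self, X):
        scores = np.array([[self._log_density(x, c) for c in self.classes_] for x in X])
        return self.classes_[np.argmax(scores, axis=1)]

For example, fitting with edges=[(0, 1), (1, 2)] over three features scores each class by its tree-factorized log-likelihood and predicts the class with the largest score.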
1 Citation
New method for solving Ivanov regularization-based support vector machine learning
  • Xiang Xu, Daoli Zhu
  • Computer Science
    Comput. Oper. Res.
  • 2021
TLDR
This paper provides a parallel block minimization framework for solving the dual I-SVM problem that exploits the advantages of the randomized primal–dual coordinate (RPDC) method; each sub-optimization step of the RPDC routine has a simple closed-form solution.

References

SHOWING 1-10 OF 38 REFERENCES
Learning Graphical Models for Hypothesis Testing and Classification
TLDR
A novel method is proposed for learning tree-structured graphical models that optimizes an approximation of the log-likelihood ratio, along with a procedure for identifying the subset of edges that are most salient for discrimination.
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification
TLDR
FANS is a high-dimensional classification method based on nonparametric feature augmentation; it is related to generalized additive models but has better interpretability and computability.
Graphical models via univariate exponential family distributions
TLDR
This paper considers a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families, and derives multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions.
Bayesian Network Classifiers
TLDR
Tree Augmented Naive Bayes (TAN) is singled out; it outperforms naive Bayes while maintaining the computational simplicity and robustness that characterize naive Bayes.
Forest Density Estimation
TLDR
It is proved that finding a maximum weight spanning forest with restricted tree size is NP-hard, and an approximation algorithm is developed for this problem.
A nonparametric approach based on a Markov like property for classification
Abstract
We suggest a new approach for classification based on nonparametrically estimated likelihoods. Due to the scarcity of data in high dimensions, full nonparametric estimation of the likelihood…
Copula Network Classifiers (CNCs)
TLDR
Copula Network Classifiers (CNCs) are introduced: a model that combines the flexibility of a graph-based representation with the modeling power of copulas, and that achieves better overall performance than standard linear and nonlinear generative models, as well as discriminative RBF and polynomial kernel SVMs.
Learning Bayesian network classifiers by maximizing conditional likelihood
TLDR
It is shown that a simple approximation---choosing structures by maximizing conditional likelihood while setting parameters by maximum likelihood---yields good results and produces better class probability estimates than naive Bayes, TAN, and generatively-trained Bayesian networks.
Simultaneous Inference for Pairwise Graphical Models with Generalized Score Matching
TLDR
It is proved that the proposed estimator is $\sqrt{n}$-consistent and asymptotically normal, which allows the construction of confidence intervals and hypothesis tests for edge parameters; it is also shown how the proposed method can be applied to test hypotheses involving a large number of model parameters simultaneously.
Learning Max-Margin Tree Predictors
TLDR
This work addresses the challenge of learning tree structured predictive models that achieve high accuracy while at the same time facilitate efficient (linear time) inference by proving that this task is in general NP-hard, and suggesting an approximate alternative.