Corpus ID: 14317003

Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning Technique Competitive to Boosting Decision Trees

@inproceedings{Zheng1999LazyBR,
  title={Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning Technique Competitive to Boosting Decision Trees},
  author={Zijian Zheng and Geoffrey I. Webb and Kai Ming Ting},
  booktitle={ICML},
  year={1999}
}
Lbr is a lazy semi-naive Bayesian classifier learning technique, designed to alleviate the attribute interdependence problem of naive Bayesian classification. To classify a test example, it creates a conjunctive rule that selects a most appropriate subset of training examples and induces a local naive Bayesian classifier using this subset. Lbr can significantly improve the performance of the naive Bayesian classifier. A bias and variance analysis of Lbr reveals that it significantly reduces the…
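To make the classification procedure above concrete, here is a minimal sketch of a lazy Bayesian rule classifier. It is not the authors' implementation: it assumes categorical attributes, uses Laplace-smoothed local naive Bayes models, grows the conjunctive rule greedily from the test example's attribute values, and accepts a candidate condition only if it lowers leave-one-out error on the current training subset (the paper's acceptance criterion and thresholds differ in detail). All names are illustrative.

```python
from collections import Counter, defaultdict

def nb_train(X, y, attrs):
    """Fit class counts and per-attribute value counts for a categorical naive Bayes."""
    prior = Counter(y)
    cond = {a: defaultdict(Counter) for a in attrs}           # cond[a][class][value] -> count
    for xi, yi in zip(X, y):
        for a in attrs:
            cond[a][yi][xi[a]] += 1
    return prior, cond

def nb_predict(model, x, attrs, classes):
    """Laplace-smoothed maximum a posteriori prediction."""
    prior, cond = model
    n = sum(prior.values())
    best, best_p = None, -1.0
    for c in classes:
        p = (prior[c] + 1) / (n + len(classes))
        for a in attrs:
            p *= (cond[a][c][x[a]] + 1) / (prior[c] + 2)      # crude two-value smoothing
        if p > best_p:
            best, best_p = c, p
    return best

def loo_error(X, y, attrs, classes):
    """Leave-one-out error rate of naive Bayes restricted to `attrs` (O(n^2) sketch)."""
    errs = sum(
        nb_predict(nb_train(X[:i] + X[i+1:], y[:i] + y[i+1:], attrs), X[i], attrs, classes) != y[i]
        for i in range(len(X))
    )
    return errs / len(X)

def lbr_classify(X, y, x_test, min_subset=30):
    """Lazily grow a conjunctive rule matching x_test, then classify with a local naive Bayes."""
    classes = sorted(set(y))
    attrs = set(range(len(x_test)))                           # attributes still handled by naive Bayes
    sub_X, sub_y = list(X), list(y)
    improved = True
    while improved:
        improved = False
        base = loo_error(sub_X, sub_y, attrs, classes)
        for a in sorted(attrs):
            idx = [i for i, xi in enumerate(sub_X) if xi[a] == x_test[a]]
            if len(idx) < min_subset:
                continue                                      # not enough local data for this condition
            cand_X = [sub_X[i] for i in idx]
            cand_y = [sub_y[i] for i in idx]
            if loo_error(cand_X, cand_y, attrs - {a}, classes) < base:
                sub_X, sub_y, attrs = cand_X, cand_y, attrs - {a}
                improved = True
                break                                         # re-evaluate from the smaller subset
    return nb_predict(nb_train(sub_X, sub_y, attrs), x_test, attrs, classes)
```

Because the rule is grown anew for every test example, all of this work happens at classification time, which is what makes the method lazy.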
Citations

Comparison of lazy Bayesian rule, and tree-augmented Bayesian learning
TLDR
The lazy Bayesian rule (LBR) and the tree-augmented naive Bayes (TAN) have both demonstrated strong prediction accuracy, but their relative performance had not previously been evaluated; the comparison suggests selecting between the two techniques according to their computational profiles.
Learning Lazy Rules to Improve the Performance of Classifiers
TLDR
It is shown empirically that LazyRule improves the performance of naive Bayesian classifiers and majority vote, and has the potential to be used with different types of base learning algorithms.
Semi-Lazy Learning: Combining Clustering and Classifiers to Build More Accurate Models
TLDR
The benefits of semi-lazy learning are introduced, and the approach is framed as an instance of the divide-and-conquer strategy used in many scientific fields: a complex problem is divided into a set of simpler problems.
Candidate Elimination Criteria for Lazy Bayesian Rules
  • Geoffrey I. Webb
  • Computer Science
  • Australian Joint Conference on Artificial Intelligence
  • 2001
TLDR
This paper explores alternatives to the candidate elimination criterion employed within Lazy Bayesian Rules; the proposed criterion is demonstrated to provide better overall error reduction than a minimum data-subset-size criterion.
Semi-naive Bayesian Classification
The success and popularity of naive Bayes (NB) have led to a field of research exploring algorithms that seek to retain its numerous strengths while reducing error by alleviating the attribute independence assumption…
Efficient lazy elimination for averaged one-dependence estimators
TLDR
This work explores a new technique, Lazy Elimination (LE), which eliminates highly related attribute values at classification time without the computational overheads inherent in wrapper techniques. LE is shown to significantly reduce bias and error without undue computation, whereas backwards sequential elimination (BSE) significantly reduces bias but not error, and has high training-time complexity.
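As a rough illustration of the idea (an assumption-laden sketch, not the paper's pseudocode): an attribute value of the test instance is dropped whenever another value present in the same instance implies it (almost) deterministically in the training data, so that it carries no additional information. The function name, probability threshold and minimum-support cut-off below are illustrative.

```python
from collections import Counter
from itertools import permutations

def lazy_eliminate(X_train, x_test, threshold=1.0, min_support=30):
    """Return the indices of x_test's attribute values worth keeping: a value is dropped
    when some other value present in x_test implies it with probability >= threshold."""
    n_attr = len(x_test)
    single = Counter()                     # single[j] = #training rows with x_j == x_test[j]
    joint = Counter()                      # joint[(i, j)] = #rows matching x_test on both i and j
    for row in X_train:
        match = [j for j in range(n_attr) if row[j] == x_test[j]]
        for j in match:
            single[j] += 1
        for i, j in permutations(match, 2):
            joint[(i, j)] += 1
    keep = set(range(n_attr))
    for i, j in permutations(range(n_attr), 2):
        if i in keep and j in keep and single[j] >= min_support:
            # P(x_i = test value | x_j = test value) estimated from the training data
            if joint[(i, j)] / single[j] >= threshold:
                keep.discard(i)            # x_i's value is (nearly) redundant given x_j's value
    return keep
```

The surviving attribute values would then be handed to the underlying classifier unchanged.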
A comparative study of Semi-naive Bayes methods in classification learning
TLDR
Eight typical semi-naive Bayesian learning algorithms are reviewed, and an error analysis using the bias-variance decomposition is performed on thirty-six natural domains from the UCI Machine Learning Repository.
Alleviating naive Bayes attribute independence assumption by attribute weighting
TLDR
A weighted naive Bayes algorithm, called WANBIA, is proposed that selects weights to minimize either the negative conditional log-likelihood or the mean squared error objective function, and is found to be a competitive alternative to state-of-the-art classifiers such as Random Forest, Logistic Regression and A1DE.
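The weighting scheme can be sketched as follows: classes are scored with log P(y) + Σ_i w_i · log P(x_i | y), i.e. naive Bayes with each attribute's contribution scaled by a learned weight. The sketch below takes the weights as given; in WANBIA they are found by numerically minimizing the negative conditional log-likelihood (or mean squared error), which is omitted here, and all names are illustrative.

```python
import math
from collections import Counter, defaultdict

def wnb_fit(X, y):
    """Collect the class and per-attribute value counts a weighted naive Bayes needs."""
    prior = Counter(y)
    cond = defaultdict(Counter)                        # cond[(attr, class)][value] -> count
    for xi, yi in zip(X, y):
        for a, v in enumerate(xi):
            cond[(a, yi)][v] += 1
    return prior, cond

def wnb_predict(prior, cond, weights, x, alpha=1.0):
    """Score each class with log P(y) + sum_i w_i * log P(x_i | y), using Laplace smoothing."""
    n = sum(prior.values())
    best, best_lp = None, float("-inf")
    for c in prior:
        lp = math.log((prior[c] + alpha) / (n + alpha * len(prior)))
        for a, v in enumerate(x):
            p = (cond[(a, c)][v] + alpha) / (prior[c] + 2 * alpha)
            lp += weights[a] * math.log(p)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

Setting every weight to 1 recovers ordinary naive Bayes, and restricting weights to {0, 1} recovers a selective naive Bayes, so the learned weights interpolate between the two.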
A memory efficient semi-Naive Bayes classifier with grouping of cases
TLDR
The model presented is a competitive classifier with respect to the state of the art in semi-naive Bayes classifiers, particularly in terms of the quality of its class probability estimates, but with much lower memory (space) complexity.
Not So Naive Bayes: Aggregating One-Dependence Estimators
TLDR
A new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers is presented; it delivers prediction accuracy comparable to LBR and Super-Parent TAN, with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter.
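A minimal sketch of the averaged one-dependence estimator (AODE) idea behind this paper: each sufficiently frequent attribute value of the test instance in turn acts as a "super-parent", and the resulting one-dependence estimates of P(y, x) are averaged. The frequency cut-off, smoothing and fallback behaviour below are simplified assumptions rather than the paper's exact formulation.

```python
from collections import Counter

def aode_fit(X, y):
    """Collect the joint counts AODE needs (categorical attributes)."""
    n_attr = len(X[0])
    pair = Counter()        # pair[(c, i, v)]         = #rows with class c and x_i = v
    triple = Counter()      # triple[(c, i, v, j, w)] = #rows with class c, x_i = v and x_j = w
    support = Counter()     # support[(i, v)]         = #rows with x_i = v (frequency cut-off)
    for xi, c in zip(X, y):
        for i, v in enumerate(xi):
            pair[(c, i, v)] += 1
            support[(i, v)] += 1
            for j, w in enumerate(xi):
                if j != i:
                    triple[(c, i, v, j, w)] += 1
    return pair, triple, support, len(X), sorted(set(y))

def aode_predict(model, x, m=30, alpha=1.0):
    """Average P(y, x_i) * prod_j P(x_j | y, x_i) over every parent value seen >= m times."""
    pair, triple, support, n, classes = model
    best, best_score = None, -1.0
    for c in classes:
        total, parents = 0.0, 0
        for i, v in enumerate(x):
            if support[(i, v)] < m:
                continue                             # too rare to serve as a super-parent
            parents += 1
            est = (pair[(c, i, v)] + alpha) / (n + alpha)                        # ~ P(y, x_i)
            for j, w in enumerate(x):
                if j != i:
                    est *= (triple[(c, i, v, j, w)] + alpha) / (pair[(c, i, v)] + 2 * alpha)
            total += est
        if parents and total > best_score:
            best, best_score = c, total
    return best              # AODE proper falls back to naive Bayes when no parent qualifies
```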

References

Showing 1-10 of 49 references
Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid
  • R. Kohavi
  • Mathematics, Computer Science
  • KDD
  • 1996
TLDR
A new algorithm, NBTree, is proposed, which induces a hybrid of decision-tree and naive Bayes classifiers: the decision-tree nodes contain univariate splits as in regular decision trees, but the leaves contain naive Bayesian classifiers.
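A minimal sketch of the hybrid structure: a tree grown with ordinary univariate splits whose leaves hold naive Bayes models. For brevity the split utility below is the training-set accuracy of the children's naive Bayes models (NBTree proper uses 5-fold cross-validated accuracy), the leaf models use every attribute rather than only the unsplit ones, and all names and thresholds are illustrative.

```python
from collections import Counter, defaultdict

def nb_fit(X, y):
    """Fit class counts and per-attribute value counts for a categorical naive Bayes leaf."""
    prior = Counter(y)
    cond = defaultdict(Counter)                      # cond[(attr, class)][value] -> count
    for xi, yi in zip(X, y):
        for a, v in enumerate(xi):
            cond[(a, yi)][v] += 1
    return prior, cond

def nb_classify(leaf, x):
    prior, cond = leaf
    n = sum(prior.values())
    scores = {c: (prior[c] + 1) / (n + len(prior)) for c in prior}
    for c in prior:
        for a, v in enumerate(x):
            scores[c] *= (cond[(a, c)][v] + 1) / (prior[c] + 2)
    return max(scores, key=scores.get)

def leaf_accuracy(X, y):
    leaf = nb_fit(X, y)
    return sum(nb_classify(leaf, xi) == yi for xi, yi in zip(X, y))

def nbtree_build(X, y, attrs, min_size=30):
    """Split only while doing so improves the children's naive Bayes accuracy; `attrs` is a set."""
    if len(X) < min_size or not attrs or len(set(y)) == 1:
        return ("leaf", nb_fit(X, y))
    def split_utility(a):
        hits = 0
        for v in set(row[a] for row in X):
            idx = [i for i, row in enumerate(X) if row[a] == v]
            hits += leaf_accuracy([X[i] for i in idx], [y[i] for i in idx])
        return hits
    best_attr = max(attrs, key=split_utility)
    if split_utility(best_attr) <= leaf_accuracy(X, y):
        return ("leaf", nb_fit(X, y))
    children = {}
    for v in set(row[best_attr] for row in X):
        idx = [i for i, row in enumerate(X) if row[best_attr] == v]
        children[v] = nbtree_build([X[i] for i in idx], [y[i] for i in idx],
                                   attrs - {best_attr}, min_size)
    return ("node", best_attr, children, nb_fit(X, y))   # keep a fallback model for unseen values

def nbtree_classify(tree, x):
    if tree[0] == "leaf":
        return nb_classify(tree[1], x)
    _, a, children, fallback = tree
    child = children.get(x[a])
    return nbtree_classify(child, x) if child is not None else nb_classify(fallback, x)
```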
Efficient Learning of Selective Bayesian Network Classifiers
In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of…
Lazy Decision Trees
TLDR
This work proposes a lazy decision tree algorithm, LazyDT, that conceptually constructs the "best" decision tree for each test instance and is robust with respect to missing values without resorting to the complicated methods usually seen in decision-tree induction.
Adjusted Probability Naive Bayesian Induction
TLDR
The use of this adjusted value in place of the naive Bayesian probability is shown to significantly improve predictive accuracy.
Improving the Performance of Boosting for Naive Bayesian Classification
TLDR
The experimental results show that although introducing tree structures into naive Bayesian classification increases the average error of naive Bayesian classification for individual models, boosting naive Bayesian classifiers with tree structures can achieve significantly lower average error than the naive Bayesian classifier.
Induction of Selective Bayesian Classifiers
TLDR
This paper embeds the naive Bayesian induction scheme within an algorithm that carries out a greedy search through the space of features, hypothesizing that this approach will improve asymptotic accuracy in domains that involve correlated features without reducing the rate of learning in ones that do not.
A decision-theoretic generalization of on-line learning and an application to boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
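This is the paper behind AdaBoost, the boosting method used for the boosted decision trees that Lbr is compared against. Below is a minimal sketch of the multiplicative weight-update loop, assuming binary labels in {-1, +1} and a caller-supplied weak_learn routine (the names and the reweighting-without-resampling choice are illustrative simplifications).

```python
import math

def adaboost(examples, labels, weak_learn, rounds=50):
    """Boost a weak learner with multiplicative weight updates (binary labels in {-1, +1}).
    `weak_learn(examples, labels, weights)` is assumed to return a hypothesis h(x) -> {-1, +1}."""
    n = len(examples)
    w = [1.0 / n] * n                                  # distribution over training examples
    ensemble = []                                      # (alpha, hypothesis) pairs
    for _ in range(rounds):
        h = weak_learn(examples, labels, w)
        err = sum(wi for wi, x, y in zip(w, examples, labels) if h(x) != y)
        if err <= 0.0 or err >= 0.5:                   # weak-learning assumption no longer holds
            break
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, h))
        # multiplicative update: misclassified examples get heavier, correct ones lighter
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, examples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def boosted_predict(ensemble, x):
    """Weighted vote of the boosted hypotheses."""
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

Each round reweights the training distribution so that examples the current hypothesis misclassifies become more important, which is the multiplicative update referred to above.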
Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier
TLDR
It is shown that the simple Bayesian classifier (SBC) does not in fact require attribute independence to be optimal: it can be optimal even when this assumption is violated by a wide margin, and the previously assumed region of optimality is a second-order infinitesimal fraction of the actual one.
Learning Limited Dependence Bayesian Classifiers
  • M. Sahami
  • Mathematics, Computer Science
  • KDD
  • 1996
TLDR
A framework for characterizing Bayesian classification methods along a spectrum of allowable attribute dependence is presented, together with a general induction algorithm that traverses this spectrum according to the computational power available for carrying out induction; the algorithm's application in a number of domains with different properties is also reported.
Semi-Naive Bayesian Classifier
In this paper, the algorithm of the 'naive' Bayesian classifier (which assumes the independence of attributes) is extended to detect dependencies between attributes. The idea is to optimize the…