# Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning Technique Competitive to Boosting Decision Trees

@inproceedings{Zheng1999LazyBR, title={Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning Technique Competitive to Boosting Decision Trees}, author={Zijian Zheng and Geoffrey I. Webb and Kai Ming Ting}, booktitle={ICML}, year={1999} }

Lbr is a lazy semi-naive Bayesian classifier learning technique, designed to alleviate the attribute interdependence problem of naive Bayesian classification. To classify a test example, it creates a conjunctive rule that selects a most appropriate subset of training examples and induces a local naive Bayesian classifier using this subset. Lbr can significantly improve the performance of the naive Bayesian classifier. A bias and variance analysis of Lbr reveals that it significantly reduces the…
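The lazy procedure described in the abstract can be sketched in a few lines: at classification time, greedily grow a conjunctive rule from the test example's own attribute values, keep only the matching training examples, and apply a local naive Bayes to what remains. This is a simplified illustration, not the published algorithm — the actual Lbr acceptance criterion uses a significance test over leave-one-out errors on the affected examples, whereas the sketch below compares raw leave-one-out error rates, and all function names are invented here:

```python
import math

def nb_log_posterior(train, x, c):
    """Laplace-smoothed log( P(c) * prod_i P(x_i | c) ) over categorical data."""
    classes = {y for _, y in train}
    in_c = [a for a, y in train if y == c]
    logp = math.log((len(in_c) + 1) / (len(train) + len(classes)))
    for i, v in enumerate(x):
        vals = {a[i] for a, _ in train}          # values attribute i takes
        match = sum(1 for a in in_c if a[i] == v)
        logp += math.log((match + 1) / (len(in_c) + len(vals)))
    return logp

def nb_classify(train, x):
    classes = sorted({y for _, y in train})
    return max(classes, key=lambda c: nb_log_posterior(train, x, c))

def loo_error_rate(train):
    """Leave-one-out error rate of naive Bayes on a (sub)set of examples."""
    errs = sum(1 for i, (a, y) in enumerate(train)
               if nb_classify(train[:i] + train[i + 1:], a) != y)
    return errs / len(train)

def lbr_classify(train, x, min_subset=5):
    """Greedily add conditions attr_i == x_i to a rule antecedent while the
    local naive Bayes on the matching subset has lower LOO error."""
    subset, used = list(train), set()
    while True:
        best_i, best_rate, best_sub = None, loo_error_rate(subset), None
        for i in range(len(x)):
            if i in used:
                continue
            cand = [(a, y) for a, y in subset if a[i] == x[i]]
            if len(cand) >= min_subset and loo_error_rate(cand) < best_rate:
                best_i, best_rate, best_sub = i, loo_error_rate(cand), cand
        if best_i is None:
            return nb_classify(subset, x)        # local naive Bayes prediction
        used.add(best_i)
        subset = best_sub
```

On an XOR-style domain, where attribute interdependence defeats plain naive Bayes, conditioning on one attribute makes the local distribution independent again, which is exactly the effect the abstract claims.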


#### 44 Citations

Comparison of lazy Bayesian rule, and tree-augmented Bayesian learning

- Computer Science
- 2002 IEEE International Conference on Data Mining, 2002. Proceedings.
- 2002

The lazy Bayesian rule (LBR) and the tree-augmented naive Bayes (TAN) have both demonstrated strong prediction accuracy, but their relative performance had never been evaluated; this comparison suggests selecting between the two techniques according to their computational profiles.

Learning Lazy Rules to Improve the Performance of Classifiers

- Computer Science
- 2000

It is shown empirically that LazyRule improves the performance of naive Bayesian classifiers and majority vote, and has the potential to be used with different types of base learning algorithms.

Semi-Lazy Learning: Combining Clustering and Classifiers to Build More Accurate Models

- Computer Science
- 2003

The benefits of semi-lazy learning are presented, and the approach is framed as an example of the divide-and-conquer strategy used in many scientific fields to break a complex problem into a set of simpler problems.

Candidate Elimination Criteria for Lazy Bayesian Rules

- Computer Science
- Australian Joint Conference on Artificial Intelligence
- 2001

This paper explores alternatives to the candidate elimination criterion employed within Lazy Bayesian Rules and demonstrates that they provide better overall error reduction than the use of a minimum data subset size criterion.

Semi-naive Bayesian Classification

- 2008

The success and popularity of naive Bayes (NB) has led to a field of research exploring algorithms that seek to retain its numerous strengths while reducing error by alleviating the attribute…

Efficient lazy elimination for averaged one-dependence estimators

- Computer Science
- ICML
- 2006

This work explores a new technique, Lazy Elimination (LE), which eliminates highly related attribute-values at classification time without the computational overheads inherent in wrapper techniques, and shows that LE significantly reduces bias and error without undue computation, while BSE significantly reduces bias but not error and has high training time complexity.

A comparative study of Semi-naive Bayes methods in classification learning

- Computer Science
- 2005

Eight typical semi-naive Bayesian learning algorithms are reviewed, and error analysis using the bias-variance decomposition is performed on thirty-six natural domains from the UCI Machine Learning Repository.

Alleviating naive Bayes attribute independence assumption by attribute weighting

- Mathematics, Computer Science
- J. Mach. Learn. Res.
- 2013

A weighted naive Bayes algorithm is proposed, called WANBIA, that selects weights to minimize either the negative conditional log likelihood or the mean squared error objective function, and is found to be a competitive alternative to state-of-the-art classifiers such as Random Forest, Logistic Regression, and A1DE.

A memory efficient semi-Naive Bayes classifier with grouping of cases

- Computer Science
- Intell. Data Anal.
- 2011

The model presented is a competitive classifier with respect to the state of the art of semi-Naive Bayes classifiers, particularly in terms of quality of class probability estimates, but with a much lower memory space complexity.

Not So Naive Bayes: Aggregating One-Dependence Estimators

- Mathematics, Computer Science
- Machine Learning
- 2005

A new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers is presented, which delivers prediction accuracy comparable to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter.
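The averaging scheme this abstract describes, averaged one-dependence estimators (AODE), scores each class by averaging joint estimates P(y, x_i) · Π_j P(x_j | y, x_i) over every attribute i acting as a "super-parent". The sketch below is an illustrative reconstruction, not the authors' code; the Laplace smoothing and the frequency threshold `m` are simplified assumptions:

```python
def aode_classify(train, x, m=1):
    """AODE sketch: average one-dependence estimates over all super-parents
    whose value x_i appears at least m times in the training data."""
    classes = sorted({y for _, y in train})
    n, scores = len(train), {}
    for c in classes:
        total = 0.0
        for i in range(len(x)):
            if sum(1 for a, _ in train if a[i] == x[i]) < m:
                continue                     # skip rarely seen super-parent values
            parent = [a for a, y in train if a[i] == x[i] and y == c]
            vals_i = {a[i] for a, _ in train}
            # Laplace-smoothed joint estimate P(y, x_i)
            est = (len(parent) + 1) / (n + len(classes) * len(vals_i))
            for j in range(len(x)):
                vals_j = {a[j] for a, _ in train}
                match = sum(1 for a in parent if a[j] == x[j])
                est *= (match + 1) / (len(parent) + len(vals_j))  # P(x_j | y, x_i)
            total += est
        scores[c] = total
    return max(classes, key=lambda c: scores[c])
```

Because each estimator conditions on one attribute, AODE captures pairwise dependence (it classifies XOR-style data correctly) while needing no structure search at training time, which is the source of the efficiency claim above.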

#### References

Showing 1-10 of 49 references.

Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid

- Mathematics, Computer Science
- KDD
- 1996

A new algorithm, NBTree, is proposed, which induces a hybrid of decision-tree classifiers and naive-Bayes classifiers: the decision-tree nodes contain univariate splits as in regular decision trees, but the leaves contain naive-Bayes classifiers.

Efficient Learning of Selective Bayesian Network Classifiers

- 1996

In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of…

Lazy Decision Trees

- Computer Science
- AAAI/IAAI, Vol. 1
- 1996

This work proposes a lazy decision tree algorithm, LAZYDT, that conceptually constructs the "best" decision tree for each test instance, and is robust with respect to missing values without resorting to the complicated methods usually seen in induction of decision trees.

Adjusted Probability Naive Bayesian Induction

- Computer Science
- Australian Joint Conference on Artificial Intelligence
- 1998

The use of this adjusted value in place of the naive Bayesian probability is shown to significantly improve predictive accuracy.

Improving the Performance of Boosting for Naive Bayesian Classification

- Computer Science
- PAKDD
- 1999

The experimental results show that although introducing tree structures into naive Bayesian classification increases the average error of naive Bayesian classification for individual models, boosting naive Bayesian classifiers with tree structures can achieve significantly lower average error than the naive Bayesian classifier.

Induction of Selective Bayesian Classifiers

- Computer Science, Mathematics
- UAI
- 1994

This paper embeds the naive Bayesian induction scheme within an algorithm that carries out a greedy search through the space of features, hypothesizing that this approach will improve asymptotic accuracy in domains that involve correlated features without reducing the rate of learning in ones that do not.

A decision-theoretic generalization of on-line learning and an application to boosting

- Computer Science
- EuroCOLT
- 1995

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.

Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier

- Computer Science
- ICML
- 1996

It is shown that the simple Bayesian classifier (SBC) does not in fact assume attribute independence and can be optimal even when this assumption is violated by a wide margin, and that the previously assumed region of optimality is a second-order infinitesimal fraction of the actual one.

Learning Limited Dependence Bayesian Classifiers

- Mathematics, Computer Science
- KDD
- 1996

A framework for characterizing Bayesian classification methods is presented, along with a general induction algorithm that allows traversal of this spectrum depending on the available computational power for carrying out induction, and its application in a number of domains with different properties.

Semi-Naive Bayesian Classifier

- Computer Science
- EWSL
- 1991

In the paper, the algorithm of the 'naive' Bayesian classifier (which assumes the independence of attributes) is extended to detect the dependencies between attributes. The idea is to optimize the…