
- Kamal M. Ali, Michael J. Pazzani
- Machine Learning
- 1996

Learning multiple descriptions for each class in the data has been shown to reduce generalization error but the amount of error reduction varies greatly from domain to domain. This paper presents a novel empirical analysis that helps to understand this variation. Our hypothesis is that the amount of error reduction is linked to the “degree to which the…

We explore algorithms for learning classification procedures that attempt to minimize the cost of misclassifying examples. First, we consider inductive learning of classification rules. The Reduced Cost Ordering algorithm, a new method for creating a decision list (i.e., an ordered set of rules) is described and compared to a variety of inductive learning…
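The decision-list representation this abstract refers to (an ordered set of rules, tried in sequence, with a default class) can be sketched as follows. The rules, feature names, and default class below are hypothetical illustrations, not drawn from the paper:

```python
# A decision list is an ordered set of (condition, class) rules: the first
# rule whose condition matches the example determines its class; if no rule
# fires, a default class is returned.

def classify(decision_list, default_class, example):
    """Return the class of the first rule whose condition matches."""
    for condition, label in decision_list:
        if condition(example):
            return label
    return default_class

# Hypothetical rules for illustration only.
rules = [
    (lambda x: x["income"] < 20_000, "high_risk"),
    (lambda x: x["defaults"] > 0, "high_risk"),
]

print(classify(rules, "low_risk", {"income": 50_000, "defaults": 0}))  # low_risk
```

Because the rules are ordered, rule order matters: algorithms such as the Reduced Cost Ordering method described in the abstract differ in how they choose that order.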

- Kamal M. Ali, Michael J. Pazzani
- IJCAI
- 1993

Many learning algorithms form concept descriptions composed of clauses, each of which covers some proportion of the positive training data and a small to zero proportion of the negative training data. This paper presents a method using likelihood ratios attached to clauses to classify test examples. One concept description is learned for each class. Each…
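The likelihood-ratio scheme described above can be sketched roughly as follows, assuming each clause stores its coverage on positive and negative training data, a test example accumulates log-likelihood-ratio evidence from every clause it satisfies, and the class with the most evidence wins. The clause data are hypothetical, and the coverage proportions are smoothed away from zero to keep the ratio finite:

```python
import math

def evidence(clauses, example):
    """Sum log-likelihood ratios over the clauses the example satisfies."""
    total = 0.0
    for matches, p_pos, p_neg in clauses:
        if matches(example):
            # p_pos / p_neg: coverage on positives vs. negatives.
            # p_neg must be smoothed above zero before taking the ratio.
            total += math.log(p_pos / p_neg)
    return total

def classify(descriptions, example):
    """descriptions: {class_label: [(matches, p_pos, p_neg), ...]}"""
    return max(descriptions, key=lambda c: evidence(descriptions[c], example))

# Hypothetical one-clause descriptions for illustration only.
descriptions = {
    "bird": [(lambda x: x["has_feathers"], 0.9, 0.05)],
    "mammal": [(lambda x: x["has_fur"], 0.8, 0.02)],
}

print(classify(descriptions, {"has_feathers": True, "has_fur": False}))  # bird
```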

Concept learners that learn concept descriptions consisting of rules have been shown to be prone to the small disjuncts problem (Holte et al., 1989). This is the problem where a large proportion of the overall classification error made by the concept description on an independent test set can be attributed to rules which were true for a small number of…

- Kamal M. Ali, Michael J. Pazzani
- AISTATS
- 1995

We present a way of approximating the posterior probability of a rule-set model that is comprised of a set of class descriptions. Each class description, in turn, consists of a set of relational rules. The ability to compute this posterior and to learn many models from the same training set allows us to approximate the expectation that an example to be…

Machine learning algorithms that perform classification tasks create concept descriptions (e.g., rules, decision trees, or weights on a neural net) guided by a set of classified training examples. The accuracy of the resulting concept description can be evaluated empirically by asking a classifier to use the concept description to classify a set of test…

- Kamal M. Ali, Clifford Brunk, Michael J. Pazzani
- ICTAI
- 1994

In sparse data environments, greater classification accuracy can be achieved by learning several concept descriptions of the data and combining their classifications. Stochastic search is a general tool which can be used to generate many good concept descriptions (rule sets) for each class in the data. Bayesian probability theory offers an optimal strategy for…
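One common way to combine the classifications of several learned models, in the Bayesian spirit the abstract mentions, is a posterior-weighted vote: each model contributes its class probabilities scaled by an (approximate) posterior weight. The models and weights below are hypothetical stand-ins for rule sets found by stochastic search, not the paper's actual method:

```python
def combine(models, example, classes):
    """models: list of (weight, predict) pairs, where predict(example)
    returns {class: probability} and weight approximates the model's
    posterior probability. Returns the class with the highest combined score."""
    scores = {c: 0.0 for c in classes}
    for weight, predict in models:
        probs = predict(example)
        for c in classes:
            scores[c] += weight * probs.get(c, 0.0)
    return max(scores, key=scores.get)

# Two hypothetical models with illustrative posterior weights.
models = [
    (0.7, lambda x: {"+": 0.6, "-": 0.4}),
    (0.3, lambda x: {"+": 0.2, "-": 0.8}),
]

print(combine(models, {}, ["+", "-"]))  # "-": 0.7*0.4 + 0.3*0.8 = 0.52 > 0.48
```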

- Vipin Kumar, Michael J. Way, +25 authors Huan Liu
- 2014

This series aims to capture new developments and applications in data mining and knowledge discovery, while summarizing the computational tools and techniques useful in data analysis. This series encourages the integration of mathematical, statistical, and computational methods and techniques through the publication of a broad range of textbooks, reference…

- Kamal M. Ali
- ML
- 1989

- Kamal M. Ali, Michael J. Pazzani
- International Journal on Artificial Intelligence…
- 1995

For learning tasks with few examples, greater classification accuracy can be achieved by learning several concept descriptions for each class in the data and producing a classification that combines evidence from multiple descriptions. Stochastic (randomized) search can be used to generate many concept descriptions for each class. Here we use a tractable…