Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a …
  • David W. Aha
  • International Journal of Man-Machine Studies
  • 1992
Incremental variants of the nearest neighbor algorithm are a potentially suitable choice for incremental learning tasks. They have fast learning rates and low updating costs, and they have recorded comparatively high classification accuracies in several applications. Although the nearest neighbor algorithm suffers from high storage requirements, modifications exist …
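The store-and-compare scheme this abstract alludes to can be sketched minimally (class and variable names below are illustrative, not from the paper): training amounts to storing the instance, which is why updating costs stay low.

```python
import math

class IncrementalNN:
    """Minimal incremental nearest-neighbor classifier sketch:
    training is just storing the instance, so updates are cheap."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, label) pairs

    def train(self, x, label):
        # Incremental update: simply store the new instance.
        self.instances.append((x, label))

    def predict(self, x):
        # Classify by the label of the closest stored instance.
        nearest = min(self.instances,
                      key=lambda inst: math.dist(x, inst[0]))
        return nearest[1]

# Usage: instances arrive one at a time, as in an incremental task.
nn = IncrementalNN()
nn.train((0.0, 0.0), "a")
nn.train((1.0, 1.0), "b")
print(nn.predict((0.1, 0.2)))  # nearest stored instance is (0, 0) -> "a"
```

The high storage requirement the abstract mentions is visible here: every training instance is retained verbatim.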
Most empirical evaluations of machine learning algorithms are case studies: evaluations of multiple algorithms on multiple databases. Authors of case studies implicitly or explicitly hypothesize that the pattern of their results, which often suggests that one algorithm performs significantly better than others, is not limited to the small number of …
Many lazy learning algorithms are derivatives of the k-nearest neighbor (k-NN) classifier, which uses a distance function to generate predictions from stored instances. Several studies have shown that k-NN's performance is highly sensitive to the definition of its distance function. Many k-NN variants have been proposed to reduce this sensitivity by …
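The distance-function sensitivity mentioned above can be seen in a toy weighted k-NN sketch (the data, weights, and function names are invented for illustration): changing only the feature weights in the distance function flips the prediction for the same query.

```python
import math
from collections import Counter

def weighted_distance(a, b, weights):
    """Weighted Euclidean distance; the weights determine which
    stored instances count as 'nearest'."""
    return math.sqrt(sum(w * (x - y) ** 2
                         for w, x, y in zip(weights, a, b)))

def knn_predict(query, data, k, weights):
    """k-NN vote among the k stored instances closest to `query`."""
    neighbors = sorted(
        data,
        key=lambda inst: weighted_distance(query, inst[0], weights))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

data = [((0.0, 5.0), "a"), ((0.0, 4.0), "a"),
        ((1.0, 0.0), "b"), ((1.2, 0.0), "b")]
query = (0.5, 2.0)
# Emphasizing one feature or the other flips the prediction:
print(knn_predict(query, data, k=3, weights=(1.0, 0.01)))  # -> "a"
print(knn_predict(query, data, k=3, weights=(0.01, 1.0)))  # -> "b"
```

This is the sensitivity the cited studies report: the stored instances never change, yet the classifier's output depends entirely on how the distance function weighs the features.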
Several recent machine learning publications demonstrate the utility of using feature selection algorithms in supervised learning tasks. Among these, sequential feature selection algorithms are receiving attention. The most frequently studied variants of these algorithms are forward and backward sequential selection. Many studies on supervised learning with …
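Forward sequential selection, one of the two variants named above, can be sketched as a greedy loop (the scoring function and feature names below are stand-ins for whatever accuracy estimate a study would actually use):

```python
def forward_selection(features, evaluate):
    """Greedy forward sequential selection: starting from the empty
    set, repeatedly add the single feature that most improves the
    score returned by `evaluate`; stop when no addition helps."""
    selected = []
    best_score = evaluate(selected)
    improved = True
    while improved:
        improved = False
        for f in features:
            if f in selected:
                continue
            score = evaluate(selected + [f])
            if score > best_score:
                best_score, best_f, improved = score, f, True
        if improved:
            selected.append(best_f)
    return selected

# Toy score: features 'x' and 'z' help, 'y' does not (illustrative only).
useful = {"x": 0.3, "y": 0.0, "z": 0.2}
score = lambda subset: sum(useful[f] for f in subset)
print(forward_selection(["x", "y", "z"], score))  # -> ['x', 'z']
```

Backward sequential selection is the mirror image: start from the full feature set and greedily remove the feature whose removal hurts the score least.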
Case-based learning (CBL) algorithms are CBR systems that focus on the topic of learning. This paper notes why CBL algorithms are good choices for many supervised learning tasks, describes a framework for CBL algorithms, and outlines a progression of CBL algorithms for tackling learning applications characterized by challenging problems (i.e., noisy cases, poor …
We previously introduced an exemplar model, named GCM-ISW, that exploits a highly flexible weighting scheme. Our simulations showed that it records faster learning rates and higher asymptotic accuracies on several artificial categorization tasks than models with more limited abilities to warp input spaces. This paper extends our previous work; it describes …
While several researchers have applied case-based reasoning techniques to games, only Ponsen and Spronck (2004) have addressed the challenging problem of learning to win real-time games. Focusing on WARGUS, they report good results for a genetic algorithm that searches in plan space, and for a weighting algorithm (dynamic scripting) that biases subplan …
Conversational case-based reasoning (CBR) shells (e.g., Inference's CBR Express) are commercially successful tools for supporting the development of help desk and related applications. In contrast to rule-based expert systems, they capture knowledge as cases rather than more problematic rules, and they can be incrementally extended. However, rather than …