Sandra Zilles

While most supervised machine learning models assume that training examples are sampled at random or adversarially, this article is concerned with models of learning from a cooperative teacher that selects “helpful” training examples. The number of training examples a learner needs to identify a concept in a given class C of possible target concepts …
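To make the notion concrete: for a finite class, the smallest number of labeled examples that pins down a concept is its teaching dimension, which can be computed by brute force. A minimal sketch, assuming concepts are represented as sets over a finite domain (the names `teaching_dim`, `cls`, and `dom` are illustrative, not from the paper):

```python
from itertools import combinations

def teaching_dim(c, cls, dom):
    """Size of a smallest set of labeled examples that is consistent
    with concept c (a set over dom) but with no other concept in cls."""
    others = [d for d in cls if d != c]
    for k in range(len(dom) + 1):
        for sample in combinations(dom, k):
            # every rival concept must disagree with c on some example
            if all(any((x in c) != (x in d) for x in sample) for d in others):
                return k
    return len(dom)

# Singletons over {0, 1, 2}: a single positive example suffices for each.
dom = [0, 1, 2]
cls = [frozenset([x]) for x in dom]
print(max(teaching_dim(c, cls, dom) for c in cls))  # prints 1
```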
This paper is concerned with various combinatorial parameters of classes that can be learned from a small set of examples. We show that the recursive teaching dimension, recently introduced by Zilles et al. (2008), is strongly connected to known complexity notions in machine learning, e.g., the self-directed learning complexity and the VC-dimension. To the …
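For finite classes, the recursive teaching dimension can be computed by the standard greedy peeling: repeatedly remove the concepts that are cheapest to teach relative to what remains. A hedged sketch under the same set-based representation as above (all identifiers are illustrative; the inner `td` is the brute-force teaching dimension):

```python
from itertools import combinations

def rtd(cls, dom):
    """Recursive teaching dimension by greedy peeling: the answer is
    the largest teaching-set size used while peeling off, at each
    stage, the concepts that are easiest to teach."""
    def td(c, rest):
        # smallest k such that some k-sized sample makes every other
        # concept in `rest` disagree with c on at least one example
        others = [d for d in rest if d != c]
        for k in range(len(dom) + 1):
            if any(all(any((x in c) != (x in d) for x in s) for d in others)
                   for s in combinations(dom, k)):
                return k
        return len(dom)

    cls, best = list(cls), 0
    while cls:
        dims = {c: td(c, cls) for c in cls}
        d = min(dims.values())
        best = max(best, d)
        cls = [c for c in cls if dims[c] > d]   # peel the easiest to teach
    return best

# Chain class {∅, {0}, {0,1}, {0,1,2}}: its RTD is 1 although its
# (non-recursive) teaching dimension is 2.
dom = [0, 1, 2]
cls = [frozenset(), frozenset([0]), frozenset([0, 1]), frozenset([0, 1, 2])]
print(rtd(cls, dom))  # prints 1
```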
We investigate the use of machine learning to create effective heuristics for search algorithms such as IDA* or heuristic-search planners such as FF. Our method aims to generate a sequence of heuristics from a given weak heuristic h0 and a set of unsolved training instances using a bootstrapping procedure. The training instances that can be solved using h0 …
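The bootstrapping procedure can be pictured as the following loop. This is a sketch, not the paper's implementation: `solve` and `train` are assumed callbacks, e.g. a budget-limited IDA* run returning a plan (or None) and a regression learner fitting cost-to-go estimates.

```python
def bootstrap_heuristics(h0, instances, solve, train, rounds=10):
    """Solve what the current heuristic can within a budget, turn the
    solved instances into labeled training data, and fit the next,
    stronger heuristic in the sequence h0, h1, h2, ..."""
    h, unsolved = h0, list(instances)
    for _ in range(rounds):
        attempts = [(inst, solve(inst, h)) for inst in unsolved]
        data = [(inst, len(plan)) for inst, plan in attempts if plan is not None]
        if not data:
            break                       # no new instances solved: stop
        h = train(data)                 # next heuristic in the sequence
        unsolved = [inst for inst, plan in attempts if plan is None]
    return h
```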
This paper is concerned with the combinatorial structure of concept classes that can be learned from a small number of examples. We show that the recently introduced notion of recursive teaching dimension (RTD, reflecting the complexity of teaching a concept class) is a relevant parameter in this context. Comparing the RTD to self-directed learning, we …
In the past 40 years, research on inductive inference has developed along different lines, e.g., in the formalizations used, and in the classes of target concepts considered. One common root of many of these formalizations is Gold’s model of identification in the limit. This model has been studied for learning recursive functions, recursively enumerable …
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold’s (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate …
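The textbook instance of Gold's model is identification by enumeration. A minimal sketch, assuming a re-iterable enumeration `hypotheses` containing the target and an illustrative `consistent` predicate (neither is from these papers):

```python
def learn_in_the_limit(hypotheses, consistent, stream):
    """After each new example, conjecture the first hypothesis in a
    fixed enumeration that is consistent with all data seen so far.
    If the target occurs in the enumeration, the conjectures converge
    to it on any complete presentation of the target."""
    data = []
    for example in stream:
        data.append(example)
        # first consistent hypothesis; None if the data rule out all of them
        yield next((h for h in hypotheses if consistent(h, data)), None)
```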
Abstraction is a powerful technique for speeding up planning and search. A problem that can arise in using abstraction is the generation of abstract states, called spurious states, from which the goal state is reachable in the abstract space but for which there is no corresponding state in the original space from which the goal state can be reached. …
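On a small explicit graph, the definition can be checked directly by comparing backward reachability in the two spaces. A toy sketch for illustration only (the paper concerns implicit state spaces and the complexity of avoiding such states); `phi` is an assumed dict mapping each original state to its abstract image:

```python
from collections import deque

def backward_reachable(edges, goal):
    """States from which `goal` can be reached; reverse BFS over an
    explicit graph given as a dict: state -> iterable of successors."""
    rev = {}
    for s, succs in edges.items():
        for t in succs:
            rev.setdefault(t, set()).add(s)
    seen, frontier = {goal}, deque([goal])
    while frontier:
        s = frontier.popleft()
        for p in rev.get(s, ()):
            if p not in seen:
                seen.add(p)
                frontier.append(p)
    return seen

def spurious_states(edges, goal, abs_edges, abs_goal, phi):
    """Abstract states that can reach the abstract goal although no
    state mapping onto them can reach the original goal."""
    good = {phi[s] for s in backward_reachable(edges, goal)}
    return backward_reachable(abs_edges, abs_goal) - good
```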
Korf, Reid, and Edelkamp launched a line of research aimed at predicting how many nodes IDA* will expand with a given cost bound. This paper advances this line of research in three ways. First, we identify a source of prediction error that has hitherto been overlooked. We call it the “discretization effect”. Second, we disprove the intuitively appealing …
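For reference, the Korf-Reid-Edelkamp prediction that this line of research refines, stated here from memory rather than quoted from the abstract:

```latex
% Expected number of nodes IDA* expands with cost bound b, where
%   N_i  = number of nodes at depth i of the brute-force search tree,
%   P(v) = equilibrium probability that a state's heuristic value is at most v.
E[N(b)] = \sum_{i=0}^{b} N_i \, P(b - i)
```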
The problem of how a teacher and a learner can cooperate in learning concepts from examples, so as to minimize the required sample size without “coding tricks”, has been widely addressed, yet without achieving teaching and learning protocols that meet what seems intuitively an optimal choice of samples in teaching. We introduce …