We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is …
This paper describes PicHunter, an image retrieval system that implements a novel approach to relevance feedback in which the entire history of user selections contributes to the system's estimate of the user's goal image. To accomplish this, PicHunter uses Bayesian learning based on a probabilistic model of a user's behavior. The predictions of this …
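The Bayesian relevance-feedback idea above can be sketched as a posterior update over candidate goal images: each round, the probability of every goal hypothesis is reweighted by how likely the observed user selection would be if that hypothesis were the true goal, under a simple softmax user model. This is a minimal sketch, not PicHunter's actual interface; the names `update_posterior` and `similarity` are illustrative.

```python
import math

def update_posterior(posterior, user_pick, candidates, similarity):
    """One round of Bayesian relevance feedback: reweight the probability
    that each image is the user's goal by the likelihood of the observed
    selection under that hypothesis, then renormalize."""
    new = {}
    for goal, prior in posterior.items():
        # Softmax user model (an assumption): the user is more likely to
        # pick a displayed candidate the more similar it is to the goal.
        scores = [math.exp(similarity(c, goal)) for c in candidates]
        likelihood = math.exp(similarity(user_pick, goal)) / sum(scores)
        new[goal] = prior * likelihood
    z = sum(new.values())
    return {g: p / z for g, p in new.items()}
```

With a toy similarity such as `lambda a, b: -abs(a - b)` over numeric "images", a selection close to one hypothesis shifts posterior mass toward it after a single update.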
One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of "drives" that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies …
This paper describes a technique for learning both the number of states and the topology of Hidden Markov Models from examples. The induction process starts with the most specific model consistent with the training data and generalizes by successively merging states. Both the choice of states to merge and the stopping criterion are guided by the Bayesian …
This report describes a new technique for inducing the structure of Hidden Markov Models from data, based on the general `model merging' strategy (Omohundro 1992). The process begins with a maximum likelihood HMM that directly encodes the training data. Successively more general models are produced by merging HMM states. A Bayesian posterior …
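The model-merging strategy these abstracts describe can be sketched as a generic greedy loop: start from the most specific model, repeatedly apply the single best merge, and stop when no merge improves the score (a stand-in for the Bayesian posterior). The toy instantiation below, in which "states" are symbol groups and simpler models score higher, is purely illustrative; a real HMM merge would combine transition and emission statistics.

```python
def model_merge(model, candidates, merge, score):
    """Greedy model merging: apply the best-scoring merge until none
    improves on the current model, then return it."""
    best = score(model)
    while True:
        trials = [merge(model, a, b) for a, b in candidates(model)]
        if not trials:
            return model
        champion = max(trials, key=score)
        if score(champion) <= best:
            return model
        model, best = champion, score(champion)

# Toy instantiation (all illustrative): only groups emitting the same
# symbol may merge, and the prior favors models with fewer states.
def candidates(m):
    return [(i, j) for i in range(len(m)) for j in range(i + 1, len(m))
            if m[i][0] == m[j][0]]

def merge(m, i, j):
    merged = m[i] + m[j]
    return tuple(g for k, g in enumerate(m) if k not in (i, j)) + (merged,)

def score(m):
    return -len(m)  # fewer states scores higher
```

Starting from four singleton states `(('a',), ('b',), ('a',), ('b',))`, the loop merges the two `a` states and the two `b` states, yielding a two-state model.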
Neural network models are currently being considered for a wide variety of important computational tasks, particularly those involving imprecise inputs. This paper suggests alternative algorithms for many of these tasks which appear to have much better average performance than standard neural network models. For example, these algorithms could …
Sather 1.0 is a programming language whose design has resulted from the interplay of many criteria. It attempts to support a powerful object-oriented paradigm without sacrificing either the computational performance of traditional procedural languages or support for safety and correctness checking. Much of the engineering effort went into the design of the …
Some of the techniques that appear to be used in biological systems have the flavor of the algorithms described here. Each of the sensory modalities makes use of some form of focus of attention. Presumably this is a mechanism to devote higher level hardware to only a portion of the data produced by lower level systems. In this way a single piece of high …