Latent class models are used for cluster analysis of categorical data. Underlying such a model is the assumption that the observed variables are mutually independent given the class variable. A serious problem with the use of latent class models, known as local dependence, is that this assumption is often untrue. In this paper we propose hierarchical latent(More)
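The latent class assumption described above can be sketched in a few lines: given the (unobserved) class, the observed categorical variables are mutually independent, so the joint distribution is a mixture of products, P(x) = Σ_c P(c) Π_i P(x_i | c). A minimal illustration with made-up parameters:

```python
# A minimal sketch of a latent class model: two latent classes, two binary
# observed variables, mutually independent given the class.
# All probabilities here are illustrative, not from the paper.
import itertools

priors = [0.6, 0.4]  # P(class = c)
# cond[c][i][v] = P(X_i = v | class = c)
cond = [
    [[0.9, 0.1], [0.8, 0.2]],  # class 0
    [[0.2, 0.8], [0.3, 0.7]],  # class 1
]

def joint(x):
    """P(x) = sum over classes c of P(c) * prod over i of P(x_i | c)."""
    total = 0.0
    for c, pc in enumerate(priors):
        p = pc
        for i, v in enumerate(x):
            p *= cond[c][i][v]
        total += p
    return total

# Sanity check: the mixture is a proper distribution over all configurations.
mass = sum(joint(x) for x in itertools.product([0, 1], repeat=2))
```

Local dependence, as the abstract notes, is precisely a failure of the per-class product form assumed in `joint`.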
Most exact algorithms for general partially observable Markov decision processes (POMDPs) use a form of dynamic programming in which a piecewise-linear and convex representation of one value function is transformed into another. We examine variations of the "incremental pruning" method for solving this problem and compare them to earlier algorithms from(More)
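One ingredient of such pruning methods is easy to sketch: in the piecewise-linear-and-convex representation, the value function is the upper surface of a set of alpha-vectors, and any vector pointwise-dominated by another can be discarded. (Full incremental pruning also removes vectors dominated only by combinations of others, via linear programs; that step is omitted in this toy sketch.)

```python
# Toy pruning step for a set of alpha-vectors (lists of equal length):
# drop any vector that is pointwise-dominated by another.
def pointwise_prune(vectors):
    """Keep only vectors not pointwise-dominated by another vector."""
    kept = []
    for v in vectors:
        if v in kept:
            continue  # drop exact duplicates
        if any(w != v and all(wk >= vk for wk, vk in zip(w, v)) for w in kept):
            continue  # v is dominated by an already-kept vector
        # drop previously kept vectors that v dominates
        kept = [w for w in kept if not all(vk >= wk for vk, wk in zip(v, w))]
        kept.append(v)
    return kept
```

For example, `pointwise_prune([[1, 0], [0, 1], [0.5, 0.5], [0.2, 0.2]])` discards only `[0.2, 0.2]`, since `[0.5, 0.5]` dominates it at every belief state.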
Bayesian belief networks have grown to prominence because they provide compact representations for many problems for which probabilistic inference is appropriate, and there are algorithms to exploit this compactness. The next step is to allow compact representations of the conditional probabilities of a variable given its parents. In this paper we present(More)
A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional(More)
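A standard instance of causal independence (though not the paper's specific generalization) is the noisy-OR gate, where P(Y = 1 | parents) factorizes into independent per-parent contributions rather than requiring a full conditional table exponential in the number of parents. A sketch with made-up inhibition parameters:

```python
# Noisy-OR: each active parent independently fails to turn Y on with its
# inhibition probability; Y is off only if every active parent's influence
# is inhibited. Parameters below are illustrative.
def noisy_or(active_parents, inhibition):
    """P(Y = 1 | parents) = 1 - prod over active parents of inhibition[i]."""
    p_off = 1.0
    for i in active_parents:
        p_off *= inhibition[i]
    return 1.0 - p_off
```

With inhibitions `{0: 0.1, 1: 0.2}`, a node with both parents active yields P(Y = 1) = 1 - 0.1 * 0.2 = 0.98, computed from two parameters instead of a four-row table.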
Partially observable Markov decision processes (POMDPs) have recently become popular among many AI researchers because they serve as a natural model for planning under uncertainty. Value iteration is a well-known algorithm for finding optimal policies for POMDPs. It typically takes a large number of iterations to converge. This paper proposes a method for(More)
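The slow convergence the abstract refers to is already visible in the simpler fully observable case; as an illustration (a plain MDP, not the POMDP setting the paper treats), value iteration sweeps the Bellman backup until the value function stops changing, and the error shrinks only geometrically at rate gamma. The two-state MDP below is made up:

```python
# Value iteration on a tiny, made-up two-state MDP: from either state,
# action 0 moves to state 0 with reward 0, action 1 moves to state 1
# with reward 1. The optimal value is 1 / (1 - gamma) = 10 everywhere.
states, actions, gamma = [0, 1], [0, 1], 0.9
P = {0: {0: [(0, 1.0)], 1: [(1, 1.0)]},   # P[s][a] = [(next_state, prob)]
     1: {0: [(0, 1.0)], 1: [(1, 1.0)]}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 1.0}}  # R[s][a] = reward

V = {s: 0.0 for s in states}
for sweep in range(1000):
    # Bellman backup: V(s) <- max_a [ R(s,a) + gamma * E[V(s')] ]
    newV = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                   for a in actions)
            for s in states}
    delta = max(abs(newV[s] - V[s]) for s in states)
    V = newV
    if delta < 1e-10:
        break
```

Even in this trivial example, roughly 240 sweeps are needed before the change falls below 1e-10, since the error contracts by a factor of gamma = 0.9 per sweep.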
Context-specific independence (CSI) refers to conditional independencies that are true only in specific contexts. It has been found useful in various inference algorithms for Bayesian networks. This paper studies the role of CSI in general. We provide a characterization of the computational leverages offered by CSI without referring to particular inference(More)
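A minimal instance of CSI, with made-up numbers: below, Y is independent of B in the context A = 0 (the same distribution regardless of B) but depends on B when A = 1, so a tree-structured conditional needs three stored distributions instead of the four rows of a full table.

```python
# Context-specific independence in miniature: in the context A = 0,
# P(Y | A, B) does not depend on B. Probabilities are illustrative.
def p_y1(a, b):
    """P(Y = 1 | A = a, B = b), stored as a tree-shaped conditional."""
    if a == 0:
        return 0.2           # B is irrelevant in this context
    return 0.7 if b == 1 else 0.4
```

An inference algorithm aware of this structure can avoid multiplying in B at all whenever the context A = 0 holds, which is the kind of computational leverage the abstract refers to.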
The naive Bayes model makes the often unrealistic assumption that the feature variables are mutually independent given the class variable. We interpret a violation of this assumption as an indication of the presence of latent variables, and we show how latent variables can be detected. Latent variable discovery is interesting, especially for medical(More)
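One simple way to see such a violation (a generic diagnostic, not the paper's detection method): within a single class, compare the empirical joint of two features with the product of their empirical marginals; a large gap indicates residual dependence of the kind the abstract attributes to a latent variable. The data below is toy and deliberately correlated:

```python
# Within-class dependence check: total variation distance between the
# empirical joint of (X1, X2) and the product of their marginals.
# Samples are synthetic and intentionally correlated.
from collections import Counter

samples = [(0, 0)] * 40 + [(1, 1)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10
n = len(samples)
joint = Counter(samples)
m1 = Counter(x1 for x1, _ in samples)
m2 = Counter(x2 for _, x2 in samples)

# gap = 0 iff the features are empirically independent within the class
gap = 0.5 * sum(abs(joint[(a, b)] / n - (m1[a] / n) * (m2[b] / n))
                for a in (0, 1) for b in (0, 1))
```

Here each marginal is uniform, so independence would put mass 0.25 on every cell, while the observed joint puts 0.4 on the agreeing cells, giving a gap of 0.3.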
This paper is about reducing influence diagram (ID) evaluation to Bayesian network (BN) inference problems. Such reduction is interesting because it enables one to readily use one's favorite BN inference algorithm to efficiently evaluate IDs. Two such reduction methods have been proposed previously; this paper proposes a new one. The BN inference(More)