Corpus ID: 919497

Dynamic Bayesian networks: representation, inference and learning

@phdthesis{Murphy2002DynamicBN,
  title={Dynamic Bayesian networks: representation, inference and learning},
  author={Kevin P. Murphy},
  school={University of California, Berkeley},
  year={2002}
}
Dynamic Bayesian Networks: Representation, Inference and Learning, by Kevin Patrick Murphy. Doctor of Philosophy in Computer Science, University of California, Berkeley. Professor Stuart Russell, Chair. Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been…
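The abstract's starting point, exact filtering in an HMM, fits in a few lines. Below is a minimal sketch of the forward (filtering) recursion; the matrices A and B and the prior pi are toy values invented for illustration, and forward_filter is a hypothetical helper, not code from the thesis.

```python
# Minimal HMM forward filtering: compute P(z_t | x_{1:t}) recursively.
import numpy as np

A = np.array([[0.9, 0.1],   # A[i, j] = P(z_t = j | z_{t-1} = i), toy values
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # B[i, k] = P(x_t = k | z_t = i), toy values
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])   # initial state distribution P(z_1)

def forward_filter(obs):
    """Return the filtered belief P(z_t | x_{1:t}) at each time step."""
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for x in obs[1:]:
        belief = (A.T @ belief) * B[:, x]   # predict one step, then condition on x
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

print(forward_filter([0, 0, 1, 1]))
```

A DBN generalises this picture by factoring the hidden state into several variables; a KFM keeps the same predict-condition loop but propagates a Gaussian mean and covariance instead of a discrete belief vector.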

Citations

Statistical Inference in Graphical Models
TLDR
The mathematical foundations of graphical models and statistical inference are described, focusing on the concepts and techniques that are most useful to the problem of decision making in dynamic systems under uncertainty.
Efficient Inference For Hybrid Bayesian Networks
TLDR
This dissertation focuses on hybrid Bayesian networks containing both discrete and continuous random variables and presents an approximate analytical method to estimate the performance bound, which helps decision makers understand the prediction performance of a BN model without extensive simulation.
Supervised Learning in Dynamic Bayesian Networks
TLDR
This work derives supervised learning algorithms for parameter estimation and inference of latent variables in two commonly used DBN models for time series, namely the switching vector autoregressive (SVAR) model and the switching Kalman filter (SKF).
Bayesian networks for mathematical models: Techniques for automatic construction and efficient inference
TLDR
By incorporating knowledge in the form of an existing ODE model, a DBN framework is built for efficiently predicting individualised patient responses using the available bedside and lab data.
Approximate inference for dynamic Bayesian networks: sliding window approach
TLDR
A sliding-window framework for approximate inference in DBNs that reduces the computational burden by restricting inference at any time to a narrow region of the network, with the window moving forward as time progresses.
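To make the windowing idea concrete, here is a hedged sketch in the simplest DBN, an HMM: fixed-lag smoothing touches only the last `lag` slices of the unrolled network instead of the full history. The parameters are toy values and `fixed_lag_smooth` is a hypothetical name; this illustrates the general idea, not the paper's algorithm.

```python
# Fixed-lag ("sliding window") smoothing in a toy HMM: estimate
# P(z_{t-lag} | x_{1:t}) using a backward pass over only `lag` slices.
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # P(z_t | z_{t-1}), toy values
B = np.array([[0.7, 0.3], [0.1, 0.9]])   # P(x_t | z_t), toy values
pi = np.array([0.5, 0.5])                # P(z_1)

def fixed_lag_smooth(obs, lag=2):
    # Forward (filtering) pass over the whole sequence.
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    alphas = [alpha]
    for x in obs[1:]:
        alpha = (A.T @ alpha) * B[:, x]
        alpha /= alpha.sum()
        alphas.append(alpha)
    # Backward pass restricted to a window of `lag` slices.
    smoothed = []
    for t in range(lag, len(obs)):
        beta = np.ones(2)
        for s in range(t, t - lag, -1):
            beta = A @ (B[:, obs[s]] * beta)
        post = alphas[t - lag] * beta
        smoothed.append(post / post.sum())
    return np.array(smoothed)

print(fixed_lag_smooth([0, 0, 1, 1, 0], lag=2))
```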
Directly Learning Tractable Models for Sequential Inference and Decision Making
TLDR
Two new probabilistic graphical models are presented: Dynamic Sum-Product Networks (DynamicSPNs) and Decision Sum-Product-Max Networks (DecisionSPMNs), where the former is suited to problems with sequence data of varying length and the latter to problems with random, decision, and utility variables.
Learning the structure of dynamic Bayesian networks from time series and steady state measurements
TLDR
Simulation results demonstrate that dynamic network structures can be learned to an extent from steady state measurements alone, and that inference from a combination of steady state and time series data has the potential to improve learning performance relative to inference from time series data alone.
Approximated Probabilistic Inference on a Dynamic Bayesian Network Using a Multistate Neural Network
TLDR
This work proposes a new heuristic algorithm for probabilistic inference on the DBN using a multistate neural network, which supports a bottom-up error-reporting mechanism against top-down predictions.
A New Algorithm for Modeling and Inferring User's Knowledge by Using Dynamic Bayesian Network
TLDR
A new algorithm is proposed in which both the size of the DBN and the number of Conditional Probability Tables (CPTs) are kept intact (unchanged) as the process continues over long periods, and which solves the problems of temporary slips and lucky guesses.
Dynamic Bayesian Networks
TLDR
This chapter considers more complex models of sequential data, focusing on dynamic Bayesian networks, which can be applied to temporal models but can also be used for sequential learning of static models; this is useful when the data are non-stationary or too large for batch methods.
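The "sequential learning of static models" point has a textbook illustration: recursive Bayesian updating of a conjugate posterior, one observation at a time. A minimal sketch with toy data, not anything from the chapter itself:

```python
# Sequential Bayesian learning of a static parameter: a Beta posterior
# over a coin's bias, updated per observation rather than in batch.
a, b = 1.0, 1.0                  # Beta(1, 1) prior over the bias
for x in [1, 0, 1, 1, 0, 1]:     # stream of Bernoulli observations (toy data)
    a, b = a + x, b + (1 - x)    # conjugate update after each data point
print(f"posterior mean = {a / (a + b):.3f}")
```

Because the posterior after each observation serves as the prior for the next, the data never need to be stored, which is what makes sequential updating attractive when the data are too large for batch methods.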

References

SHOWING 1-10 OF 444 REFERENCES
Constant-space reasoning in dynamic Bayesian networks
  • Adnan Darwiche
  • Computer Science, Mathematics
  • Int. J. Approx. Reason.
  • 2001
TLDR
One of the main algorithms for achieving constant-space inference in dynamic Bayesian networks, based on "slice-by-slice" elimination orders, is studied, and improvements on it are suggested based on new classes of elimination orders.
Speech Recognition with Dynamic Bayesian Networks
TLDR
This thesis shows that dynamic Bayesian networks can be used effectively in the field of automatic speech recognition, and presents inference routines that are especially tailored to the requirements of speech recognition: efficient inference with deterministic constraints, variable-length utterances, and online inference.
Probabilistic Independence Networks for Hidden Markov Probability Models
TLDR
It is shown that the well-known forward-backward and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs, and the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures.
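For concreteness, here is a minimal sketch of the Viterbi algorithm named in this TLDR, the max-product counterpart of the forward pass; all parameters are toy values, not anything from the paper.

```python
# Viterbi: most probable hidden state sequence argmax_z P(z_{1:T}, x_{1:T}).
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # P(z_t | z_{t-1}), toy values
B = np.array([[0.7, 0.3], [0.1, 0.9]])   # P(x_t | z_t), toy values
pi = np.array([0.5, 0.5])                # P(z_1)

def viterbi(obs):
    T, K = len(obs), len(pi)
    delta = np.log(pi) + np.log(B[:, obs[0]])   # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)          # best predecessor per state
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[i, j]: from state i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]                # trace the best path backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1]))
```

Replacing max with sum in the recursion recovers the forward algorithm, which is the sense in which both are instances of one general message-passing scheme.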
Approximate Learning of Dynamic Models
TLDR
It is shown empirically that, for a real-life domain, EM using the authors' inference algorithm is much faster than EM using exact inference, with almost no degradation in quality of the learned model.
Adaptive Probabilistic Networks with Hidden Variables
TLDR
This paper presents a gradient-based algorithm and shows that the gradient can be computed locally, using information that is available as a byproduct of standard inference algorithms for probabilistic networks.
Factorial Hidden Markov Models
TLDR
A generalization of HMMs in which the hidden state is factored into multiple state variables and is therefore represented in a distributed manner, and a structured approximation in which the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model.
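The factored-state representation is easy to show in code; the paper's actual contribution, a structured variational approximation, is not reproduced here. The sketch below does exact filtering over the joint of two binary chains, which makes plain why the product state space (exponential in the number of chains) motivates an approximation. All parameters are toy values.

```python
# Exact filtering in a toy factorial HMM: two independent binary chains
# z1, z2 evolve separately but jointly generate each observation x.
import numpy as np

A1 = np.array([[0.9, 0.1], [0.1, 0.9]])   # P(z1_t | z1_{t-1}), toy values
A2 = np.array([[0.8, 0.2], [0.3, 0.7]])   # P(z2_t | z2_{t-1}), toy values
E = np.array([[[0.9, 0.1], [0.5, 0.5]],   # E[i, j, k] = P(x = k | z1 = i, z2 = j)
              [[0.5, 0.5], [0.1, 0.9]]])

def filter_factorial(obs):
    """Track the exact joint belief P(z1_t, z2_t | x_{1:t})."""
    belief = np.full((2, 2), 0.25)                        # uniform joint prior
    for x in obs:
        pred = np.einsum('ij,ik,jl->kl', belief, A1, A2)  # chains transition independently
        belief = pred * E[:, :, x]                        # condition on the observation
        belief /= belief.sum()
    return belief

print(filter_factorial([0, 1, 1]))
```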
Gaussian Process Networks
TLDR
The Bayesian score of Gaussian Process Networks is developed, it is described how to learn them from data, and empirical results on artificial data as well as on real-life domains with non-linear dependencies are presented.
Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks
TLDR
It is shown that Rao-Blackwellised particle filters (RBPFs) lead to more accurate estimates than standard PFs, and are demonstrated on two problems, namely non-stationary online regression with radial basis function networks and robot localization and map building.
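The Rao-Blackwellisation idea can be sketched in a purely discrete toy model: sample only a "root" variable per particle and update the belief over the remaining chain exactly, conditioned on that sample. The model, parameters, and the function name rbpf below are all invented for illustration; the paper's setting instead marginalises linear-Gaussian substructure with Kalman filters.

```python
# Toy Rao-Blackwellised particle filter: r_t is sampled per particle,
# while the belief over z_t is updated exactly given the sampled r_t.
import numpy as np
rng = np.random.default_rng(0)

Ar = np.array([[0.95, 0.05], [0.10, 0.90]])   # P(r_t | r_{t-1}), toy values
Az = [np.array([[0.9, 0.1], [0.1, 0.9]]),     # P(z_t | z_{t-1}) when r_t = 0
      np.array([[0.5, 0.5], [0.5, 0.5]])]     # P(z_t | z_{t-1}) when r_t = 1
B = np.array([[0.8, 0.2], [0.3, 0.7]])        # P(x_t | z_t)

def rbpf(obs, n=100):
    r = rng.integers(0, 2, size=n)            # sampled variable, one per particle
    bel = np.full((n, 2), 0.5)                # exact belief over z per particle
    for x in obs:
        r = np.array([rng.choice(2, p=Ar[ri]) for ri in r])       # sample r_t
        bel = np.stack([Az[ri].T @ b for ri, b in zip(r, bel)])   # exact predict step
        w = bel @ B[:, x]                     # weight: predictive likelihood of x
        bel = bel * B[:, x]
        bel /= bel.sum(axis=1, keepdims=True) # exact conditioning on x
        idx = rng.choice(n, size=n, p=w / w.sum())
        r, bel = r[idx], bel[idx]             # resample particles by weight
    return r, bel

r, bel = rbpf([0, 0, 1, 1, 0])
print(bel.mean(axis=0))
```

Sampling fewer variables and handling the rest analytically is why RBPFs tend to achieve lower-variance estimates than plain particle filters at the same particle count.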
Inference in Hybrid Networks: Theoretical Limits and Practical Algorithms
TLDR
This paper proves that even if the conditional linear Gaussian (CLG) network is restricted to an extremely simple structure of a polytree, the inference task is NP-hard, and provides complexity results for an important class of CLGs, which includes Switching Kalman Filters.
Learning Bayesian Networks: The Combination of Knowledge and Statistical Data
TLDR
A methodology for assessing informative priors needed for learning Bayesian networks from a combination of prior knowledge and statistical data is developed, and it is shown how to compute the relative posterior probabilities of network structures given data.