Nathan Fortier

In this paper we propose several approximation algorithms for the problems of full and partial abductive inference in Bayesian belief networks. Full abductive inference is the problem of finding the k most probable state assignments to all non-evidence variables in the network, while partial abductive inference is the problem of finding the k most probable …
Bayesian networks are powerful probabilistic models that have been applied to a variety of tasks. When applied to classification problems, Bayesian networks have shown competitive performance when compared to other state-of-the-art classifiers. However, structure learning of Bayesian networks has been shown to be NP-hard. In this paper, we propose a novel …
Abductive inference in Bayesian networks is the problem of finding the most likely joint assignment to all non-evidence variables in the network. Such an assignment is called the most probable explanation (MPE). A novel swarm-based algorithm is proposed that finds the k-MPE of a Bayesian network. Our approach is an overlapping swarm intelligence algorithm …
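The k-MPE problem described in the abstract above can be illustrated with a brute-force sketch on a toy two-variable network. This is not the swarm-based algorithm from the paper, only a minimal illustration of what "k most probable explanations" means; the network A → B and its CPT values are hypothetical, chosen purely for the example.

```python
from itertools import product

# Hypothetical two-variable Bayesian network A -> B (binary variables).
# CPT values are made up for illustration only.
P_A = {0: 0.6, 1: 0.4}                       # P(A)
P_B_given_A = {0: {0: 0.7, 1: 0.3},          # P(B | A=0)
               1: {0: 0.2, 1: 0.8}}          # P(B | A=1)

def joint(a, b):
    """Joint probability P(A=a, B=b) via the chain rule."""
    return P_A[a] * P_B_given_A[a][b]

def k_mpe(k, evidence=None):
    """Brute-force k-MPE: enumerate every assignment consistent with the
    evidence and return the k with the highest joint probability."""
    evidence = evidence or {}
    candidates = []
    for a, b in product([0, 1], repeat=2):
        assign = {"A": a, "B": b}
        if all(assign[v] == val for v, val in evidence.items()):
            candidates.append((joint(a, b), assign))
    candidates.sort(key=lambda t: -t[0])
    return candidates[:k]

# With no evidence, the single MPE is the globally most likely assignment:
top2 = k_mpe(2)
```

Exhaustive enumeration is exponential in the number of variables, which is exactly why the papers above resort to approximation and swarm-based search for realistic networks.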
Bayesian networks are probabilistic graphical models that have proven able to handle uncertainty in many real-world applications. One key issue in learning Bayesian networks is parameter estimation, i.e., learning the local conditional distributions of each variable in the model. While parameter estimation can be performed efficiently when complete …
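The parameter-estimation task mentioned above, in the complete-data case, reduces to counting: each local conditional distribution is estimated by relative frequencies. The sketch below shows maximum-likelihood estimation for a hypothetical network A → B from a made-up complete dataset; it is a textbook illustration, not the method of the paper.

```python
from collections import Counter

# Hypothetical complete dataset over binary variables A -> B:
# each row is an observed pair (a, b). Values are invented for illustration.
data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]

# MLE of P(A): relative frequency of each value of A.
a_counts = Counter(a for a, _ in data)
P_A = {a: c / len(data) for a, c in a_counts.items()}

# MLE of P(B | A): joint counts of (a, b), normalized per parent value a.
ab_counts = Counter(data)
P_B_given_A = {a: {b: ab_counts[(a, b)] / a_counts[a] for b in (0, 1)}
               for a in (0, 1)}
```

When data is incomplete, these counts are unavailable and one must fall back on iterative schemes such as expectation-maximization or the search-based approaches studied in the paper above.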
IEEE Std 1232-2010, Standard for Artificial Intelligence Exchange and Service Tie to All Test Environments (AI-ESTATE), provides a standardized approach to modeling and control of a data-driven diagnostic and prognostic reasoning environment. This work continues a previous effort to build wider acceptance and extension of this standard through development of …