Corpus ID: 16234740

Cross-Analysis of Gulf of Bothnia Wild Salmon Rivers Using Bayesian Networks

@inproceedings{Valtonen2002CrossAnalysisOG,
  title={Cross-Analysis of Gulf of Bothnia Wild Salmon Rivers Using Bayesian Networks},
  author={Kimmo Valtonen and Tommi Mononen and Petri Myllym{\"a}ki and Henry Tirri and Jaakko Erkinaro and Erkki Jokikokko and Sakari Kuikka and Atso Romakkaniemi and L. Karlsson and Ingemar Per{\"a}},
  year={2002}
}
We present a methodology allowing the transfer of knowledge from one wild salmon river to another via a predictive model for the chosen population status indicator. From the management point of view, the production of wild smolts is the most important of such indicators. However, in our real-world data from Finnish and Swedish Gulf of Bothnia rivers we only have data on the number of wild smolts available for two of the rivers, making the direct empirical learning and validation of models learned…
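To make the idea of a predictive model for a population status indicator concrete, the sketch below trains a naive Bayes classifier (the simplest form of Bayesian network classifier) on data from one river and applies it to observations from another. The river data, feature names, and class labels are entirely hypothetical; this is an illustrative sketch of the general technique, not the authors' actual model.

```python
from collections import Counter, defaultdict

# Hypothetical, discretized observations from a "source" river:
# each row is (water_temp, flow, smolt_production_class).
source_river = [
    ("low", "high", "good"),
    ("low", "high", "good"),
    ("high", "low", "poor"),
    ("high", "high", "good"),
    ("high", "low", "poor"),
]

def train_naive_bayes(rows):
    """Estimate P(class) and per-feature P(value | class) counts."""
    class_counts = Counter(r[-1] for r in rows)
    feat_counts = defaultdict(Counter)  # (feature_index, class) -> value counts
    for row in rows:
        c = row[-1]
        for i, v in enumerate(row[:-1]):
            feat_counts[(i, c)][v] += 1
    return class_counts, feat_counts

def predict(model, features):
    """Return the most probable class, using Laplace-smoothed estimates."""
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    best_class, best_p = None, -1.0
    for c, n in class_counts.items():
        p = n / total  # prior P(class)
        for i, v in enumerate(features):
            counts = feat_counts[(i, c)]
            # Laplace smoothing over two observed values per feature.
            p *= (counts[v] + 1) / (sum(counts.values()) + 2)
        if p > best_p:
            best_class, best_p = c, p
    return best_class

model = train_naive_bayes(source_river)
# Apply the model learned on the source river to a target-river observation.
print(predict(model, ("high", "low")))  # -> poor
```

In the same spirit as the abstract, the model fitted on the data-rich river stands in for the indicator that cannot be learned directly on the data-poor river.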

References

Showing 1–10 of 13 references.
Predicting the Wild Salmon Production Using Bayesian Networks
This work presents a methodology allowing the prediction of the number of wild smolts in a river in a consistent and well-defined fashion, and highlights the role of the loss function in modeling.
On Supervised Selection of Bayesian Networks
The results demonstrate that the marginal likelihood score does not perform well for supervised model selection, while the best results are obtained by using Dawid's prequential approach.
Learning Bayesian Networks: The Combination of Knowledge and Statistical Data
A methodology is developed for assessing the informative priors needed for learning Bayesian networks from a combination of prior knowledge and statistical data, and it is shown how to compute the relative posterior probabilities of network structures given data.
Classifier Learning with Supervised Marginal Likelihood
Diagnostic Bayesian network classifiers, in which the significant model parameters represent conditional distributions for the class variable given the values of the predictor variables, are considered; in this case the supervised marginal likelihood can be computed in linear time with respect to the data.
Bayesian Network Classifiers
Tree Augmented Naive Bayes (TAN) is singled out: it outperforms naive Bayes, yet at the same time maintains the computational simplicity and robustness that characterize naive Bayes.
Comparing Prequential Model Selection Criteria in Supervised Learning of Mixture Models
The empirical results demonstrate that with the prequential approach it is quite easy to build predictive models that are significantly more accurate classifiers than the models found by the standard unsupervised marginal likelihood criterion.
Properties of diagnostic data distributions.
  • A. Dawid
  • Computer Science, Medicine
  • Biometrics
  • 1976
It is argued that the prevailing paradigm of diagnostic statistics, which concentrates on the incidence of symptoms for a given disease, is largely inappropriate and should be replaced by an emphasis on diagnostic distributions; the generalized logistic model is seen to fit naturally into the new framework.
Efficient Range Partitioning in Classification Learning
This thesis examines the problem of partitioning ordered value ranges into two or more subsets, optimally with respect to an evaluation function, and presents a comprehensive experimental comparison between the binary splitting, optimal multisplitting, and heuristic multisplitting strategies using two well-known evaluation functions.
Theory Refinement on Bayesian Networks
Algorithms for the refinement of Bayesian networks are presented, illustrating what is meant by "partial theory", "alternative theory representation", etc.; they are incremental variants of batch learning algorithms from the literature and so work in both batch and incremental mode.
Probabilistic reasoning in intelligent systems - networks of plausible inference
  • J. Pearl
  • Computer Science
  • Morgan Kaufmann series in representation and reasoning
  • 1989
The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic.