Parameter Learning of Bayesian Network Classifiers Under Computational Constraints

@inproceedings{Tschiatschek2015ParameterLO,
  title={Parameter Learning of Bayesian Network Classifiers Under Computational Constraints},
  author={Sebastian Tschiatschek and Franz Pernkopf},
  booktitle={ECML/PKDD},
  year={2015}
}
We consider online learning of Bayesian network classifiers (BNCs) with reduced-precision parameters, i.e. the conditional probability tables parameterizing the BNCs are represented by low bit-width fixed-point numbers. In contrast to previous work, we analyze the learning of these parameters using reduced-precision arithmetic only, which is important for computationally constrained platforms, e.g. embedded and ambient systems, as well as power-aware systems. This requires specialized…
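The abstract's core idea, representing the conditional probability tables as low bit-width fixed-point numbers so that classification needs only integer arithmetic, can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the bit-widths, the scaling, and the tiny naive Bayes model are all assumptions chosen for the example.

```python
# Sketch: store log-probabilities of a naive Bayes model as low bit-width
# fixed-point integers and classify using integer arithmetic only.
# FRAC_BITS and WIDTH are illustrative choices, not values from the paper.

import math

FRAC_BITS = 4   # fractional bits of the fixed-point format (assumption)
WIDTH = 8       # total bit-width; codes saturate at the most negative value

def to_fixed(p):
    """Quantize a probability to a fixed-point log-probability integer."""
    q = round(math.log(p) * (1 << FRAC_BITS))
    return max(q, -(1 << (WIDTH - 1)))  # saturate instead of overflowing

def classify(priors_q, cpts_q, x):
    """Pick the class with the largest integer log-score: the sum of the
    fixed-point log-prior and log-likelihood codes. No floats needed."""
    scores = {
        c: priors_q[c] + sum(cpts_q[c][i][v] for i, v in enumerate(x))
        for c in priors_q
    }
    return max(scores, key=scores.get)

# Tiny two-class example with two binary features (made-up probabilities).
priors = {0: 0.6, 1: 0.4}
cpts = {0: [[0.8, 0.2], [0.7, 0.3]],
        1: [[0.3, 0.7], [0.4, 0.6]]}

priors_q = {c: to_fixed(p) for c, p in priors.items()}
cpts_q = {c: [[to_fixed(p) for p in row] for row in rows]
          for c, rows in cpts.items()}

print(classify(priors_q, cpts_q, (0, 0)))  # prints 0
```

Once the tables are quantized offline, the inner classification loop touches only small integers, which is what makes such classifiers attractive on the embedded and power-aware platforms the abstract mentions.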
Efficient and Robust Machine Learning for Real-World Systems
An extensive overview of the current state of the art in robust and efficient machine learning for real-world systems is provided, with a focus on techniques for model size reduction, compression, and reduced precision.
Feature Selection with Limited Bit Depth Mutual Information for Embedded Systems
This work considers mutual information, one of the most common measures of dependence used in feature selection algorithms, computed with reduced-precision parameters using a limited number of bits.
Feature selection with limited bit depth mutual information for portable embedded systems
Experimental results demonstrate that low bit-width representations suffice to achieve performance close to that of double-precision parameters, opening the door to feature selection on embedded platforms that minimize energy consumption and carbon emissions.
Bayesian Networks: A State-Of-The-Art Survey
This paper presents a review and classification scheme for recent research on Bayesian networks by surveying relevant articles published in recent years, and indicates under-researched areas as well as future directions.
Representation Learning for Single-Channel Source Separation and Bandwidth Extension
GSNs obtain the best PESQ and overall perceptual score on average in all four tasks, and frame-wise GSNs reconstruct the missing frequency bands in ABE best, as measured by frequency-domain segmental SNR.

References

Showing 1-10 of 27 references
On Bayesian Network Classifiers with Reduced Precision Parameters
This work investigates the quantization of the parameters of BNCs with discrete-valued nodes, including the implications for the classification rate (CR), and derives worst-case and probabilistic bounds on the CR for different bit-widths.
Bayesian Network Classifiers with Reduced Precision Parameters
This paper investigates the effect of reducing the precision of the parameters on the classification performance of Bayesian network classifiers, and indicates that BNCs with discriminatively optimized parameters are almost as robust to precision reduction as BNCs with generatively optimized parameters.
Maximum Margin Bayesian Network Classifiers
It is shown that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Bounds for Bayesian network classifiers with reduced precision parameters
This paper derives worst-case and best-case bounds on the classification rate using interval arithmetic, and determines performance bounds that hold with a user-specified confidence using quantization theory.
Bayesian Network Classifiers
Tree Augmented Naive Bayes (TAN) is singled out as a structure that outperforms naive Bayes while maintaining the computational simplicity and robustness that characterize naive Bayes.
On Discriminative Bayesian Network Classifiers and Logistic Regression
This work shows that the same fact holds for much more general Bayesian network models, as long as the corresponding network structure satisfies a certain graph-theoretic property, and provides a heuristic strategy for pruning the number of parameters and relevant features in such models.
Maximum Margin Bayesian Networks
An effective training algorithm is derived that solves the maximum margin training problem for a range of Bayesian network topologies and converges to an approximate solution for arbitrary network topologies.
Discriminative parameter learning for Bayesian networks
A simple, efficient, and effective discriminative parameter learning method, called Discriminative Frequency Estimate (DFE), is proposed, which learns parameters by discriminatively computing frequencies from data.
The Most Generative Maximum Margin Bayesian Networks
A novel approach of hybrid generative-discriminative learning for Bayesian networks is introduced, using an SVM-type large-margin formulation for discriminative training and a likelihood-weighted l1-norm for the SVM norm penalization, which simultaneously optimizes the data likelihood and thereby partly maintains the generative character of the model.
Integer Bayesian Network Classifiers
This paper introduces integer Bayesian network classifiers (BNCs), i.e. BNCs with discrete-valued nodes where parameters are stored as integer numbers. These networks allow for efficient…