Corpus ID: 3671018

Deep Information Networks

@article{Franzese2018DeepIN,
  title={Deep Information Networks},
  author={Giulio Franzese and Monica Visintin},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.02251}
}
We describe a novel classifier with a tree structure, designed using information theory concepts. This Information Network is made of information nodes, which compress the input data, and multiplexers, which connect two or more input nodes to an output node. Each information node is trained, independently of the others, to minimize the mutual information between its input and output, with the constraint of keeping a given mutual information between its output…
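The truncated abstract suggests that each node solves a small information-bottleneck-style problem: compress the input while retaining a prescribed amount of relevant information. A minimal sketch of such a per-node objective, in notation of our own choosing ($X$ for the node input, $T$ for its compressed output, $Y$ for the target, $I_0$ for the retained-information threshold; the paper's exact cost function is not given in the visible text):

```latex
% Per-node training problem (sketch, not the paper's exact formulation):
% compress the node input X into T, while keeping at least I_0 bits
% of mutual information with the relevance variable Y.
\min_{p(t \mid x)} \; I(X;T)
\quad \text{subject to} \quad I(T;Y) \ge I_0
```

Because each node is trained independently, this constrained problem would be solved locally at every information node rather than by end-to-end optimization of the whole tree.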

Figures and Tables from this paper

Probabilistic Ensemble of Deep Information Networks

This work describes a classifier made of an ensemble of decision trees, designed using information theory concepts, that provides accuracy comparable to that of the tree classifier while offering many advantages in terms of modularity, reduced complexity, and memory requirements.

Information networks – concept, classification and application

The aim of this paper is to offer different approaches in defining and classifying general forms of information networks and to notice their wide application in different research disciplines.

Potential Uses in Breadth

This chapter overviews a dozen knowledge representation (KR) possibilities in breadth and shows the benefits of organizing the authors' knowledge structures using Peirce’s universal categories and typologies.

References


Deep learning and the information bottleneck principle

It is argued that the optimal architecture (the number of layers and the features/connections at each layer) is related to the bifurcation points of the information bottleneck tradeoff, namely the relevant compression of the input layer with respect to the output layer.

The information bottleneck method

The variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.
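The variational principle referred to here is the information bottleneck functional of Tishby, Pereira, and Bialek. In its standard Lagrangian form (with $X$ the source, $Y$ the relevance variable, $T$ the compressed representation, and $\beta$ the compression–relevance tradeoff parameter):

```latex
% Information bottleneck Lagrangian: trade compression of X
% against preservation of information about Y.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```

Its stationary points satisfy a set of self-consistent equations, the best known of which gives the optimal encoder as a Boltzmann-like distribution, $p(t \mid x) \propto p(t)\,\exp\!\left(-\beta \, D_{\mathrm{KL}}\!\left[p(y \mid x)\,\|\,p(y \mid t)\right]\right)$.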

C4.5: Programs for Machine Learning

A complete guide to the C4.5 system as implemented in C for the UNIX environment, which starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.

Programs for Machine Learning

In his new book, C4.5: Programs for Machine Learning, Quinlan has put together a definitive, much needed description of his complete system, including the latest developments, which will be a welcome addition to the library of many researchers and students.

Elements of Information Theory

The author examines the role of entropy, inequalities, and randomness in the design and construction of codes.

Detection of Chronic Kidney Disease and Selecting Important Predictive Attributes

  • A. Salekin, J. Stankovic
  • Computer Science
    2016 IEEE International Conference on Healthcare Informatics (ICHI)
  • 2016
This study considers 24 predictive parameters and creates a machine learning classifier to detect CKD, achieving a detection accuracy of 0.993 according to the F1-measure, with a root mean square error of 0.1084.