Corpus ID: 231699042

Measuring Dependence with Matrix-based Entropy Functional

@article{Yu2021MeasuringDW,
  title={Measuring Dependence with Matrix-based Entropy Functional},
  author={Shujian Yu and Francesco Alesiani and Xi Yu and Robert Jenssen and Jos{\'e} Carlos Pr{\'i}ncipe},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.10160}
}
Measuring the dependence of data plays a central role in statistics and machine learning. In this work, we summarize and generalize the main idea of existing information-theoretic dependence measures into a higher-level perspective via Shearer's inequality. Based on our generalization, we then propose two measures, namely the matrix-based normalized total correlation and the matrix-based normalized dual total correlation, to quantify the dependence of multiple variables in arbitrary…
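For orientation, the sketch below shows how such quantities can be computed in practice: matrix-based Rényi's α-order entropies are obtained from the eigenvalues of trace-normalized Gram matrices, and the (unnormalized) total correlation and dual total correlation follow from them. This is a minimal illustration, not the authors' code; the Gaussian kernel, its width, and α = 1.01 are illustrative assumptions, and the paper's specific normalization of the two measures is omitted.

```python
# Minimal sketch (not the authors' implementation) of matrix-based Renyi entropy,
# joint entropy via Hadamard products, and the unnormalized total correlation /
# dual total correlation derived from them. Kernel width and alpha are illustrative.
import numpy as np
from scipy.spatial.distance import cdist

def normalized_gram(x, sigma=1.0):
    """Gaussian Gram matrix of one variable, normalized so that tr(A) = 1."""
    x = x.reshape(len(x), -1)
    k = np.exp(-cdist(x, x, "sqeuclidean") / (2 * sigma ** 2))
    return k / np.trace(k)

def renyi_entropy(a, alpha=1.01):
    """S_alpha(A) = 1/(1-alpha) * log2(sum_i lambda_i(A)^alpha)."""
    lam = np.clip(np.linalg.eigvalsh(a), 0.0, None)
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def joint_entropy(grams, alpha=1.01):
    """Joint entropy from the re-normalized Hadamard product of per-variable Gram matrices."""
    h = np.multiply.reduce(grams)
    return renyi_entropy(h / np.trace(h), alpha)

def total_correlation(grams, alpha=1.01):
    """T = sum_i S(A_i) - S(A_1, ..., A_k); small when the variables are independent."""
    return sum(renyi_entropy(a, alpha) for a in grams) - joint_entropy(grams, alpha)

def dual_total_correlation(grams, alpha=1.01):
    """D = S(joint) - sum_i [ S(joint) - S(all variables except i) ]."""
    sj = joint_entropy(grams, alpha)
    loo = [joint_entropy(grams[:i] + grams[i + 1:], alpha) for i in range(len(grams))]
    return sj - sum(sj - s for s in loo)

# Toy usage: three variables, two of them strongly dependent.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
grams = [normalized_gram(v) for v in (x, x + 0.1 * rng.normal(size=200), rng.normal(size=200))]
print(total_correlation(grams), dual_total_correlation(grams))
```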
Computationally Efficient Approximations for Matrix-based Renyi's Entropy
TLDR: This work develops randomized approximations to tr(G^α) that transform trace estimation into a matrix-vector multiplication problem, and establishes a connection between the matrix-based Rényi's entropy and PSD matrix approximation, which enables exploiting both the clustering and the block low-rank structure of G to further reduce the computational cost.
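As a rough illustration of the matrix-vector-product idea behind such randomized estimators (a hedged sketch assuming an integer order α, not the estimator proposed in that work):

```python
# Hutchinson-type stochastic estimate of tr(G^alpha) using only matrix-vector products.
# This toy version assumes integer alpha; handling general alpha requires the additional
# machinery (e.g., polynomial approximation) developed in the cited works.
import numpy as np

def trace_power_hutchinson(g, alpha, num_probes=50, seed=None):
    """Average of z^T G^alpha z over Rademacher probe vectors z approximates tr(G^alpha)."""
    rng = np.random.default_rng(seed)
    n = g.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        v = z.copy()
        for _ in range(alpha):                # alpha matrix-vector products, no eigendecomposition
            v = g @ v
        total += z @ v
    return total / num_probes

# Toy check against the exact value (rough agreement is expected).
rng = np.random.default_rng(1)
a = rng.normal(size=(300, 300))
g = a @ a.T / 300.0                           # a PSD test matrix
print(trace_power_hutchinson(g, alpha=3, seed=2), np.trace(np.linalg.matrix_power(g, 3)))
```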
Understanding Neural Networks with Logarithm Determinant Entropy Estimator
TLDR: This work proposes the LogDet estimator, a reliable matrix-based entropy estimator that approximates Shannon differential entropy, shows it to be reliable for estimating entropy in neural networks, and finds a functional distinction between shallow and deeper layers.
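For flavor only, a log-determinant entropy surrogate of this general kind can be computed as below; the exact estimator, constants, and scaling used in the cited paper may differ, and β is an assumed hyperparameter.

```python
# Illustrative log-determinant entropy surrogate (assumed form, not necessarily the paper's
# exact LogDet estimator): H(X) ~ 0.5 * log2 det(I + beta * Sigma), with Sigma the sample
# covariance of the layer representation and beta an assumed scaling hyperparameter.
import numpy as np

def logdet_entropy(x, beta=1.0):
    """x: (n_samples, n_features) activations; returns the surrogate in bits."""
    x = x - x.mean(axis=0, keepdims=True)
    sigma = x.T @ x / len(x)                  # sample covariance
    _, logdet = np.linalg.slogdet(np.eye(sigma.shape[0]) + beta * sigma)
    return 0.5 * logdet / np.log(2.0)

# Example: surrogate entropy of a random 64-dimensional layer activation matrix.
print(logdet_entropy(np.random.default_rng(2).normal(size=(512, 64))))
```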
Optimal Randomized Approximations for Matrix based Renyi's Entropy
TLDR: Stochastic trace approximations for the matrix-based Rényi's entropy with arbitrary orders α ∈ ℝ₊ are developed, lowering the complexity by converting the entropy approximation into a matrix-vector multiplication problem.
Gated Information Bottleneck for Generalization in Sequential Environments
TLDR: This work empirically demonstrates the superiority of GIB over other popular neural network-based IB approaches in adversarial robustness and out-of-distribution (OOD) detection, and establishes a connection between IB theory and invariant causal representation learning.
Information Bottleneck Theory Based Exploration of Cascade Learning
TLDR: This paper uses an information-theoretic approach to understand how Cascade Learning (CL), a method that trains deep neural networks layer by layer, learns representations; CL has shown comparable results while saving computation and memory costs.
Information-Theoretic Methods in Deep Neural Networks: Recent Advances and Emerging Opportunities
TLDR: A review of recent advances and emerging opportunities around the theme of analyzing deep neural networks (DNNs) with information-theoretic methods, and of parameterizing such methods with DNNs.
Deep Deterministic Independent Component Analysis for Hyperspectral Unmixing
We develop a new neural network-based independent component analysis (ICA) method by directly minimizing the dependence amongst all extracted components. Using the matrix-based Rényi's α-order entropy…
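A hedged sketch of the kind of dependence objective such a method could minimize is given below: the matrix-based total correlation of the extracted components, written differentiably in PyTorch so it can serve as a training loss. The function names, kernel, and α are illustrative assumptions, not the authors' implementation.

```python
# Differentiable (PyTorch) sketch of a matrix-based total-correlation loss over extracted
# components; illustrative only, not the authors' code.
import torch

def normalized_gram(z, sigma=1.0):
    """Trace-normalized Gaussian Gram matrix of one component (shape: n_samples x 1)."""
    d = torch.cdist(z, z) ** 2
    k = torch.exp(-d / (2 * sigma ** 2))
    return k / torch.trace(k)

def renyi_entropy(a, alpha=1.01):
    lam = torch.linalg.eigvalsh(a).clamp(min=0)
    return torch.log2(torch.sum(lam ** alpha)) / (1.0 - alpha)

def dependence_loss(components, alpha=1.01):
    """Total correlation of the columns of `components` (n_samples x n_components)."""
    grams = [normalized_gram(components[:, i:i + 1]) for i in range(components.shape[1])]
    joint = torch.stack(grams).prod(dim=0)
    joint = joint / torch.trace(joint)
    return sum(renyi_entropy(a, alpha) for a in grams) - renyi_entropy(joint, alpha)

# The loss is differentiable, so it can be minimized w.r.t. the network producing the components.
z = torch.randn(128, 4, requires_grad=True)
dependence_loss(z).backward()
```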

References

SHOWING 1-10 OF 83 REFERENCES
Measures of Entropy From Data Using Infinitely Divisible Kernels
TLDR: A framework is presented for nonparametrically obtaining measures of entropy directly from data, using operators in reproducing kernel Hilbert spaces defined by infinitely divisible kernels; estimators of kernel-based conditional entropy and mutual information are also defined.
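For reference, the core definitions of this framework, restated here from general knowledge of the matrix-based estimator rather than quoted from the paper: given samples $\{x_i\}_{i=1}^{n}$ and an infinitely divisible kernel $K$,

$$(A)_{ij} = \frac{1}{n}\,\frac{K(x_i,x_j)}{\sqrt{K(x_i,x_i)\,K(x_j,x_j)}}, \qquad S_\alpha(A) = \frac{1}{1-\alpha}\log_2\big(\operatorname{tr}(A^\alpha)\big) = \frac{1}{1-\alpha}\log_2\Big(\sum_{i=1}^{n}\lambda_i(A)^\alpha\Big),$$

$$S_\alpha(A,B) = S_\alpha\!\left(\frac{A\circ B}{\operatorname{tr}(A\circ B)}\right), \qquad I_\alpha(A;B) = S_\alpha(A)+S_\alpha(B)-S_\alpha(A,B), \qquad S_\alpha(A\mid B) = S_\alpha(A,B)-S_\alpha(B),$$

where $\circ$ denotes the Hadamard (entrywise) product and $\lambda_i(A)$ the eigenvalues of $A$.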
Multivariate Extension of Matrix-based Renyi's α-order Entropy Functional
TLDR: This paper defines the matrix-based Rényi's α-order joint entropy among multiple variables and shows how this definition can ease the estimation of various information quantities that measure the interactions among multiple variables, such as interaction information and total correlation.
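In the same notation as above, the multivariate joint entropy and the total correlation built from it take the form (a restatement, not quoted from the paper):

$$S_\alpha(A_1,\dots,A_k) = S_\alpha\!\left(\frac{A_1\circ A_2\circ\cdots\circ A_k}{\operatorname{tr}(A_1\circ A_2\circ\cdots\circ A_k)}\right), \qquad T_\alpha(A_1,\dots,A_k) = \sum_{i=1}^{k} S_\alpha(A_i) - S_\alpha(A_1,\dots,A_k).$$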
Measuring dependency via intrinsic dimensionality
TLDR: This paper develops a dependency measure between variables based on an extreme-value-theoretic treatment of intrinsic dimensionality, and theoretically proves a connection between information theory and intrinsic dimensionality theory.
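Below is a hedged sketch of the extreme-value-theoretic ingredient involved: the maximum-likelihood (Hill-type) estimate of local intrinsic dimensionality from nearest-neighbor distances. The paper's actual dependency measure built on top of such estimates is not reproduced here.

```python
# Levina-Bickel / Hill-type MLE of local intrinsic dimensionality from the distances of a
# query point to its k nearest neighbors (illustrative ingredient, not the paper's measure).
import numpy as np

def lid_mle(distances):
    """LID ~ -1 / mean_i log(r_i / r_max), over the k-NN distances r_1 <= ... <= r_max."""
    r = np.sort(np.asarray(distances, dtype=float))
    return -1.0 / np.mean(np.log(r[:-1] / r[-1]))

# Example: synthetic distances from a query point to its 20 nearest neighbors.
print(lid_mle(np.random.default_rng(3).uniform(0.1, 1.0, size=20)))
```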
Measuring Statistical Dependence with Hilbert-Schmidt Norms
We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm…
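A minimal sketch of the (biased) empirical HSIC estimate with Gaussian kernels, for orientation; kernel widths are illustrative choices rather than values from the paper.

```python
# Biased empirical HSIC: (1/(n-1)^2) * tr(K H L H); near zero when x and y are independent.
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel(x, sigma=1.0):
    x = x.reshape(len(x), -1)
    return np.exp(-cdist(x, x, "sqeuclidean") / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    n = len(x)
    h = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    k, l = gaussian_kernel(x, sigma), gaussian_kernel(y, sigma)
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

# A dependent pair gives a visibly larger value than an independent pair.
rng = np.random.default_rng(4)
x = rng.normal(size=500)
print(hsic(x, x + 0.1 * rng.normal(size=500)), hsic(x, rng.normal(size=500)))
```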
Copula-based Kernel Dependency Measures
The paper presents a new copula-based method for measuring dependence between random variables. Our approach extends the Maximum Mean Discrepancy to the copula of the joint distribution. We prove…
Understanding Convolutional Neural Networks With Information Theory: An Initial Exploration
TLDR: It is shown that this functional estimator enables straightforward measurement of information flow in realistic convolutional neural networks (CNNs) without any approximation; the partial information decomposition (PID) framework is introduced, and three quantities are developed to analyze the synergy and redundancy in convolutional layer representations.
CMI: An Information-Theoretic Contrast Measure for Enhancing Subspace Cluster and Outlier Detection
TLDR: A novel contrast score is proposed that quantifies mutual correlations in subspaces by considering their cumulative distributions, without having to discretize the data.
Robust learning with the Hilbert-Schmidt independence criterion
TLDR: This work investigates the use of a non-parametric independence measure, the Hilbert-Schmidt Independence Criterion (HSIC), as a loss function for learning robust regression and classification models, and shows that the proposed loss is expected to give rise to models that generalize well on a class of target domains characterised by the complexity of their description within a reproducing kernel Hilbert space.
Unbiased Multivariate Correlation Analysis
TLDR: UMC is a cumulative-entropy-based, non-parametric multivariate correlation measure that can capture both linear and non-linear correlations for groups of three or more variables, and it employs a correction for chance, based on a statistical model of independence, to address the issue of bias.
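As a hedged illustration of the cumulative-entropy building block such measures rest on (the generic plug-in estimator, not necessarily UMC's exact form):

```python
# Plug-in estimate of cumulative entropy CE(X) = -∫ F(x) log F(x) dx from a sorted sample,
# using the empirical CDF on each gap between consecutive order statistics.
import numpy as np

def cumulative_entropy(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    f = np.arange(1, n) / n                   # empirical CDF value on [x_(i), x_(i+1))
    return -np.sum(np.diff(x) * f * np.log(f))

print(cumulative_entropy(np.random.default_rng(5).normal(size=1000)))
```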
Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle
TLDR: This theory paper investigates training deep neural networks (DNNs) for classification by minimizing the information bottleneck (IB) functional, and concludes that recent successes reported for training DNNs with the IB framework must be attributed to such solutions.