Chentsov’s theorem for exponential families

@article{Dowty2017ChentsovsTF,
  title={Chentsov's theorem for exponential families},
  author={James G. Dowty},
  journal={Information Geometry},
  year={2017},
  volume={1},
  pages={117--135}
}
Chentsov’s theorem characterizes the Fisher information metric on statistical models as the only Riemannian metric (up to rescaling) that is invariant under sufficient statistics. This implies that each statistical model is equipped with a natural geometry, so Chentsov’s theorem explains why many statistical properties can be described in geometric terms. However, despite being one of the foundational theorems of statistics, Chentsov’s theorem has only been proved previously in very restricted…
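For concreteness, the metric the theorem singles out is the Fisher information metric; in standard notation (ours, not quoted from the paper), for a parametric family $p_\theta$ with log-likelihood $\ell_\theta = \log p_\theta$ it is

$$ g_{ij}(\theta) = \mathbb{E}_\theta\left[ \partial_i \ell_\theta \, \partial_j \ell_\theta \right], $$

and the invariance statement is that any metric invariant under sufficient statistics equals $c\,g$ for some constant $c > 0$.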
Citations

New Geometry of Parametric Statistical Models
TLDR: This work constructs a statistical manifold admitting torsion (SMAT) and shows that $\mathbb{M}$ is dually flat if and only if the torsion of the conjugate connection vanishes.
Information geometry
Information geometry has emerged from the study of the invariant structure in families of probability distributions. This invariance uniquely determines a second-order symmetric tensor $g$ and…
Congruent families and invariant tensors
Classical results of Chentsov and Campbell state that, up to constant multiples, the only $2$-tensor field of a statistical model which is invariant under congruent Markov morphisms is the Fisher metric…
Clustering in Hilbert simplex geometry
TLDR: The Hilbert metric on the probability simplex satisfies the property of information monotonicity, and since a canonical Hilbert metric distance can be defined on any bounded convex subset of Euclidean space, this work considers Hilbert's projective geometry of the elliptope of correlation matrices and studies its clustering performance.
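As background, the Hilbert projective distance between points $p, q$ of the open probability simplex has the standard closed form (our notation, not quoted from the abstract)

$$ d_{\mathrm{H}}(p, q) = \log \frac{\max_i \, p_i / q_i}{\min_i \, p_i / q_i}, $$

which is the distance whose information monotonicity the summary above refers to.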
From Hessian to Weitzenböck: manifolds with torsion-carrying connections
We investigate affine connections that have zero curvature but not necessarily zero torsion. Slightly generalizing from what is known as Weitzenböck connections, such non-flat connections (we call…
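For reference, the two tensors contrasted here have the standard definitions (our notation): for an affine connection $\nabla$ and vector fields $X, Y, Z$,

$$ T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y], \qquad R(X, Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X, Y]} Z, $$

so the connections studied have $R = 0$ but possibly $T \neq 0$.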
A characterization of the alpha-connections on the statistical manifold of normal distributions
We show that the statistical manifold of normal distributions is homogeneous. In particular, it admits a $2$-dimensional solvable Lie group structure. In addition, we give a geometric…
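The $\alpha$-connections of the title are Amari's standard one-parameter family; in the usual coordinate form (our notation, with $\ell_\theta = \log p_\theta$), their Christoffel symbols are

$$ \Gamma^{(\alpha)}_{ij,k}(\theta) = \mathbb{E}_\theta\left[ \left( \partial_i \partial_j \ell_\theta + \tfrac{1 - \alpha}{2} \, \partial_i \ell_\theta \, \partial_j \ell_\theta \right) \partial_k \ell_\theta \right]. $$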
Relative Fisher Information and Natural Gradient for Learning Large Modular Models
TLDR: This paper extracts a local component from a large neural system and defines its relative Fisher information metric, which accurately describes this small component and is invariant to the other parts of the system.
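The natural gradient of the title is, in its generic form (a textbook sketch, not the paper's relative variant), the ordinary gradient preconditioned by the inverse Fisher metric $g$:

$$ \theta_{t+1} = \theta_t - \eta \, g(\theta_t)^{-1} \nabla_\theta L(\theta_t), $$

with the relative Fisher information metric of the summary above playing the role of $g$ for the extracted component.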
Generalization of the maximum entropy principle for curved statistical manifolds
The maximum entropy principle (MEP) is one of the most prominent methods to investigate and model complex systems. Despite its popularity, the standard form of the MEP can only generate…
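In its standard form (a textbook statement, not quoted from this abstract), maximizing entropy subject to expectation constraints $\mathbb{E}[f_i] = c_i$ yields an exponential family,

$$ p(x) \propto \exp\Big( \sum_i \lambda_i f_i(x) \Big), $$

which is the restriction to flat (exponential) models that a generalization to curved statistical manifolds aims to relax.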
Invariant Metric Under Deformed Markov Embeddings with Overlapped Supports
By Cencov's theorem, there exists a unique family of invariant $(0,2)$-tensors on the space of positive probability measures on a set of $n$ points, indexed by $n\in \mathbb{N}$, under Markov embeddings…
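On the simplex of positive probability measures on $n$ points, this invariant $(0,2)$-tensor is, up to scale, the Fisher metric; in coordinates $p = (p_1, \dots, p_n)$ with tangent vectors $u, v$ satisfying $\sum_i u_i = \sum_i v_i = 0$, the standard formula (our notation) is

$$ g_p(u, v) = \sum_{i=1}^{n} \frac{u_i v_i}{p_i}. $$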

References

Showing 1–10 of 19 references.
Information geometry and sufficient statistics
Information geometry provides a geometric approach to families of statistical models. The key geometric structures are the Fisher quadratic form and the Amari–Chentsov tensor. In statistics, the…
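Both structures named here have standard coordinate expressions (our notation, with $\ell_\theta = \log p_\theta$): the Fisher quadratic form $g_{ij}(\theta) = \mathbb{E}_\theta[\partial_i \ell_\theta \, \partial_j \ell_\theta]$ and the Amari–Chentsov $3$-tensor

$$ T_{ijk}(\theta) = \mathbb{E}_\theta\left[ \partial_i \ell_\theta \, \partial_j \ell_\theta \, \partial_k \ell_\theta \right]. $$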
An extended Čencov characterization of the information metric
Cencov has shown that Riemannian metrics which are derived from the Fisher information matrix are the only metrics which preserve inner products under certain probabilistically important mappings. In…
An Infinite-Dimensional Geometric Structure on the Space of all the Probability Measures Equivalent to a Given One
Let $M_\mu$ be the set of all probability densities equivalent to a given reference probability measure $\mu$. This set is thought of as the maximal regular (i.e., with strictly positive densities)…
Geometrical Foundations of Asymptotic Inference
Overview and Preliminaries. ONE-PARAMETER CURVED EXPONENTIAL FAMILIES. First-Order Asymptotics. Second-Order Asymptotics. MULTIPARAMETER CURVED EXPONENTIAL FAMILIES. Extensions of Results from the…
The uniqueness of the Fisher metric as information metric
We define a mixed topology on the fiber space $\cup_\mu \oplus^n L^n(\mu)$ over the space $\mathcal{M}(\Omega)$ of all finite non-negative measures $\mu$ on a separable…
Limit Distributions for Sums of Independent Random Vectors: Heavy Tails in Theory and Practice
Preface. Acknowledgments. INTRODUCTION. Random Vectors. Linear Operators. Infinitely Divisible Distributions and Triangular Arrays. MULTIVARIATE REGULAR VARIATION. Regular Variations for Linear…
Uniqueness of the Fisher-Rao metric on the space of smooth densities
On a closed manifold of dimension greater than one, every smooth weak Riemannian metric on the space of smooth positive probability densities that is invariant under the action of the diffeomorphism group…
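The metric this uniqueness result singles out is, up to a constant multiple, the Fisher–Rao metric, which on the space of smooth positive probability densities can be written (standard form, our notation) as

$$ G_\mu(\alpha, \beta) = \int_M \frac{\alpha}{\mu} \, \frac{\beta}{\mu} \; \mu $$

for tangent vectors $\alpha, \beta$, i.e. smooth densities integrating to zero.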
Introduction to the theory of distributions
1. Test functions and distributions 2. Differentiation and multiplication 3. Distributions and compact support 4. Tensor products 5. Convolution 6. Distribution kernels 7. Co-ordinate transforms and…
Fisher information and stochastic complexity
  • J. Rissanen
  • IEEE Trans. Inf. Theory, 1996
TLDR: A sharper code length is obtained as the stochastic complexity and the associated universal process are derived for a class of parametric processes by taking into account the Fisher information and removing an inherent redundancy in earlier two-part codes.
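The sharper code length referred to is Rissanen's refined stochastic complexity; for a $k$-parameter model on $n$ observations it has the well-known asymptotic form (standard statement, our notation)

$$ -\log p\big(x^n \mid \hat\theta(x^n)\big) + \frac{k}{2} \log \frac{n}{2\pi} + \log \int \sqrt{\det I(\theta)} \; d\theta + o(1), $$

where $I(\theta)$ is the Fisher information matrix, tying the code length directly to the geometry discussed above.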
The Minimum Description Length Principle in Coding and Modeling
TLDR: The normalized maximized likelihood, mixture, and predictive codings are each shown to achieve the stochastic complexity to within asymptotically vanishing terms.
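The normalized maximized likelihood code referred to assigns to data $x^n$ the probability (standard definition, our notation)

$$ p_{\mathrm{NML}}(x^n) = \frac{p\big(x^n \mid \hat\theta(x^n)\big)}{\displaystyle \int p\big(y^n \mid \hat\theta(y^n)\big) \, dy^n}, $$

whose negative logarithm is the stochastic complexity of the previous reference.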