Learning Log-Determinant Divergences for Positive Definite Matrices

@article{Cherian2021LearningLD,
  title={Learning Log-Determinant Divergences for Positive Definite Matrices},
  author={Anoop Cherian and Panagiotis Stanitsas and Jue Wang and Mehrtash Harandi and Vassilios Morellas and Nikolaos Papanikolopoulos},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  volume={PP}
}
Representations in the form of Symmetric Positive Definite (SPD) matrices have been popularized in a variety of visual learning applications due to their demonstrated ability to capture rich second-order statistics of visual data. Several similarity measures exist for comparing SPD matrices, each with documented benefits. However, selecting an appropriate measure for a given problem remains a challenge and, in most cases, is the result of a trial-and-error process. In this paper, we propose to… 
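
The divergence family underpinning this line of work appears to be the Alpha-Beta (αβ) log-determinant divergence of Cichocki et al., which is revisited in the references below. A minimal statement of that definition, assuming their parameterization (the exact variant used in this paper may differ), is:

\[
  D^{(\alpha,\beta)}_{\mathrm{AB}}(P \,\|\, Q)
  = \frac{1}{\alpha\beta}
    \log\det\!\left(
      \frac{\alpha\,(PQ^{-1})^{\beta} + \beta\,(PQ^{-1})^{-\alpha}}{\alpha+\beta}
    \right),
  \qquad \alpha,\beta \neq 0,\ \alpha+\beta \neq 0.
\]

Particular choices (or limits) of $(\alpha,\beta)$ recover several standard SPD similarity measures, which is why learning these two scalars effectively amounts to selecting a measure for the task at hand.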

References

Showing 1-10 of 71 references
Learning Discriminative αβ-Divergences for Positive Definite Matrices
TLDR
A discriminative metric learning framework, Information Divergence and Dictionary Learning (IDDL), is proposed that not only learns application-specific measures on SPD matrices automatically, but also embeds them as vectors using a learned dictionary.
Clustering Positive Definite Matrices by Learning Information Divergences
TLDR
This paper proposes a novel formulation that jointly clusters the input SPD matrices in a K-Means setup and learns a suitable non-linear measure for comparing SPD matrices, capitalizing on the recently introduced αβ-log-det divergence, which generalizes a family of popular similarity measures on SPD matrices.
Log-Euclidean Kernels for Sparse Representation and Dictionary Learning
TLDR
This paper proposes a kernel-based method for sparse representation (SR) and dictionary learning (DL) of SPD matrices by developing a broad family of kernels that satisfy Mercer's condition, and accounts for the geometric structure in the DL process by updating atom matrices in the Riemannian space.
Riemannian Dictionary Learning and Sparse Coding for Positive Definite Matrices
A. Cherian and S. Sra. IEEE Transactions on Neural Networks and Learning Systems, 2017.
TLDR
This paper formulates a novel Riemannian optimization objective for dictionary learning and sparse coding (DLSC), in which the representation loss is characterized via the affine-invariant Riemannian metric, and presents a computationally simple algorithm for optimizing the model.
Jensen-Bregman LogDet Divergence with Application to Efficient Similarity Search for Covariance Matrices
TLDR
A novel dissimilarity measure for covariances, the Jensen-Bregman LogDet Divergence (JBLD), is proposed, which enjoys several desirable theoretical properties while being computationally less demanding than standard measures.
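
For concreteness, the JBLD between two SPD matrices $X$ and $Y$ has the closed form below (the standard definition from the JBLD literature, restated here rather than quoted from this summary):

\[
  J_{\mathrm{ld}}(X, Y) = \log\det\!\left(\frac{X+Y}{2}\right) - \frac{1}{2}\log\det(XY),
\]

which requires only determinants (e.g., via Cholesky factorizations) rather than matrix logarithms or eigendecompositions, hence the lower computational cost noted above.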
Learning the Information Divergence
TLDR
Experiments on both synthetic and real-world data demonstrate that the presented automatic selection framework can quite accurately select the information divergence across different learning problems and various divergence families.
A Riemannian Network for SPD Matrix Learning
TLDR
A Riemannian network architecture is built to open up a new direction of non-linear SPD matrix learning in a deep model, and it is shown that the proposed SPD matrix network can be trained simply and outperforms existing SPD matrix learning methods and state-of-the-art approaches on three typical visual classification tasks.
Log-Euclidean Metric Learning on Symmetric Positive Definite Manifold with Application to Image Set Classification
TLDR
This paper proposes a novel metric learning approach that works directly on logarithms of SPD matrices by learning a tangent map that transforms the matrix Log-Euclidean metric from the original tangent space to a new tangent space with greater discriminability.
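
As a reference point for the approach summarized above, here is a minimal sketch of the underlying (unlearned) Log-Euclidean distance; the cited method additionally learns a tangent map on top of this base metric, which this sketch does not attempt.

```python
# Minimal sketch: the base Log-Euclidean distance between SPD matrices,
# d(X, Y) = ||log(X) - log(Y)||_F. The cited paper learns a tangent map
# on top of this; that learning step is NOT implemented here.
import numpy as np

def spd_log(M: np.ndarray) -> np.ndarray:
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """Frobenius norm of the difference of matrix logarithms."""
    return float(np.linalg.norm(spd_log(X) - spd_log(Y), ord="fro"))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
    X = A @ A.T + 4 * np.eye(4)  # SPD by construction
    Y = B @ B.T + 4 * np.eye(4)
    print(f"d_LE(X, Y) = {log_euclidean_distance(X, Y):.4f}")
```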
Log-Determinant Divergences Revisited: Alpha-Beta and Gamma Log-Det Divergences
TLDR
This paper establishes links and correspondences among many log-det divergences, displays them on the alpha-beta plane for various sets of parameters, and also shows their links to divergences of multivariate and multiway Gaussian distributions.
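
A small numerical sketch of this divergence family (under the same assumed Cichocki et al. parameterization as the display earlier, with the α, β → 0 limit cases deliberately left out) could look as follows:

```python
# Hedged sketch of the Alpha-Beta log-det divergence (Cichocki et al. form,
# assumed; the limit cases alpha -> 0 or beta -> 0 are not handled).
# Uses the generalized eigenvalues of (P, Q), i.e. the eigenvalues of Q^{-1}P:
#   D(P||Q) = (1/(alpha*beta)) * sum_i log((alpha*l_i**beta + beta*l_i**(-alpha)) / (alpha+beta))
import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(P: np.ndarray, Q: np.ndarray,
                         alpha: float = 0.5, beta: float = 0.5) -> float:
    if alpha == 0 or beta == 0 or alpha + beta == 0:
        raise ValueError("limit cases are not handled in this sketch")
    lam = eigh(P, Q, eigvals_only=True)  # all > 0 when P, Q are SPD
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return float(np.sum(np.log(terms)) / (alpha * beta))
```

In a learning setting such as IDDL, α and β would be treated as trainable parameters rather than fixed constants, which is what "learning the divergence" amounts to in this line of work.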
Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods
TLDR
This paper proposes to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection, shows that learning can be expressed as an optimization problem on a Grassmann manifold, and discusses fast solutions for special cases.