Corpus ID: 146808335

Learning Clique Forests

@article{Massara2019LearningCF,
  title={Learning Clique Forests},
  author={Guido Previde Massara and Tomaso Aste},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.02266}
}
We propose a topological learning algorithm for the estimation of the conditional dependency structure of large sets of random variables from sparse and noisy data. The algorithm, named Maximally Filtered Clique Forest (MFCF), produces a clique forest and an associated Markov Random Field (MRF) by generalising Prim's minimum spanning tree algorithm. To the best of our knowledge, the MFCF presents three elements of novelty with respect to existing structure learning approaches. The first is the… 
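The abstract describes the MFCF as a generalisation of Prim's greedy scheme. As a point of reference, here is a minimal sketch of Prim's minimum spanning tree algorithm itself (the MFCF's clique-expansion moves are defined in the paper and are not reproduced here):

```python
import heapq

def prim_mst(weights):
    """Prim's algorithm on a dense symmetric weight matrix.

    Grows a tree from vertex 0, always adding the cheapest edge
    that connects a new vertex to the tree.  Returns the MST edges.
    """
    n = len(weights)
    visited = {0}
    # heap of (weight, tree_vertex, outside_vertex)
    heap = [(weights[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while heap and len(visited) < n:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        edges.append((u, v))
        for x in range(n):
            if x not in visited:
                heapq.heappush(heap, (weights[v][x], v, x))
    return edges
```

The MFCF generalises this step: instead of attaching a single vertex to a single tree vertex, it attaches new vertices to clique separators, so the growing object is a clique forest rather than a tree.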

Topological regularization with information filtering networks

  • T. Aste
  • Computer Science
    Information Sciences
  • 2022

Investigating the performance of exploratory graph analysis and traditional techniques to identify the number of latent factors: A simulation and tutorial.

Exploratory graph analysis (EGA) is a new technique that was recently proposed within the framework of network psychometrics to estimate the number of factors underlying multivariate data. Unlike…

Quantifying impact and response in markets using information filtering networks

We present a novel methodology to quantify the ‘impact’ of and ‘response’ to market shocks. We apply shocks to a group of stocks in a part of the market, and we quantify the effects in terms of…

Sector Neutral Portfolios: Long Memory Motifs Persistence in Market Structure Dynamics

We study soft persistence (existence in subsequent temporal layers of motifs from the initial layer) of motif structures in Triangulated Maximally Filtered Graphs (TMFG) generated from time-varying…

An Information Filtering approach to stress testing: an application to FTSE markets

Dynamic Portfolio Optimization with Inverse Covariance Clustering

Market conditions change continuously. However, in portfolio’s investment strategies, it is hard to account for this intrinsic non-stationarity. In this paper, we propose to address this issue by…

Regime-based Implied Stochastic Volatility Model for Crypto Option Pricing

It is demonstrated that MR-ISVM helps overcome the burden of complex adaptation to jumps in higher-order characteristics of option pricing models, allowing the market to be priced according to the expectations of its participants in an adaptive fashion.

Portfolio optimization with sparse multivariate modeling

It is found that models with larger out-of-sample likelihoods lead to better-performing portfolios when up to two to three years of daily observations are included in the training set, and that sparse models outperform full models: they deliver higher out-of-sample likelihood, lower realized portfolio volatility and improved portfolio stability.

References

SHOWING 1-10 OF 103 REFERENCES

Hierarchical Information Clustering by Means of Topologically Embedded Graphs

The application to gene expression patterns of lymphoma samples uncovers biologically significant groups of genes which play key roles in the diagnosis, prognosis and treatment of some of the most relevant human lymphoid malignancies.

Efficient Principled Learning of Thin Junction Trees

We present the first truly polynomial algorithm for PAC-learning the structure of bounded-treewidth junction trees - an attractive subclass of probabilistic graphical models that permits both the…

Bayesian structure learning using dynamic programming and MCMC

This paper shows how to overcome the first three of these problems by using the DP algorithm as a proposal distribution for MCMC in DAG space, and shows that this hybrid technique converges to the posterior faster than other methods, resulting in more accurate structure learning and higher predictive likelihoods on test data.

Decomposable graphical Gaussian model determination

A hyper inverse Wishart prior distribution is placed on the concentration matrix for each given graph, involving only the elements for which the corresponding entry of the inverse is nonzero; this allows all computations to be performed locally, at the clique level, a clear advantage for the analysis of large and complex datasets.
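The clique-level locality described here rests on the standard factorization of a decomposable model over its cliques $\mathcal{C}$ and separators $\mathcal{S}$ (a textbook identity, not specific to this paper):

$$
p(x) = \frac{\prod_{C \in \mathcal{C}} p(x_C)}{\prod_{S \in \mathcal{S}} p(x_S)}
$$

Because the joint density splits into clique and separator marginals, prior updates and likelihood evaluations can be carried out one clique at a time.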

Network Filtering for Big Data: Triangulated Maximally Filtered Graph

We propose a network-filtering method, the Triangulated Maximally Filtered Graph (TMFG), that provides an approximate solution to the Weighted Maximal Planar Graph problem. The underlying idea of…
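The TMFG construction can be sketched as a greedy procedure: seed with the heaviest 4-clique, then repeatedly insert the best remaining vertex into its best triangular face. This is an illustrative sketch under that assumption, not the authors' implementation:

```python
import itertools

def tmfg(W):
    """Greedy sketch of a Triangulated Maximally Filtered Graph.

    W: symmetric similarity matrix (list of lists), n >= 4.
    Seed with the heaviest 4-clique, then repeatedly insert the
    vertex with the largest gain into its best triangular face,
    replacing that face with three new ones.  The result is a
    maximal planar (chordal) graph with 3n - 6 edges.
    """
    n = len(W)
    seed = max(itertools.combinations(range(n), 4),
               key=lambda q: sum(W[a][b]
                                 for a, b in itertools.combinations(q, 2)))
    edges = set(itertools.combinations(seed, 2))
    faces = [frozenset(t) for t in itertools.combinations(seed, 3)]
    remaining = set(range(n)) - set(seed)
    while remaining:
        # pick the (vertex, face) pair adding the most weight
        v, f = max(((v, f) for v in remaining for f in faces),
                   key=lambda vf: sum(W[vf[0]][u] for u in vf[1]))
        remaining.discard(v)
        faces.remove(f)
        faces += [frozenset(pair) | {v}
                  for pair in itertools.combinations(f, 2)]
        edges |= {tuple(sorted((v, u))) for u in f}
    return edges
```

Each insertion adds three edges, so the final graph has 6 + 3(n - 4) = 3n - 6 edges, the maximum for a planar graph.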

High-dimensional graphs and variable selection with the Lasso

It is shown that neighborhood selection with the Lasso, which is equivalent to variable selection for Gaussian linear models, is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs.
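Neighborhood selection can be sketched as follows: Lasso-regress each variable on all the others and connect it to the variables that receive nonzero coefficients. The coordinate-descent solver, the `alpha` value, and the OR combination rule below are illustrative choices, not the paper's:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Lasso via cyclic coordinate descent.

    Minimises (1/2n)||y - X b||^2 + alpha * ||b||_1.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # soft-thresholding update
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return beta

def neighborhood_selection(X, alpha=0.1):
    """Meinshausen-Buhlmann neighborhood selection with the OR rule.

    Each variable is Lasso-regressed on all the others; an edge
    (j, k) is added whenever either regression assigns the other
    variable a nonzero coefficient.
    """
    n, p = X.shape
    edges = set()
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta = lasso_cd(X[:, others], X[:, j], alpha)
        for b, k in zip(beta, others):
            if abs(b) > 1e-8:
                edges.add(tuple(sorted((j, k))))
    return edges
```

On standardized data the edge set estimates the nonzero pattern of the inverse covariance, i.e. the conditional dependency graph.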

Learning Markov networks: maximum bounded tree-width graphs

The problem of learning a maximum-likelihood Markov network from observed data is reduced to the problem of identifying a maximum-weight low-treewidth graph under a given input weight function, and the first constant-factor approximation algorithm for this problem is given.

Introduction to Chordal Graphs and Clique Trees, in Graph Theory and Sparse Matrix Computation

Kjærulff, U., Triangulation of graphs: algorithms giving small total state space.

The Complexity of Distinguishing Markov Random Fields

It is proved that the problem of reconstructing bounded-degree models with hidden nodes is hard, and that it is impossible to decide in randomized polynomial time whether two models generate distributions whose statistical distance is at most 1/3 or at least 2/3.

Statistical Learning with Sparsity: The Lasso and Generalizations

Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
...