• Corpus ID: 8930759

Sparse Network Modeling

  • M. Chung
  • Published 1 August 2020
  • Computer Science
  • arXiv: Methodology
There have been many attempts to identify high-dimensional network features via multivariate approaches. Specifically, when the number of voxels or nodes, denoted as p, is substantially larger than the number of images, denoted as n, the result is an under-determined model with infinitely many possible solutions. The small-n large-p problem is often remedied by regularizing the under-determined system with additional sparse penalties. Popular sparse network models include sparse correlations…
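As a concrete illustration of the small-n large-p setting (an illustrative sketch, not code from the paper), the snippet below fits an l1-penalized regression by coordinate descent on a toy problem with n = 20 observations and p = 50 variables; the sparse penalty drives most coefficients to exactly zero, picking out one solution from the infinitely many that fit the data:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: min_b (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]           # partial residual excluding b_j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold
    return b

# Under-determined toy problem: n = 20 "images", p = 50 "nodes", 3 true features.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
beta = np.zeros(50)
beta[[0, 1, 2]] = [3.0, -2.0, 1.5]
y = X @ beta + 0.01 * rng.standard_normal(20)
b_hat = lasso_cd(X, y, lam=0.1)
print(np.count_nonzero(np.abs(b_hat) > 1e-6))        # far fewer than 50 survive
```

Without the l1 penalty the 20-equation, 50-unknown system has infinitely many exact solutions; with it, the estimate is sparse and the strong true coefficients are retained.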

Sparse Brain Network Recovery Under Compressed Sensing
This paper considers the sparse linear regression model with an l1-norm penalty, also known as the least absolute shrinkage and selection operator (LASSO) and a well-known decoding algorithm in compressed sensing (CS), for estimating sparse brain connectivity.
Convex optimization techniques for fitting sparse Gaussian graphical models
This work considers the problem of fitting a large-scale covariance matrix to multivariate Gaussian data in such a way that the inverse is sparse, thus providing model selection, and presents two new algorithms aimed at solving problems with a thousand nodes.
Stable Feature Selection from Brain sMRI
This paper explores a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer's disease and proposes an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality.
Partial Correlation Estimation by Joint Sparse Regression Models
It is shown that the proposed method, SPACE, performs well in both nonzero partial correlation selection and the identification of hub variables, and also outperforms two existing methods.
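Partial correlations are related to the precision (inverse covariance) matrix θ through the identity ρ_ij = −θ_ij / √(θ_ii θ_jj). The minimal sketch below (illustrative, not the joint sparse regression method summarized above) computes partial correlations by direct matrix inversion, which, unlike the sparse regression approach, requires n > p so that the sample covariance is invertible:

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the inverse sample covariance (precision) matrix."""
    theta = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(theta))
    rho = -theta / np.outer(d, d)          # rho_ij = -theta_ij / sqrt(theta_ii * theta_jj)
    np.fill_diagonal(rho, 1.0)
    return rho

# Toy data: variable 1 depends directly on variable 0; variables 2 and 3 are independent.
rng = np.random.default_rng(1)
n, p = 500, 4
z = rng.standard_normal((n, p))
z[:, 1] += 0.8 * z[:, 0]
rho = partial_correlations(z)
print(rho[0, 1] > abs(rho[2, 3]))          # the direct link stands out
```

Sparse regression methods such as the one above estimate the same quantities without inverting the covariance, which is what makes the p > n regime tractable.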
Exact Topological Inference for Paired Brain Networks via Persistent Homology
A novel framework for characterizing paired brain networks using techniques from hyper-networks, sparse learning and persistent homology is presented, and it is used to determine the statistical significance of the heritability index of the large-scale reward network, where every voxel is a network node.
Sparse inverse covariance estimation with the graphical lasso.
Using a coordinate descent procedure for the lasso, a simple algorithm is developed that solves a 1000-node problem in at most a minute and is 30 to 4,000 times faster than competing methods.
Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso
For a range of values of λ, this proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem.
Persistent Homology in Sparse Regression and Its Application to Brain Morphometry
Analysis of white matter alterations in children who have experienced severe early life stress and maltreatment reveal that stress-exposed children exhibit more diffuse anatomical organization across the whole white matter region.
A Shrinkage Approach to Large-Scale Covariance Matrix Estimation and Implications for Functional Genomics
This work proposes a novel shrinkage covariance estimator that exploits the Ledoit-Wolf (2003) lemma for analytic calculation of the optimal shrinkage intensity and applies it to the problem of inferring large-scale gene association networks.
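Shrinkage estimators of this kind have the convex-combination form S* = λT + (1 − λ)S for a structured target T and intensity λ ∈ [0, 1]. The sketch below uses a diagonal target and a fixed illustrative intensity (an assumption for brevity; the analytic optimal intensity from the Ledoit-Wolf lemma is omitted) to show the key payoff: a full-rank, positive-definite estimate even when n ≪ p:

```python
import numpy as np

def shrink_cov(X, intensity):
    """Shrinkage covariance S* = intensity * T + (1 - intensity) * S,
    where T = diag(S) is the diagonal target and intensity is in [0, 1]."""
    S = np.cov(X, rowvar=False)
    T = np.diag(np.diag(S))
    return intensity * T + (1.0 - intensity) * S

# Small-n large-p data: n = 10 samples, p = 50 variables.
rng = np.random.default_rng(2)
X = rng.standard_normal((10, 50))
S_star = shrink_cov(X, intensity=0.5)
# The raw sample covariance is rank-deficient; the shrunk estimate is full rank.
print(np.linalg.matrix_rank(np.cov(X, rowvar=False)), np.linalg.matrix_rank(S_star))
```

Because the shrunk estimate is positive definite, it can be inverted, which is exactly what large-scale network inference (e.g. partial correlations for gene association networks) requires.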
Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data
This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.