A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of …
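As an illustration of the kind of sparse-recovery algorithm such a tutorial covers (my own sketch, not taken from the paper), the following runs orthogonal matching pursuit (OMP) on a random Gaussian sensing matrix; all dimensions and the sparsity level are illustrative choices.

```python
# Illustrative sketch (not from the survey): orthogonal matching pursuit (OMP),
# one of the greedy sparse-recovery algorithms typically covered in such tutorials.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                                   # illustrative sizes

A = rng.standard_normal((m, n)) / np.sqrt(m)           # sensing matrix, ~unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                         # noiseless measurements

def omp(A, y, k):
    """Greedily pick the column most correlated with the residual, k times."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```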
We introduce a definition of the notion of compressibility for infinite deterministic and i.i.d. random sequences which is based on the asymptotic behavior of truncated subsequences. For this purpose, we use asymptotic results regarding the distribution of order statistics for heavy-tail distributions and their link with α-stable laws for 1 < α < 2. In …
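A small numerical illustration of the underlying idea (mine, not the paper's experiments): for heavy-tailed i.i.d. samples, a short truncated subsequence of the largest order statistics captures most of the energy, whereas for Gaussian samples it does not. A Student-t distribution with tail index 1.5 stands in here for a generic heavy-tail law in the range 1 < α < 2.

```python
# Illustration (not the paper's experiments): energy captured by the largest
# order statistics of i.i.d. heavy-tail vs. light-tail sequences.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
frac = 0.01                                  # keep the top 1% of entries

def top_energy_fraction(x, frac):
    k = int(frac * x.size)
    top = np.sort(np.abs(x))[-k:]            # k largest order statistics
    return np.sum(top**2) / np.sum(x**2)

gaussian = rng.standard_normal(n)            # light tails: not compressible
heavy = rng.standard_t(df=1.5, size=n)       # heavy tails (infinite variance)

print("Gaussian, top 1% energy      :", top_energy_fraction(gaussian, frac))
print("Student-t(1.5), top 1% energy:", top_energy_fraction(heavy, frac))
```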
In this paper we establish the connection between Orthogonal Optical Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar m × n RIP-fulfilling ±1 matrices of order k such that m ≤ O(k (log₂ n)^(log₂ k / ln log₂ k)). The columns of these matrices are binary BCH code vectors in which the zeros are replaced by −1. Since …
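A toy version of the 0 → −1 mapping described above (not the paper's parameters or codeword selection): it uses the (7, 4) Hamming code, the simplest binary BCH code, whereas the actual construction relies on much longer BCH codes.

```python
# Toy sketch of the bipolar construction (not the paper's parameters): take the
# codewords of a small binary BCH code, map 0 -> -1, and use the normalized
# codewords as the columns of the sensing matrix.
import numpy as np
from itertools import product

# Systematic generator matrix of the (7, 4) Hamming code, the simplest binary BCH code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Enumerate all 2^4 = 16 codewords.
messages = np.array(list(product([0, 1], repeat=4)))
codewords = messages @ G % 2                       # shape (16, 7), entries in {0, 1}

# Replace zeros by -1 and normalize, giving a 7 x 16 bipolar matrix.
A = (2 * codewords - 1).T / np.sqrt(7)

print("shape:", A.shape)
print("columns have unit norm:", np.allclose(np.linalg.norm(A, axis=0), 1.0))
```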
We investigate a stochastic signal-processing framework for signals with sparse derivatives, where the samples of a Lévy process are corrupted by noise. The proposed signal model covers the well-known Brownian motion and piecewise-constant Poisson process; moreover, the Lévy family also contains other interesting members exhibiting heavy-tail statistics …
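A small simulation sketch of the measurement model (mine, not the paper's code): noisy samples from two of the Lévy-family members named in the abstract, Brownian motion and a piecewise-constant compound Poisson process. Step size, jump rate, and noise level are arbitrary.

```python
# Sketch (not from the paper): noisy samples of two Lévy processes -- Brownian
# motion and a compound Poisson process whose paths are piecewise constant.
import numpy as np

rng = np.random.default_rng(2)
n, dt, sigma_noise = 500, 0.01, 0.1          # illustrative parameters

# Brownian motion: i.i.d. Gaussian increments.
brownian = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

# Compound Poisson: increments are zero except at rare jump times.
jump_rate = 2.0                               # expected jumps per unit time
jumps = rng.poisson(jump_rate * dt, n) * rng.standard_normal(n)
piecewise_constant = np.cumsum(jumps)

# Noisy samples, as in the measurement model of the abstract.
y_brownian = brownian + sigma_noise * rng.standard_normal(n)
y_pwc = piecewise_constant + sigma_noise * rng.standard_normal(n)

print("number of jumps in the Poisson path:", int(np.count_nonzero(jumps)))
```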
We consider the problem of community detection in a network, that is, partitioning the nodes into groups that, in some sense, reveal the structure of the network. Many algorithms have been proposed for fitting network models with communities, but most of them do not scale well to large networks, and often fail on sparse networks. We present a fast …
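Because the abstract is cut off before the method is named, the sketch below is only a standard baseline rather than the paper's algorithm: spectral clustering of a small two-block stochastic block model, the usual point of comparison in community-detection experiments. All model parameters are illustrative.

```python
# Baseline sketch (NOT the paper's algorithm): spectral clustering of a
# two-block stochastic block model graph.
import numpy as np

rng = np.random.default_rng(3)
n_per_block, p_in, p_out = 100, 0.10, 0.02
n = 2 * n_per_block
labels_true = np.repeat([0, 1], n_per_block)

# Sample a symmetric adjacency matrix from the block model.
P = np.where(labels_true[:, None] == labels_true[None, :], p_in, p_out)
upper = np.triu(rng.random((n, n)) < P, k=1)
A = (upper | upper.T).astype(float)

# Spectral step: sign of the eigenvector of the second-largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(A)
labels_hat = (eigvecs[:, -2] > 0).astype(int)

accuracy = max(np.mean(labels_hat == labels_true),
               np.mean(1 - labels_hat == labels_true))   # labels are defined up to a swap
print("clustering accuracy:", accuracy)
```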
This paper is devoted to the characterization of an extended family of CARMA (continuous-time autoregressive moving average) processes that are solutions of stochastic differential equations driven by white Lévy innovations. These are completely specified by: (1) a set of poles and zeros that fixes their correlation structure, and (2) a canonical …
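The general CARMA family is beyond a short sketch, but its simplest member, a first-order CAR process (the Ornstein-Uhlenbeck process) driven by Gaussian white noise, illustrates how a single pole fixes the correlation structure. The pole location, step size, and path length below are arbitrary choices of mine.

```python
# Sketch (illustrative only): Euler-Maruyama simulation of a CAR(1) process,
# dX = -a*X dt + dW, the simplest CARMA process. The single real pole at -a
# fixes the correlation structure; the innovation here is Gaussian white noise,
# the simplest member of the Lévy family.
import numpy as np

rng = np.random.default_rng(4)
a, dt, n = 1.5, 0.01, 2000        # pole at -1.5, illustrative step and length

x = np.zeros(n)
for t in range(1, n):
    dW = rng.normal(0.0, np.sqrt(dt))           # white Gaussian innovation
    x[t] = x[t - 1] - a * x[t - 1] * dt + dW    # Euler-Maruyama step

# Empirical check: the lag-1 autocorrelation should be close to exp(-a*dt).
emp = np.corrcoef(x[:-1], x[1:])[0, 1]
print("empirical lag-1 autocorr:", round(emp, 4), "theory:", round(np.exp(-a * dt), 4))
```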
We introduce a new method for adaptive one-bit quantization of linear measurements and propose an algorithm for the recovery of signals based on generalized approximate message passing (GAMP). Our method exploits prior statistical information about the signal to estimate the minimum mean-squared error (MMSE) solution from one-bit measurements. Our approach …
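GAMP itself is too long to reproduce here, so the sketch below (mine, not the paper's) only sets up the one-bit measurement model and applies the simplest correlation-based estimator as a stand-in for the adaptive-threshold GAMP recovery; all sizes are illustrative.

```python
# Sketch of the one-bit measurement model with a simple correlation-based
# estimator as a stand-in -- NOT the adaptive one-bit GAMP recovery of the paper.
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 200, 600, 8

A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)                 # one-bit data only carry direction, not scale

y = np.sign(A @ x)                     # one-bit (sign) measurements, zero thresholds

# Simplest estimator: back-project the signs, keep the k largest entries,
# renormalize. GAMP would instead iterate between the prior and the sign channel.
z = A.T @ y
x_hat = np.where(np.abs(z) >= np.sort(np.abs(z))[-k], z, 0.0)
x_hat /= np.linalg.norm(x_hat)

print("cosine similarity with the true signal:", float(x_hat @ x))
```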
In this paper, the problem of matrix rank minimization under affine constraints is addressed. State-of-the-art algorithms can recover matrices only when their rank is much lower than the rank for which the solution of this optimization problem is still unique. We propose an algorithm based on a smooth approximation of the rank function, which practically …
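A compact sketch of the "smooth approximation of the rank" idea, specialized to matrix completion so that projection onto the affine constraints is simply resetting the observed entries. The Gaussian surrogate, the δ schedule, and the step sizes are my guesses at a typical setup, not the paper's exact algorithm, and may need tuning.

```python
# Sketch of rank minimization via a smooth (Gaussian) surrogate of the rank,
# specialized to matrix completion. Schedules and step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n1, n2, r = 40, 40, 3
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # low-rank target
mask = rng.random((n1, n2)) < 0.5                                  # observed entries

def project(X):
    """Projection onto the affine constraints: keep the observed entries of M."""
    Y = X.copy()
    Y[mask] = M[mask]
    return Y

X = project(np.zeros((n1, n2)))
delta = np.linalg.norm(X, 2)                 # start with a large smoothing width
for _ in range(20):                          # gradually sharpen the surrogate
    for _ in range(30):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Gradient ascent on sum_i exp(-s_i^2 / (2 delta^2)), which tends to
        # (n - rank) as delta -> 0, followed by projection onto the constraints.
        grad_s = -(s / delta**2) * np.exp(-s**2 / (2 * delta**2))
        X = project(X + delta**2 * (U * grad_s) @ Vt)
    delta *= 0.7

print("relative error on unobserved entries:",
      np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask]))
```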
In this paper we introduce deterministic m × n RIP-fulfilling ±1 matrices of order k such that log m / log k ≈ log(log₂ n) / log(log₂ k). The columns of these matrices are binary BCH code vectors whose zeros are replaced with −1 (excluding the normalization factor). The samples obtained by these matrices can be easily converted to the original sparse …
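The sketch below is not the paper's construction; it only shows the standard coherence-based (Gershgorin) check one would run on any ±1 sensing matrix to estimate its RIP order, with a random bipolar matrix standing in where the deterministic BCH-based matrix would be plugged in.

```python
# Generic coherence check (not the paper's construction): estimate the RIP order
# of a +/-1 matrix from its mutual coherence via the Gershgorin-type bound
# delta_k <= (k - 1) * mu. A random bipolar matrix stands in for the
# deterministic BCH-based matrices described above.
import numpy as np

rng = np.random.default_rng(7)
m, n = 64, 512
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # unit-norm +/-1 columns

gram = A.T @ A
mu = np.max(np.abs(gram - np.eye(n)))                    # mutual coherence

delta = 0.5                                              # target RIP constant
k_max = int(np.floor(1 + delta / mu))                    # guaranteed RIP order
print("coherence:", round(mu, 3), "-> RIP of order at least", k_max, "with delta <= 0.5")
```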
We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in-between (interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this …
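The paper's spline-based derivation is not reproduced here; as a stand-in, the sketch below applies the MAP estimator that such a framework yields for Laplace-distributed increments, namely an l1 (total-variation) penalty on first differences, with the absolute value slightly smoothed so a generic quasi-Newton solver can handle it. Noise level and penalty weight are illustrative.

```python
# Stand-in sketch (not the paper's spline-based estimator): MAP denoising of a
# signal with sparse derivatives. Laplace-distributed increments give an l1
# penalty on first differences.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n, sigma, lam, eps = 200, 0.2, 0.5, 1e-4     # illustrative noise level and penalty weight

# Piecewise-constant path (sparse increments) observed in Gaussian noise.
increments = (rng.random(n) < 0.03) * 2.0 * rng.standard_normal(n)
signal = np.cumsum(increments)
y = signal + sigma * rng.standard_normal(n)

def objective(x):
    fidelity = 0.5 * np.sum((x - y) ** 2)                   # Gaussian data term
    penalty = lam * np.sum(np.sqrt(np.diff(x) ** 2 + eps))  # smoothed |x[i+1] - x[i]|
    return fidelity + penalty

x_hat = minimize(objective, y, method="L-BFGS-B").x

print("noisy MSE:   ", round(float(np.mean((y - signal) ** 2)), 4))
print("denoised MSE:", round(float(np.mean((x_hat - signal) ** 2)), 4))
```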