
- Martin J. Wainwright, Michael I. Jordan
- Foundations and Trends in Machine Learning
- 2008

- Alexandros G. Dimakis, Brighten Godfrey, Yunnan Wu, Martin J. Wainwright, Kannan Ramchandran
- IEEE Transactions on Information Theory
- 2007

Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the…
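The redundancy comparison in the snippet above can be sketched with simple arithmetic; the fragment count `k = 8` and failure budget `f = 2` below are illustrative assumptions, not figures from the paper.

```python
# Storage overhead needed to tolerate f simultaneous node failures:
# replication stores f+1 full copies, while an MDS (n, k) erasure code
# splits the object into k fragments and stores n = k + f coded fragments.

def replication_overhead(f: int) -> float:
    return float(f + 1)

def mds_overhead(k: int, f: int) -> float:
    return (k + f) / k

print(replication_overhead(2))   # 3 full copies -> 3.0x raw storage
print(mds_overhead(k=8, f=2))    # 10 fragments of size 1/8 -> 1.25x
```

The gap widens as `k` grows, which is why erasure coding is attractive at data-center scale.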

- Javier Portilla, Vasily Strela, Martin J. Wainwright, Eero P. Simoncelli
- IEEE Transactions on Image Processing
- 2003

We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter…
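The product-of-variables model above is a Gaussian scale mixture (GSM). A minimal sketch of why it fits image coefficients: multiplying a Gaussian by a random positive scalar produces heavier-than-Gaussian tails. The lognormal multiplier below is an illustrative assumption; the paper's specific prior on the multiplier differs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

u = rng.standard_normal(n)                       # Gaussian component
z = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # hidden positive multiplier
c = np.sqrt(z) * u                               # Gaussian scale mixture sample

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

# A pure Gaussian has kurtosis 3; the scale mixture is strictly heavier-tailed.
print(kurtosis(u), kurtosis(c))
```

Wavelet coefficients of natural images show exactly this leptokurtic behavior, which is what motivates the GSM prior.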

- Jon Feldman, Martin J. Wainwright, David R. Karger
- IEEE Transactions on Information Theory
- 2005

A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor graph or parity-check representation of the code. The resulting…
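The LP decoder relaxes exact ML decoding, which is intractable in general. On a toy code the exact problem it approximates can be solved by enumeration: over a binary symmetric channel, ML decoding reduces to nearest-codeword search in Hamming distance. This sketch is that brute-force baseline, not the paper's LP relaxation.

```python
import numpy as np
from itertools import product

# Systematic generator matrix of the [7,4] Hamming code (minimum distance 3).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

codewords = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

def ml_decode_bsc(r):
    # Over a BSC, ML decoding = choosing the codeword nearest in Hamming distance.
    dists = (codewords != r).sum(axis=1)
    return codewords[np.argmin(dists)]

c = codewords[5]
r = c.copy()
r[2] ^= 1   # flip one bit: within the code's error-correction radius
print(np.array_equal(ml_decode_bsc(r), c))   # True
```

The LP approach replaces this exponential search with a polytope relaxation whose vertices include all codewords.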

Given i.i.d. observations of a random vector X ∈ ℝ^p, we study the problem of estimating both its covariance matrix Σ*, and its inverse covariance or concentration matrix Θ* = (Σ*)⁻¹. When X is multivariate Gaussian, the non-zero structure of Θ* is specified by the graph of an associated Gaussian Markov random field; and a popular estimator for such…
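The stated link between zeros of Θ* and the Markov graph can be checked numerically on the simplest example: a Gauss-Markov chain with covariance Σ_ij = ρ^|i−j| has an exactly tridiagonal concentration matrix, matching the chain graph. The dimensions and ρ below are illustrative.

```python
import numpy as np

p, rho = 5, 0.6
idx = np.arange(p)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1)/chain covariance
Theta = np.linalg.inv(Sigma)                         # concentration matrix

# Zeros of Theta encode conditional independence: only chain-adjacent
# variables interact, so everything off the tridiagonal band vanishes.
off = Theta[np.abs(idx[:, None] - idx[None, :]) > 1]
print(np.max(np.abs(off)))   # ~0 up to floating-point error
```

Estimators that penalize the off-diagonal entries of Θ exploit exactly this sparsity.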

High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and…

- Martin J. Wainwright
- ArXiv
- 2006

The problem of consistently estimating the sparsity pattern of a vector β* ∈ ℝ^p based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. We analyze the behavior of ℓ1-constrained quadratic programming (QP), also…
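In the special case of an orthonormal design, the ℓ1-penalized QP has a closed-form soft-thresholding solution, which makes the sparsity-recovery behavior analyzed above easy to see. The identity design, noise level, and penalty below are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def soft_threshold(v, lam):
    # Closed-form Lasso solution when the design matrix is orthonormal.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
beta_true = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
y = beta_true + 0.1 * rng.standard_normal(beta_true.size)  # X = I observation

beta_hat = soft_threshold(y, lam=0.5)
print(np.flatnonzero(beta_hat))   # recovers the true support {0, 2}
```

The interesting regime studied in the paper is precisely when the design is *not* orthonormal and support recovery hinges on incoherence conditions and the scaling of (n, p, sparsity).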

- Martin J. Wainwright, Tommi S. Jaakkola, Alan S. Willsky
- IEEE Transactions on Information Theory
- 2002

We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from…

The problem of consistently estimating the sparsity pattern of a vector β* ∈ ℝ^p based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of ℓ1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering…

We consider the problem of estimating the graph associated with a binary Ising Markov random field. We describe a method based on ℓ1-regularized logistic regression, in which the neighborhood of any given node is estimated by performing logistic regression subject to an ℓ1-constraint. The method is analyzed under high-dimensional scaling, in which both the…
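The neighborhood-selection procedure above can be sketched end to end on a 3-node Ising chain: sample exactly by enumeration, then fit an ℓ1-penalized logistic regression for node 0 via proximal gradient. The couplings, penalty, step size, and sample size are all illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
theta = 1.0   # chain couplings 0-1 and 1-2; node 0 is NOT connected to node 2

# Exact sampling: enumerate the 8 states of the 3-node Ising chain.
states = np.array(list(product([-1, 1], repeat=3)))
p = np.exp(theta * (states[:, 0] * states[:, 1] + states[:, 1] * states[:, 2]))
p /= p.sum()
X = states[rng.choice(len(states), size=2000, p=p)]

y, Z = X[:, 0], X[:, 1:]   # regress node 0 on nodes 1 and 2

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Proximal gradient (ISTA) for L1-penalized logistic regression.
w, lam, step = np.zeros(2), 0.05, 0.5
for _ in range(2000):
    grad = -(Z * (y * sigmoid(-y * (Z @ w)))[:, None]).mean(axis=0)
    v = w - step * grad
    w = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

print(w)   # weight on node 1 dominates; node 2 is shrunk toward zero
```

The true conditional of node 0 depends only on node 1 (a logistic with weight 2θ), so the estimated neighborhood should exclude node 2 despite the two features being correlated through the chain.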