
- Weijie Su, Stephen P. Boyd, Emmanuel J. Candès
- NIPS
- 2014

We derive a second-order ordinary differential equation (ODE), which is the limit of Nesterov’s accelerated gradient method. This ODE exhibits approximate equivalence to Nesterov’s scheme and thus can serve as a tool for analysis. We show that the continuous-time ODE allows for a better understanding of Nesterov’s scheme. As a byproduct, we…
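The scheme in question can be stated in a few lines. Below is a minimal sketch of Nesterov’s accelerated gradient method on a toy one-dimensional quadratic; the test function, step size, and iteration count are illustrative choices, not taken from the paper, whose result is that the (k − 1)/(k + 2) momentum rule has the ODE X″ + (3/t)X′ + ∇f(X) = 0 as its continuous-time limit.

```python
# Minimal sketch of Nesterov's accelerated gradient method on a toy
# quadratic f(x) = 0.5 * a * x^2 (illustrative setup, not the paper's).
def nesterov(grad, x0, s, n_iters):
    """x_k = y_{k-1} - s*grad(y_{k-1});  y_k = x_k + (k-1)/(k+2)*(x_k - x_{k-1}).
    The (k-1)/(k+2) momentum coefficient is the choice whose continuous-time
    limit is the ODE X'' + (3/t) X' + grad f(X) = 0."""
    x_prev, y = x0, x0
    for k in range(1, n_iters + 1):
        x = y - s * grad(y)                       # gradient step at lookahead point
        y = x + (k - 1) / (k + 2) * (x - x_prev)  # momentum extrapolation
        x_prev = x
    return x_prev

a = 4.0  # curvature of the toy quadratic; any step s <= 1/a is admissible
x_final = nesterov(lambda x: a * x, x0=1.0, s=0.1, n_iters=1000)
# x_final lies close to the minimizer 0; the iterates oscillate with decaying
# amplitude, mirroring the underdamped behavior of the limiting ODE.
```

The oscillation-then-decay of the iterates is exactly the kind of qualitative behavior the ODE view makes transparent.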

- Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, Emmanuel J. Candès
- The annals of applied statistics
- 2015

We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to minimize_b ½‖y − Xb‖²ℓ2 + λ1|b|(1) + λ2|b|(2) + … + λp|b|(p), where λ1 ≥ λ2 ≥ … ≥ λp ≥ 0 and |b|(1) ≥ |b|(2) ≥ … ≥ |b|(p) are the decreasing absolute values of the entries of b. This is…
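For concreteness, here is a small sketch of the sorted-ℓ1 penalty and the SLOPE objective as stated above; the function names and toy inputs are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch of the SLOPE objective; names and inputs are
# illustrative, not from the paper.
def sorted_l1_norm(b, lam):
    """sum_i lam_i * |b|_(i), where |b|_(1) >= ... >= |b|_(p) are the
    absolute entries of b sorted in decreasing order."""
    abs_desc = sorted((abs(v) for v in b), reverse=True)
    return sum(l * a for l, a in zip(lam, abs_desc))

def slope_objective(b, X, y, lam):
    """0.5 * ||y - Xb||^2_{l2} + sorted-l1 penalty with weights lam."""
    residual = [yi - sum(xij * bj for xij, bj in zip(row, b))
                for row, yi in zip(X, y)]
    return 0.5 * sum(r * r for r in residual) + sorted_l1_norm(b, lam)

# With lam_1 >= lam_2 >= lam_3, the largest weight hits the largest entry:
val = sorted_l1_norm([3.0, -1.0, 2.0], [1.0, 0.5, 0.25])  # 1*3 + 0.5*2 + 0.25*1 = 4.25
```

The key design point is that the weights are matched to the *ranks* of the coefficients, so larger entries are penalized more heavily, which is what connects SLOPE to multiple-testing ideas.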

- Małgorzata Bogdan, Ewout van den Berg, Weijie Su, Emmanuel J. Candès
- 2013

We introduce a novel method for sparse regression and variable selection, which is inspired by modern ideas in multiple testing. Imagine we have observations from the linear model y = Xβ + z; then we suggest estimating the regression coefficients by means of a new estimator called SLOPE, which is the solution to minimize_b ½‖y − Xb‖²ℓ2 + λ1|b|(1) + …

- Emmanuel J. Candès, Weijie Su
- ArXiv
- 2015

We consider high-dimensional sparse regression problems in which we observe y = Xβ + z, where X is an n × p design matrix and z is an n-dimensional vector of independent Gaussian errors, each with variance σ². Our focus is on the recently introduced SLOPE estimator [15], which regularizes the least-squares estimates with the rank-dependent…

In this note we give a proof showing that even though the number of false discoveries and the total number of discoveries are not continuous functions of the parameters, the formulas we obtain for the false discovery proportion (FDP) and the power, namely, (B.3) and (B.4) in the paper Statistical Estimation and Testing via the Sorted ℓ1 Norm, are…

- Małgorzata Bogdan, Ewout van den Berg, Weijie Su, Emmanuel J. Candès, Jan Długosz
- 2013

We introduce a novel method for sparse regression and variable selection, which is inspired by modern ideas in multiple testing. Imagine we have observations from the linear model y = Xβ + z; then we suggest estimating the regression coefficients by means of a new estimator called the ordered lasso, which is the solution to minimize_b ½‖y − Xb‖²ℓ2 + …
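Computing this estimator hinges on the proximal operator of the sorted-ℓ1 penalty. The sketch below is a pure-Python pool-adjacent-violators pass in the spirit of the FastProxSL1 algorithm of Bogdan et al.; it is an illustrative reimplementation, not the authors’ code, and assumes the weight sequence lam is non-negative and non-increasing.

```python
# Illustrative prox of the sorted-l1 penalty (FastProxSL1-style);
# assumes lam is non-negative and non-increasing. Not the authors' code.
def prox_sorted_l1(y, lam):
    """argmin_b 0.5*||b - y||^2 + sum_i lam_i * |b|_(i)."""
    n = len(y)
    order = sorted(range(n), key=lambda i: -abs(y[i]))  # positions by decreasing |y_i|
    z = [abs(y[order[r]]) - lam[r] for r in range(n)]
    # Pool adjacent violators: merge blocks until averages strictly decrease.
    blocks = []                                         # each block is [total, count]
    for v in z:
        blocks.append([v, 1])
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] <= blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = [0.0] * n
    r = 0
    for s, c in blocks:
        avg = max(s / c, 0.0)                           # clip at zero
        for _ in range(c):
            i = order[r]
            out[i] = avg if y[i] >= 0 else -avg         # restore signs and order
            r += 1
    return out

# With a constant weight sequence this reduces to plain soft-thresholding:
shrunk = prox_sorted_l1([3.0, -1.0, 2.0], [0.5, 0.5, 0.5])  # [2.5, -0.5, 1.5]
```

The averaging step is what distinguishes this prox from entrywise soft-thresholding: whenever the shrunken magnitudes would come out in the wrong order, the violating entries are pooled to a common value.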

As deploying Vehicular Ad Hoc NETworks (VANETs) costs large amounts of resources, it is crucial that governments and companies make a thorough estimation and comparison of the benefits and the costs. Network connectivity is an important factor to account for, because it can greatly affect the performance of VANETs and further affect how much we…

- Lucas Janson, Weijie Su
- 2015

We present a novel method for controlling the k-familywise error rate (k-FWER) in the linear regression setting using the knockoffs framework first introduced by Barber and Candès. Our procedure, which we also refer to as knockoffs, can be applied with any design matrix with at least as many observations as variables, and does not require knowing the noise…

- Cynthia Dwork, Weijie Su, Li Zhang
- ArXiv
- 2015

We provide the first differentially private algorithms for controlling the false discovery rate (FDR) in multiple hypothesis testing, with essentially no loss in power under certain conditions. Our general approach is to adapt a well-known variant of the Benjamini-Hochberg procedure (BHq), making each step differentially private. This destroys the classical…
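The private algorithms themselves are not reproduced here, but the classical (non-private) BHq step-up procedure they adapt can be sketched in a few lines; the function name and toy p-values are illustrative.

```python
# Classical BHq step-up procedure (the non-private baseline adapted in the
# paper); function name and toy p-values are illustrative.
def benjamini_hochberg(pvals, q):
    """Reject the k smallest p-values, where k = max{ i : p_(i) <= q*i/m }."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by increasing p-value
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank                       # step-up: remember the largest passing rank
    return sorted(order[:k])               # indices of rejected hypotheses

rejected = benjamini_hochberg([0.001, 0.8, 0.01, 0.04], q=0.05)  # -> [0, 2]
```

The "step-up" character is the part the abstract alludes to: each comparison p_(i) ≤ qi/m depends on the data-dependent rank i, which is what makes a naive privatization of each step destroy the procedure’s classical guarantees.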

- Weijie Su, Malgorzata Bogdan, Emmanuel J. Candès
- ArXiv
- 2015

In regression settings where explanatory variables have very low correlations and where there are relatively few effects, each of large magnitude, it is commonly believed that the Lasso should be able to find the important variables with few errors, if any. In contrast, this paper shows that this is not the case even when the design variables are…
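For context, the Lasso estimator under discussion solves minimize_b ½‖y − Xb‖²ℓ2 + λ‖b‖1. Below is a minimal proximal-gradient (ISTA) sketch on a toy orthogonal design; it illustrates the estimator only, not the false-discovery phenomenon the paper analyzes, and all names and inputs are illustrative.

```python
# Minimal ISTA sketch for the Lasso (illustrative, not the paper's code).
def soft_threshold(v, t):
    """Scalar soft-thresholding, the prox of t*|.|."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista_lasso(X, y, lam, step, n_iters):
    """Proximal gradient for 0.5*||y - Xb||^2 + lam*||b||_1 (lists of lists)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iters):
        r = [sum(X[i][j] * b[j] for j in range(p)) - y[i] for i in range(n)]   # Xb - y
        g = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]          # X^T r
        b = [soft_threshold(b[j] - step * g[j], step * lam) for j in range(p)]
    return b

# On an orthogonal design the Lasso reduces to soft-thresholding y entrywise:
b_hat = ista_lasso([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.5], lam=1.0, step=1.0, n_iters=50)
```

On this design the solution is soft_threshold applied to y, so the small coordinate is set exactly to zero; the paper’s point is that on realistic designs the set of nonzero coordinates along the Lasso path can nonetheless include many false discoveries.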