
- Gareth M. James, Peter Radchenko
- 2011

We suggest a new method, called Functional Additive Regression, or FAR, for efficiently performing high-dimensional functional regression. FAR extends the usual linear regression model involving a…

- Peter Radchenko
- J. Multivariate Analysis
- 2015

This paper addresses the problem of fitting nonlinear regression models in high-dimensional situations, where the number of predictors, p, is large relative to the number of observations, n. Most of…

Both classical Forward Selection and the more modern Lasso provide computationally feasible methods for performing variable selection in high-dimensional regression problems involving many…
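To make the comparison above concrete, here is a minimal sketch of both approaches on synthetic data, assuming nothing beyond NumPy; the function names and toy setup are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[0, 3]] = [3.0, -2.0]          # sparse truth: two active predictors
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def forward_selection(X, y, k):
    """Greedy forward selection: repeatedly add the predictor most
    correlated with the current residual, then refit OLS on the set."""
    selected, residual = [], y.copy()
    for _ in range(k):
        scores = np.abs(X.T @ residual)
        scores[selected] = -np.inf        # skip already-chosen columns
        selected.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ coef
    return sorted(selected)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r_j
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return beta

print(forward_selection(X, y, 2))                    # should recover [0, 3]
print(np.nonzero(lasso_cd(X, y, lam=20.0))[0])
```

With a strong sparse signal both methods find the true support here; the paper's interest is precisely in characterizing when and why their answers diverge.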

- Peter Radchenko
- 2007

A general method is presented for deriving the limiting behavior of estimators that are defined as the values of parameters optimizing an empirical criterion function. The asymptotic behavior of such…

The paper uses empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of a nonlinear regression function. By combining and extending ideas of Wu…

The Discrete Dantzig Selector: Estimating Sparse Linear Models via Mixed Integer Linear Optimization

- Rahul Mazumder, Peter Radchenko
- IEEE Transactions on Information Theory
- 2017

We propose a novel high-dimensional linear regression estimator: the *Discrete Dantzig Selector*, which minimizes the number of nonzero regression coefficients subject to a budget on…
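The paper solves this selection problem exactly via mixed integer linear optimization; as a toy stand-in only, the sketch below brute-forces small supports and uses an ordinary least-squares fit as a feasibility check against the Dantzig-style constraint (all names, the data, and the tolerance `delta` are my own assumptions, not the paper's formulation):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, p = 50, 6
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0])
y = X @ beta_true + 0.05 * rng.standard_normal(n)

def discrete_dantzig_brute(X, y, delta):
    """Return the smallest support S for which an OLS fit on S leaves
    residuals with max absolute correlation ||X^T r||_inf <= delta.
    Brute-force enumeration; a toy stand-in for the paper's MILP."""
    p = X.shape[1]
    for k in range(p + 1):                 # smallest supports first
        for S in combinations(range(p), k):
            if k == 0:
                resid = y
            else:
                b, *_ = np.linalg.lstsq(X[:, list(S)], y, rcond=None)
                resid = y - X[:, list(S)] @ b
            if np.max(np.abs(X.T @ resid)) <= delta:
                return list(S)
    return list(range(p))

print(discrete_dantzig_brute(X, y, delta=5.0))
```

Enumeration is exponential in p, which is exactly why the paper's exact mixed integer formulation matters at scale.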

- Peter Radchenko
- 2006

This paper investigates how changing the growth rate of the sequence of penalty weights affects the asymptotics of Lasso-type estimators. The cases of non-singular and nearly singular design are…
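One way to see why the penalty weight matters: under an orthonormal design the Lasso has a closed form, soft-thresholding of the least-squares estimate, so a larger weight zeroes out more coefficients. The toy setup below is my own illustration, not the paper's asymptotic framework:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
# Orthonormal design: Q^T Q = I, so the Lasso decouples coordinate-wise.
Q, _ = np.linalg.qr(rng.standard_normal((n, 4)))
beta_true = np.array([5.0, 1.0, 0.5, 0.0])
y = Q @ beta_true + 0.1 * rng.standard_normal(n)

ols = Q.T @ y                           # least-squares estimate
for lam in [0.0, 0.3, 2.0]:
    # Closed-form Lasso under orthonormal design: soft-threshold the OLS fit.
    lasso = np.sign(ols) * np.maximum(np.abs(ols) - lam, 0.0)
    print(lam, np.round(lasso, 2))
```

At lam=0 the Lasso coincides with least squares; as the weight grows, small coefficients hit zero first, while large ones survive but are shrunk, which is the trade-off whose asymptotic behavior the paper studies as the weights grow with n.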

1. Proofs of Theorems 1 and 2. Let η = (η₁ᵀ, ..., η_pᵀ)ᵀ be a (pqₙ)-vector and Θ = (Θ₁, ..., Θ_p) be an n × (pqₙ) matrix. With matrix notation, the linear FAR criterion minimizes the…

If d^c_j < d_j for at least one j, we can choose a positive constant c such that log L(d^c) − log L(d) > c with probability tending to one, due to the lack of fit. It follows that G_n(d^c) > 0 with…

The regression problem involving functional predictors has many important applications, and a number of functional regression methods have been developed. However, a common complication in functional…