#### Publications

- Huan Xu, Constantine Caramanis, Sujay Sanghavi
- IEEE Transactions on Information Theory
- 2010

Singular value decomposition (SVD) and principal component analysis (PCA) are among the most widely used techniques for dimensionality reduction: successful and efficiently computable, they are nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted…
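The SVD/PCA baseline whose outlier sensitivity motivates this line of work can be sketched in a few lines of numpy; the data, dimensions, and variance profile below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Classical (non-robust) PCA via SVD: the baseline whose outlier
# sensitivity motivates robust variants. Synthetic data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ np.diag([5.0, 1.0, 1.0, 1.0, 1.0])
Xc = X - X.mean(axis=0)                     # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
top_pc = Vt[0]                              # leading principal direction
Z = Xc @ Vt[:2].T                           # project onto the top-2 subspace
```

A single gross outlier added to `X` can swing `top_pc` far from the true direction, which is exactly the sensitivity the abstract refers to.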

- Huan Xu, Constantine Caramanis, Shie Mannor
- Journal of Machine Learning Research
- 2009

We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like algorithms for…
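As a toy illustration of the regularized-SVM objective the equivalence is stated for (the data, step size, and regularization weight are assumed for the sketch, and this is plain subgradient descent, not an algorithm from the paper):

```python
import numpy as np

# Linear SVM by subgradient descent on the regularized hinge loss:
#   (1/n) * sum_i max(0, 1 - y_i <w, x_i>)  +  lam * ||w||^2
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2)) + np.where(rng.random(n) < 0.5, 2.0, -2.0)[:, None]
y = np.sign(X[:, 0] + X[:, 1])              # labels from a linear rule
lam, lr = 0.01, 0.1
w = np.zeros(2)
for _ in range(500):
    margins = y * (X @ w)
    active = margins < 1                    # points violating the margin
    grad = -(y[active, None] * X[active]).sum(axis=0) / n + 2 * lam * w
    w -= lr * grad
acc = np.mean(np.sign(X @ w) == y)          # training accuracy
```

The `lam * ||w||^2` term is the regularizer that the paper shows is equivalent to protecting against adversarial perturbations of the training data.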

- Jiashi Feng, Huan Xu, Shuicheng Yan
- NIPS
- 2013

Robust PCA methods are typically based on batch optimization and have to load all the samples into memory during optimization. This prevents them from efficiently processing big data. In this paper, we develop an Online Robust PCA (OR-PCA) that processes one sample per time instance and hence its memory cost is independent of the number of samples,…
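The one-sample-per-time-instance idea is easy to see with Oja's classical streaming PCA update, shown below as a stand-in (it is not OR-PCA and has no outlier handling); memory stays O(d) no matter how many samples arrive:

```python
import numpy as np

# Oja's rule: streaming estimate of the leading principal direction.
# Each sample is seen once; memory is independent of the sample count.
rng = np.random.default_rng(2)
d = 5
w = rng.normal(size=d)
w /= np.linalg.norm(w)
for t in range(1, 5001):
    x = rng.normal(size=d) * np.array([5.0, 1.0, 1.0, 1.0, 1.0])
    w += (1.0 / t) * (x @ w) * x            # move toward the top eigenvector
    w /= np.linalg.norm(w)                  # keep unit norm
```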

- Huan Xu, Shie Mannor
- Machine Learning
- 2010

We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is “similar” to a training sample, then the testing error is close to the training error. This provides a novel approach, different from complexity or stability arguments, to study generalization of learning algorithms. One advantage of…

- Huan Xu, Shie Mannor
- CDC
- 2009

We consider decision making in a Markovian setup where the reward parameters are not known in advance. Our performance criterion is the gap between the performance of the best strategy that is chosen after the true parameter realization is revealed and the performance of the strategy that is chosen before the parameter realization is revealed. We call…

- Ali Jalali, Yudong Chen, Sujay Sanghavi, Huan Xu
- ICML
- 2011

This paper considers the problem of clustering a partially observed unweighted graph, i.e., one where for some node pairs we know there is an edge between them, for some others we know there is no edge, and for the remaining we do not know whether or not there is an edge. We want to organize the nodes into disjoint clusters so that there is relatively dense…
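A baseline sketch of the setting (spectral bi-partitioning with mean imputation for the unobserved pairs, not the paper's convex method; the graph size, densities, and observation rate are assumptions):

```python
import numpy as np

# Two planted clusters; some node pairs are "unobserved" and imputed
# with the mean observed edge density before spectral partitioning.
rng = np.random.default_rng(4)
n = 40
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.8, 0.1)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops
mask = np.triu(rng.random((n, n)) < 0.3, 1)  # 30% of pairs unobserved
mask = mask | mask.T
off_diag = ~np.eye(n, dtype=bool)
A_obs = A.copy()
A_obs[mask] = A[off_diag & ~mask].mean()     # impute observed density
vals, vecs = np.linalg.eigh(A_obs)
pred = (vecs[:, -2] > 0).astype(int)         # second eigenvector splits clusters
agree = max(np.mean(pred == labels), np.mean(pred != labels))
```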

- Yudong Chen, Sujay Sanghavi, Huan Xu
- NIPS
- 2012

We develop a new algorithm to cluster sparse unweighted graphs, i.e., partition the nodes into disjoint clusters so that edge density is higher within clusters and lower across clusters. By sparsity we mean the setting where both the in-cluster and across-cluster edge densities are very small, possibly vanishing in the size of the graph. Sparsity makes the…

- K. Mani Chandy, Steven H. Low, Ufuk Topcu, Huan Xu
- CDC
- 2010

The integration of renewable energy generation, such as wind power, into the electric grid is difficult because of source intermittency and the large distance between generation sites and users. This difficulty can be overcome through a transmission network with large-scale storage that not only transports power, but also mitigates against…

- Huan Xu, Constantine Caramanis, Shie Mannor
- IEEE Transactions on Information Theory
- 2008

Lasso, or ℓ1-regularized least squares, has been explored extensively for its remarkable sparsity properties. In this paper it is shown that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a…
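The Lasso objective the robustness result is about can be solved with a few lines of proximal gradient descent (ISTA); the synthetic problem and step choices below are illustrative assumptions, not material from the paper:

```python
import numpy as np

# ISTA for the Lasso:  minimize (1/2)||y - Xw||^2 + lam * ||w||_1
rng = np.random.default_rng(3)
n, d = 50, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:2] = [3.0, -2.0]                     # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=n)
lam = 0.1
L = np.linalg.norm(X, 2) ** 2                # Lipschitz constant of the gradient
w = np.zeros(d)
for _ in range(1000):
    z = w - X.T @ (X @ w - y) / L            # gradient step on the smooth part
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
```

The soft-thresholding step is what produces exact zeros in `w`, i.e., the sparsity the abstract highlights.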

- Guangcan Liu, Huan Xu, Shuicheng Yan
- AISTATS
- 2012

In this work, we address the following matrix recovery problem: suppose we are given a set of data points containing two parts, one part consists of samples drawn from a union of multiple subspaces and the other part consists of outliers. We do not know which data points are outliers, or how many outliers there are. The rank and number of the subspaces are…