
Principal component analysis (PCA) is a classical method for dimensionality reduction based on extracting the dominant eigenvectors of the sample covariance matrix. However, PCA is well known to behave poorly in the "large p, small n" setting, in which the problem dimension p is comparable to or larger than the sample size n. This paper studies PCA in…
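
The classical procedure the abstract refers to can be sketched in a few lines; this is a generic illustration of PCA via the sample covariance eigendecomposition (with illustrative dimensions chosen by us), not the paper's proposed method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "large p, small n" data: p = 50 variables, n = 20 samples.
n, p = 20, 50
X = rng.standard_normal((n, p))

# Sample covariance matrix (columns centered first).
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / (n - 1)

# Classical PCA: extract the dominant eigenvectors of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(S)   # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]      # sort descending
top_k = eigvecs[:, order[:2]]          # first two principal directions

# Project the centered data onto the leading principal components.
scores = Xc @ top_k
print(scores.shape)  # (20, 2)
```

In this regime (p > n) the sample covariance is rank-deficient, which is exactly why classical PCA behaves poorly here.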

We consider the problem of community detection in a network, that is, partitioning the nodes into groups that, in some sense, reveal the structure of the network. Many algorithms have been proposed for fitting network models with communities, but most of them do not scale well to large networks, and often fail on sparse networks. We present a fast…

The stochastic block model (SBM) is a popular tool for community detection in networks, but fitting it by maximum likelihood (MLE) involves an infeasible optimization problem. We propose a new semi-definite programming (SDP) solution to the problem of fitting the SBM, derived as a relaxation of the MLE. Our relaxation is tighter than other recently proposed…
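
For readers unfamiliar with the SBM, here is a minimal sketch of the model itself: sampling a network with two planted communities, where edges appear with one probability within blocks and another between them. The parameter names are ours, and the spectral check at the end is a generic heuristic, not the paper's SDP relaxation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-community SBM: within-block edge probability p_in,
# between-block probability p_out (illustrative values).
n, p_in, p_out = 60, 0.3, 0.05
labels = np.repeat([0, 1], n // 2)

# Edge probability matrix: p_in where labels agree, p_out otherwise.
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)

# Sample a symmetric adjacency matrix with no self-loops.
upper = np.triu(rng.random((n, n)) < P, k=1)
A = (upper | upper.T).astype(int)

# A simple spectral heuristic (not the SDP in the abstract): the eigenvector
# for the second-largest eigenvalue of A tends to separate the two blocks.
eigvals, eigvecs = np.linalg.eigh(A.astype(float))
guess = (eigvecs[:, -2] > 0).astype(int)
```

Fitting the SBM by exact maximum likelihood requires searching over all node partitions, which is what makes the MLE infeasible and motivates convex relaxations like the one proposed here.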

We propose a probabilistic formulation that enables sequential detection of multiple change points in a network setting. We present a class of sequential detection rules for functionals of change points, and prove their asymptotic optimality properties in terms of expected detection delay time. Drawing from graphical model formalism, the sequential…
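
To make "sequential detection" and "detection delay" concrete, here is the classical single-stream CUSUM rule, shown purely as background; it is not the network formulation or the detection rules proposed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stream: a single change point at t = 100, where the mean
# shifts from 0 to 1 (the paper's setting involves multiple change points
# across a network; this is the simplest one-stream analogue).
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])

# One-sided CUSUM for a known post-change mean mu1:
# W_t = max(0, W_{t-1} + log-likelihood ratio of observation t),
# and an alarm fires the first time W_t exceeds a threshold.
mu1, threshold = 1.0, 8.0
W, alarm = 0.0, None
for t, xt in enumerate(x):
    llr = mu1 * xt - mu1 ** 2 / 2   # Gaussian log-likelihood ratio, unit variance
    W = max(0.0, W + llr)
    if W > threshold:
        alarm = t
        break
```

The "expected detection delay" the abstract optimizes is the average gap between the true change time and the alarm time, subject to a false-alarm constraint controlled by the threshold.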

We consider the sampling problem for functional PCA (fPCA), where the simplest example is the case of taking time samples of the underlying functional components. More generally, we model the sampling operation as a continuous linear map from H to R^m, where the functional components are assumed to lie in some Hilbert subspace H of L^2, such as a reproducing kernel…
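
The simplest sampling operator mentioned above, time sampling, can be written down directly: a function f in H is mapped to the vector of its values at m time points, and this map is linear. A minimal sketch, with an illustrative function of our choosing:

```python
import numpy as np

# Time sampling as a linear map from H to R^m: f -> (f(t_1), ..., f(t_m)).
m = 8
t = np.linspace(0.0, 1.0, m)

# Hypothetical functional component: a smooth function on [0, 1].
f = lambda s: np.sin(2 * np.pi * s)

# The sampling operation; linearity means
# sample(a*f + b*g) == a*sample(f) + b*sample(g) for functions f, g.
sample = lambda g: g(t)
y = sample(f)
print(y.shape)  # (8,)
```

More general sampling operators (e.g., local averages or inner products against fixed functionals) are still continuous linear maps of this form, which is what the abstract's general model captures.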

We consider a class of operator-induced norms, acting as finite-dimensional surrogates to the L^2 norm, and study their approximation properties over Hilbert subspaces of L^2. The class includes, as a special case, the usual empirical norm encountered, for example, in the context of nonparametric regression in a reproducing kernel Hilbert space (RKHS). Our…
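
The special case named in the abstract, the empirical norm, is easy to state concretely: ||f||_n = sqrt((1/n) Σ_i f(x_i)²), a finite-dimensional surrogate built from n sample points. A minimal numerical sketch (the test function and sampling distribution are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Empirical norm ||f||_n = sqrt( (1/n) * sum_i f(x_i)^2 ),
# a finite-dimensional surrogate for the L^2 norm.
n = 10_000
x = rng.uniform(0.0, 1.0, n)

f = lambda s: np.sin(2 * np.pi * s)
empirical_norm = np.sqrt(np.mean(f(x) ** 2))

# For f(s) = sin(2*pi*s) on [0, 1], the true L^2 norm is sqrt(1/2) ~ 0.707;
# the empirical norm concentrates around it as n grows.
print(empirical_norm)
```

How uniformly this surrogate approximates the true L^2 norm over an entire Hilbert subspace, rather than for one fixed f, is the kind of approximation property the abstract studies.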