Publications
Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds
TLDR: This work provides new algorithms and matching lower bounds for differentially private convex empirical risk minimization, assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded.
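As a rough illustration of the gradient-perturbation style of algorithm studied in this line of work (a minimal sketch, not the paper's exact method; grad_fn, radius, and lipschitz are assumed inputs, and noise_scale is an uncalibrated placeholder rather than a real privacy calibration):

    import numpy as np

    def dp_erm_noisy_gd(grad_fn, dim, n, lipschitz, radius,
                        steps=100, lr=0.1, noise_scale=1.0, seed=0):
        """Illustrative noisy projected gradient descent for convex ERM.

        grad_fn(w) returns the average per-example gradient; each per-example
        gradient is assumed to have norm <= lipschitz, and the feasible set is
        the Euclidean ball of the given radius.  noise_scale is a placeholder;
        a real implementation would calibrate it to (epsilon, delta).
        """
        rng = np.random.default_rng(seed)
        w = np.zeros(dim)
        for _ in range(steps):
            # Gaussian noise masks any single example's influence on the gradient.
            g = grad_fn(w) + rng.normal(0.0, noise_scale * lipschitz / n, size=dim)
            w = w - lr * g
            norm = np.linalg.norm(w)
            if norm > radius:           # project back onto the bounded domain
                w = w * (radius / norm)
        return w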
Analyze Gauss: optimal bounds for privacy-preserving principal component analysis
TLDR: It is shown that the well-known, but misnamed, randomized response algorithm provides a nearly optimal additive quality gap compared to the best possible singular subspace of A, and that when AᵀA has a large eigenvalue gap -- a reason often cited for PCA -- the quality improves significantly.
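A minimal sketch of the noisy-covariance idea behind this result: add symmetric Gaussian noise to AᵀA and take the top-k eigenvectors. Rows of A are assumed to have bounded norm, and noise_std is an uncalibrated placeholder rather than the paper's exact calibration.

    import numpy as np

    def noisy_covariance_pca(A, k, noise_std, seed=0):
        """Illustrative PCA on a Gaussian-perturbed covariance matrix."""
        rng = np.random.default_rng(seed)
        d = A.shape[1]
        cov = A.T @ A
        # Draw a symmetric Gaussian noise matrix and release cov + noise.
        upper = rng.normal(0.0, noise_std, size=(d, d))
        noise = np.triu(upper) + np.triu(upper, 1).T
        noisy_cov = cov + noise
        # Top-k eigenvectors of the noisy covariance approximate the principal subspace.
        eigvals, eigvecs = np.linalg.eigh(noisy_cov)
        return eigvecs[:, np.argsort(eigvals)[::-1][:k]]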
Practical Locally Private Heavy Hitters
TLDR: This work presents new practical locally differentially private heavy-hitters algorithms, TreeHist and Bitstogram, achieving optimal or near-optimal worst-case error and running time, and implements TreeHist to verify the theoretical analysis.
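For intuition, a much simpler local-DP frequency oracle over a small known domain (k-ary randomized response with debiasing). TreeHist and Bitstogram themselves use hashing and tree/bit decompositions to handle large domains efficiently, so this only sketches the local-privacy ingredient.

    import numpy as np

    def k_rr_report(value, k, eps, rng):
        """k-ary randomized response: keep the true value w.p. e^eps/(e^eps+k-1)."""
        p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
        if rng.random() < p_keep:
            return value
        other = rng.integers(k - 1)          # uniform over the other k-1 values
        return other if other < value else other + 1

    def estimate_frequencies(reports, k, eps):
        """Debias the noisy histogram of randomized-response reports."""
        n = len(reports)
        p = np.exp(eps) / (np.exp(eps) + k - 1)
        q = 1.0 / (np.exp(eps) + k - 1)
        counts = np.bincount(np.asarray(reports), minlength=k)
        return (counts - n * q) / (p - q)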
GUPT: privacy preserving data analysis made easy
TLDR: The design and evaluation of a new system, GUPT, that guarantees differential privacy to programs not developed with privacy in mind, makes no trust assumptions about the analysis program, and is secure against all known classes of side-channel attacks.
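GUPT builds on the sample-and-aggregate framework; here is a minimal sketch of that idea. The analysis function, output_range, and block count are assumed inputs, and this omits GUPT's privacy-budget management and side-channel protections.

    import numpy as np

    def sample_and_aggregate(data, analysis, eps, output_range, num_blocks, seed=0):
        """Run a black-box analysis on disjoint blocks, average, add Laplace noise.

        One record affects only one block, so the clamped block outputs' average
        moves by at most (hi - lo) / num_blocks, which fixes the Laplace scale.
        """
        rng = np.random.default_rng(seed)
        lo, hi = output_range
        blocks = np.array_split(rng.permutation(data), num_blocks)
        outputs = np.clip([analysis(b) for b in blocks], lo, hi)
        sensitivity = (hi - lo) / num_blocks
        return float(np.mean(outputs) + rng.laplace(0.0, sensitivity / eps))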
Private Convex Empirical Risk Minimization and High-dimensional Regression
We consider differentially private algorithms for convex empirical risk minimization (ERM). Differential privacy (Dwork et al., 2006b) is a recently introduced notion of privacy which guarantees that …
Discovering frequent patterns in sensitive data
TLDR: This paper shows how one can accurately discover and release the most significant patterns, along with their frequencies, in a data set containing sensitive information, while providing rigorous guarantees of privacy for the individuals whose information is stored there.
Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity
TLDR: It is shown, via a new and general privacy amplification technique, that any permutation-invariant algorithm satisfying ε-local differential privacy will satisfy (O(ε·√(log(1/δ)/n)), δ)-central differential privacy.
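A minimal sketch of the shuffle-model pipeline this result analyzes, using binary randomized response as the local randomizer. The code only shows the protocol structure (local randomization followed by a random permutation that strips identities), not the amplification accounting.

    import numpy as np

    def local_randomizer(bit, eps0, rng):
        """Binary randomized response: an eps0-locally private report of one bit."""
        p_keep = np.exp(eps0) / (np.exp(eps0) + 1)
        return bit if rng.random() < p_keep else 1 - bit

    def shuffle_and_release(bits, eps0, seed=0):
        """Each user randomizes locally; the shuffler permutes the reports."""
        rng = np.random.default_rng(seed)
        reports = np.array([local_randomizer(b, eps0, rng) for b in bits])
        return rng.permutation(reports)   # analyst sees only the shuffled multiset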
Is Interaction Necessary for Distributed Private Learning?
TLDR: This work asks how much interaction is necessary to optimize convex functions in the local DP model, and provides new algorithms that are either noninteractive or use relatively few rounds of interaction.
Differentially Private Online Learning
TLDR: This paper provides a general framework for converting a given online convex programming (OCP) algorithm into a privacy-preserving OCP algorithm with good (sub-linear) regret, and shows that the framework can also be used to obtain differentially private algorithms for offline learning.
Nearly Optimal Private LASSO
TLDR: This work presents a nearly optimal differentially private version of the well-known LASSO estimator that achieves a near-optimal error bound without polynomial dependence on the dimension p and with no additional assumptions on the design matrix.
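For intuition, a sketch in the spirit of private Frank-Wolfe over the L1 ball, one standard route to private LASSO-type estimators; this is not claimed to be the paper's exact algorithm, and noise_scale is an uncalibrated placeholder. X and y are an assumed design matrix and response vector.

    import numpy as np

    def dp_frank_wolfe_lasso(X, y, l1_radius, steps=50, noise_scale=1.0, seed=0):
        """Illustrative private Frank-Wolfe over the L1 ball for squared loss."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        w = np.zeros(p)
        for t in range(1, steps + 1):
            grad = X.T @ (X @ w - y) / n           # squared-loss gradient
            # Report-noisy-max over coordinates picks an approximately best vertex.
            scores = np.abs(grad) + rng.laplace(0.0, noise_scale, size=p)
            j = int(np.argmax(scores))
            vertex = np.zeros(p)
            vertex[j] = -l1_radius * np.sign(grad[j])
            mu = 2.0 / (t + 2)                     # standard Frank-Wolfe step size
            w = (1 - mu) * w + mu * vertex
        return w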