Publications
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
TLDR
This paper describes the TensorFlow interface and an implementation of that interface built at Google, which has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.
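As a small illustration of the kind of interface the paper describes, here is a minimal sketch using the modern Python API (which postdates the paper; tf.GradientTape in particular is a later addition): a computation is written once, and TensorFlow differentiates and executes it, potentially across heterogeneous devices.

    import tensorflow as tf

    w = tf.Variable([[1.0], [2.0]])            # model parameters
    x = tf.constant([[3.0, 4.0]])              # one input example

    with tf.GradientTape() as tape:
        y = tf.matmul(x, w)                    # forward computation
        loss = tf.reduce_sum(tf.square(y))     # scalar loss

    grad = tape.gradient(loss, w)              # automatic differentiation
    w.assign_sub(0.1 * grad)                   # one gradient-descent step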
Deep Learning with Differential Privacy
TLDR
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
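A minimal NumPy sketch of the core training recipe (per-example gradient clipping followed by Gaussian noise), assuming a hypothetical grad(theta, x, y) helper that returns one example's gradient; the paper's moments-accountant privacy analysis is omitted here.

    import numpy as np

    def dp_sgd_step(theta, batch, grad, lr=0.1, clip_norm=1.0, noise_mult=1.1):
        """One DP-SGD step: clip each per-example gradient to clip_norm,
        average, and add Gaussian noise scaled to the clipping bound."""
        clipped = []
        for x, y in batch:
            g = grad(theta, x, y)              # per-example gradient (assumed helper)
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        g_bar = np.mean(clipped, axis=0)
        noise = np.random.normal(
            0.0, noise_mult * clip_norm / len(batch), size=g_bar.shape)
        return theta - lr * (g_bar + noise)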
Mechanism Design via Differential Privacy
TLDR
It is shown that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.
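The mechanism this paper introduces is the exponential mechanism; a minimal NumPy sketch, with candidates, the score function q, its sensitivity delta_q, and eps all as illustrative parameter names:

    import numpy as np

    def exponential_mechanism(data, candidates, q, eps, delta_q):
        """Sample a candidate r with probability proportional to
        exp(eps * q(data, r) / (2 * delta_q))."""
        scores = np.array([q(data, r) for r in candidates], dtype=float)
        logits = eps * scores / (2.0 * delta_q)
        logits -= logits.max()                 # for numerical stability
        probs = np.exp(logits)
        probs /= probs.sum()
        idx = np.random.choice(len(candidates), p=probs)
        return candidates[idx]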
A tight bound on approximating arbitrary metrics by tree metrics
In this paper, we show that any n point metric space can be embedded into a distribution over dominating tree metrics such that the expected stretch of any edge is O(log n). This improves upon the previous best bound of O(log n log log n) and is tight, matching the known Omega(log n) lower bound.
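Stated precisely, the embedding guarantee is: for every n-point metric (V, d) there is a distribution D over tree metrics d_T such that

    d_T(u, v) >= d(u, v)                            for all u, v and every T in supp(D),
    E_{T ~ D}[ d_T(u, v) ] <= O(log n) * d(u, v)    for all u, v.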
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
TLDR
Private Aggregation of Teacher Ensembles (PATE) combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users, and achieves state-of-the-art privacy/utility trade-offs on MNIST and SVHN.
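A toy NumPy sketch of the noisy aggregation step (the vote histogram over teachers is perturbed with Laplace noise, and the noisy plurality becomes the student's training label); names and the noise scale gamma are illustrative, and the student-training step is omitted:

    import numpy as np

    def pate_label(teacher_preds, num_classes, gamma=0.05):
        """Aggregate teacher votes with Laplace noise (noisy-max).
        teacher_preds: array of class predictions, one per teacher."""
        votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
        votes += np.random.laplace(0.0, 1.0 / gamma, size=num_classes)
        return int(np.argmax(votes))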
Learning Differentially Private Recurrent Language Models
TLDR
This work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent, and adds user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data.
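A minimal NumPy sketch of one server round under this recipe, assuming each element of user_updates is a single user's entire model delta (user-level clipping, not per-example); names and scales are illustrative:

    import numpy as np

    def dp_fedavg_round(global_model, user_updates, clip=1.0, noise_mult=1.0):
        """Clip each user's whole update, average across sampled users,
        and add Gaussian noise calibrated to the clip bound."""
        clipped = []
        for delta in user_updates:
            norm = np.linalg.norm(delta)
            clipped.append(delta * min(1.0, clip / (norm + 1e-12)))
        avg = np.mean(clipped, axis=0)
        noise = np.random.normal(
            0.0, noise_mult * clip / len(user_updates), size=avg.shape)
        return global_model + avg + noise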
Quincy: fair scheduling for distributed computing clusters
TLDR
It is argued that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures.
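Quincy realizes this fine-grain model by casting scheduling as min-cost flow. A toy sketch of that encoding with networkx, using hypothetical tasks, machines, and made-up locality costs (the real formulation also includes unscheduled-task arcs and fairness terms):

    import networkx as nx

    G = nx.DiGraph()
    tasks = ["t1", "t2", "t3"]
    machines = {"m1": 2, "m2": 1}              # machine -> task slots
    cost = {("t1", "m1"): 1, ("t1", "m2"): 5,  # lower cost = better locality
            ("t2", "m1"): 4, ("t2", "m2"): 1,
            ("t3", "m1"): 2, ("t3", "m2"): 2}

    for t in tasks:
        G.add_node(t, demand=-1)               # each task supplies one flow unit
    G.add_node("sink", demand=len(tasks))
    for m, slots in machines.items():
        G.add_edge(m, "sink", capacity=slots, weight=0)
    for (t, m), c in cost.items():
        G.add_edge(t, m, capacity=1, weight=c)

    flow = nx.min_cost_flow(G)                 # optimal task -> machine assignment
    for t in tasks:
        print(t, "->", max(flow[t], key=flow[t].get))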
The complexity of pure Nash equilibria
TLDR
This work focuses on congestion games, and shows that a pure Nash equilibrium can be computed in polynomial time in the symmetric network case, while the problem is PLS-complete in general.
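A tiny sketch of best-response dynamics in a congestion game, the local search whose convergence Rosenthal's potential guarantees and whose worst-case running time underlies the PLS-completeness result; the resources, quadratic delay function, and strategies are illustrative:

    # Each player picks a set of resources; a resource's delay grows with the
    # number of players using it. Every best-response move strictly decreases
    # Rosenthal's potential, so this loop terminates at a pure Nash equilibrium.
    strategies = [frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})]
    delay = lambda load: load ** 2              # illustrative delay function

    def cost(profile, i):
        loads = {}
        for s in profile:
            for r in s:
                loads[r] = loads.get(r, 0) + 1
        return sum(delay(loads[r]) for r in profile[i])

    profile = [strategies[0]] * 3               # three players, arbitrary start
    improved = True
    while improved:
        improved = False
        for i in range(len(profile)):
            best = min(strategies,
                       key=lambda s: cost(profile[:i] + [s] + profile[i + 1:], i))
            if cost(profile[:i] + [best] + profile[i + 1:], i) < cost(profile, i):
                profile[i] = best
                improved = True
    print(profile)                              # a pure Nash equilibrium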
Adversarially Robust Generalization Requires More Data
TLDR
It is shown that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning.
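Roughly, in the paper's Gaussian data model the gap looks as follows (a paraphrase, so constants and accuracy- and epsilon-dependent factors are elided), with d the input dimension:

    standard learning:            n = O(1) samples suffice
    robust (l_infinity) learning: n = Omega(sqrt(d)) samples are necessary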
Privacy, accuracy, and consistency too: a holistic solution to contingency table release
TLDR
This work proposes a solution that provides strong guarantees for all three desiderata simultaneously (privacy, accuracy, and consistency among the tables) and applies equally well to the logical cousin of the contingency table, the OLAP cube.
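A much-simplified NumPy sketch of one way to get consistency for free: perturb the full table once and derive every released marginal from the same noisy table (the paper's actual mechanism perturbs Fourier coefficients and uses a linear program to restore non-negativity and integrality):

    import numpy as np

    def consistent_marginals(table, eps):
        """Adding or removing one record changes exactly one cell by 1, so
        Laplace noise of scale 1/eps on the cells gives eps-DP; marginals
        computed from the same noisy table are consistent by construction."""
        noisy = table + np.random.laplace(0.0, 1.0 / eps, size=table.shape)
        row_marginal = noisy.sum(axis=1)
        col_marginal = noisy.sum(axis=0)
        return noisy, row_marginal, col_marginal

    counts = np.array([[20., 5.], [3., 12.]])  # toy 2x2 contingency table
    print(consistent_marginals(counts, eps=1.0))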