Publications
Learning Representations and Generative Models for 3D Point Clouds
TLDR
A deep autoencoder network with state-of-the-art reconstruction quality and generalization ability is introduced; its learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations.
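The autoencoders in this line of work are trained with permutation-invariant point-set losses. Below is a minimal NumPy sketch of one such loss, the Chamfer distance; the network itself is omitted, and the plain (rather than squared) distance is our illustrative choice:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Chamfer pseudo-distance between point clouds P (n x 3) and Q (m x 3).

    For each point in one cloud, find its nearest neighbor in the other,
    then average the two directed costs. Illustrative sketch only.
    """
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (n, m) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```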
Manifold Mixup: Better Representations by Interpolating Hidden States
TLDR
Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations, improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
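As a rough illustration of the mechanism, here is a minimal NumPy sketch of one Manifold Mixup step for a two-layer MLP; the function name, weight shapes, and Beta parameter are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def manifold_mixup_forward(x1, x2, y1, y2, W1, W2, alpha=2.0, rng=np.random):
    """Sketch of a Manifold Mixup forward pass for a 2-layer MLP.

    Hidden representations of two inputs are interpolated with a
    Beta-distributed coefficient, and the (one-hot) labels are mixed
    the same way, so the network is trained on interpolated targets.
    """
    lam = rng.beta(alpha, alpha)          # mixing coefficient ~ Beta(alpha, alpha)
    h1 = np.maximum(W1 @ x1, 0.0)         # hidden state of the first example (ReLU)
    h2 = np.maximum(W1 @ x2, 0.0)         # hidden state of the second example
    h_mix = lam * h1 + (1.0 - lam) * h2   # interpolate in hidden space
    y_mix = lam * y1 + (1.0 - lam) * y2   # interpolate the targets identically
    logits = W2 @ h_mix                   # continue the forward pass as usual
    return logits, y_mix
```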
Representation Learning and Adversarial Generation of 3D Point Clouds
TLDR
This paper introduces a deep autoencoder network for point clouds that outperforms the state of the art in 3D recognition tasks, and designs GAN architectures for generating novel point clouds.
Negative Momentum for Improved Game Dynamics
TLDR
It is proved that alternating updates are more stable than simultaneous updates, and it is shown both theoretically and empirically that alternating gradient updates with a negative momentum term achieve convergence not only on a difficult toy adversarial problem but also on notoriously difficult-to-train saturating GANs.
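A minimal sketch of the scheme on the paper's kind of toy problem, the bilinear game min_x max_y xy; the step size, the value beta = -0.5, and the function names are our illustrative choices:

```python
import numpy as np

def alternating_negative_momentum(grad_x, grad_y, x, y, lr=0.01, beta=-0.5, steps=1000):
    """Sketch of alternating gradient updates with a negative momentum term.

    x descends its loss, y ascends (a min-max game); the updates alternate,
    so y's gradient is evaluated at the freshly updated x.
    """
    vx = np.zeros_like(x)
    vy = np.zeros_like(y)
    for _ in range(steps):
        vx = beta * vx - lr * grad_x(x, y)   # negative beta damps oscillations
        x = x + vx
        vy = beta * vy + lr * grad_y(x, y)   # note: uses the updated x (alternating)
        y = y + vy
    return x, y

# Toy bilinear game min_x max_y x*y, on which simultaneous gradient descent diverges.
x, y = alternating_negative_momentum(
    grad_x=lambda x, y: y, grad_y=lambda x, y: x,
    x=np.array(1.0), y=np.array(1.0))
```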
Asynchrony begets momentum, with an application to deep learning
TLDR
It is shown that running stochastic gradient descent asynchronously can be viewed as adding a momentum-like term to the SGD iteration; a key implication is that the momentum parameter should be retuned for different levels of asynchrony.
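The momentum-like form can be written out explicitly; the following is a paraphrase of the paper's result under its staleness model, with M asynchronous workers and our notation:

```latex
\mathbb{E}[w_{t+1}] - \mathbb{E}[w_t]
  = \mu\,\bigl(\mathbb{E}[w_t] - \mathbb{E}[w_{t-1}]\bigr)
    - \eta\,\mathbb{E}\!\bigl[\nabla f(w_t)\bigr],
\qquad \mu \approx 1 - \tfrac{1}{M}.
```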
Joint Power and Admission Control for Ad-Hoc and Cognitive Underlay Networks: Convex Approximation and Distributed Implementation
TLDR
A centralized approximate solution to power control in interference-limited cellular, ad-hoc, and cognitive underlay networks is developed, along with a distributed implementation that alternates between distributed approximation and distributed deflation, reaching consensus on a user to drop when needed.
Memory Limited, Streaming PCA
TLDR
An algorithm is presented that uses O(kp) memory and computes the k-dimensional spike with O(p log p) sample complexity, the first algorithm of its kind.
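In the spirit of the paper's block stochastic power method, here is a minimal NumPy sketch that keeps only O(kp) state while samples stream by; the block size and initialization details are illustrative:

```python
import numpy as np

def streaming_block_pca(sample_stream, p, k, block_size):
    """Sketch of memory-limited streaming PCA via a block power method.

    Samples arrive one at a time; only a p x k iterate and a p x k
    accumulator are kept in memory, i.e. O(kp) storage overall.
    """
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((p, k)))  # random orthonormal start
    S = np.zeros((p, k))
    seen = 0
    for x in sample_stream:                # x is a length-p sample
        S += np.outer(x, x @ Q)            # accumulate (x x^T) Q without forming x x^T
        seen += 1
        if seen % block_size == 0:         # end of block: orthonormalize and reset
            Q, _ = np.linalg.qr(S / block_size)
            S = np.zeros((p, k))
    return Q                               # estimated top-k principal subspace
```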
Accelerated Stochastic Power Iteration
TLDR
This work proposes a simple variant of the power iteration with an added momentum term that achieves both the optimal sample and iteration complexity, and constructs stochastic PCA algorithms, for the online and offline settings, that achieve an accelerated iteration complexity of O(1/√Δ).
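The core update is power iteration with a heavy-ball term. A minimal NumPy sketch follows; the choice of beta (near λ₂²/4 per the paper's analysis, in our paraphrase) is left to the caller:

```python
import numpy as np

def power_iteration_momentum(A, beta, iters=100, rng=np.random.default_rng(0)):
    """Sketch of power iteration with a heavy-ball momentum term.

    The update w_{t+1} = A w_t - beta * w_{t-1}, followed by a joint
    rescaling of the two iterates, is the accelerated variant.
    """
    w_prev = rng.standard_normal(A.shape[0])
    w = A @ w_prev
    for _ in range(iters):
        w_next = A @ w - beta * w_prev     # momentum term uses the previous iterate
        w_prev, w = w, w_next
        nrm = np.linalg.norm(w)
        w, w_prev = w / nrm, w_prev / nrm  # scale both iterates to stay bounded
    return w                               # estimate of the top eigenvector
```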
YellowFin and the Art of Momentum Tuning
TLDR
This work revisits the momentum SGD algorithm and shows that hand-tuning a single learning rate and momentum makes it competitive with Adam, and designs YellowFin, an automatic tuner for momentum and learning rate in SGD.
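For reference, the single (learning rate, momentum) pair being tuned is that of plain momentum SGD; a minimal sketch is below, noting that YellowFin's actual adaptive tuning rule is not reproduced here:

```python
import numpy as np

def momentum_sgd(grad, w, lr=0.1, mu=0.9, steps=100):
    """Plain momentum SGD with one learning rate and one momentum value,
    the two scalars the paper argues are worth tuning."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v
    return w
```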
Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs
TLDR
A novel understanding of the interaction between system and optimization dynamics is used to build an efficient hyperparameter optimizer, demonstrating that the most popular distributed deep learning systems fall within the identified tradeoff space but do not optimize within it.