Publications
Escaping From Saddle Points - Online Stochastic Gradient for Tensor Decomposition
TLDR: We analyze stochastic gradient descent for optimizing non-convex functions with exponentially many local minima and saddle points.
  • Citations: 655
  • Influence: 94
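For illustration, a minimal sketch of the mechanism this paper analyzes: adding noise to gradient steps lets SGD escape strict saddle points. The quartic test function, step size, and noise scale below are illustrative assumptions, not the paper's experiments.

    import numpy as np

    def noisy_sgd(grad, x0, lr=1e-2, noise_scale=1e-1, steps=10_000, seed=0):
        """Gradient descent with isotropic noise added to each step."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            noise = rng.normal(size=x.shape)
            x -= lr * (grad(x) + noise_scale * noise)  # noisy gradient step
        return x

    # f(x, y) = (x^2 - 1)^2 + y^2 has a strict saddle at the origin
    grad = lambda v: np.array([4 * v[0] * (v[0] ** 2 - 1), 2 * v[1]])
    print(noisy_sgd(grad, [0.0, 0.0]))  # iterates get pushed toward (+1, 0) or (-1, 0)

Started exactly at the saddle, plain gradient descent would never move; the injected noise supplies the escape direction.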
Combinatorial Multi-Armed Bandit: General Framework and Applications
  • Citations: 225
  • Influence: 51
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
Many classical algorithms are found, often years after their conception, to outlive the confines in which they were conceived and to remain relevant in unforeseen settings. In this paper, we show that …
  • Citations: 133
  • Influence: 26
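For illustration, a minimal sketch of the SVRG update this paper builds on: variance-reduced stochastic gradients anchored to a periodically refreshed full-gradient snapshot. The least-squares objective and hyperparameters below are illustrative assumptions.

    import numpy as np

    def svrg(grad_i, n, w0, lr=0.05, epochs=20, seed=0):
        """SVRG: inner steps use grad_i(w) - grad_i(snapshot) + full gradient at snapshot."""
        rng = np.random.default_rng(seed)
        w = np.array(w0, dtype=float)
        for _ in range(epochs):
            snapshot = w.copy()
            full_grad = sum(grad_i(snapshot, i) for i in range(n)) / n  # refresh snapshot
            for _ in range(n):
                i = rng.integers(n)
                w -= lr * (grad_i(w, i) - grad_i(snapshot, i) + full_grad)
        return w

    # toy least-squares problem: f(w) = (1/2n) * sum_i (a_i . w - b_i)^2
    rng = np.random.default_rng(1)
    A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
    grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]
    print(svrg(grad_i, len(b), np.zeros(5)))

The correction term keeps each stochastic step unbiased while shrinking its variance as w approaches the snapshot.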
Combinatorial multi-armed bandit: general framework, results and applications
TLDR: We define a general framework for a large class of combinatorial multi-armed bandit (CMAB) problems, where simple arms with unknown distributions form super arms.
  • Citations: 100
  • Influence: 21
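For illustration, a minimal sketch in the spirit of the CUCB-style algorithm this framework analyzes: keep a UCB estimate per simple arm and let an offline oracle choose the super arm. The top-k oracle and Bernoulli arms below are illustrative assumptions.

    import numpy as np

    def cucb_topk(true_means, k, horizon=5000, seed=0):
        """CUCB with a top-k oracle: play the k simple arms with the highest UCBs."""
        rng = np.random.default_rng(seed)
        m = len(true_means)
        counts, means = np.zeros(m), np.zeros(m)
        for t in range(1, horizon + 1):
            # UCB adjustment; unplayed arms get +inf so each is tried at least once
            bonus = np.sqrt(1.5 * np.log(t) / np.maximum(counts, 1))
            ucb = np.where(counts == 0, np.inf, means + bonus)
            super_arm = np.argsort(ucb)[-k:]      # oracle: best k arms under the UCBs
            rewards = rng.random(k) < np.asarray(true_means)[super_arm]  # Bernoulli feedback
            counts[super_arm] += 1
            means[super_arm] += (rewards - means[super_arm]) / counts[super_arm]
        return np.sort(np.argsort(means)[-k:])

    print(cucb_topk([0.9, 0.8, 0.5, 0.3, 0.2], k=2))  # should identify arms 0 and 1

Only the simple arms inside the played super arm get updated each round, which is what lets one algorithm cover many combinatorial problems.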
Convergence Analysis of Two-layer Neural Networks with ReLU Activation
TLDR: In recent years, stochastic gradient descent (SGD)-based techniques have become the standard tools for training neural networks.
  • Citations: 313
  • Influence: 16
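For illustration, a minimal sketch of the setting analyzed: a two-layer ReLU network trained by one-sample SGD. The teacher data, width, and learning rate below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d, hidden, n = 5, 32, 256
    X = rng.normal(size=(n, d))
    y = np.maximum(X @ rng.normal(size=d), 0)      # teacher labels from a single ReLU unit

    W = rng.normal(size=(d, hidden)) / np.sqrt(d)  # input-to-hidden weights
    a = rng.normal(size=hidden) / np.sqrt(hidden)  # hidden-to-output weights
    lr = 1e-2
    for _ in range(20000):
        i = rng.integers(n)                        # one-sample SGD step
        h = np.maximum(X[i] @ W, 0)                # hidden ReLU activations
        err = h @ a - y[i]                         # squared-loss residual
        a -= lr * err * h
        W -= lr * err * np.outer(X[i], (X[i] @ W > 0) * a)  # ReLU subgradient

    print(np.mean((np.maximum(X @ W, 0) @ a - y) ** 2))  # training error after SGD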
Multiple Imputation Using SAS Software
  • Y. Yuan, 12 December 2011
TLDR: Multiple imputation provides a useful strategy for dealing with data sets that have missing values.
  • Citations: 222
  • Influence: 14
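The paper itself covers SAS procedures such as PROC MI; purely as an illustration of the underlying idea (in Python with scikit-learn, an assumption for illustration, not the paper's code), one can draw several stochastic completions of the data, analyze each, and pool the results.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3)) @ np.array([[1.0, 0.5, 0.2], [0.0, 1.0, 0.4], [0.0, 0.0, 1.0]])
    X[rng.random(X.shape) < 0.1] = np.nan          # knock out ~10% of entries

    # multiple imputation: m stochastic completions, then pool the per-imputation analyses
    estimates = []
    for m in range(5):
        imputer = IterativeImputer(sample_posterior=True, random_state=m)
        estimates.append(imputer.fit_transform(X).mean(axis=0))  # analysis: column means
    print(np.mean(estimates, axis=0))              # Rubin-style pooled point estimate

Drawing from a posterior rather than imputing a single best guess is what lets the spread across imputations reflect the uncertainty the missing values introduce.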
An empirical study on evaluation metrics of generative adversarial networks
TLDR: We comprehensively examine the existing sample-based quantitative evaluation metrics for GANs and identify their strengths and limitations in practical settings.
  • Citations: 80
  • Influence: 11
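For illustration, a minimal sketch of one sample-based metric of the kind this paper evaluates, the Fréchet Inception Distance; the random feature vectors below stand in for real Inception activations (an illustrative assumption).

    import numpy as np
    from scipy import linalg

    def fid(feats_real, feats_fake):
        """Frechet distance between Gaussians fitted to the two feature sets."""
        mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
        c1 = np.cov(feats_real, rowvar=False)
        c2 = np.cov(feats_fake, rowvar=False)
        covmean = linalg.sqrtm(c1 @ c2).real       # matrix square root; drop tiny imaginary parts
        return np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean)

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(1000, 8))    # stand-ins for Inception features
    fake = rng.normal(0.3, 1.1, size=(1000, 8))
    print(fid(real, fake))                         # grows as the two distributions diverge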
Great majority of recombination events in Arabidopsis are gene conversion events
The evolutionary importance of meiosis may not be associated solely with the allelic shuffling caused by crossing-over, but may also have to do with its more immediate effects, such as gene conversion.
  • Citations: 65
  • Influence: 11
An Alternative View: When Does SGD Escape Local Minima?
TLDR: We show that, even if the function $f$ has many bad local minima or saddle points, as long as for every point $x$ the weighted average of the gradients of its neighborhood is one-point convex with respect to the desired solution $x^*$, SGD will get close to, and then stay around, $x^*$ with constant probability.
  • Citations: 103
  • Influence: 11
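For illustration, a minimal sketch of the phenomenon: a rippled quadratic has many bad local minima, yet gradients averaged over a neighborhood point back toward $x^* = 0$ (one-point convexity), so sufficiently noisy SGD settles near $x^*$. The test function, noise scale, and averaging window below are illustrative assumptions.

    import numpy as np

    # f(x) = x^2 + 0.5*sin(30x): globally quadratic but riddled with bad local minima
    grad = lambda x: 2 * x + 15 * np.cos(30 * x)

    rng = np.random.default_rng(0)
    x, lr = 3.0, 1e-2
    for _ in range(20000):
        x -= lr * (grad(x) + rng.normal(scale=8.0))  # gradient noise hops over the ripples
    print(x)  # ends near x* = 0 (within the noise floor) rather than in a distant ripple

    # gradients averaged over one ripple period leave roughly 2x: one-point convex toward 0
    offsets = rng.uniform(-np.pi / 30, np.pi / 30, size=10000)
    xs = np.linspace(-3, 3, 7)
    print(np.round([np.mean(grad(x0 + offsets)) for x0 in xs], 2))  # close to 2 * xs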
Culturing pyramidal neurons from the early postnatal mouse hippocampus and cortex
The ability to culture and maintain postnatal mouse hippocampal and cortical neurons is highly advantageous, particularly for studies on genetically engineered mouse models. Here we present a …
  • Citations: 366
  • Influence: 10