Publications
Optimizing Spatial Filters for Robust EEG Single-Trial Analysis
TLDR
The theoretical background of the common spatial pattern (CSP) algorithm, a popular method in brain-computer interface (BCI) research, is elucidated, and tricks of the trade for achieving powerful CSP performance are revealed.
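Since the summary names the algorithm, a minimal sketch of the core CSP computation may help. This is not the paper's code: the epoch layout, function names, and the choice of SciPy's generalized eigensolver are illustrative assumptions.

```python
# Minimal CSP sketch (illustrative, not the authors' implementation).
# Assumes two classes of band-pass-filtered EEG epochs, each with shape
# (trials, channels, samples).
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_filters=6):
    """Spatial filters maximizing variance for class A relative to class B."""
    def avg_cov(epochs):
        return np.mean([np.cov(trial) for trial in epochs], axis=0)

    cov_a, cov_b = avg_cov(epochs_a), avg_cov(epochs_b)
    # Generalized eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    # Filters at both ends of the eigenvalue spectrum discriminate best
    order = np.argsort(eigvals)
    picks = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return eigvecs[:, picks].T  # rows are spatial filters
```

Log-variance of the filtered signals is then typically used as the single-trial feature.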
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
TLDR
It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence function for training complexity and the quality of the obtained generative models are discussed.
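The objective the summary refers to is the variational lower bound on an f-divergence, with a critic network T_omega and the Fenchel conjugate f* of the divergence generator f:

```latex
% Variational lower bound used in f-GAN: the critic maximizes over omega,
% the generator Q_theta minimizes over theta.
F(\theta, \omega)
  = \mathbb{E}_{x \sim P}\left[ T_\omega(x) \right]
  - \mathbb{E}_{x \sim Q_\theta}\left[ f^{*}\!\big(T_\omega(x)\big) \right]
  \;\le\; D_f(P \,\|\, Q_\theta)
```

Different choices of f (KL, reverse KL, Jensen-Shannon, and so on) recover different training objectives within the same saddle-point scheme.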
QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
TLDR
Quantized SGD (QSGD), a family of compression schemes for gradient updates, is proposed; it provides convergence guarantees, leads to significant reductions in end-to-end training time, and can be extended to stochastic variance-reduced techniques.
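A minimal sketch of the stochastic quantization step follows, omitting the entropy coding of the quantized values that the scheme pairs with it; the level count `s` and the function names are illustrative.

```python
# QSGD-style unbiased stochastic quantization (sketch, not the paper's code).
import numpy as np

def qsgd_quantize(v, s=256, rng=np.random.default_rng()):
    """Quantize gradient v to s levels per coordinate, unbiasedly."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros(v.shape, dtype=np.int32), norm
    level = np.abs(v) / norm * s          # real-valued level in [0, s]
    lower = np.floor(level)
    # Round up with probability equal to the fractional part (unbiased)
    q = lower + (rng.random(v.shape) < (level - lower))
    return (np.sign(v) * q).astype(np.int32), norm

def qsgd_dequantize(q, norm, s=256):
    """Recover an unbiased estimate of the original gradient."""
    return norm * q / s
```

Unbiasedness (the dequantized vector equals v in expectation) is what makes the usual SGD convergence analysis go through.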
Norm-Based Capacity Control in Neural Networks
TLDR
The capacity, convexity, and characterization of a general family of norm-constrained feed-forward networks are investigated, and these properties are found to be related to each other through the networks' norm constraints.
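As one concrete instance of a norm-based measure from this line of work, the sketch below computes an l1 path norm, the sum over all input-output paths of the product of absolute weights; whether this exact measure matches the paper's definitions is an assumption, and the helper name is hypothetical.

```python
# Hypothetical helper: l1 path norm of a feed-forward network (sketch).
import numpy as np

def l1_path_norm(weights):
    """Sum over all input-output paths of the product of absolute weights.
    Computable layer by layer by propagating a vector of ones through |W|."""
    v = np.ones(weights[0].shape[1])   # one entry per input unit
    for W in weights:                  # each W has shape (out, in)
        v = np.abs(W) @ v
    return v.sum()
```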
In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning
TLDR
It is argued, partly by analogy to matrix factorization, that implicit regularization is an inductive bias that can help shed light on deep learning.
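A toy experiment, not from the paper, illustrating the matrix-factorization analogy: gradient descent on an unconstrained factorization of partially observed entries, started from small weights, drifts toward low-nuclear-norm solutions even though the loss never asks for them. All sizes and step sizes below are arbitrary.

```python
# Implicit regularization toy demo (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 20                        # full-rank factors: no explicit capacity limit
truth = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank-2 target
mask = rng.random((n, n)) < 0.3      # observe 30% of entries

U = 0.01 * rng.standard_normal((n, k))   # small initialization matters here
V = 0.01 * rng.standard_normal((n, k))
for _ in range(5000):
    R = (U @ V.T - truth) * mask         # residual on observed entries only
    U, V = U - 0.01 * (R @ V), V - 0.01 * (R.T @ U)

print("nuclear norm of fit:", np.linalg.norm(U @ V.T, "nuc"))
```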
Estimation of low-rank tensors via convex optimization
In this paper, we propose three approaches for the estimation of the Tucker decomposition of multi-way arrays (tensors) from partial observations. All approaches are formulated as convex minimization problems.
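A standard convex surrogate in this setting penalizes the nuclear norms of the mode-wise unfoldings (the overlapped trace norm); a minimal sketch of computing that regularizer, with an illustrative function name:

```python
# Overlapped trace norm of a tensor (sketch).
import numpy as np

def overlapped_trace_norm(T):
    """Sum of nuclear norms of the mode-k unfoldings: a convex surrogate
    for low Tucker (multilinear) rank."""
    total = 0.0
    for k in range(T.ndim):
        unfold = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        total += np.linalg.norm(unfold, "nuc")
    return total
```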
Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations
TLDR
The Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations, separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference.
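The group-level part of the model must combine evidence from all observations in a group. Assuming diagonal Gaussian posteriors, a product-of-Gaussians accumulation reduces to a precision-weighted average; the sketch below shows that step only, with illustrative names.

```python
# Group evidence accumulation via a product of Gaussians (sketch).
import numpy as np

def accumulate_group_evidence(mus, logvars):
    """Combine per-observation Gaussian posteriors (rows of mus/logvars)
    into one group posterior: precisions add, means are precision-weighted."""
    precisions = np.exp(-logvars)
    group_var = 1.0 / precisions.sum(axis=0)
    group_mu = group_var * (precisions * mus).sum(axis=0)
    return group_mu, np.log(group_var)
```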
Tensor factorization using auxiliary information
TLDR
This paper proposes to use relationships among data as auxiliary information in addition to the low-rank assumption to improve the quality of tensor decomposition, and introduces two regularization approaches using graph Laplacians induced from the relationships.
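The graph-Laplacian regularizer mentioned here is the standard quadratic smoothness penalty; a minimal sketch, assuming W is a symmetric adjacency matrix encoding the auxiliary relationships between the rows of a factor matrix U:

```python
# Graph Laplacian smoothness penalty tr(U^T L U) (sketch).
import numpy as np

def laplacian_penalty(U, W):
    """Penalize differences between factor rows of related entities:
    tr(U^T L U) = 0.5 * sum_ij W_ij * ||u_i - u_j||^2, with L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(U.T @ L @ U)
```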
Convex Tensor Decomposition via Structured Schatten Norm Regularization
TLDR
It is shown theoretically that when the unknown true tensor is low-rank in a specific mode, this approach performs as well as knowing the mode with the smallest rank, and numerical simulations confirm that the theory can precisely predict the scaling behavior of the mean squared error.
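The mode-adaptivity result concerns the "latent" variant of the structured Schatten norms, which decomposes the tensor into a mixture and penalizes each term only in its own unfolding:

```latex
% Latent structured Schatten norm: W is split into a mixture
% W^{(1)} + ... + W^{(K)}, each term penalized in its mode-k unfolding.
\|\mathcal{W}\|_{\mathrm{latent}}
  = \inf_{\mathcal{W}^{(1)} + \cdots + \mathcal{W}^{(K)} = \mathcal{W}}
    \sum_{k=1}^{K} \big\|\mathcal{W}^{(k)}_{(k)}\big\|_{S_1}
```

The infimum lets the optimizer concentrate the decomposition on whichever mode has the smallest rank, which is what yields the adaptivity described above.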
Statistical Performance of Convex Tensor Decomposition
TLDR
It is shown that, under some conditions, the mean squared error of the convex method scales linearly with a quantity the authors call the normalized rank of the true tensor, which naturally extends the analysis of convex low-rank matrix estimation to tensors.