Publications
Deep Neural Networks Learn Non-Smooth Functions Effectively
TLDR
It is shown that estimators based on DNNs are nearly optimal for estimating non-smooth functions, while several popular alternative models do not attain the optimal rate.
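For intuition, "non-smooth" here can be pictured as a piecewise-smooth target of the following form (an illustrative formulation with assumed notation, not necessarily the paper's exact function class):

```latex
% Illustrative piecewise-smooth target: M regions R_m with indicator
% functions and Hoelder-smooth pieces f_m; f jumps across boundaries.
f(x) \;=\; \sum_{m=1}^{M} f_m(x)\,\mathbf{1}\{x \in R_m\},
\qquad f_m \in \mathcal{H}^{\beta}\bigl([0,1]^d\bigr)
```

The discontinuities across region boundaries are what trip up the alternative estimators, while DNN-based estimators can still adapt to them.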
Adaptive Approximation and Estimation of Deep Neural Network to Intrinsic Dimensionality
TLDR
It is theoretically proved that the generalization performance of deep neural networks (DNNs) is determined mainly by an intrinsic low-dimensional structure of the data, and that DNNs outperform other non-parametric estimators that are likewise adaptive to the intrinsic dimension.
Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality
TLDR
This study derives approximation-error and generalization-error bounds for DNNs with intrinsically low-dimensional covariates, and proves that the intrinsic low dimensionality of the covariates is the main factor determining the performance of deep neural networks.
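As a rough illustration of the flavor of such bounds (a sketch with assumed notation, not the papers' exact statements): for a β-smooth target supported near a set of intrinsic dimension d_int inside ambient dimension D, the estimation error depends on d_int rather than D:

```latex
% Illustrative rate: the exponent is governed by the intrinsic
% dimension d_int, not the much larger ambient dimension D.
\mathbb{E}\bigl[\|\hat{f}_n - f^{*}\|_{L^2}^{2}\bigr]
  \;=\; \tilde{O}\Bigl(n^{-\frac{2\beta}{2\beta + d_{\mathrm{int}}}}\Bigr)
```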
Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency
TLDR
Novel alternative completeness conditions under which off-policy evaluation (OPE) is feasible are introduced, and the first finite-sample result with first-order efficiency in non-tabular environments is presented, i.e., one whose leading term has the minimal coefficient.
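For reference, the OPE target here is the standard discounted policy value, estimated from data logged by a different behavior policy (the standard definition, not specific to this paper):

```latex
% Off-policy evaluation (OPE): estimate the value of a target policy
% \pi under discount factor \gamma from offline data.
J(\pi) \;=\; \mathbb{E}_{\pi}\Bigl[\sum_{t=0}^{\infty} \gamma^{t}\, r_t\Bigr]
```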
On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
TLDR
This paper introduces a convex relaxation of the tensor train (TT) decomposition problem, derives its error bound for the tensor completion task, and develops an alternating optimization method with a randomization technique whose time complexity is as efficient as its space complexity.
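For readers unfamiliar with the TT format, here is a minimal numpy sketch of the classic TT-SVD decomposition and its reconstruction; it illustrates what a TT representation is, not the paper's convex relaxation or its randomized alternating solver:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` into tensor-train (TT) cores via sequential SVDs
    (plain TT-SVD), truncating each unfolding to at most `max_rank`."""
    dims = tensor.shape
    cores = []
    rank_prev = 1
    mat = tensor.reshape(rank_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = min(max_rank, len(s))
        cores.append(u[:, :rank].reshape(rank_prev, dims[k], rank))
        mat = (s[:rank, None] * vt[:rank]).reshape(rank * dims[k + 1], -1)
        rank_prev = rank
    cores.append(mat.reshape(rank_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Usage: a rank-limited TT approximation of a 4 x 5 x 6 tensor.
rng = np.random.default_rng(0)
x = (rng.standard_normal((4, 3)) @ rng.standard_normal((3, 30))).reshape(4, 5, 6)
cores = tt_svd(x, max_rank=3)
print([c.shape for c in cores])  # [(1, 4, 3), (3, 5, 3), (3, 6, 1)]
# Relative reconstruction error of the truncated TT representation.
print(np.linalg.norm(tt_reconstruct(cores) - x) / np.linalg.norm(x))
```

The space complexity of the TT format is what makes it attractive: the cores store O(d n r^2) numbers instead of the n^d entries of the full tensor.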
Doubly Decomposing Nonparametric Tensor Regression
TLDR
This work breaks a function on a high-dimensional tensor space into simple local functions via low-rank tensor decomposition, and develops a Bayesian estimator with a Gaussian process prior to estimate those local functions.
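One plausible reading of such a model, written CP-style with assumed notation (rank R, mode-wise local functions f_k^(r), GP priors), not necessarily the paper's exact formulation:

```latex
% Illustrative: a regression function on a multiway covariate X broken
% into simple mode-wise local functions with Gaussian process priors.
f(X) \;=\; \sum_{r=1}^{R} \prod_{k=1}^{K} f_k^{(r)}(x_k),
\qquad f_k^{(r)} \sim \mathcal{GP}(0, k_{\theta})
```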
Maximum Moment Restriction for Instrumental Variable Regression
TLDR
A simple framework for nonlinear instrumental variable regression is proposed, based on a kernelized conditional moment restriction known as the maximum moment restriction (MMR); it yields easy-to-use algorithms with a justified hyper-parameter selection procedure.
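A minimal sketch of the MMR idea under strong simplifying assumptions (a linear-in-features model f(x) = φ(x)ᵀw and a Gaussian kernel on the instrument; `mmr_iv_fit` and the toy data are hypothetical, not the authors' released code). For such a model the empirical MMR objective (y − Φw)ᵀ K (y − Φw) is quadratic in w and has a closed-form minimizer:

```python
import numpy as np

def gaussian_kernel(z, bandwidth=1.0):
    """Gram matrix of a Gaussian (RBF) kernel on the instrument Z."""
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmr_iv_fit(phi, y, z, bandwidth=1.0, ridge=1e-8):
    """Minimize the empirical MMR objective (y - phi w)^T K (y - phi w)
    for a linear-in-features model; a sketch, not the authors' code."""
    K = gaussian_kernel(z, bandwidth)
    A = phi.T @ K @ phi + ridge * np.eye(phi.shape[1])
    return np.linalg.solve(A, phi.T @ K @ y)

# Toy linear IV problem: Z instruments X, a confounder U hits both X
# and Y, so OLS is biased while the MMR estimate recovers the truth.
rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal((n, 1))
u = rng.standard_normal(n)                 # unobserved confounder
x = z[:, 0] + u + 0.1 * rng.standard_normal(n)
y = 2.0 * x + u + 0.1 * rng.standard_normal(n)
phi = np.column_stack([np.ones(n), x])     # features [1, x]
print("MMR:", mmr_iv_fit(phi, y, z))       # approx [0, 2]
print("OLS:", np.linalg.lstsq(phi, y, rcond=None)[0])  # slope biased up
```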
Tensor Decomposition with Smoothness
TLDR
This work theoretically shows that, under a smoothness assumption, smoothed Tucker decomposition (STD) achieves a better error bound, and it numerically verifies both the theoretical result and the empirical performance of STD.
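To make the smoothness constraint concrete, an illustrative penalized Tucker objective for a 3-way tensor (assumed notation: core G, factor matrices U_k, mode-k product ×_k, and a difference operator D_k; not necessarily the paper's exact formulation):

```latex
% Illustrative smoothed Tucker objective: reconstruction error plus a
% roughness penalty that encourages smooth factor columns.
\min_{G,\,U_1,U_2,U_3}\;
\bigl\|X - G \times_1 U_1 \times_2 U_2 \times_3 U_3\bigr\|_F^2
\;+\; \sum_{k=1}^{3} \lambda_k \bigl\|D_k U_k\bigr\|_F^2
```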
Improved Generalization Bound of Group Invariant/Equivariant Deep Networks via Quotient Feature Space
TLDR
This paper is the first study to provide a general and tight generalization bound for a broad class of group-invariant and group-equivariant deep neural networks.
...