Statistical Learning Theory of Quasi-Regular Cases

Koshi Yamada and Sumio Watanabe
IEICE Trans. Fundam. Electron. Commun. Comput. Sci.
Many learning machines, such as normal mixtures and layered neural networks, are not regular but singular statistical models, because the map from a parameter to a probability distribution is not one-to-one. The conventional statistical asymptotic theory cannot be applied to such learning machines, because the likelihood function cannot be approximated by any normal distribution. Recently, a new statistical theory has been established based on algebraic geometry, and it was clarified that the…
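As a minimal sketch of the non-identifiability the abstract refers to (the function names and the unit-variance two-component mixture are illustrative choices, not taken from the paper): when the two component means of a normal mixture coincide, every mixing weight defines the same distribution, so the map from parameters to distributions is not one-to-one.

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, a, mu1, mu2):
    """Two-component Gaussian mixture with unit variances and mixing weight a."""
    return a * normal_pdf(x, mu1) + (1 - a) * normal_pdf(x, mu2)

# When the two means coincide (mu1 == mu2 == 0), the mixing weight a is
# unidentifiable: a = 0.3 and a = 0.8 give the exact same density everywhere.
xs = [-2.0, -0.5, 0.0, 1.3, 3.0]
p1 = [mixture_pdf(x, 0.3, 0.0, 0.0) for x in xs]
p2 = [mixture_pdf(x, 0.8, 0.0, 0.0) for x in xs]
assert all(abs(u - v) < 1e-12 for u, v in zip(p1, p2))
```

At such points the Fisher information matrix degenerates, which is why the usual normal approximation of the likelihood fails.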
3 Citations

Information criterion for variational Bayes learning in regular and singular cases
  • Koshi Yamada, Sumio Watanabe
  • Computer Science, Mathematics
    The 6th International Conference on Soft Computing and Intelligent Systems, and The 13th International Symposium on Advanced Intelligence Systems
  • 2012
This paper proposes a new information criterion for variational Bayes learning, which is the unbiased estimator of the generalization loss for both cases when the posterior distribution is regular and singular.
Marginal likelihood and model selection for Gaussian latent tree and forest models
The real log-canonical thresholds (also known as stochastic complexities or learning coefficients) that quantify the large-sample behavior of the marginal likelihood in Bayesian inference are computed.


References

Equations of States in Singular Statistical Estimation
Algebraic Analysis for Nonidentifiable Learning Machines
It is rigorously proved that the Bayesian stochastic complexity or the free energy is asymptotically equal to λ1 log n − (m1 − 1) log log n + constant, where n is the number of training samples and λ1 and m1 are the rational number and the natural number which are determined as the birational invariant values of the singularities in the parameter space.
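The asymptotic form quoted above can be written in display form as:

```latex
F(n) = \lambda_1 \log n - (m_1 - 1) \log \log n + O(1)
```

where λ1 (a rational number) and m1 (a natural number) are birational invariants of the singularities of the parameter set; in the regular case λ1 reduces to d/2 for a d-dimensional parameter and m1 = 1.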
Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory
The Bayes cross-validation loss is asymptotically equivalent to the widely applicable information criterion as a random variable, and model selection and hyperparameter optimization using these two values are asymptotically equivalent.
On the Problem in Model Selection of Neural Network Regression in Overrealizable Scenario
The article analyzes the expected training error and the expected generalization error of neural networks and radial basis functions in overrealizable cases and clarifies the difference from regular models, for which identifiability holds.
On the Asymptotic Distribution of the Least-Squares Estimators in Unidentifiable Models
In order to analyze the stochastic property of multilayered perceptrons or other learning machines, we deal with simpler models and derive the asymptotic distribution of the least-squares estimators.
B-functions and holonomic systems
A b-function of an analytic function f(x) is, by definition, a generator of the ideal formed by the polynomials b(s) satisfying P(s, x, D_x) f(x)^{s+1} = b(s) f(x)^s for some differential operator P(s, x, D_x).
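A standard concrete example (not taken from this entry): for f(x) = x, choosing the operator P = ∂/∂x gives

```latex
\frac{\partial}{\partial x}\, x^{s+1} = (s+1)\, x^{s}
```

so the b-function of f(x) = x is b(s) = s + 1. The largest root of b(−s) is the quantity connected to the learning coefficient λ in the entries above.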
Resolution of Singularities and the Generalization Error with Bayesian Estimation for Layered Neural Network
In recent years, it has become clear that the stochastic complexity, which plays an important role in Bayes inference, is mathematically closely related to the methodology constructed via resolution of singularities [1]-[3]. The stochastic complexity F(n) of a learning model with n samples is, for some rational number λ and natural number…