Corpus ID: 80628356

Predicting time-varying distributions with limited training data

@article{Kou2018PredictingTD,
  title={Predicting time-varying distributions with limited training data},
  author={C. Kou and H. Lee and Teck Khim Ng and Jorge Sanz},
  journal={arXiv: Learning},
  year={2018}
}
In the task of distribution-to-distribution regression, a recently proposed model, the distribution regression network (DRN) (Kou et al., 2018), has shown superior performance compared to conventional neural networks while using far fewer parameters. The key novelty of DRN is that it encodes an entire distribution in each network node; this compact representation allows DRN to achieve better accuracies than conventional neural networks. However, the experiments in Kou et al. (2018) focused…
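To make the task concrete, here is a minimal sketch of distribution-to-distribution regression — not the DRN model itself, just an illustrative baseline in which each distribution is discretized into a histogram and a least-squares linear map is fit from input to output histograms. The synthetic shift task and all parameter values are assumptions for illustration.

```python
import numpy as np

# Toy distribution-to-distribution regression (illustrative, NOT the DRN
# architecture): each distribution is a histogram of n_bins probability
# masses; a least-squares linear map is fit from inputs to outputs.
rng = np.random.default_rng(0)
n_bins, n_samples = 20, 200

# Synthetic task (assumed for illustration): output = input shifted by 2 bins.
X = rng.dirichlet(np.ones(n_bins), size=n_samples)   # input histograms
Y = np.roll(X, 2, axis=1)                            # shifted output histograms

W, *_ = np.linalg.lstsq(X, Y, rcond=None)            # fit linear map X @ W ~ Y

pred = X @ W
pred = np.clip(pred, 0, None)
pred /= pred.sum(axis=1, keepdims=True)              # project back onto the simplex

print(np.abs(pred - Y).mean())                       # training error of the linear map
```

Because the true map here is a permutation of bins, the linear baseline recovers it; DRN's point is to handle such tasks with far fewer parameters than a fully connected map over bins.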

References

SHOWING 1-10 OF 36 REFERENCES
Predicting the future behavior of a time-varying probability distribution
  • Christoph H. Lampert
  • Mathematics, Computer Science
  • 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015
TLDR
A method for predicting the next step of the time-varying distribution from a given sequence of sample sets from earlier time steps, which relies on two recent machine learning techniques: embedding probability distributions into a reproducing kernel Hilbert space, and learning operators by vector-valued regression.
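The first of those two techniques — embedding a sample set into an RKHS via its empirical kernel mean — can be sketched as follows. This shows only the embedding step and the resulting MMD distance between embeddings, not the vector-valued regression used for prediction; kernel choice and bandwidth are assumptions.

```python
import numpy as np

# Sketch of kernel mean embedding: a sample set {x_i} maps to the empirical
# mean embedding mu(t) = (1/n) * sum_i k(x_i, t) in an RKHS with Gaussian
# kernel k. Squared MMD between two sample sets follows from RKHS inner
# products of their embeddings.

def gaussian_kernel(a, b, sigma=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased squared MMD between 1-D sample sets x and y."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(0.0, 1.0, 500)   # drawn from the same distribution as x
z = rng.normal(3.0, 1.0, 500)   # drawn from a shifted distribution

print(mmd2(x, y), mmd2(x, z))   # near zero for the same-distribution pair
```

The embedding is what lets a whole distribution be treated as a single point in a vector space, which is the prerequisite for regressing one distribution onto another.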
A compact network learning model for distribution regression
TLDR
This work designs a compact network representation that encodes and propagates functions in single nodes for the distribution regression task, and achieves higher prediction accuracies while using fewer parameters than traditional neural networks.
Fast Function to Function Regression
TLDR
The 3BE is the first nonparametric FFR estimator that can scale to massive datasets, and shows an improvement of several orders of magnitude in prediction speed and a reduction in error over previous estimators on various real-world datasets.
Fast Distribution To Real Regression
TLDR
The Double-Basis estimator is proposed, which aims to alleviate the problem in distribution-to-real-value regression that a large amount of data may be necessary for a low estimation risk, while the computational cost of estimation becomes infeasible when the dataset is too large.
Who Supported Obama in 2012?: Ecological Inference through Distribution Regression
TLDR
The novel approach to distribution regression exploits the connection between Gaussian process regression and kernel ridge regression, giving a coherent, Bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function.
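The connection that entry mentions can be illustrated directly: the Gaussian process posterior mean with kernel k and noise variance lam coincides with the kernel ridge regression predictor. This is a generic sketch of that identity, not the paper's ecological-inference model; the RBF kernel, data, and hyperparameters are assumptions.

```python
import numpy as np

# GP posterior mean = kernel ridge regression predictor:
#   f(t) = k(t, X) @ (K + lam * I)^{-1} y
# where K = k(X, X), lam is the noise variance / ridge penalty.

def rbf(a, b, ell=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

rng = np.random.default_rng(2)
X = np.linspace(0, 2 * np.pi, 30)
y = np.sin(X) + 0.1 * rng.normal(size=30)   # noisy observations of sin

lam = 0.1
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(30), y)  # (K + lam I)^{-1} y

t = np.linspace(0, 2 * np.pi, 100)
f = rbf(t, X) @ alpha          # KRR prediction, identical to the GP posterior mean
print(np.abs(f - np.sin(t)).max())
```

The Bayesian reading of the same formula is what lets the paper attach prior information through the covariance function while keeping KRR-style computation.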
Distribution-Free Distribution Regression
TLDR
This paper develops theory and methods for distribution-free versions of distribution regression and proves that when the effective dimension is small enough (as measured by the doubling dimension), the excess prediction risk converges to zero at a polynomial rate.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
TLDR
DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, is released to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
Distribution to Distribution Regression
TLDR
An estimator is developed, an upper bound for the L2 risk is derived, and it is shown that when the effective dimension is small enough (as measured by the doubling dimension), the risk converges to zero at a polynomial rate.
Nonparametric Density Estimation in High-Dimensions
Penalized likelihood density estimation provides an effective approach to the nonparametric fitting of graphical models, with conditional independence structures characterized via selective term…
Designing a neural network for forecasting financial and economic time series
TLDR
An eight-step procedure to design a neural network forecasting model is explained, including a discussion of tradeoffs in parameter selection, some common pitfalls, and points of disagreement among practitioners.