Corpus ID: 17758235

Iterative Construction of Sparse Polynomial Approximations

@inproceedings{Sanger1991IterativeCO,
  title={Iterative Construction of Sparse Polynomial Approximations},
  author={Terence D. Sanger and Richard S. Sutton and Christopher J. Matheus},
  booktitle={NIPS},
  year={1991}
}
We present an iterative algorithm for nonlinear regression based on construction of sparse polynomials. Polynomials are built sequentially from lower to higher order. Selection of new terms is accomplished using a novel look-ahead approach that predicts whether a variable contributes to the remaining error. The algorithm is based on the tree-growing heuristic in LMS Trees, which we have extended to approximation of arbitrary polynomials of the input features. In addition, we provide a new…
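
A minimal sketch of the general scheme the abstract describes: grow a sparse polynomial term by term, from lower to higher degree, accepting a candidate monomial only when it appears to explain part of the remaining error. The look-ahead criterion used here (correlation of the candidate with the current residual) and the acceptance threshold are illustrative stand-ins, not the authors' exact heuristic.

```python
import itertools
import numpy as np

def grow_sparse_polynomial(X, y, max_degree=3, min_corr=0.1):
    """Greedily build a sparse polynomial regressor, lower to higher order."""
    n, d = X.shape
    terms = [()]                       # start from the constant term
    Phi = np.ones((n, 1))              # design matrix of selected terms
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    for degree in range(1, max_degree + 1):
        for combo in itertools.combinations_with_replacement(range(d), degree):
            residual = y - Phi @ coef
            cand = np.prod(X[:, combo], axis=1)
            # look-ahead: keep the term only if it tracks the residual
            if abs(np.corrcoef(cand, residual)[0, 1]) < min_corr:
                continue
            terms.append(combo)
            Phi = np.column_stack([Phi, cand])
            coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return terms, coef

# toy check: recover y = 2*x0*x1 + x2 from noisy samples
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] * X[:, 1] + X[:, 2] + 0.01 * rng.normal(size=500)
print(grow_sparse_polynomial(X, y))
```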

Ridge polynomial networks

A constructive learning algorithm developed for the network is shown to yield smooth generalization and steady learning, and the RPN provides a natural mechanism for incremental network growth.

Scalable Non-linear Learning with Adaptive Polynomial Expansions

This work describes a new algorithm that explicitly and adaptively expands higher-order interaction features over base linear representations and shows that its computation/prediction tradeoff compares very favorably against strong baselines.
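
A rough sketch in the spirit of that expansion strategy: fit a base linear model, then add interaction features only among the currently most influential inputs instead of all O(d²) pairs. The top-k selection rule and plain least-squares fits are simplifying assumptions for illustration.

```python
import numpy as np

def adaptive_expand(X, y, k=2):
    """Expand interactions among the k most influential base features."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    top = np.argsort(np.abs(w))[-k:]          # most influential base features
    inter = [X[:, i] * X[:, j] for i in top for j in top if i <= j]
    X_exp = np.column_stack([X] + inter)
    w_exp, *_ = np.linalg.lstsq(X_exp, y, rcond=None)
    return X_exp, w_exp
```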

Multiresolution neural networks

A mathematically sound framework for neural network simulation, in the form of multiresolution analysis, is presented; the combination of MRNNs and the RFP algorithm provides a solution to problems associated with backpropagation networks.

Tensor machines for learning target-specific polynomial features

This work considers the problem of learning a small number of explicit polynomial features and finds a parsimonious set by optimizing, in a target-specific manner, over the hypothesis class introduced by Kar and Karnick for random feature maps; the resulting models are named Tensor Machines.
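
For context, a hedged sketch of the kind of random feature map (after Kar and Karnick) that Tensor Machines start from: a degree-p feature is a product of p random projections of the input, and Tensor Machines learn those projection vectors rather than sampling them. The Rademacher projections, fixed degree, and scaling below are illustrative assumptions.

```python
import numpy as np

def random_product_features(X, degree=2, n_features=64, seed=0):
    """Each feature is a product of `degree` random (+/-1) projections."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.choice([-1.0, 1.0], size=(n_features, degree, d))
    Z = np.ones((n, n_features))
    for t in range(degree):
        Z *= X @ W[:, t, :].T          # multiply in one projection per factor
    return Z / np.sqrt(n_features)
```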

Receptive Field Weighted Regression

A constructive, incremental learning system for regression problems that models data by means of spatially localized linear models; it illustrates the potential of purely local learning and offers a powerful approach to learning with receptive fields.
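
A minimal sketch of prediction with spatially localized linear models: each receptive field pairs a Gaussian weighting with its own affine fit, and predictions blend the local models by activation strength. Fixed centers, a shared bandwidth, and the batch weighted least-squares fit are simplifications here; the actual system learns incrementally.

```python
import numpy as np

def fit_receptive_fields(X, y, centers, bandwidth=1.0):
    """Fit one locally weighted affine model per receptive field center."""
    A = np.column_stack([X, np.ones(len(X))])
    models = []
    for c in centers:
        w = np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * bandwidth ** 2))
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        models.append((c, beta))
    return models

def predict(models, X, bandwidth=1.0):
    """Blend local predictions by normalized receptive-field activation."""
    A = np.column_stack([X, np.ones(len(X))])
    acts = np.stack([np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * bandwidth ** 2))
                     for c, _ in models])
    preds = np.stack([A @ beta for _, beta in models])
    return (acts * preds).sum(axis=0) / acts.sum(axis=0)
```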

Using feature transformation and selection with polynomial networks

This work describes a novel method that allows quick reduction of dimensionality using an FFT and shows how random dimension reduction can be used to effectively control model complexity.

Wavelet neural networks and receptive field partitioning

The use of wavelet functions as basis functions is proposed; a sparse wavelet network is constructed by including and positioning wavelets from increasing levels of resolution so as to maximize the classification score.

Constructive Induction of Cartesian Product Attributes

This work describes the construction of new attributes that are the Cartesian product of existing attributes, considers the effects of this operator on three learning algorithms, and compares two methods for determining when to construct new attributes with it.
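
The operator itself is simple to state; a small illustration follows (the attribute names and data are made up):

```python
def cartesian_product_attribute(examples, a, b):
    """Add an attribute whose values are pairs of the values of a and b."""
    for ex in examples:
        ex[f"{a}_x_{b}"] = (ex[a], ex[b])
    return examples

data = [{"color": "red", "shape": "square"},
        {"color": "blue", "shape": "circle"}]
print(cartesian_product_attribute(data, "color", "shape"))
# each example gains 'color_x_shape', e.g. ('red', 'square')
```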

Developing higher-order networks with empirically selected units

A class of simple polynomial neural network classifiers, called mask perceptrons, is introduced; it relies on ordering input attributes by their potential usefulness and on heuristic-driven generation and selection of hidden units to combat the exponential explosion in the number of higher-order monomial terms to choose from.

Evolution of functional link networks

This paper relies on the global search capabilities of a genetic algorithm to scan the space of subsets of polynomial units and finds that surprisingly simple FLNs compare favorably with more complex architectures derived by means of constructive and evolutionary algorithms on several UCI benchmark data sets.
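
A toy sketch of that kind of search: a genetic algorithm over bit-masks selecting which polynomial units enter a functional link network, scored by a least-squares fit plus a size penalty. The population size, operators, and fitness function are illustrative choices, not the paper's.

```python
import itertools
import numpy as np

def fln_ga(X, y, max_degree=2, pop=20, gens=30, seed=0):
    """Evolve a bit-mask over candidate polynomial units for an FLN."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    units = [c for deg in range(1, max_degree + 1)
             for c in itertools.combinations_with_replacement(range(d), deg)]
    Phi = np.column_stack([np.prod(X[:, u], axis=1) for u in units])

    def fitness(mask):
        if not mask.any():
            return np.inf
        coef, *_ = np.linalg.lstsq(Phi[:, mask], y, rcond=None)
        mse = np.mean((Phi[:, mask] @ coef - y) ** 2)
        return mse + 0.01 * mask.sum()       # prefer smaller unit subsets

    P = rng.random((pop, len(units))) < 0.3  # initial random masks
    for _ in range(gens):
        order = np.argsort([fitness(m) for m in P])
        parents = P[order[: pop // 2]]
        kids = parents ^ (rng.random(parents.shape) < 0.05)   # mutation
        P = np.vstack([parents, kids])
    return units, min(P, key=fitness)
```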

References

SHOWING 1-10 OF 11 REFERENCES

Learning Polynomial Functions by Feature Construction

A tree-structured adaptive network for function approximation in high-dimensional spaces

  • T. Sanger
  • Computer Science
    IEEE Trans. Neural Networks
  • 1991
The author proposes a technique based on the idea that, for most of the data, only a few dimensions of the input may be necessary to compute the desired output function; the technique can also reduce the number of required measurements in situations where there is a cost associated with sensing.

Fast Parallel Algorithms for Sparse Multivariate Polynomial Interpolation over Finite Fields

This algorithm yields the first efficient deterministic polynomial-time algorithm (and, moreover, a boolean NC algorithm) for interpolating t-sparse polynomials over finite fields, which should be contrasted with the fact that efficient interpolation using a black box that only evaluates the polynomial at points in GF[q] is not possible.

Basis-Function Trees as a Generalization of Local Variable Selection Methods

A tree-structured network is presented which is a generalization of local variable selection and other techniques used in several statistical methods, including CART, ID3, C4, MARS, and others.

Polynomial Theory of Complex Systems

The approach taken in this paper to approximating the decision hypersurface, and hence the input-output relationship of a complex system, is to fit a high-degree multinomial to the input properties using a multilayered perceptron-like network structure.
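
A compressed sketch of this GMDH-style construction: each layer fits a quadratic polynomial to every pair of outputs from the previous layer and keeps the few that validate best, so a high-degree multinomial is assembled from low-degree pieces. The selection size and scoring below are simplified assumptions.

```python
import itertools
import numpy as np

def gmdh_layer(Z, y, Z_val, y_val, keep=4):
    """One GMDH-style layer: quadratic fits on pairs, keep the best few."""
    outs, outs_val, scores = [], [], []
    for i, j in itertools.combinations(range(Z.shape[1]), 2):
        def design(M):
            zi, zj = M[:, i], M[:, j]
            return np.column_stack([np.ones_like(zi), zi, zj,
                                    zi * zj, zi ** 2, zj ** 2])
        beta, *_ = np.linalg.lstsq(design(Z), y, rcond=None)
        pred_val = design(Z_val) @ beta
        scores.append(np.mean((pred_val - y_val) ** 2))   # validation error
        outs.append(design(Z) @ beta)
        outs_val.append(pred_val)
    best = np.argsort(scores)[:keep]
    return (np.column_stack([outs[b] for b in best]),
            np.column_stack([outs_val[b] for b in best]))
```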

Sequential GMDH Algorithm and Its Application to River Flow Prediction

Numerical comparisons are performed between the "sequential GMDH" prediction model and elaborate hydrologic methods, showing that the newly introduced prediction algorithm improves real-time computation.

A universal nonlinear filter, predictor and simulator which optimizes itself by a learning process

A machine is described consisting of a universal non-linear filter (a highly adaptable analogue computer) together with a training device; it incorporates 80 analogue multipliers of a novel ‘piezomagnetic’ type and, in its present form, can perform over 1000 multiplications per second with an error of 0.5% or less.

Record, Part

Adaptive learning networks: Development and application in the United States of algorithms related to GMDH

  • Self-Organizing Methods in Modeling
  • 1984

Fast parallel algorithms for sparse polynomial interpolation over finite fields

  • SIAM J. Computing
  • 1990