# Reconciled Polynomial Machine: A Unified Representation of Shallow and Deep Learning Models

@article{Zhang2018ReconciledPM, title={Reconciled Polynomial Machine: A Unified Representation of Shallow and Deep Learning Models}, author={Jiawei Zhang and Limeng Cui and Fisher B. Gouza}, journal={ArXiv}, year={2018}, volume={abs/1805.07507} }

In this paper, we introduce a new machine learning model, the reconciled polynomial machine, which provides a unified representation of existing shallow and deep machine learning models. The reconciled polynomial machine predicts the output by computing the inner product of a feature kernel function and a variable reconciling function. Analysis of several concrete models, including Linear Models, FM, MVM, Perceptron, MLP and Deep Neural Networks, will be provided in this paper, which…
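The prediction rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the degree-2 monomial kernel (without cross terms) and the identity reconciling function below are assumptions chosen so that zeroing the non-linear coefficients recovers a plain linear model, which is the unification idea in miniature.

```python
import numpy as np

def feature_kernel(x):
    # g(x): concatenate degree-0, 1, 2 monomials of x (no cross terms, for brevity)
    return np.concatenate([np.ones(1), x, x ** 2])

def variable_reconciler(w):
    # f(w): identity here; a deep model would reconcile several weight matrices
    # into a single coefficient vector of matching length
    return w

def rpm_predict(x, w):
    # y_hat = <g(x), f(w)>, the inner product of the kernel and reconciled weights
    return float(np.dot(feature_kernel(x), variable_reconciler(w)))

x = np.array([1.0, 2.0])
w = np.zeros(5)
w[1:3] = [0.5, -0.25]           # only the linear block is active
print(rpm_predict(x, w))        # 0.5*1.0 + (-0.25)*2.0 = 0.0
```

With all bias and quadratic coefficients set to zero, the model reduces to the linear predictor 0.5·x₁ − 0.25·x₂; activating the quadratic block yields a degree-2 polynomial machine instead.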


## References

Showing 1-10 of 21 references

Why Does Deep and Cheap Learning Work So Well?

- Computer Science, ArXiv
- 2016

It is argued that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one.

When and Why Are Deep Networks Better Than Shallow Ones?

- Computer Science, AAAI
- 2017

This theorem proves an old conjecture by Bengio on the role of depth in networks, characterizing precisely the conditions under which it holds, and suggests possible answers to the puzzle of why high-dimensional deep networks trained on large training sets often do not seem to overfit.

Greedy Layer-Wise Training of Deep Networks

- Computer Science
- 2007

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.

The power of deeper networks for expressing natural functions

- Computer Science, ICLR
- 2018

It is proved that the total number of neurons required to approximate natural classes of multivariate polynomials of $n$ variables grows only linearly with $n$ for deep neural networks, but grows exponentially when merely a single hidden layer is allowed.

Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion

- Computer Science, J. Mach. Learn. Res.
- 2010

This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.

A Fast Learning Algorithm for Deep Belief Nets

- Computer Science, Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

ImageNet classification with deep convolutional neural networks

- Computer Science, Commun. ACM
- 2012

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

Neural Networks for Optimal Approximation of Smooth and Analytic Functions

- Mathematics, Computer Science, Neural Computation
- 1996

We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation…

Support-Vector Networks

- Computer Science, Machine Learning
- 2004

High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

Higher rank Support Tensor Machines for visual recognition

- Computer Science, Pattern Recognit.
- 2012