Some Theorems for Feed Forward Neural Networks

@article{Eswaran2015SomeTF,
  title={Some Theorems for Feed Forward Neural Networks},
  author={Kumar Eswaran and Vishwajeet Singh},
  journal={ArXiv},
  year={2015},
  volume={abs/1509.05177}
}
This paper introduces a new method that employs the concept of “Orientation Vectors” to train a feed-forward neural network. It is shown that this method is suitable for problems involving large dimensions and characteristically sparse clusters; for such cases, the method does not become NP-hard as the problem size increases. We ‘derive’ the present technique by starting from Kolmogorov’s method and then relaxing some of its stringent conditions. It is shown that for most…
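The paper's full construction is not reproduced in this summary. As a rough, hypothetical sketch of the general flavor (the function names and the pairwise-centroid scheme below are illustrative assumptions, not the authors' exact algorithm), separating hyperplanes can be placed directly from cluster geometry, with the unit vector between centroids playing the role of an orientation vector, instead of being found by iterative weight optimization:

```python
import numpy as np

def pairwise_planes(centroids):
    """Toy sketch: place one hyperplane between every pair of cluster
    centroids. The unit vector from one centroid to the other acts as
    the plane's normal (an "orientation vector") and the plane passes
    through their midpoint, so no iterative training is involved."""
    c = np.asarray(centroids, float)
    planes = []
    for i in range(len(c)):
        for j in range(i + 1, len(c)):
            w = c[j] - c[i]
            w /= np.linalg.norm(w)          # orientation vector
            mid = (c[i] + c[j]) / 2.0
            planes.append((w, -w @ mid))    # plane: w.x + b = 0
    return planes

def signature(x, planes):
    """Binary code of a point, one bit per plane; for well-separated
    (sparse) clusters the code identifies the cluster directly."""
    return tuple(int(w @ x + b > 0) for w, b in planes)
```

Because each plane here is computed in closed form, the cost grows polynomially with the number of clusters, which gives a sense of how a non-iterative construction can sidestep NP-hardness for sparse problems.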
Learning Discriminative Features using Encoder-Decoder type Deep Neural Nets
TLDR
This paper presents a novel way of learning discriminative features by training Deep Neural Nets that have an Encoder-Decoder type architecture similar to an Autoencoder, and demonstrates that the learned features perform better at pattern classification tasks when the number of training samples is relatively small.
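A minimal sketch of the general recipe (PyTorch is used for brevity; the layer sizes, toy data, and training loop are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Train an encoder-decoder to reconstruct unlabeled inputs, then reuse
# the encoder's bottleneck output as features for a downstream
# classifier trained on few labeled samples.
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))
autoencoder = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, 784)                 # stand-in unlabeled batch
for _ in range(100):                    # reconstruction pretraining
    opt.zero_grad()
    loss = loss_fn(autoencoder(x), x)
    loss.backward()
    opt.step()

features = encoder(x).detach()          # reusable discriminative features
```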
Calibration Method of Magnetometer Based on BP Neural Network
TLDR
The Levenberg-Marquardt backpropagation training method is used to improve training speed and prediction accuracy; on-orbit calibration of the magnetometer is realized through online training of the neural network, which reduces the influence of model error on calibration accuracy.
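For context, a single Levenberg-Marquardt update for a least-squares fit looks like the following (a generic sketch; `residual` and `jacobian` are hypothetical callables, and the paper's calibration model is not reproduced here):

```python
import numpy as np

def lm_step(residual, jacobian, params, lam):
    """One Levenberg-Marquardt update:
    delta = -(J^T J + lam * I)^(-1) J^T r.
    Small lam behaves like Gauss-Newton, large lam like gradient
    descent; lam is raised or lowered between steps depending on
    whether the step reduced the residual norm."""
    r = residual(params)        # residual vector at current params
    J = jacobian(params)        # Jacobian of r w.r.t. params
    A = J.T @ J + lam * np.eye(len(params))
    return params + np.linalg.solve(A, -J.T @ r)
```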
Data-Driven Models for Gas Turbine Online Diagnosis
TLDR
To compute fault parameters within GPA (gas path analysis), this paper proposes employing a nonlinear data-driven model together with the theory of inverse problems, which drastically simplifies gas turbine diagnosis, and selects the best approximation technique for such a model.
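A hedged sketch of the inverse-problem formulation (the surrogate model, measurement values, and parameter count below are stand-ins, not the paper's data): given a data-driven forward model from fault parameters to expected gas-path measurements, diagnosis amounts to finding the parameters whose predictions match the observed readings.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(theta):
    """Stand-in data-driven surrogate mapping fault parameters to
    expected gas-path measurements."""
    return np.array([theta[0] + 0.5 * theta[1],
                     theta[0] * theta[1],
                     np.exp(0.1 * theta[0])])

observed = np.array([1.2, 0.35, 1.05])      # stand-in sensor readings

# Inverse problem: fault parameters that best explain the observations.
fit = least_squares(lambda t: forward(t) - observed, x0=np.zeros(2))
print("estimated fault parameters:", fit.x)
```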

References

SHOWING 1-10 OF 70 REFERENCES
An introduction to computing with neural nets
TLDR
This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Learning representations by back-propagating errors
TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the hidden units come to represent important features of the task domain.
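A minimal NumPy sketch of the idea for one hidden layer (toy data, initialization scale, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                  # toy inputs
Y = rng.normal(size=(32, 1))                  # toy targets
W1 = 0.5 * rng.normal(size=(4, 8))            # hidden-layer weights
W2 = 0.5 * rng.normal(size=(8, 1))            # output-layer weights

for _ in range(500):
    h = np.tanh(X @ W1)                       # forward pass
    y_hat = h @ W2
    err = y_hat - Y                           # gradient of squared error
    grad_W2 = h.T @ err                       # chain rule at output layer
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h**2))  # error propagated back
    W1 -= 0.01 * grad_W1 / len(X)             # gradient-descent updates
    W2 -= 0.01 * grad_W2 / len(X)
```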
Support-Vector Networks
TLDR
High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared with various classical learning algorithms that took part in a benchmark study of Optical Character Recognition.
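For illustration, a polynomial-kernel support-vector classifier via scikit-learn (the digits dataset here stands in for the OCR benchmark used in the paper):

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Support-vector network with a degree-3 polynomial kernel, mirroring
# the polynomial input transformations discussed in the paper.
X, y = datasets.load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = svm.SVC(kernel="poly", degree=3, C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```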
A Mirroring Theorem and its Application to a New Method of Unsupervised Hierarchical Pattern Classification
TLDR
The Mirroring Theorem proves that this technique will always work, provided the ensemble of samples contains sufficient information for it to be classified and sub-classified, and certain continuity conditions on the mappings are satisfied.
Learning Deep Architectures for AI
TLDR
The motivations and principles of learning algorithms for deep architectures are discussed, in particular those exploiting unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks for constructing deeper models such as Deep Belief Networks.
A new Hierarchical Pattern Recognition method using Mirroring Neural Networks
TLDR
A hierarchical classifier is proposed, consisting of an organized set of "blocks", each of which is a module that performs feature extraction and an associated classification; such classifiers should be very useful in the development of efficient and powerful self-learning machines.
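A hypothetical sketch of such a block structure (class and method names are invented for illustration, not taken from the paper):

```python
class Block:
    """One node of a hierarchical classifier: compress the input
    (feature extraction), make a local classification, and hand the
    features to the matching child block until a leaf label remains."""

    def __init__(self, extractor, classifier, children=None):
        self.extractor = extractor      # e.g. a trained encoder function
        self.classifier = classifier    # local cluster/label decision
        self.children = children or {}  # sub-blocks keyed by local label

    def predict(self, x):
        features = self.extractor(x)
        label = self.classifier(features)
        child = self.children.get(label)
        return child.predict(features) if child else label
```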
A Numerical Implementation of Kolmogorov's Superpositions (D. Sprecher, Neural Networks, 1996)
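For context, the superposition theorem that Sprecher implements numerically states that every continuous function on the n-cube can be written exactly as a two-stage composition of one-variable functions:

```latex
% Kolmogorov's superposition theorem: every continuous
% f : [0,1]^n -> R has the exact representation
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
% where the inner functions phi_{q,p} are continuous, one-variable,
% and independent of f; only the outer functions Phi_q depend on f.
```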
On computational algorithms for real-valued continuous functions of several variables
A non iterative method of separation of points by planes in n dimensions and its application
TLDR
Given a set of N points, an algorithm is presented that separates these points from one another by planes in n dimensions; it strictly follows Shannon's principle of making optimal use of information as it advances stage by stage.
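The paper's construction is deterministic and non-iterative; the toy sketch below only illustrates why on the order of log2(N) separating planes can suffice (the random median splits and the assumption of distinct points are features of this illustration, not of the paper's method):

```python
import numpy as np

def separate_by_planes(points, seed=0):
    """Add hyperplanes until every point has a unique binary signature.
    A plane through the data median along a random direction roughly
    halves each group, so about log2(N) planes suffice, matching the
    information-theoretic (Shannon) lower bound. Assumes distinct
    points in general position."""
    points = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    planes, codes = [], {i: () for i in range(len(points))}
    while len(set(codes.values())) < len(points):
        w = rng.normal(size=points.shape[1])       # random orientation
        b = -np.median(points @ w)                 # bisecting offset
        planes.append((w, b))
        codes = {i: codes[i] + (int(points[i] @ w + b > 0),)
                 for i in range(len(points))}
    return planes, codes
```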