Randomness in neural networks: an overview

@article{Scardapane2017RandomnessIN,
  title={Randomness in neural networks: an overview},
  author={Simone Scardapane and Dianhui Wang},
  journal={Wiley Interdiscip. Rev. Data Min. Knowl. Discov.},
  year={2017},
  volume={7}
}
Neural networks, as powerful tools for data mining and knowledge engineering, can learn from data to build feature-based classifiers and nonlinear predictive models. Training neural networks involves the optimization of nonconvex objective functions, and usually, the learning process is costly and infeasible for applications associated with data streams. A possible, albeit counterintuitive, alternative is to randomly assign a subset of the networks’ weights so that the resulting optimization…
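The random-weight idea sketched in the abstract can be illustrated with a minimal random-feature regression in the style of extreme learning machines / random feature maps: the hidden-layer weights are drawn at random and frozen, so the only trained parameters are the linear readout weights, which have a closed-form least-squares solution. This is a generic sketch of the family of methods the paper surveys, not code from the paper; all variable names and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(3x) from noisy samples.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

# Step 1: randomly assign the hidden-layer weights and keep them fixed.
n_hidden = 100
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # random nonlinear feature map, shape (200, n_hidden)

# Step 2: train only the linear readout, via regularized least squares.
# This replaces nonconvex gradient-based training with one convex solve.
lam = 1e-6
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

pred = H @ beta
mse = np.mean((pred - y) ** 2)
```

Because the readout is the solution of a convex problem, training reduces to a single linear solve, which is the source of the speed advantage on data streams mentioned in the abstract.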

Citations

Publications citing this paper.
Showing 2 of 33 extracted citations

Blessing of dimensionality: mathematical foundations of the statistical physics of data

Philosophical transactions. Series A, Mathematical, physical, and engineering sciences • 2018

Deep Stochastic Configuration Networks with Universal Approximation Property

2018 International Joint Conference on Neural Networks (IJCNN) • 2018

References

Publications referenced by this paper.
Showing 3 of 106 references

Compact Random Feature Maps

Highly Influenced

Decoupled echo state networks with lateral inhibition

Neural Networks • 2007
Highly Influenced

Nonlinear System Modeling With Random Matrices: Echo State Networks Revisited

IEEE Transactions on Neural Networks and Learning Systems • 2012
Highly Influenced
