Corpus ID: 2603477

The observer-assisted method for adjusting hyper-parameters in deep learning algorithms

@article{Wielgosz2016TheOM,
  title={The observer-assisted method for adjusting hyper-parameters in deep learning algorithms},
  author={M. Wielgosz},
  journal={ArXiv},
  year={2016},
  volume={abs/1611.10328}
}
This paper presents the concept of a novel method for adjusting hyper-parameters in Deep Learning (DL) algorithms. An external agent-observer monitors the performance of a selected Deep Learning algorithm. The observer learns to model the DL algorithm using a series of random experiments. Consequently, it can be used to predict the response of the DL algorithm, in terms of a selected quality measure, to a given set of hyper-parameters. This makes it possible to construct an ensemble composed of a series of…
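The abstract is truncated above, so the following is only a minimal sketch of one reading of the idea, not the author's implementation. Every name in it (sample_hyperparams, run_dl_experiment, the RandomForestRegressor used as the observer, and the search space itself) is an illustrative assumption: an observer regressor is fitted on randomly sampled hyper-parameter configurations paired with the quality score the monitored DL algorithm achieved, and is then used to predict the response to unseen configurations without running them.

# Hypothetical sketch of the observer-assisted idea described in the abstract.
# An external "observer" model is fitted on random (hyper-parameters -> quality)
# experiments and then predicts the DL algorithm's response to new settings.
# None of these names come from the paper; run_dl_experiment is a stand-in for
# training and evaluating the monitored Deep Learning algorithm.
import random
from sklearn.ensemble import RandomForestRegressor

def sample_hyperparams():
    """Draw one random hyper-parameter configuration (assumed search space)."""
    return {
        "learning_rate": 10 ** random.uniform(-5, -1),
        "hidden_units": random.choice([64, 128, 256, 512]),
        "dropout": random.uniform(0.0, 0.6),
    }

def run_dl_experiment(hp):
    """Placeholder: train the monitored DL model with `hp` and return a
    quality measure (e.g. validation accuracy). Replaced here by a toy score."""
    return 1.0 / (1.0 + abs(hp["learning_rate"] - 1e-3) * 100) - 0.1 * hp["dropout"]

def encode(hp):
    """Turn a configuration into a numeric feature vector for the observer."""
    return [hp["learning_rate"], hp["hidden_units"], hp["dropout"]]

# 1. Random experiments: observe how the DL algorithm responds.
trials = [sample_hyperparams() for _ in range(30)]
scores = [run_dl_experiment(hp) for hp in trials]

# 2. The observer learns a model of the DL algorithm's response.
observer = RandomForestRegressor(n_estimators=100, random_state=0)
observer.fit([encode(hp) for hp in trials], scores)

# 3. The observer predicts quality for new candidate settings, so promising
#    hyper-parameters can be selected without running every experiment.
candidates = [sample_hyperparams() for _ in range(1000)]
predicted = observer.predict([encode(hp) for hp in candidates])
best = candidates[max(range(len(candidates)), key=lambda i: predicted[i])]
print("most promising configuration according to the observer:", best)

Several such observers, each fitted on its own series of random experiments, could then be combined into the ensemble the abstract alludes to.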
1 Citation
The prototype of the HL-LHC magnets monitoring system based on Recurrent Neural Networks and adaptive quantization
TLDR: The proposed detection solution significantly outperformed the OC-SVM-based detector in most cases, with much more stable performance across all datasets.

References

Showing 1-10 of 12 references
Practical Recommendations for Gradient-Based Training of Deep Architectures
  • Yoshua Bengio
  • Computer Science
  • Neural Networks: Tricks of the Trade
  • 2012
TLDR: This chapter describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep multi-layer neural networks, and closes with open questions about the training difficulties observed with deeper architectures.
Optimizing deep learning hyper-parameters through an evolutionary algorithm
TLDR: Multi-node Evolutionary Neural Networks for Deep Learning is proposed as a method for automating network selection on computational clusters through hyper-parameter optimization performed via genetic algorithms.
A Framework for Selecting Deep Learning Hyper-parameters
TLDR: This work provides a framework for building deep learning architectures via a stepwise approach, together with an evaluation methodology to quickly identify poorly performing architectural configurations, using a dataset with high dimensionality.
Hyper-parameter optimization of deep convolutional networks for object recognition
  • S. Talathi
  • Computer Science
  • 2015 IEEE International Conference on Image Processing (ICIP)
  • 2015
TLDR: This work investigates SMBO to identify architecture hyper-parameters of deep convolutional networks (DCNs) for object recognition, using a simple SMBO strategy that starts from a set of random initial DCN architectures and generates new architectures which, after training, perform well on a given dataset.
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR: This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation, and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
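For context, a minimal sketch of the general sequential model-based optimization loop behind such methods might look as follows. The Gaussian-process surrogate, the expected-improvement criterion evaluated over a random candidate pool, and the toy one-dimensional objective are all illustrative assumptions, not the paper's implementation (which, among other things, models experiment cost and exploits parallel evaluation).

# Assumed sketch of a sequential model-based (Bayesian) optimization loop:
# a Gaussian process models the score surface over one hyper-parameter and
# expected improvement selects the next trial. The objective is a toy stand-in.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate(lr_exp):
    """Toy objective over log10(learning rate); higher is better."""
    return -((lr_exp + 3.0) ** 2) + np.random.normal(scale=0.05)

rng = np.random.default_rng(0)
X = rng.uniform(-5, -1, size=(3, 1))           # a few random initial trials
y = np.array([evaluate(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    cand = rng.uniform(-5, -1, size=(256, 1))   # random candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate(x_next[0]))

print("best log10(learning rate) found:", X[np.argmax(y)][0])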
Brief Introduction of Back Propagation (BP) Neural Network Algorithm and Its Improvement
TLDR: This paper focuses on the analysis of the characteristics and mathematical theory of the BP neural network, and also points out the shortcomings of the BP algorithm as well as several methods for improvement.
Visualizing and Understanding Convolutional Networks
TLDR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Using Spatial Pooler of Hierarchical Temporal Memory for object classification in noisy video streams
TLDR: The experimental results reveal the superiority of the solution featuring the Spatial Pooler over the one without it; the system with SP performed better in the experiment without an introduced noise component and achieved a mean F1-score of 0.91 over ten trials.
Random Search for Hyper-Parameter Optimization
TLDR: This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid, and that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.
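A hedged toy comparison of the two strategies summarized above; the search space, trial budget, and objective are invented for the example and do not come from the paper. The point it illustrates is that, for the same budget, random search explores many more distinct values of each individual hyper-parameter than a fixed grid does.

# Illustration (assumed setup): grid search vs. random search over two
# hyper-parameters with an equal budget of 16 trials.
import itertools
import random

def objective(hp):
    """Toy stand-in for 'train the model and return validation accuracy'."""
    return 1.0 - abs(hp["log_lr"] + 3.0) / 4.0 - 0.05 * abs(hp["dropout"] - 0.3)

# Grid search: 4 x 4 = 16 trials, but only 4 distinct values per parameter.
grid = [{"log_lr": lr, "dropout": dp}
        for lr, dp in itertools.product([-5, -4, -3, -2], [0.0, 0.2, 0.4, 0.6])]
best_grid = max(grid, key=objective)

# Random search: 16 trials, 16 distinct values per parameter.
random.seed(0)
rand = [{"log_lr": random.uniform(-5, -1), "dropout": random.uniform(0.0, 0.6)}
        for _ in range(16)]
best_rand = max(rand, key=objective)

print("grid best:  ", best_grid, objective(best_grid))
print("random best:", best_rand, objective(best_rand))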
How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites
TLDR: The theory provides a mathematical framework for understanding the benefits and limits of sparse representations in cortical networks, and shows that pattern recognition by a neuron with active dendrites can be extremely accurate and robust with high-dimensional sparse inputs, even when using a tiny number of synapses to recognize large patterns.