Book Review: Deep Learning

@article{Kim2016BookRD,
  title={Book Review: Deep Learning},
  author={Kwang Gi Kim},
  journal={Healthcare Informatics Research},
  year={2016},
  volume={22},
  pages={351--354}
}
  • K. Kim
  • Published 1 October 2016
  • Computer Science
  • Healthcare Informatics Research
This book offers a solution to more intuitive problems in these areas. These solutions allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined in terms of its relationship to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to specify formally all of the knowledge needed by the computer. The hierarchy of concepts allows the computer to learn complicated concepts by… 
Multidimensional Approach Based on Deep Learning to Improve the Prediction Performance of DNN Models
TLDR
This study presents the proposed approach and its algorithms, shows how the deep neural network was modeled in the training phase, and shows how PCA helps eliminate correlated information in the dataset to increase classifier performance.
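As a minimal sketch of that idea (an illustrative assumption, not the paper's actual pipeline), PCA can strip out near-duplicate, correlated features by projecting the data onto orthogonal directions of maximal variance; the reduced features would then feed the DNN classifier:

```python
import numpy as np

# Synthetic data: three features, two of which are nearly exact
# linear copies of the first (i.e., highly correlated).
rng = np.random.default_rng(42)
n = 500
base = rng.normal(size=(n, 1))
X = np.hstack([
    base,
    2 * base + 0.01 * rng.normal(size=(n, 1)),
    -base + 0.01 * rng.normal(size=(n, 1)),
])

Xc = X - X.mean(axis=0)                  # center the data
cov = Xc.T @ Xc / (n - 1)                # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigendecomposition (ascending)
order = np.argsort(eigvals)[::-1]        # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
# Keep the fewest components that explain 99% of the variance.
k = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
Z = Xc @ eigvecs[:, :k]                  # decorrelated, reduced features
```

Because the three inputs carry essentially one signal, a single principal component survives the 99%-variance cut, which is exactly the redundancy elimination the summary describes.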
Image Compression and Classification Using Qubits and Quantum Deep Learning
TLDR
This work proposes a framework for classifying larger, realistic images with quantum systems; it classifies images larger than previously possible, up to 16×16 for the MNIST dataset on a personal laptop, and obtains accuracy comparable to classical neural networks with the same number of learnable parameters.
Using Deep Learning to detect Facial Expression from front camera: Towards students’ interactions analyze
TLDR
The training experiment and the model development, based on two alternatives proposed by IBM where the goal is to generate the most precise model, are described, and the choice of the most accurate solution for deployment in the teaching and learning system is presented.
Logic Learning in Adaline Neural Network
TLDR
This paper uncovered the best logical rule that could be governed in ADNN with the lowest MSE value, which will be beneficial in various fields of knowledge that require immense data-processing effort, such as engineering, healthcare, marketing, and business.
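As a hedged illustration of what logic learning in an ADALINE unit looks like (a minimal sketch under assumed settings, not the paper's ADNN setup), a single linear unit trained with the delta (LMS) rule can fit the AND truth table and report its MSE:

```python
import numpy as np

# AND truth table as training data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # small random initial weights
b = 0.0
lr = 0.1                            # learning rate (assumed value)

for _ in range(200):
    out = X @ w + b                 # ADALINE uses a linear activation
    err = y - out
    w += lr * X.T @ err / len(X)    # delta-rule (LMS) batch update
    b += lr * err.mean()

mse = np.mean((y - (X @ w + b)) ** 2)
```

Thresholding the linear output at 0.5 recovers the AND rule exactly, and the MSE settles near the least-squares optimum (0.0625 for a linear fit to AND), which is the kind of per-rule MSE comparison the summary refers to.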
Landmark Classification Service Using Convolutional Neural Network and Kubernetes
  • Indra Prasetya Aji
  • Computer Science
    International Journal of Advanced Trends in Computer Science and Engineering
  • 2020
TLDR
This study uses the CNN method to classify image datasets with landmark categories and shows that the models achieve accuracies ranging from a low of 92% to a high of 95%.
Effects of Image Size on Deep Learning
This paper evaluates the effects of image size on deep learning performance via semantic segmentation of magnetic resonance heart images with U-Net for fully automated quantification.
A hybrid ensemble deep learning approach for reliable breast cancer detection
TLDR
A comprehensive performance evaluation of the proposed approach, with diverse metrics, shows that employing the LSTM-based regression model improves accuracy and precision metrics of the fine-tuned Xception-based model by 10.65% and 11.6%, respectively.
