From Neural Networks to Deep Learning: zeroing in on the human brain

@article{Laserson2011FromNN,
  title={From Neural Networks to Deep Learning: zeroing in on the human brain},
  author={Jonathan Laserson},
  journal={XRDS},
  year={2011},
  volume={18},
  pages={29--34}
}
Pondering the brain with the help of machine learning expert Andrew Ng and researcher-turned-author-turned-entrepreneur Jeff Hawkins. 

Mind modeling using Transparent Intensional Logic

  • A. Gardon
  • Philosophy, Computer Science
    RASLAN
  • 2011
TLDR
The boundary between processes that can be modeled using AI and those that are purely characteristic of living beings is sketched, and the Theory of Three Layers (TTL) is introduced.

Going Deeper with CKELM

  • Yang Fang, Wenxin Hu
  • Computer Science
    2017 International Conference on Network and Information Systems for Computers (ICNISC)
  • 2017
TLDR
The improved CKELM adds a rank-based pooling strategy and GPU training, and the results show that these pooling methods perform better than traditional ones.
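
The idea behind rank-based pooling can be sketched in a few lines. This is a generic illustration, not the paper's exact formulation: the function name, the geometric decay, and the `alpha` parameter are assumptions chosen to show how ranking activations differs from plain max pooling.

```python
import numpy as np

def rank_pool(window, alpha=0.5):
    # Illustrative rank-based pooling: sort the activations in
    # descending order and average them with geometrically decaying
    # weights, so the pooled value reflects more than just the maximum.
    v = np.sort(window.ravel())[::-1]
    w = alpha ** np.arange(v.size)
    return float((w * v).sum() / w.sum())
```

With `alpha` near 0 this recovers max pooling; with `alpha = 1` it recovers mean pooling, so the rank weighting interpolates between the two traditional schemes.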

DISSECTOR: Input Validation for Deep Learning Applications by Crossing-layer Dissection

TLDR
Dissector is a fault-tolerance approach that distinguishes inputs representing unexpected conditions (beyond-inputs) from normal inputs still within a model's handling capabilities (within-inputs), thus keeping applications functioning with their expected reliability.

Overview of Deep Kernel Learning Based Techniques and Applications

TLDR
An overview of the research progress of deep kernel learning and its applications is presented, introducing the basic theories and the ways they are fused into several deep kernel learning structures that enhance algorithm properties and performance in practice.

AI image recognizing agent through a scalable neural network

TLDR
The proposed image-recognizing agent accomplishes its task through the application of a scalable neural network that combines assisted supervised learning with reinforcement learning, and exploits the asynchronous and parallel computing ability of artificial neural networks.

Research on deep neural network's hidden layers in phoneme recognition

TLDR
Investigating the functions of a DNN's hidden layers in representing speech articulations finds that different layers seem to be responsible for different phoneme groups, according to the place of articulation.

Factored four way conditional restricted Boltzmann machines for activity recognition

CNN with Limit Order Book Data for Stock Price Prediction

TLDR
This work presents an innovative short-term forecasting method for financial time series using Limit Order Book data, which registers all trade intentions from market participants, and deep convolutional neural networks (CNNs), which are good at pattern recognition on images.
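
The core operation such a network applies along the time axis of order-book features is a 1-D convolution. A toy sketch, assuming nothing about the paper's actual architecture (the function name and 'valid' mode are illustrative choices):

```python
import numpy as np

def conv1d_valid(x, k):
    # 'Valid' 1-D cross-correlation: slide kernel k over series x and
    # take the dot product at each offset. This is the basic pattern
    # detector a convolutional layer applies to a time series.
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])
```

For example, the kernel `[1, 0, -1]` responds to local trends in a price series, which is the kind of pattern a learned CNN filter can pick up.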

References

Brain and Visual Perception: The Story of a 25-Year Collaboration

  • Part I: Introduction and Biographies
  • Part II: Background to Our Research
  • Part III: Normal Physiology and Anatomy
  • Part IV: Deprivation and Development
  • Part V: Three Reviews

Unsupervised feature learning for audio classification using convolutional deep belief networks

In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning […]

Emergence of simple-cell receptive field properties by learning a sparse code for natural images

TLDR
It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.
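
The sparse coding objective behind this result, minimizing reconstruction error plus an L1 penalty, can be sketched with iterative soft-thresholding (ISTA). This is a minimal illustration under assumed names and parameters, not the learning algorithm used in the paper (which also adapts the dictionary):

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=100):
    # Infer a sparse code c minimizing ||x - D c||^2 / 2 + lam * ||c||_1
    # by iterative soft-thresholding (ISTA), for a fixed dictionary D.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x)           # gradient of the reconstruction term
        z = c - grad / L                   # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c
```

The soft-threshold step is what drives most coefficients exactly to zero; alternating this inference with dictionary updates on natural image patches is what yields the localized, oriented, bandpass filters the paper describes.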

Learning The Discriminative Power-Invariance Trade-Off

TLDR
This paper investigates the problem of learning optimal descriptors for a given classification task using the kernel learning framework and learns the optimal, domain-specific kernel as a combination of base kernels corresponding to base features which achieve different levels of trade-off.

Visual projections routed to the auditory pathway in ferrets: receptive fields of visual neurons in primary auditory cortex

TLDR
Like cells in normal V1, A1 cells in rewired animals exhibited orientation and direction selectivity and had simple and complex receptive-field organizations. The degree of orientation and direction selectivity, as well as the proportions of simple, complex, and nonoriented cells, were very similar in A1 and V1.

Computational Vision at Caltech


Sparse Coding and Deep Learning tutorial: http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial
Numenta current model: http://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms