Tensor Deep Stacking Networks

@article{Hutchinson2013TensorDS,
  title={Tensor Deep Stacking Networks},
  author={Brian Hutchinson and Li Deng and Dong Yu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2013},
  volume={35},
  pages={1944-1957}
}
A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary ([0,1]) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with… 
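For concreteness, here is a minimal NumPy sketch of a single T-DSN block as the abstract describes it: two sigmoidal hidden layers combined bilinearly by an upper-layer weight tensor, whose unfolding is estimated by a convex, ridge-regularized least-squares subproblem with a closed-form solution. The layer sizes, the regularizer lam, and the random lower-layer weights are illustrative assumptions; the paper also refines the lower-layer weights, which is omitted here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tdsn_block(X, T, n_h1=64, n_h2=64, lam=1e-3, rng=None):
    """One T-DSN block: two hidden layers combined bilinearly by a weight
    tensor, with the upper-layer weights solved in closed form.

    X: (d, N) input batch (columns are examples); T: (c, N) targets.
    Returns predictions Y of shape (c, N) plus the learned parameters.
    """
    rng = np.random.default_rng(rng)
    d, N = X.shape
    # Lower-layer weights; the paper refines these by gradient steps,
    # here they stay at random initialization for brevity.
    W1 = rng.standard_normal((d, n_h1)) * 0.1
    W2 = rng.standard_normal((d, n_h2)) * 0.1
    H1 = sigmoid(W1.T @ X)                      # (n_h1, N)
    H2 = sigmoid(W2.T @ X)                      # (n_h2, N)
    # Bilinear combination: the column-wise Kronecker product turns the
    # weight *tensor* into an ordinary matrix least-squares problem.
    H = np.einsum('in,jn->ijn', H1, H2).reshape(n_h1 * n_h2, N)
    # Convex subproblem: closed-form ridge solution for the unfolded
    # upper-layer tensor U.
    U = np.linalg.solve(H @ H.T + lam * np.eye(H.shape[0]), H @ T.T)
    Y = U.T @ H                                 # (c, N) block prediction
    return Y, (W1, W2, U)
```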

Gated Recurrent Neural Tensor Network

TLDR
This paper proposes a novel RNN architecture that combines a gating mechanism and the tensor product in a single model, and shows that the proposed models significantly improve performance over the baseline models.
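To make the "gating plus tensor product" combination concrete, below is a rough NumPy sketch of one possible cell: a GRU-style update whose candidate state is a bilinear (tensor-product) function of the input and the reset-gated hidden state. The shapes, gate layout, and placement of the tensor term are assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GatedTensorRNNCell:
    """GRU-style gates with a bilinear tensor-product candidate state.
    Illustrative sketch only; not the paper's exact formulation."""

    def __init__(self, n_in, n_hid, rng=None):
        rng = np.random.default_rng(rng)
        s = 0.1
        self.Wz = rng.standard_normal((n_hid, n_in + n_hid)) * s  # update gate
        self.Wr = rng.standard_normal((n_hid, n_in + n_hid)) * s  # reset gate
        # Third-order tensor: one (n_in x n_hid) bilinear form per hidden unit.
        self.T = rng.standard_normal((n_hid, n_in, n_hid)) * s

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                        # update gate
        r = sigmoid(self.Wr @ xh)                        # reset gate
        # Bilinear candidate: c_k = x^T T_k (r * h) for each unit k.
        c = np.tanh(np.einsum('i,kij,j->k', x, self.T, r * h))
        return (1 - z) * h + z * c                       # gated state update
```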

The Tensor Deep Stacking Network Toolkit

TLDR
This paper summarizes the core functionality of the T-DSN toolkit, discusses its design and implementation, and presents a new set of experiments on standard machine learning datasets demonstrating the model's effectiveness.

Tensor classification network

  • Yi-Ting Bao, Jen-Tzung Chien
  • Computer Science
    2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP)
  • 2015
TLDR
A new tensor classification network (TCN) is presented which combines tensor factorization with NN classification for multi-way feature extraction and classification, attaining comparable classification performance with far fewer parameters than an NN classifier.
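One plausible reading of "tensor factorization plus NN classification" is a bilinear classifier whose weight tensor is stored in low-rank factored form; a hypothetical NumPy sketch follows. The CP-style factorization, rank, and dimensions are assumptions, but they show where the parameter saving the TLDR mentions can come from: R*(n1+n2+c) parameters instead of c*n1*n2 for a full tensor.

```python
import numpy as np

def cp_bilinear(h1, h2, P, Q, A):
    """Low-rank bilinear map: y_k = sum_r A[k,r] * (P[:,r].h1) * (Q[:,r].h2).
    Equivalent to a full c x n1 x n2 weight tensor restricted to rank R."""
    return A @ ((P.T @ h1) * (Q.T @ h2))

# Toy usage: n1 = n2 = 128 feature dimensions, c = 10 classes, rank R = 8.
rng = np.random.default_rng(0)
n1, n2, c, R = 128, 128, 10, 8
P = rng.standard_normal((n1, R))
Q = rng.standard_normal((n2, R))
A = rng.standard_normal((c, R))
h1, h2 = rng.standard_normal(n1), rng.standard_normal(n2)
y = cp_bilinear(h1, h2, P, Q, A)   # (10,) class scores
```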

Stacking-Based Deep Neural Network: Deep Analytic Network for Pattern Classification

TLDR
DAN/K-DAN outperform the present S-DNNs and also the BP-trained DNNs, including the multilayer perceptron, deep belief network, etc., without data augmentation, and are trainable using only a CPU even for small-scale training sets.

SVM-based Deep Stacking Networks

TLDR
A novel SVM-based Deep Stacking Network (SVM-DSN) is proposed, which uses the DSN architecture to organize linear SVM classifiers for deep learning; it can iteratively extract data representations layer by layer like a deep neural network, but with parallelizable training.
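The stacking pattern this describes can be sketched with scikit-learn's LinearSVC: each module solves a convex linear-SVM problem on the raw input concatenated with the previous module's decision values. This is an illustrative reading of the architecture, not the paper's full training procedure (any fine-tuning across modules is omitted).

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_svm_dsn(X, y, n_layers=3, C=1.0):
    """DSN-style stacking of linear SVMs: each module sees the raw input
    plus the previous module's decision values, mirroring how a DSN feeds
    each block's predictions to the block above."""
    layers, feats = [], X
    for _ in range(n_layers):
        clf = LinearSVC(C=C).fit(feats, y)   # convex per-module problem
        scores = clf.decision_function(feats)
        if scores.ndim == 1:                 # binary case: one score column
            scores = scores[:, None]
        layers.append(clf)
        feats = np.hstack([X, scores])       # stack: input + module outputs
    return layers

def predict_svm_dsn(layers, X):
    feats = X
    for clf in layers[:-1]:                  # rebuild the stacked features
        scores = clf.decision_function(feats)
        if scores.ndim == 1:
            scores = scores[:, None]
        feats = np.hstack([X, scores])
    return layers[-1].predict(feats)
```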

Visual Representation and Classification by Learning Group Sparse Deep Stacking Network

TLDR
A group sparse DSN (GS-DSN) is constructed by stacking the group sparse SNNM modules and achieves the state-of-the-art performance (99.1%) on 15-Scene.

Deep Neural Factorization for Speech Recognition

TLDR
A spectral-temporal factorized neural network (STFNN) is proposed, and experimental results show the merit of integrating the factorized features into deep feedforward and recurrent neural networks for speech recognition.

Learning Deep Representation Without Parameter Inference for Nonlinear Dimensionality Reduction

TLDR
Experimental results show that the proposed deep model can learn better representations than deep belief networks, while training a much larger network in much less time than deep belief networks.

Acoustic Event Detection Using T-DSN With Reverse Automatic Differentiation

  • Computer Science
  • 2017
The Tensor Deep Stacking Neural Network (T-DSN) enhances the conventional Deep Stacking Network by replacing one or more of its layers with a tensor layer, in which each input vector is projected into…

Tensor Deep Stacking Networks and Kernel Deep Convex Networks for Annotating Natural Scene Images

TLDR
A deep convolutional network is used to extract high-level features from different sub-regions of the images, which are then fed as inputs to these models, the Tensor Deep Stacking Network and the Kernel Deep Convex Network, for the task of automatic image annotation.
...

References

SHOWING 1-10 OF 34 REFERENCES

A deep architecture with bilinear modeling of hidden representations: Applications to phonetic recognition

We develop and describe a novel deep architecture, the Tensor Deep Stacking Network (T-DSN), where multiple blocks are stacked one on top of another and where a bilinear mapping from hidden…

Large Vocabulary Speech Recognition Using Deep Tensor Neural Networks

TLDR
Deep neural networks (DNNs) are extended to deep tensor neural networks (DTNNs), in which one or more layers are double-projection and tensor layers; evaluation on a 30-hour Switchboard task indicates that DTNNs can outperform DNNs with a similar number of parameters, with a 5% relative word error reduction.

Scalable stacking and learning for building deep architectures

  • L. Deng, Dong Yu, John C. Platt
  • Computer Science
    2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2012
TLDR
The Deep Stacking Network (DSN) is presented, which overcomes the difficulty of parallelizing learning algorithms for deep architectures and provides a method of stacking simple processing modules to build deep architectures, with a convex learning problem in each module.
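The per-module convexity is easy to see in code: with the lower-layer weights held fixed, each block's upper-layer weights solve a ridge-regression problem in closed form, and each block's prediction is appended to the input for the block above. A minimal sketch, assuming random (unrefined) lower-layer weights and illustrative sizes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_dsn(X, T, n_blocks=3, n_hid=128, lam=1e-3, rng=None):
    """Plain DSN stacking: each block takes the original input concatenated
    with the previous block's prediction; its upper weights U solve a convex
    ridge-regression subproblem in closed form. Lower weights W are fixed
    at random here; the paper also refines them.
    X: (N, d) inputs, T: (N, c) targets."""
    rng = np.random.default_rng(rng)
    blocks, feats = [], X
    for _ in range(n_blocks):
        W = rng.standard_normal((feats.shape[1], n_hid)) * 0.1
        H = sigmoid(feats @ W)                           # (N, n_hid)
        # Convex subproblem: U = argmin ||H U - T||^2 + lam ||U||^2
        U = np.linalg.solve(H.T @ H + lam * np.eye(n_hid), H.T @ T)
        Y = H @ U                                        # block prediction
        blocks.append((W, U))
        feats = np.hstack([X, Y])                        # stack input + output
    return blocks, Y
```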

Deep Convex Net: A Scalable Architecture for Speech Pattern Classification

TLDR
Results on both the MNIST and TIMIT tasks evaluated thus far demonstrate the superior performance of the DCN over its DBN (Deep Belief Network) counterpart, which forms the basis of the DNN, reflected not only in training scalability and CPU-only computation but, more importantly, in classification accuracy on both tasks.

Deep Convex Networks for Image and Speech Classification

TLDR
Experimental results on handwriting image recognition task (MNIST) and on phone state classification (TIMIT) demonstrate superior performance of DCN over DBN not only in training efficiency but also in classification accuracy.

Building high-level features using large scale unsupervised learning

TLDR
Contrary to what appears to be a widely-held intuition, the experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.

Flexible, High Performance Convolutional Neural Networks for Image Classification

We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a…

Accelerated Parallelizable Neural Network Learning Algorithm for Speech Recognition

TLDR
A set of novel batch-mode algorithms developed recently is described as one key component in scalable, deep neural network based speech recognition; the key idea is to structure the single-hidden-layer neural network so that the upper layer's weights can be written as a deterministic function of the lower layer's weights.
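Concretely, with hidden activations H = σ(WᵀX) and targets T, this constraint takes the familiar ridge/pseudoinverse form U(W) = (H Hᵀ + λI)⁻¹ H Tᵀ, so the squared-error objective becomes a function of the lower-layer weights W alone and the batch-mode updates need only optimize W. (The regularizer λ is a common stabilizer added here for illustration; the paper's exact regularization may differ.)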

Phone Recognition with the Mean-Covariance Restricted Boltzmann Machine

TLDR
This work uses the mean-covariance restricted Boltzmann machine (mcRBM) to learn features of speech data that serve as input into a standard DBN, and achieves a phone error rate superior to all published results on speaker-independent TIMIT to date.

Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition

TLDR
A pre-trained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to produce a distribution over senones (tied triphone states) as its output that can significantly outperform the conventional context-dependent Gaussian mixture model (GMM)-HMMs.