Corpus ID: 8432110

Temporal Embedding in Convolutional Neural Networks for Robust Learning of Abstract Snippets

@article{Liu2015TemporalEI,
  title={Temporal Embedding in Convolutional Neural Networks for Robust Learning of Abstract Snippets},
  author={Jiajun Liu and Kun Zhao and Branislav Kusy and Ji-Rong Wen and Raja Jurdak},
  journal={ArXiv},
  year={2015},
  volume={abs/1502.05113}
}
The prediction of periodical time-series remains challenging due to various types of data distortions and misalignments. [...] Our model uses convolutional neural networks and embeds a time-series with its potential neighbors in the temporal domain to align it to the dominant patterns in the dataset. The model is robust to distortions and misalignments in the temporal domain and demonstrates strong prediction power for periodical time-series. We conduct extensive experiments and discover that …
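As a rough, hypothetical illustration of the embedding idea described above (not the authors' published architecture), the sketch below stacks a snippet with temporally shifted copies of itself as extra input channels to a small 1D CNN, so that slightly misaligned periodic patterns can still line up with a learned filter. All layer sizes and the helper name `embed_with_neighbors` are assumptions.

```python
import torch
import torch.nn as nn

class TemporalEmbeddingCNN(nn.Module):
    """Hypothetical sketch: a 1D CNN whose input is a snippet plus
    temporally shifted neighbor copies, stacked as channels."""
    def __init__(self, n_neighbors=2, snippet_len=48, n_filters=16):
        super().__init__()
        in_channels = 1 + 2 * n_neighbors           # snippet + shifted copies
        self.conv = nn.Conv1d(in_channels, n_filters, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(2)
        self.head = nn.Linear(n_filters * (snippet_len // 2), snippet_len)

    def forward(self, x):                            # x: (batch, channels, len)
        h = self.pool(torch.relu(self.conv(x)))
        return self.head(h.flatten(1))               # predict the next snippet

def embed_with_neighbors(series, start, snippet_len=48, n_neighbors=2, shift=1):
    """Stack a snippet with its shifted temporal neighbors as channels.
    Assumes start >= n_neighbors * shift so all slices are in range."""
    offsets = range(-n_neighbors * shift, n_neighbors * shift + 1, shift)
    rows = [series[start + o : start + o + snippet_len] for o in offsets]
    return torch.stack(rows).unsqueeze(0)            # (1, channels, snippet_len)
```

Feeding shifted copies as input channels is only one plausible reading of "embeds a time-series with its potential neighbors"; the paper's exact scheme may differ.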
Citations

Traditional learning approaches over local trends of time series mainly make use of Hidden Markov Models (HMMs) …
Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real …
Hybrid Neural Networks for Learning the Trend in Time Series
TLDR
TreNet is proposed, a novel end-to-end hybrid neural network that learns local and global contextual features for predicting the trend of time series; it outperforms CNN, LSTM, the cascade of CNN and LSTM, a Hidden Markov Model based method, and various kernel-based baselines on real datasets.
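A minimal sketch of the CNN/LSTM fusion the TLDR describes, with all dimensions and the class name `TreNetSketch` assumed: a CNN branch extracts features from raw local points, an LSTM branch summarizes the sequence of historical trends, and the concatenated features predict the next trend as a (slope, duration) pair. This is not the authors' implementation.

```python
import torch
import torch.nn as nn

class TreNetSketch(nn.Module):
    """Hypothetical hybrid: CNN over raw local data + LSTM over past
    trends, fused to predict the next local trend (slope, duration)."""
    def __init__(self, trend_dim=2, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))                  # -> (batch, 8, 1)
        self.lstm = nn.LSTM(trend_dim, hidden, batch_first=True)
        self.head = nn.Linear(8 + hidden, trend_dim)

    def forward(self, local_pts, trend_seq):
        # local_pts: (batch, 1, local_len); trend_seq: (batch, T, trend_dim)
        c = self.cnn(local_pts).flatten(1)            # local shape features
        _, (h, _) = self.lstm(trend_seq)              # global trend context
        return self.head(torch.cat([c, h[-1]], dim=1))
```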
Performance enhancing techniques for deep learning models in time series forecasting
TLDR
A fine-grained attention mechanism is presented that achieves much better performance on multi-step forecasting tasks, and an ensemble technique is proposed to further improve the performance of all the models.
Comparison of Predictive Models for Forecasting Time-series Data
TLDR
The Root Mean Square Error of each model's predictions is calculated for performance assessment, revealing how well these deep learning methods forecast time-series data.
Robust Online Time Series Prediction with Recurrent Neural Networks
TLDR
The local features of time series are explored to automatically weight the gradients of the loss on newly available observations using distributional properties of the data in real time, in order to forecast streaming time series in the presence of anomalies and change points.
Toward multi-label sentiment analysis: a transfer learning based approach
TLDR
This study proposes a transfer-learning-based approach that tackles the aforementioned shortcomings of existing ABSA methods, along with an advanced sentiment analysis method, Aspect Enhanced Sentiment Analysis (AESA), to classify text into sentiment classes with consideration of the entity aspects.
Distributed Time Series Analytics
TLDR
This thesis proposes the P2H framework, consisting of a parallelism-partitioning based data shuffling method and a hypercube-structure based computation pruning method, to enhance both communication and computation efficiency for mining correlations in the distributed context.
Characterising and Predicting Urban Mobility Dynamics by Mining Bike Sharing System Data
  • Ida Bagus Irawan Purnama, N. Bergmann, R. Jurdak, Kun Zhao
  • Computer Science
  • 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom)
  • 2015
TLDR
This study focuses on identifying highly predictable BSS users, revealing their mobility characteristics and predicting their next-place movements, and compares subscription-based and trip-number-based classification.

References

SHOWING 1-10 OF 41 REFERENCES
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
TLDR
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
Unsupervised feature learning for audio classification using convolutional deep belief networks
In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning …
Convolutional Neural Networks for Speech Recognition
TLDR
It is shown that further error rate reduction can be obtained by using convolutional neural networks (CNNs), and a limited-weight-sharing scheme is proposed that can better model speech features.
Extracting and composing robust features with denoising autoencoders
TLDR
This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
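A minimal sketch of the denoising criterion this reference describes: corrupt the input with masking noise, but score the reconstruction against the clean input. The architecture and hyperparameters here are illustrative assumptions, not those of the paper.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Toy denoising autoencoder: encode a corrupted view, decode,
    and compare against the uncorrupted input."""
    def __init__(self, dim=64, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x, corrupt_p=0.3):
        mask = (torch.rand_like(x) > corrupt_p).float()  # masking noise
        return self.dec(self.enc(x * mask))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(128, 64)                     # toy batch of clean inputs
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)  # target is the *clean* x
loss.backward()
opt.step()
```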
Flexible, High Performance Convolutional Neural Networks for Image Classification
We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a …
Greedy Layer-Wise Training of Deep Networks
TLDR
These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
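A compact sketch of the greedy layer-wise strategy, assuming plain autoencoder layers rather than the paper's RBM-style modules: each layer is trained to reconstruct the frozen output of the layers below, and the trained encoders are then stacked for supervised fine-tuning. The function name and sizes are hypothetical.

```python
import torch
import torch.nn as nn

def pretrain_stack(data, sizes, epochs=50, lr=1e-3):
    """Greedily pretrain one autoencoder layer at a time on the
    detached output of the layers below, then stack the encoders."""
    encoders, h = [], data
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
        dec = nn.Linear(d_out, d_in)
        opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(dec(enc(h)), h)
            loss.backward()
            opt.step()
        h = enc(h).detach()              # freeze: input to the next layer
        encoders.append(enc)
    return nn.Sequential(*encoders)      # ready for supervised fine-tuning
```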
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
TLDR
The convolutional deep belief network is presented, a hierarchical generative model which scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.
A unified architecture for natural language processing: deep neural networks with multitask learning
We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic …
Exploring Strategies for Training Deep Neural Networks
TLDR
These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy helps the optimization by initializing weights in a region near a good local minimum, but also implicitly acts as a sort of regularization that brings better generalization and encourages internal distributed representations that are high-level abstractions of the input.
Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty
TLDR
Experimental results demonstrate that the proposed SGD method can produce compact and accurate models much more quickly than a state-of-the-art quasi-Newton method for L1-regularized log-linear models.
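A sketch of the cumulative-penalty update this reference describes, as I understand it: after each plain SGD step, every weight is clipped toward zero by the total L1 penalty it could have received so far (u) minus the penalty it has already absorbed (q). `grad_fn` and all hyperparameters are placeholders.

```python
import numpy as np

def sgd_l1_cumulative(grad_fn, w, lam, lr, steps):
    """SGD with the cumulative L1 penalty: clip each weight at zero
    instead of letting the penalty flip its sign."""
    u = 0.0                          # total penalty available per weight
    q = np.zeros_like(w)             # signed penalty already applied
    for _ in range(steps):
        w -= lr * grad_fn(w)         # gradient step on the unregularized loss
        u += lr * lam
        z = w.copy()
        pos, neg = w > 0, w < 0
        w[pos] = np.maximum(0.0, w[pos] - (u + q[pos]))
        w[neg] = np.minimum(0.0, w[neg] + (u - q[neg]))
        q += w - z                   # record the penalty just applied
    return w
```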