
Visualizing Semantic Structures of Sequential Data by Learning Temporal Dependencies

@article{On2019VisualizingSS,
  title={Visualizing Semantic Structures of Sequential Data by Learning Temporal Dependencies},
  author={Kyoung-Woon On and Eun-Sol Kim and Yu-Jung Heo and Byoung-Tak Zhang},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.09066}
}
While conventional methods for sequential learning focus on interactions between consecutive inputs, we propose a new method that captures composite semantic flows with variable-length dependencies. In addition, the semantic structures within given sequential data can be interpreted by visualizing the temporal dependencies learned by the method. The proposed method, called Temporal Dependency Network (TDN), represents a video as a temporal graph whose nodes represent frames of the video and… 
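The truncated abstract suggests the core object is a learned graph over frames. A minimal sketch of that idea follows; it is not the authors' implementation, and the attention-style scoring function, projection sizes, and names (`temporal_dependency_graph`, `w_q`, `w_k`) are illustrative assumptions.

```python
# Hypothetical sketch: treat video frames as graph nodes and learn a soft
# adjacency matrix over them; the matrix can be visualized as a heat map
# to inspect temporal structure. Not the authors' code.
import torch
import torch.nn.functional as F

def temporal_dependency_graph(frames, w_q, w_k):
    """frames: (T, d) per-frame features; w_q, w_k: (d, d_k) learned projections."""
    q = frames @ w_q                            # (T, d_k) query per frame
    k = frames @ w_k                            # (T, d_k) key per frame
    scores = q @ k.t() / k.shape[-1] ** 0.5     # (T, T) pairwise dependency scores
    return F.softmax(scores, dim=-1)            # each row is a distribution over frames

T, d, d_k = 120, 1024, 64
A = temporal_dependency_graph(torch.randn(T, d), torch.randn(d, d_k), torch.randn(d, d_k))
print(A.shape)  # torch.Size([120, 120])
```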


References


YouTube-8M: A Large-Scale Video Classification Benchmark

YouTube-8M, the largest multi-label video classification dataset, is introduced; it comprises ~8 million videos (500K hours of video) annotated with a vocabulary of 4800 visual entities, and various modest classification models are trained on it as baselines.
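In the spirit of those modest baselines, a multi-label classifier over pre-extracted video-level features can be as simple as a single logistic layer. The feature size and batch shapes below are assumptions, not the benchmark's actual configuration.

```python
# Minimal multi-label video classifier: one linear layer with an
# independent sigmoid per label, trained with binary cross-entropy.
import torch
import torch.nn as nn

NUM_CLASSES = 4800                       # vocabulary of visual entities
model = nn.Linear(1024, NUM_CLASSES)     # assumed 1024-d video-level features
criterion = nn.BCEWithLogitsLoss()       # one sigmoid per label

features = torch.randn(32, 1024)                              # batch of videos
labels = torch.randint(0, 2, (32, NUM_CLASSES)).float()       # multi-hot targets
loss = criterion(model(features), labels)
loss.backward()
```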

Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

Recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the recently proposed gated recurrent unit (GRU), are evaluated on sequence modeling tasks; the GRU is found to be comparable to the LSTM.
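To make the gating mechanism concrete, here is a GRU cell written out explicitly; the weight packing and names are illustrative, and in practice `torch.nn.GRU` implements the same update.

```python
# GRU cell: an update gate z interpolates between the old hidden state
# and a candidate state computed with a reset gate r.
import torch

def gru_cell(x, h, W, U, b):
    """x: (d_in,), h: (d_h,); W: (3, d_h, d_in), U: (3, d_h, d_h), b: (3, d_h)."""
    z = torch.sigmoid(W[0] @ x + U[0] @ h + b[0])            # update gate
    r = torch.sigmoid(W[1] @ x + U[1] @ h + b[1])            # reset gate
    h_tilde = torch.tanh(W[2] @ x + U[2] @ (r * h) + b[2])   # candidate state
    return (1 - z) * h + z * h_tilde                         # interpolate old and new
```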

Long Short-Term Memory

A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
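The "constant error carousel" is the additive update of the cell state, which lets gradients flow across many time steps. A sketch with illustrative weight packing follows; `torch.nn.LSTM` is the optimized equivalent.

```python
# LSTM cell spelled out to highlight the additive cell-state update.
import torch

def lstm_cell(x, h, c, W, U, b):
    """x: (d_in,), h, c: (d_h,); W: (4, d_h, d_in), U: (4, d_h, d_h), b: (4, d_h)."""
    i = torch.sigmoid(W[0] @ x + U[0] @ h + b[0])   # input gate
    f = torch.sigmoid(W[1] @ x + U[1] @ h + b[1])   # forget gate
    o = torch.sigmoid(W[2] @ x + U[2] @ h + b[2])   # output gate
    g = torch.tanh(W[3] @ x + U[3] @ h + b[3])      # candidate cell input
    c_new = f * c + i * g             # additive update: the error carousel
    h_new = o * torch.tanh(c_new)
    return h_new, c_new
```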

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
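The framework's basic unit is a block whose stacked layers learn a residual F(x), with the input added back through a shortcut, i.e. y = F(x) + x. A minimal sketch, with illustrative channel counts:

```python
# Basic residual block: two 3x3 convolutions plus an identity shortcut.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)   # identity shortcut: y = F(x) + x
```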

Rethinking the Inception Architecture for Computer Vision

This work explores ways to scale up networks so that the added computation is used as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
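Two of the factorizations described in the paper are sketched below: replacing a 5x5 convolution with two stacked 3x3 convolutions, and an n x n convolution with a 1 x n followed by an n x 1. Channel arguments and helper names are assumptions.

```python
# Factorized convolutions in the Inception-v3 style.
import torch.nn as nn

def factorized_5x5(c_in, c_out):
    # two 3x3 convolutions cover a 5x5 receptive field with fewer operations
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

def factorized_7x7(c_in, c_out):
    # asymmetric factorization: 1x7 followed by 7x1
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, (1, 7), padding=(0, 3)), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, (7, 1), padding=(3, 0)), nn.ReLU(inplace=True),
    )
```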

Layer Normalization

Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization normalizes each neuron's summed input using statistics computed over a mini-batch; layer normalization instead computes the mean and variance over all summed inputs to the neurons in a single layer on a single training case, making it independent of batch size and applicable to recurrent networks.
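A from-scratch sketch of that normalization, with statistics taken over the feature dimension of each example; `torch.nn.LayerNorm` is the library equivalent.

```python
# Layer normalization: per-example statistics over the feature dimension,
# followed by learned per-feature gain and bias.
import torch

def layer_norm(x, gain, bias, eps=1e-5):
    """x: (..., d); gain, bias: (d,) learned parameters."""
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, unbiased=False, keepdim=True)
    return gain * (x - mean) / torch.sqrt(var + eps) + bias
```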

Semi-Supervised Classification with Graph Convolutional Networks

A scalable approach to semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs; it outperforms related methods by a significant margin.
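One layer of that graph convolution computes H' = act(D^-1/2 (A + I) D^-1/2 H W). A dense-matrix sketch for clarity (the paper's implementation uses sparse operations):

```python
# One GCN layer per Kipf & Welling: add self-loops, symmetrically
# normalize the adjacency, then propagate and transform node features.
import torch

def gcn_layer(A, H, W):
    """A: (N, N) adjacency, H: (N, d_in) node features, W: (d_in, d_out)."""
    A_hat = A + torch.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)      # degree^(-1/2)
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return torch.relu(A_norm @ H @ W)            # propagate and transform
```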

CNN Architectures for Large-Scale Audio Classification

In Acoustics, Speech and Signal Processing (ICASSP), 2017