TSViz: Demystification of Deep Learning Models for Time-Series Analysis

@article{Siddiqui2019TSVizDO,
  title={TSViz: Demystification of Deep Learning Models for Time-Series Analysis},
  author={Shoaib Ahmed Siddiqui and Dominik Mercier and Mohsin Munir and Andreas R. Dengel and Sheraz Ahmed},
  journal={IEEE Access},
  year={2019},
  volume={7},
  pages={67027--67040}
}
This paper presents a novel framework for the demystification of convolutional deep learning models for time-series analysis. [...] As a sanity check for the computed influence values, we demonstrate our results on pruning of neural networks based on the computed influence information. These representations allow the user to better understand the network so that the acceptability of these deep models for time-series analysis can be enhanced. This is extremely important in domains such as finance…
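The influence-guided pruning mentioned in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: a toy one-layer 1-D convolutional "network" with random weights stands in for a trained model, and a filter's influence is approximated by the change in output when that filter is zeroed out, which is only a crude proxy for the paper's computed influence values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D conv layer: 4 filters of width 3, followed by ReLU, global
# average pooling, and a linear readout. All weights are random
# stand-ins for a trained network (for illustration only).
filters = rng.normal(size=(4, 3))
readout = rng.normal(size=4)
series = rng.normal(size=64)  # one univariate time series

def forward(x, active):
    """Predict using only the filters whose mask entry is True."""
    feats = []
    for k, w in enumerate(filters):
        if active[k]:
            conv = np.convolve(x, w[::-1], mode="valid")
            feats.append(np.maximum(conv, 0).mean())  # ReLU + GAP
        else:
            feats.append(0.0)
    return float(np.dot(readout, feats))

full = forward(series, active=[True] * 4)

# Influence of filter k ~ |change in output when filter k is removed|.
influence = []
for k in range(4):
    mask = [i != k for i in range(4)]
    influence.append(abs(full - forward(series, mask)))

# Sanity check in the spirit of the paper: prune the least influential
# filter first, since removing it perturbs the output the least.
prune = int(np.argmin(influence))
print("influence:", np.round(influence, 4), "-> pruning filter", prune)
```

Ranking filters by such influence scores and removing the lowest-ranked ones is the kind of sanity check the abstract describes.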
TSXplain: Demystification of DNN Decisions for Time-Series using Natural Language and Statistical Features
A novel framework, the Time-Series eXplanation (TSXplain) system, which produces a natural-language explanation of the decision taken by a NN, merging the deep learning world with that of statistics.
PatchX: Explaining Deep Models by Intelligible Pattern Patches for Time-series Classification
This work proposes a novel hybrid approach that utilizes deep neural networks and traditional machine learning algorithms to introduce an interpretable and scalable time-series classification approach.
TSInsight: A local-global attribution framework for interpretability in time-series data
Evaluation results show that TSInsight naturally achieves output space contraction and is therefore an effective tool for the interpretability of deep time-series models.
Conceptual Explanations of Neural Network Prediction for Time Series
This paper presents a novel framework, i.e. Conceptual Explanation, designed to evaluate the effect of abstract (local or global) input features on the model behavior, which is model-agnostic and allows utilizing expert knowledge.
TS-MULE: Local Interpretable Model-Agnostic Explanations for Time Series Forecast Models
This work proposes TS-MULE, a local surrogate model explanation method specialized for time series that extends the LIME approach and works with various ways to segment and perturb the time-series data.
Explaining Time Series Predictions with Dynamic Masks
With synthetic and real-world data, it is demonstrated that the dynamic underpinning of Dynamask, together with its parsimony, offers a neat improvement in the identification of feature importance over time.
Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
Current techniques for interpreting deep learning models involving sequential data are reviewed, similarities to non-sequential methods are identified, and current limitations and future avenues of sequential interpretability research are discussed.
Deep learning for time series classification in ecology
The high potential and wide applicability of deep learning for time series classification in ecology are demonstrated, and this approach should be considered as an alternative to commonly used techniques requiring the transformation of time-series data.
Deep learning for supervised classification of temporal data in ecology
This work describes some of the deep learning architectures relevant for time series classification and shows how these architectures and their hyper-parameters can be tested and used for the classification problems at hand.

References

Showing 1–10 of 40 references
Understanding Neural Networks Through Deep Visualization
This work introduces several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations of convolutional neural networks.
Visualizing and Understanding Convolutional Networks
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it finds model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Learning Important Features Through Propagating Activation Differences
DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input, is presented.
Methods for interpreting and understanding deep neural networks
The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which theory, recommendations, and tricks are provided to make the most efficient use of it on real data.
Understanding deep learning requires rethinking generalization
These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth-two neural networks already have perfect finite-sample expressivity.
Visualizing Deep Neural Network Decisions: Prediction Difference Analysis
The prediction difference analysis method for visualizing the response of a deep neural network to a specific input highlights areas in a given input image that provide evidence for or against a certain class.
ImageNet classification with deep convolutional neural networks
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.
Understanding Neural Networks through Representation Erasure
This paper proposes a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words.
Understanding Black-box Predictions via Influence Functions
This paper uses influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.