Modeling the Intensity Function of Point Process Via Recurrent Neural Networks

Shuai Xiao, Junchi Yan, Xiaokang Yang, Hongyuan Zha, Stephen M. Chu
Event sequences, asynchronously generated with random timestamps, are ubiquitous across applications. […] For utility, our method allows a black-box treatment of the intensity function, which in point processes is often restricted to a pre-defined parametric form. Meanwhile, end-to-end training opens the venue for reusing the rich existing techniques of deep networks for point process modeling. We apply our model to the predictive maintenance problem using a log dataset from more than 1,000 ATMs operated by a global bank…
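As a rough illustration of this black-box treatment, the sketch below runs a small recurrent update over inter-event gaps and reads the intensity off the hidden state through an exponential, so no parametric intensity form needs to be assumed. All weights are untrained random placeholders and every name is hypothetical, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical minimal sketch: an RNN consumes the event history and a
# learned readout gives the conditional intensity, replacing a
# hand-specified parametric form. Weights are random placeholders.
rng = np.random.default_rng(0)
H = 8                                      # hidden size (arbitrary)
W_h = rng.normal(scale=0.1, size=(H, H))   # recurrent weights
W_x = rng.normal(scale=0.1, size=(H, 1))   # input weights (inter-event gap)
w_out = rng.normal(scale=0.1, size=H)      # intensity readout

def intensity_after(timestamps):
    """Run the RNN over inter-event gaps; return lambda just after the last event."""
    h = np.zeros(H)
    prev = 0.0
    for t in timestamps:
        gap = np.array([t - prev])
        h = np.tanh(W_h @ h + W_x @ gap)   # hidden state summarizes history
        prev = t
    return float(np.exp(w_out @ h))        # exp keeps the intensity positive

lam = intensity_after([0.5, 1.2, 1.3, 2.0])
print(lam > 0)  # intensity is positive by construction
```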


Joint Modeling of Event Sequence and Time Series with Attentional Twin Recurrent Neural Networks

This paper utilizes the rich framework of (temporal) point processes to model event data, updating the intensity function in a timely fashion via synergic twin Recurrent Neural Networks (RNNs), and demonstrates the superiority of the model on synthetic data and three real-world benchmark datasets.

Learning Time Series Associated Event Sequences With Recurrent Point Process Networks

Focusing on challenging tasks such as temporal event prediction and underlying relational network mining, this work proposes a model which instantiates temporal point process models with temporal recurrent neural networks (RNNs) and demonstrates the superiority of the model on both synthetic and real-world data.

Extensive Deep Temporal Point Process

Discrete graph structure learning in the framework of variational inference is employed to reveal latent structures of the Granger causality graph, and further experiments show that the proposed framework with the learned latent graph can both capture the relations and achieve improved fitting and prediction performance.

An Empirical Study: Extensive Deep Temporal Point Process

A temporal point process, a stochastic process on the continuous domain of time, is commonly used to model asynchronous event sequences featuring occurrence timestamps. Because the strong…

Variational Neural Temporal Point Process

A variational neural temporal point process (VNTPP) is proposed that outperforms other deep neural network based models and statistical processes on synthetic and real-world datasets and can generalize the representations of various event types.

Attentive Neural Point Processes for Event Forecasting

Gu • Computer Science • 2021
This paper proposes ANPP, an Attentive Neural Point Processes framework that leverages the time-aware self-attention mechanism to explicitly model the influence between every pair of historical events, resulting in more accurate predictions of events and better interpretation ability.

Modeling Continuous Time Sequences with Intermittent Observations using Marked Temporal Point Processes

A novel unsupervised model and inference method for learning MTPP in presence of event sequences with missing events that outperforms the state-of-the-art MTPP frameworks for event prediction, missing data imputation, and provides stable optimization.

Modeling and Applications for Temporal Point Processes

This tutorial will start with an elementary introduction to the TPP model, including the basic concepts of the model and the simulation method for event sequences, and then cover recent progress on the modeling and learning of TPPs, including neural-network-based TPP models, generative adversarial networks for TPPs, and deep reinforcement learning of TPPs.

INITIATOR: Noise-contrastive Estimation for Marked Temporal Point Process

This work proposes INITIATOR, a novel training framework based on noise-contrastive estimation, to resolve the problem resulting from the intractable likelihood function, and shows a strong connection between the proposed INITIATOR and the exact MLE.

Fully Neural Network based Model for General Temporal Point Processes

This work proposes a novel RNN based model in which the time course of the intensity function is represented in a general manner and achieves competitive or superior performances compared to the previous state-of-the-art methods for both synthetic and real datasets.

Recurrent Marked Temporal Point Processes: Embedding Event History to Vector

The Recurrent Marked Temporal Point Process is proposed to simultaneously model the event timings and the markers, and uses a recurrent neural network to automatically learn a representation of influences from the event history, and an efficient stochastic gradient algorithm is developed for learning the model parameters.
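The exponential conditional intensity used in RMTPP-style models can be sketched as follows; `h_j`, `v`, `w`, and `b` below are illustrative placeholders standing in for the learned history embedding and readout parameters, not trained values:

```python
import numpy as np

# Hypothetical sketch of an RMTPP-style intensity: after the j-th event the
# history is compressed into a hidden vector h_j, and the conditional
# intensity takes the form lambda*(t) = exp(v . h_j + w * (t - t_j) + b).
rng = np.random.default_rng(1)
h_j = rng.normal(size=4)       # RNN embedding of the event history
v = rng.normal(size=4)         # readout weights
w, b = 0.5, -1.0               # scalar time-decay weight and bias
t_j = 2.0                      # timestamp of the last observed event

def rmtpp_intensity(t):
    return float(np.exp(v @ h_j + w * (t - t_j) + b))

# With w > 0 the intensity grows as time since the last event increases.
print(rmtpp_intensity(2.5) < rmtpp_intensity(3.0))  # True
```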

Recurrent Neural Networks for Multivariate Time Series with Missing Values

Novel deep learning models are developed based on Gated Recurrent Unit, a state-of-the-art recurrent neural network that takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results.
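The two missing-pattern representations mentioned here, a binary observation mask and the time interval since each variable was last observed, can be computed as sketched below; the toy matrix and timestamps are invented for illustration:

```python
import numpy as np

# Hypothetical sketch of the two missing-value representations fed into a
# GRU-based model: a binary mask m_t (1 = observed) and the time interval
# delta_t since each variable was last observed.
X = np.array([[1.0, np.nan, 3.0],
              [np.nan, np.nan, 4.0],
              [2.0, 5.0, np.nan]])       # rows: timesteps, cols: variables
times = np.array([0.0, 1.0, 2.5])        # observation timestamps

mask = (~np.isnan(X)).astype(float)
delta = np.zeros_like(X)
for t in range(1, X.shape[0]):
    gap = times[t] - times[t - 1]
    # if the variable was observed at t-1 the interval resets; else it accumulates
    delta[t] = gap + (1 - mask[t - 1]) * delta[t - 1]

print(mask[1])   # [0. 0. 1.]
print(delta[2])  # [2.5 2.5 1.5]
```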

Long Short-Term Memory

A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
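A minimal numpy rendition of a single LSTM step, sketched to show the additive cell-state update behind the "constant error carousel"; the weights are random placeholders rather than trained parameters, and peephole connections and biases are omitted for brevity:

```python
import numpy as np

# Minimal single LSTM cell: the cell state c is updated additively through
# gates, which is what lets gradients flow across long time lags.
rng = np.random.default_rng(2)
H, D = 3, 2                                # hidden size, input size
Wf, Wi, Wo, Wc = (rng.normal(scale=0.1, size=(H, H + D)) for _ in range(4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = np.concatenate([h, x])
    f = sigmoid(Wf @ z)                    # forget gate
    i = sigmoid(Wi @ z)                    # input gate
    o = sigmoid(Wo @ z)                    # output gate
    c = f * c + i * np.tanh(Wc @ z)        # additive cell-state update
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(H), np.zeros(H)
for x in np.ones((5, D)):                  # run five timesteps
    h, c = lstm_step(x, h, c)
print(h.shape)  # (3,)
```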

Learning Social Infectivity in Sparse Low-rank Networks Using Multi-dimensional Hawkes Processes

This paper proposes a convex optimization approach to discover the hidden network of social influence by modeling the recurrent events at different individuals as multidimensional Hawkes processes, emphasizing the mutual-excitation nature of the dynamics of event occurrence.
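A minimal sketch of a multi-dimensional Hawkes intensity with an exponential kernel, the kind of mutually-exciting model fit here; the base rates, infectivity matrix, and event list below are invented for illustration:

```python
import numpy as np

# Hypothetical multi-dimensional Hawkes intensity with exponential kernel:
# lambda_i(t) = mu_i + sum over past events (t_j, u_j) of
#               A[i, u_j] * beta * exp(-beta * (t - t_j)).
# The infectivity matrix A is what such methods recover, here under
# sparse low-rank constraints.
mu = np.array([0.1, 0.2])            # base rates for two individuals
A = np.array([[0.3, 0.0],
              [0.5, 0.2]])           # A[i, j]: influence of j's events on i
beta = 1.0                           # kernel decay rate
events = [(0.5, 0), (1.0, 1)]        # (timestamp, individual) pairs

def hawkes_intensity(t, i):
    lam = mu[i]
    for t_j, u_j in events:
        if t_j < t:                  # only past events excite the process
            lam += A[i, u_j] * beta * np.exp(-beta * (t - t_j))
    return float(lam)

print(hawkes_intensity(1.5, 1) > mu[1])  # past events raise the intensity
```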

Recurrent neural networks and robust time series prediction

A robust learning algorithm is proposed and applied to recurrent neural networks realizing NARMA(p,q) models, which show advantages over feedforward neural networks for time series with a moving-average component and are shown to give better predictions than networks trained on unfiltered time series.

Recurrent Neural Networks for driver activity anticipation via sensory-fusion architecture

A sensory-fusion architecture which jointly learns to anticipate and fuse information from multiple sensory streams and shows significant improvement over the state-of-the-art in maneuver anticipation by increasing the precision and recall.

Learning Triggering Kernels for Multi-dimensional Hawkes Processes

This paper focuses on the nonparametric learning of the triggering kernels, and proposes an algorithm MMEL that combines the idea of decoupling the parameters through constructing a tight upper-bound of the objective function and application of Euler-Lagrange equations for optimization in infinite dimensional functional space.

Patient Flow Prediction via Discriminative Learning of Mutually-Correcting Processes

A novel framework is developed for modeling patient flow through various care units (CUs) and jointly predicting patients' destination-CU transitions and duration in days, together with a novel discriminative learning algorithm aimed at improving the prediction of transition events in the case of sparse data.

Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks

This work proposes a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead.
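The strategy can be sketched as a per-step coin flip between the ground-truth previous token and the model's own output, with the probability annealed over training; `model_predict` below is a hypothetical stand-in for a real decoder step:

```python
import random

# Hypothetical sketch of scheduled sampling: at each decoding step, with
# probability eps feed the ground-truth previous token, otherwise feed the
# model's own prediction. During training eps is annealed from 1 toward 0.
def scheduled_inputs(truth, model_predict, eps, rng):
    prev = "<s>"
    fed = []
    for gold in truth:
        fed.append(prev)
        # coin flip decides which token conditions the next step
        prev = gold if rng.random() < eps else model_predict(prev)
    return fed

rng = random.Random(0)
truth = ["a", "b", "c"]
inputs_guided = scheduled_inputs(truth, lambda p: p + "*", eps=1.0, rng=rng)
print(inputs_guided)  # ['<s>', 'a', 'b'] — fully teacher-forced when eps = 1
```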

Predicting Clinical Events by Combining Static and Dynamic Information Using Recurrent Neural Networks

This work uses a database collected at the Charité Hospital in Berlin containing complete information on patients who underwent a kidney transplantation, and develops an RNN-based approach, specifically designed for the clinical domain, that combines static and dynamic information in order to predict future events.