• Corpus ID: 245650405

Transformer Embeddings of Irregularly Spaced Events and Their Participants

@article{Yang2021TransformerEO,
  title={Transformer Embeddings of Irregularly Spaced Events and Their Participants},
  author={Chenghao Yang and Hongyuan Mei and Jason Eisner},
  journal={ArXiv},
  year={2021},
  volume={abs/2201.00044}
}
The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events. To handle complex domains with many event types, Mei et al. further consider a setting in which each event in the sequence updates a deductive database of facts (via domain-specific pattern-matching rules); future events are then conditioned on the database contents. They show how to convert such a symbolic system into a neuro-symbolic continuous-time generative model, in… 
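
As a rough illustration of this modeling style (not the paper's architecture), the sketch below conditions per-type event intensities on a state vector that drifts between events and is updated by each event; all names, dimensions, and update rules here are illustrative assumptions.

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))         # keeps intensities positive

rng = np.random.default_rng(0)
K, D = 3, 8                            # number of event types, state dimension
W_out = rng.normal(size=(K, D))        # reads per-type intensities off the state
W_upd = rng.normal(size=(K, D))        # additive state update for each event type
decay = 0.5                            # the state relaxes toward zero between events

state = np.zeros(D)
t_prev = 0.0
for t, k in [(0.7, 0), (1.9, 2), (2.4, 1)]:          # (time, type) history
    state = state * np.exp(-decay * (t - t_prev))    # drift since the last event
    lam = softplus(W_out @ state)                    # intensities just before t
    print(f"t={t:.1f}  lambda={np.round(lam, 3)}")
    state = state + W_upd[k]                         # an event of type k updates the state
    t_prev = t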

Citations

HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences

Experiments on multiple real-world datasets demonstrate that the proposed HYPRO model can significantly outperform previous models at making long-horizon predictions of future events.

Meta Temporal Point Processes

This work proposes to train TPPs in a meta learning framework, where each sequence is treated as a different task, via a novel framing of TPPs as neural processes (NPs).

Efficacy of novel attention-based gated recurrent units transformer for depression detection using electroencephalogram signals

An attention-based gated recurrent units transformer (AttGRUT) time-series model is proposed to efficiently identify EEG perturbations in depressive patients; it outperformed two baseline and two hybrid time-series models.

Transformer for Predictive and Prescriptive Process Monitoring in IT Service Management (Extended Abstract)

  • Marco Hennig
  • Computer Science
    ICPM Doctoral Consortium / Demo
  • 2022
A project to develop a pipeline supporting novel IT service management approaches using state-of-the-art predictive and prescriptive process monitoring based on transformer neural networks is outlined.

Exploring Generative Neural Temporal Point Process

This is the first work to adapt generative models in a complete unified framework and study their effectiveness in the context of TPPs; it also revises the attentive models that summarize influence from historical events, adding an adaptive reweighting term that accounts for event-type relations and time intervals.

Transformers in Time Series: A Survey

This paper systematically reviews Transformer schemes for time series modeling, highlighting their strengths as well as their limitations, and proposes a new taxonomy that summarizes existing time series Transformers from two perspectives.

References

SHOWING 1-10 OF 50 REFERENCES

Transformer Hawkes Process

A Transformer Hawkes Process (THP) model is proposed, which leverages the self-attention mechanism to capture long-term dependencies while remaining computationally efficient.
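
A minimal sketch of the idea, under my own assumptions (single attention head, sinusoidal encoding of absolute timestamps, softplus intensities; the actual THP differs in its details):

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def time_encoding(times, d):
    # sinusoidal encoding applied to continuous timestamps
    freqs = 1.0 / (10000 ** (2 * np.arange(d // 2) / d))
    ang = np.outer(times, freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

rng = np.random.default_rng(0)
K, d = 4, 16                                    # event types, model dimension
times = np.array([0.3, 1.1, 2.5, 2.9])
types = np.array([1, 0, 3, 2])
X = rng.normal(size=(K, d))[types] + time_encoding(times, d)   # event representations

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, Km, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ Km.T / np.sqrt(d)
causal = np.triu(np.ones((len(times),) * 2, dtype=bool), k=1)
scores[causal] = -np.inf                        # each event attends only to itself and the past
A = np.exp(scores - scores.max(axis=-1, keepdims=True))
A /= A.sum(axis=-1, keepdims=True)
H = A @ V                                       # history embedding after each event
lam = softplus(H @ rng.normal(size=(d, K)))     # per-type intensities
print(np.round(lam, 3))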

Automatic differentiation in PyTorch

The automatic differentiation module of PyTorch, a library designed to enable rapid research on machine learning models, is described; it focuses on differentiation of purely imperative programs, with an emphasis on extensibility and low overhead.

Self-Attentive Hawkes Process

SAHP employs self-attention to summarise the influence of historical events and compute the probability of the next event; it is more interpretable than RNN-based counterparts because the learnt attention weights reveal the contribution of one event type to the occurrence of another.

CAUSE: Learning Granger Causality from Event Sequences using Attribution Methods

Across multiple datasets with diverse event interdependencies, CAUSE is demonstrated to achieve superior performance at correctly inferring inter-type Granger causality compared with a range of state-of-the-art methods.

Multivariate Hawkes processes

This thesis addresses theoretical and practical questions arising in connection with multivariate, marked, linear Hawkes processes, including the calculation of moment measures; and the existence and uniqueness of stationary solutions.
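
For reference, a multivariate Hawkes process is conventionally defined by conditional intensities of the form below, often with exponential excitation kernels (the thesis's own parameterization may differ):

\lambda_k(t) = \mu_k + \sum_{j\,:\,t_j < t} \phi_{k, k_j}(t - t_j), \qquad \phi_{k,i}(s) = \alpha_{k,i}\, e^{-\beta_{k,i} s},

where \mu_k \ge 0 is the base rate of type k and \phi_{k,i} measures how strongly a past event of type i excites future events of type k.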

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

BERT, a new language representation model, is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; it can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.

Attention is All you Need

A new, simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it also generalizes well to other tasks, as demonstrated by its successful application to English constituency parsing with both large and limited training data.
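
The core operation of the Transformer is scaled dot-product attention, which in the paper's notation is:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V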

Recurrent Marked Temporal Point Processes: Embedding Event History to Vector

The Recurrent Marked Temporal Point Process is proposed to jointly model event timings and markers; it uses a recurrent neural network to automatically learn a representation of influences from the event history, and an efficient stochastic gradient algorithm is developed for learning the model parameters.
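
A minimal sketch of the RMTPP idea (not the paper's exact parameterization; dimensions, initialization, and the toy history are my assumptions): an RNN consumes (inter-event time, marker) pairs, and its hidden state both scores the next marker and modulates the intensity of the next event time.

import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 8                                     # marker vocabulary size, hidden size
Wh = 0.1 * rng.normal(size=(D, D))
Wx = 0.1 * rng.normal(size=(D, K + 1))          # input = [inter-event time, one-hot marker]
Wy = 0.1 * rng.normal(size=(K, D))              # hidden state -> next-marker scores
v, w, b = rng.normal(size=D), 0.5, 0.1          # hidden state -> time-intensity parameters

h = np.zeros(D)
t_prev = 0.0
for t, k in [(0.4, 1), (1.3, 0), (2.0, 2)]:     # (time, marker) history
    x = np.zeros(K + 1)
    x[0], x[1 + k] = t - t_prev, 1.0
    h = np.tanh(Wh @ h + Wx @ x)                # update the history embedding
    p_next = np.exp(Wy @ h)
    p_next /= p_next.sum()                      # distribution over the next marker
    lam = np.exp(v @ h + w * 0.5 + b)           # intensity 0.5 time units after this event
    print(f"after t={t:.1f}: P(next marker)={np.round(p_next, 3)}, lambda(+0.5)={lam:.3f}")
    t_prev = t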

What you Always Wanted to Know About Datalog (And Never Dared to Ask)

The syntax and semantics of Datalog and its use for querying a relational database are presented, along with the most relevant methods for efficiently evaluating Datalog queries.
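
As a toy illustration of Datalog-style bottom-up evaluation (the rules and data are mine, and real engines use semi-naive evaluation and indexing), the snippet below computes a transitive closure over a small edge relation:

# Rules:  path(X,Y) :- edge(X,Y).   path(X,Z) :- edge(X,Y), path(Y,Z).
edge = {("a", "b"), ("b", "c"), ("c", "d")}

path = set(edge)                                # path(X,Y) :- edge(X,Y).
changed = True
while changed:                                  # naive iteration to a fixed point
    changed = False
    for (x, y) in edge:
        for (y2, z) in list(path):
            if y == y2 and (x, z) not in path:
                path.add((x, z))                # path(X,Z) :- edge(X,Y), path(Y,Z).
                changed = True

print(sorted(path))   # the query path("a", W) would list every node reachable from "a"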

The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process

This generative model allows past events to influence the future in complex and realistic ways, by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events.
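
Concretely (following common presentations of the model; the notation may differ slightly from the paper), the intensity of event type k at time t is a scaled softplus of a linear readout of the continuous-time LSTM state h(t):

\lambda_k(t) = f_k\!\big(\mathbf{w}_k^\top \mathbf{h}(t)\big), \qquad f_k(x) = s_k \log\!\big(1 + e^{x / s_k}\big),

which keeps intensities positive while the hidden state evolves between events as the LSTM's memory cells decay toward learned target values.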