A Deep Learning Approach to Analyzing Continuous-Time Systems

@article{Shain2022ADL,
  title={A Deep Learning Approach to Analyzing Continuous-Time Systems},
  author={Cory Shain and William Schuler},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.12128}
}
Scientists often use observational time series data to study complex natural processes, from climate change to civil conflict to brain activity. But regression analyses of these data often assume simplistic dynamics. Recent advances in deep learning have yielded startling improvements to the performance of models of complex processes, from speech comprehension to nuclear physics to competitive gaming. But deep learning is generally not used for scientific analysis. Here, we bridge this gap by… 
1 Citation

On the Effect of Anticipation on Reading Times

Over the past two decades, numerous studies have demonstrated how less predictable (i.e., higher-surprisal) words take more time to read. In general, these previous studies implicitly assumed the…

References

Showing 1–10 of 79 references.

CDRNN: Discovering Complex Dynamics in Human Language Processing

Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing.

Continuous-time deconvolutional regression for psycholinguistic modeling

This article motivates the use of CDR for many experimental settings, exposits some of its mathematical properties, and empirically evaluates the influence of various experimental confounds, hyperparameter settings, and response types.
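
To make the modeling idea concrete, here is a minimal Python sketch (not the authors' implementation) of the core CDR computation: each response is modeled as a sum of preceding predictor values, weighted by a parametric impulse response function evaluated at the continuous time lag between event and response. The exponential kernel, toy data, and function names below are illustrative assumptions; CDR itself typically uses richer kernels (e.g., shifted gamma) and estimates their parameters from data.

```python
# A minimal sketch (assumed exponential kernel) of the idea behind
# continuous-time deconvolutional regression (CDR): the expected response at
# time t sums predictor values at preceding events, each weighted by an
# impulse response function of the elapsed time.
import numpy as np

def exponential_irf(delay, rate):
    """Impulse response weight as a function of the delay since an event."""
    return np.where(delay >= 0, rate * np.exp(-rate * delay), 0.0)

def cdr_predict(event_times, predictor, response_times, beta, rate, intercept=0.0):
    """Convolve a predictor over continuous time and map it to responses."""
    # delays[i, j] = time elapsed between event j and response i
    delays = response_times[:, None] - event_times[None, :]
    weights = exponential_irf(delays, rate)      # IRF evaluated at each delay
    convolved = weights @ predictor              # continuous-time convolution
    return intercept + beta * convolved

# Toy data: irregularly spaced word onsets with a surprisal-like predictor,
# and responses (e.g., fMRI samples) measured on their own clock.
event_times = np.array([0.0, 0.3, 0.9, 1.4, 2.2])
predictor = np.array([1.2, 0.4, 2.0, 0.7, 1.5])
response_times = np.arange(0.0, 4.0, 0.5)
print(cdr_predict(event_times, predictor, response_times, beta=0.8, rate=2.0))
```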

fMRI reveals language-specific predictive coding during naturalistic sentence comprehension

Findings indicate that human sentence processing mechanisms generate predictions about upcoming words using cognitive processes that are sensitive to hierarchical structure and specialized for language processing, rather than via feedback from high-level executive control mechanisms.

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
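
A rough sense of how this is used in practice: keep dropout active at prediction time and read uncertainty off the spread of repeated stochastic forward passes. The toy two-layer network below (its weights, sizes, and dropout rate are assumptions) is a minimal numpy sketch of that idea, not the paper's derivation.

```python
# Monte Carlo dropout sketch: dropout stays on at test time, and the spread of
# sampled outputs serves as an approximate measure of model uncertainty.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)   # toy 3 -> 16 -> 1 regressor
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x, p_drop=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop           # dropout kept ON at test time
    h = h * mask / (1.0 - p_drop)                 # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.2, -1.0, 0.5]])
samples = np.stack([forward(x) for _ in range(100)])   # 100 stochastic passes
print("predictive mean:", samples.mean(), "predictive std:", samples.std())
```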

Long Short-Term Memory

A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
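
For reference, a single step of the standard gated LSTM formulation looks roughly as follows; the dimensions, initialization, and the packing of the four gates into one matrix are illustrative choices, not details taken from the original paper.

```python
# One step of a standard LSTM cell: input, forget, and output gates plus a
# cell state that carries error over long time lags.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """W, U, b pack the four gates along the last axis."""
    z = x @ W + h_prev @ U + b
    i, f, o, g = np.split(z, 4, axis=-1)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)    # gates
    g = np.tanh(g)                                  # candidate cell update
    c = f * c_prev + i * g                          # cell state ("error carousel")
    h = o * np.tanh(c)                              # hidden state / output
    return h, c

d_in, d_hid = 4, 8
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(d_in, 4 * d_hid))
U = rng.normal(scale=0.1, size=(d_hid, 4 * d_hid))
b = np.zeros(4 * d_hid)
h = c = np.zeros(d_hid)
for t in range(5):                                  # run a short toy sequence
    h, c = lstm_step(rng.normal(size=d_in), h, c, W, U, b)
print(h)
```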

Linear Systems Analysis of Functional Magnetic Resonance Imaging in Human V1

Results from three empirical tests support the hypothesis that fMRI responses in human primary visual cortex (V1) depend separably on stimulus timing and stimulus contrast, and the noise in the fMRI data is independent of stimulus contrast and temporal period.

Impulse Response Modeling of Dynamical Systems with Convolutional Neural Networks

A convolutional neural network approach to deriving the impulse response model of a dynamical system is introduced, and the mean squared error for the output of the model as well as for the model parameters is shown to be small, demonstrating the efficiency of the approach.
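
The basic idea can be illustrated by treating the unknown impulse response of a linear time-invariant system as the kernel of a 1D convolution and fitting it by gradient descent on input/output data. The sketch below is a simplified stand-in under that assumption, not the paper's network architecture.

```python
# Fit the taps of an impulse response by gradient descent on the mean squared
# error between observed and convolved outputs.
import numpy as np

rng = np.random.default_rng(2)
true_irf = np.array([0.0, 0.5, 1.0, 0.6, 0.2])       # ground-truth kernel
u = rng.normal(size=500)                              # system input
y = np.convolve(u, true_irf, mode="full")[: len(u)]   # system output

kernel = np.zeros_like(true_irf)                      # learned impulse response
lr = 0.01
for _ in range(2000):
    y_hat = np.convolve(u, kernel, mode="full")[: len(u)]
    err = y_hat - y
    # gradient of the mean squared error with respect to each kernel tap
    grad = np.array([np.mean(err[k:] * u[: len(u) - k]) for k in range(len(kernel))])
    kernel -= lr * grad
print("recovered impulse response:", np.round(kernel, 2))
```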

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
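
The update rule itself is compact enough to sketch directly: exponential moving averages of the gradient and squared gradient, bias-corrected, set a per-parameter step size. The toy quadratic objective below is an illustrative assumption.

```python
# One Adam update step plus a toy optimization loop.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad               # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2          # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                     # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([1.0, -2.0, 3.0])
m = v = np.zeros_like(theta)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
print(theta)   # should approach the origin
```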

Practical Variational Inference for Neural Networks

This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks and revisits several common regularisers from a variational perspective.
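
As a rough illustration of this family of methods, the sketch below places a diagonal Gaussian variational posterior over the weights of a toy linear model, samples the weights with the reparameterization trick, and balances data fit against a KL penalty toward a standard normal prior. It is a generic mean-field sketch under those assumptions, not the specific estimator proposed in the paper.

```python
# Mean-field Gaussian variational inference over the weights of a toy
# linear regression, trained by stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 2))
w_true = np.array([1.5, -0.7])
y = x @ w_true + 0.1 * rng.normal(size=200)            # toy regression data

mu = np.zeros(2)                                       # variational mean
rho = np.full(2, -3.0)                                 # std = softplus(rho) > 0
lr = 0.05
for step in range(3000):
    std = np.log1p(np.exp(rho))                        # softplus
    eps = rng.normal(size=2)
    w = mu + std * eps                                 # reparameterized sample
    err = x @ w - y
    grad_w = x.T @ err / len(y)                        # data-fit gradient
    kl_grad_mu = mu                                    # KL toward N(0, I) prior
    kl_grad_std = std - 1.0 / std
    grad_mu = grad_w + kl_grad_mu / len(y)
    grad_rho = (grad_w * eps + kl_grad_std / len(y)) * (1.0 / (1.0 + np.exp(-rho)))
    mu -= lr * grad_mu
    rho -= lr * grad_rho
print("posterior mean:", np.round(mu, 2),
      "posterior std:", np.round(np.log1p(np.exp(rho)), 3))
```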

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
...