Publications
DeepLOB: Deep Convolutional Neural Networks for Limit Order Books
TLDR
A large-scale deep learning model to predict price movements from limit order book (LOB) data of cash equities delivers a remarkably stable out-of-sample prediction accuracy and translates well to instruments that were not part of the training set, indicating the model's ability to extract universal features.
Deep Reinforcement Learning for Trading
TLDR
The experiments show that the proposed algorithms can follow large market trends without changing positions, can scale down or hold through consolidation periods, and are equivalent when a linear utility function is used.
BDLOB: Bayesian Deep Convolutional Neural Networks for Limit Order Books
TLDR
This work demonstrates how dropout variational inference can be applied to a large-scale deep learning model that predicts price movements from limit order books (LOBs), the canonical data source representing trading and pricing activity, and is the first to apply Bayesian networks to LOBs.
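As an illustration of the dropout variational inference the TLDR refers to, the sketch below shows the core idea on a toy one-layer linear model: dropout is kept active at prediction time, many stochastic forward passes are averaged, and the spread of the samples serves as a simple uncertainty proxy. All names and numbers here are illustrative, not taken from the paper.

```python
import random
import statistics

def dropout_forward(x, weights, p_drop=0.5):
    """One stochastic forward pass of a toy linear model: each weight
    is dropped with probability p_drop; survivors are rescaled by
    1 / (1 - p_drop) so the expected output is unchanged."""
    keep = 1.0 - p_drop
    out = 0.0
    for xi, wi in zip(x, weights):
        if random.random() < keep:
            out += xi * wi / keep
    return out

def mc_dropout_predict(x, weights, n_samples=2000, p_drop=0.5):
    """Monte Carlo dropout: keep dropout on at prediction time,
    average many stochastic passes, and report the sample spread
    as a simple proxy for model uncertainty."""
    s = [dropout_forward(x, weights, p_drop) for _ in range(n_samples)]
    return statistics.mean(s), statistics.stdev(s)

random.seed(0)
mean, std = mc_dropout_predict([1.0, 2.0, -1.0], [0.5, -0.3, 0.2])
```

The mean converges to the deterministic prediction (here 0.5·1 − 0.3·2 + 0.2·(−1) = −0.3), while the standard deviation quantifies the dropout-induced uncertainty.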
Explicit Regularisation in Gaussian Noise Injections
TLDR
It is shown analytically and empirically that such regularisation produces calibrated classifiers with large classification margins, and that the explicit regulariser derived here is able to reproduce these effects.
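A minimal numerical check of the kind of equivalence described: for a linear model, injecting Gaussian noise into the input is known (a classic result, and a special case of the paper's more general analysis) to match the clean squared loss plus an explicit L2 penalty. The model and values below are arbitrary illustrations, not the paper's setup.

```python
import random

def noisy_input_loss(w, x, y, sigma, n_samples=20000):
    """Monte Carlo estimate of the squared loss of a linear model
    y_hat = w * (x + eps), eps ~ N(0, sigma^2), averaged over the noise."""
    total = 0.0
    for _ in range(n_samples):
        eps = random.gauss(0.0, sigma)
        total += (y - w * (x + eps)) ** 2
    return total / n_samples

def explicit_regularised_loss(w, x, y, sigma):
    """Closed form: the clean squared loss plus the explicit
    L2 penalty sigma^2 * w^2 induced by the noise injection."""
    return (y - w * x) ** 2 + sigma ** 2 * w ** 2

random.seed(1)
w, x, y, sigma = 0.8, 1.5, 2.0, 0.5
approx = noisy_input_loss(w, x, y, sigma)
exact = explicit_regularised_loss(w, x, y, sigma)
```

The Monte Carlo average and the closed form agree up to sampling error, making the "explicit regulariser" interpretation concrete in this simplest case.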
Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction
TLDR
The Recurrent Neural Filter (RNF) is introduced: a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders.
Improving VAEs' Robustness to Adversarial Attack
TLDR
A new hierarchical VAE, the Seatbelt-VAE, is introduced; it produces high-fidelity autoencoders that are also robust to adversarial attacks, as confirmed on several different datasets and against current state-of-the-art adversarial attacks on VAEs.
Port-Hamiltonian Neural Networks for Learning Explicit Time-Dependent Dynamical Systems
TLDR
The proposed port-Hamiltonian neural network can efficiently learn the dynamics of nonlinear physical systems of practical interest and accurately recover the underlying stationary Hamiltonian, time-dependent force, and dissipative coefficient.
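To make the quantities in this TLDR concrete, the toy sketch below integrates a hand-written port-Hamiltonian system (a unit spring-mass with dissipation and an external input) rather than a learned one; in the paper, the Hamiltonian, the time-dependent force, and the dissipative coefficient are what the network recovers from data.

```python
def step(q, p, dt, c=0.0, force=0.0):
    """One explicit Euler step of a port-Hamiltonian system with
    Hamiltonian H(q, p) = (q**2 + p**2) / 2 (a unit spring-mass),
    dissipative coefficient c, and external input `force` (the port)."""
    dq = p                     # dH/dp
    dp = -q - c * p + force    # -dH/dq - dissipation + input
    return q + dt * dq, p + dt * dp

def energy(q, p):
    """Value of the Hamiltonian along the trajectory."""
    return 0.5 * (q * q + p * p)

# Undamped, unforced: the Hamiltonian is (approximately) conserved.
q, p = 1.0, 0.0
for _ in range(2000):
    q, p = step(q, p, dt=1e-3)
e_undamped = energy(q, p)

# Damped (c > 0): the Hamiltonian decays along the trajectory.
q, p = 1.0, 0.0
for _ in range(2000):
    q, p = step(q, p, dt=1e-3, c=0.5)
e_damped = energy(q, p)
```

The contrast between the conserved and decaying energies is exactly the structure a port-Hamiltonian parameterisation builds in by design.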
The Deep Learning Limit: are negative neural network eigenvalues just noise?
TLDR
This work models the empirical risk surface of neural networks as a finite-rank perturbation of the Gaussian Orthogonal Ensemble and solves the problem analytically in the large-dimension limit.
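The object in question can be illustrated numerically: a rank-one additive perturbation of a GOE matrix pushes a single eigenvalue out of the semicircle bulk once the perturbation strength θ exceeds the critical value, landing near the classical location θ + 1/θ (the BBP-type transition). This is a generic random-matrix demonstration, not the paper's loss-surface model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# GOE matrix scaled so its spectrum fills the semicircle on [-2, 2].
a = rng.normal(size=(n, n))
goe = (a + a.T) / np.sqrt(2 * n)

# Rank-one perturbation with strength theta > 1: one eigenvalue
# detaches from the bulk near theta + 1/theta.
theta = 3.0
v = np.ones(n) / np.sqrt(n)
eigs = np.linalg.eigvalsh(goe + theta * np.outer(v, v))
top, bulk_edge = eigs[-1], eigs[-2]
```

Here `top` sits near θ + 1/θ ≈ 3.33, while the rest of the spectrum stays inside (approximately) [−2, 2].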
Enhancing Time Series Momentum Strategies Using Deep Neural Networks
TLDR
Backtesting on a portfolio of 88 continuous futures contracts, it is demonstrated that the Sharpe-optimised LSTM improves on traditional methods by more than two times in the absence of transaction costs, and continues to outperform when transaction costs of up to 2-3 basis points are considered.
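A minimal sketch of what a Sharpe-optimised objective looks like, assuming daily returns and a 252-day annualisation; the paper's actual loss and the LSTM that produces the positions are not reproduced here.

```python
def sharpe_loss(positions, returns):
    """Negative annualised Sharpe ratio of a strategy that holds
    `positions[t]` over the period earning `returns[t]`; minimising
    this loss trains the position model to maximise Sharpe."""
    strat = [p * r for p, r in zip(positions, returns)]
    mean = sum(strat) / len(strat)
    var = sum((s - mean) ** 2 for s in strat) / len(strat)
    return -(mean / (var ** 0.5 + 1e-9)) * 252 ** 0.5

# A profitable position sequence gives a negative loss (positive Sharpe);
# flipping every position flips the sign of the loss exactly.
loss_long = sharpe_loss([1, 1, 1, 1], [0.01, -0.005, 0.02, 0.0])
loss_short = sharpe_loss([-1, -1, -1, -1], [0.01, -0.005, 0.02, 0.0])
```

Because the loss is differentiable in the positions, it can be backpropagated through whatever network generates them.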
Deep Learning for Portfolio Optimization
TLDR
A framework is presented that bypasses traditional forecasting steps and optimises portfolio weights directly by updating model parameters; it delivers good performance under transaction costs, and a detailed study shows the soundness of the approach during the crisis period.
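One ingredient of such a framework can be sketched: mapping unconstrained network outputs through a softmax yields long-only weights that sum to one, so the allocation can be trained end-to-end by gradient descent. This is a generic construction under those assumptions, not necessarily the paper's exact output layer.

```python
import math

def softmax_weights(scores):
    """Turn unconstrained model outputs into portfolio weights that
    are strictly positive and sum to one (a long-only allocation)."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

weights = softmax_weights([0.2, 1.1, -0.5, 0.0])
```

The softmax preserves the ordering of the scores, so the asset the model scores highest receives the largest weight while the simplex constraint is satisfied by construction.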
...