• Corpus ID: 62974141

# Supervised Deep Neural Networks (DNNs) for Pricing/Calibration of Vanilla/Exotic Options Under Various Different Processes

@article{Hirsa2019SupervisedDN,
title={Supervised Deep Neural Networks (DNNs) for Pricing/Calibration of Vanilla/Exotic Options Under Various Different Processes},
author={Ali Hirsa and Tugce Karatas and Amir Oskoui},
journal={ArXiv},
year={2019},
volume={abs/1902.05810}
}
• Published 15 February 2019
• Computer Science
• ArXiv
We apply supervised deep neural networks (DNNs) for pricing and calibration of both vanilla and exotic options under both diffusion and pure jump processes with and without stochastic volatility. We train our neural network models with different numbers of layers, neurons per layer, and various activation functions in order to find which combinations work best empirically. For training, we consider various loss functions and optimization routines. We demonstrate that deep…
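As a concrete illustration of the supervised setup the abstract describes, the sketch below is our own assumption, not the authors' code: it generates Black-Scholes call prices as training labels and fits a small one-hidden-layer ReLU network by plain full-batch gradient descent. The paper's actual experiments vary layers, neurons, activations, loss functions, and optimizers; all ranges and hyperparameters here are illustrative.

```python
import numpy as np
from math import erf, sqrt, log, exp

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (closed form), used to label training data."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S * N(d1) - K * exp(-r * T) * N(d2)

rng = np.random.default_rng(0)
n = 2000
# Illustrative sampling ranges for (moneyness S/K, maturity, vol); K = 1, r fixed.
m = rng.uniform(0.8, 1.2, n)
T = rng.uniform(0.1, 2.0, n)
sig = rng.uniform(0.1, 0.4, n)
X = np.column_stack([m, T, sig])
y = np.array([bs_call(mi, 1.0, Ti, 0.02, si) for mi, Ti, si in X])

# One-hidden-layer ReLU network trained by full-batch gradient descent on MSE.
h = 32
W1 = rng.normal(0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(3000):
    Z = X @ W1 + b1
    A = np.maximum(Z, 0.0)                 # ReLU activations
    pred = (A @ W2 + b2).ravel()
    err = pred - y
    loss = np.mean(err**2)
    # Manual backpropagation of the mean-squared-error loss.
    g_pred = 2 * err[:, None] / n
    gW2 = A.T @ g_pred; gb2 = g_pred.sum(0)
    gA = g_pred @ W2.T
    gZ = gA * (Z > 0)
    gW1 = X.T @ gZ; gb1 = gZ.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

After training, the fit should beat a constant predictor by a wide margin; once trained, evaluating the network is much cheaper than repeated numerical pricing, which is the speed-up the paper exploits.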

## Citations

• Computer Science
• 2022
We develop an unsupervised deep learning method to price barrier options under the Bergomi model. The neural networks serve as approximate option surfaces and are trained to satisfy the PDE.
• Computer Science
• 2020
This technique improves the quality of deep learning applied to option pricing problems and increases the accuracy of the neural nets, since a large portion of the price is already captured by the control variate.
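The control-variate idea above can be sketched in a few lines (a toy of our own, not this paper's setup): instead of regressing the target price directly, regress the residual, target minus an analytic Black-Scholes control variate, and add the control variate back at prediction time. A linear least-squares fit stands in for the neural network, and the "expensive model" price is a hypothetical Black-Scholes-plus-premium construction.

```python
import numpy as np
from math import erf, sqrt, log, exp

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price, used here as the analytic control variate."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S * N(d1) - K * exp(-r * T) * N(d2)

rng = np.random.default_rng(1)
n = 1000
m = rng.uniform(0.8, 1.2, n)          # moneyness S/K, with K = 1
T = rng.uniform(0.1, 2.0, n)
sig = rng.uniform(0.1, 0.4, n)
cv = np.array([bs_call(mi, 1.0, Ti, 0.02, si) for mi, Ti, si in zip(m, T, sig)])
# Hypothetical "expensive model" price: Black-Scholes plus a small smooth premium.
target = cv + 0.03 * sig * T

# Same simple learner (linear least squares, a stand-in for a neural net)
# fitted to raw prices vs. to the residual (price - control variate).
F = np.column_stack([np.ones(n), m, T, sig])
beta_raw, *_ = np.linalg.lstsq(F, target, rcond=None)
beta_res, *_ = np.linalg.lstsq(F, target - cv, rcond=None)

rmse_raw = np.sqrt(np.mean((F @ beta_raw - target) ** 2))
rmse_res = np.sqrt(np.mean((cv + F @ beta_res - target) ** 2))
```

The residual fit only has to learn the small, smooth premium, so its error is far lower than fitting the full nonlinear price surface with the same learner, which is exactly the mechanism the snippet describes.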
• Computer Science
Quantitative Finance
• 2022
This work investigates solving partial integro-differential equations (PIDEs) using unsupervised deep learning; it employs a neural network as the candidate solution and trains it to satisfy the PIDE.
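The "train a candidate solution to satisfy the equation" idea can be illustrated without deep learning machinery (a toy of our own, with a polynomial ansatz standing in for the neural network): take the heat equation u_t = u_xx with u(x,0) = sin(pi x), whose exact solution is exp(-pi^2 t) sin(pi x), and choose ansatz coefficients to minimize the equation residual at collocation points.

```python
import numpy as np

# Heat equation u_t = u_xx on x in [0, 1] with u(x, 0) = sin(pi x);
# exact solution: u(x, t) = exp(-pi^2 t) * sin(pi x).
# Separable ansatz u(x, t) = c(t) * sin(pi x) with c(t) = sum_j a_j t^j, a_0 = 1
# (a polynomial stand-in for the neural-network candidate solution).
# The PDE then reduces to the residual c'(t) + pi^2 c(t) = 0.
deg = 8
ts = np.linspace(0.0, 0.1, 50)            # collocation points in time

# Residual condition at each t for coefficients a_1..a_deg (a_0 = 1 fixed):
#   sum_j a_j * (j t^{j-1} + pi^2 t^j) = -pi^2
J = np.arange(1, deg + 1)
M = J * ts[:, None] ** (J - 1) + np.pi**2 * ts[:, None] ** J
rhs = -np.pi**2 * np.ones_like(ts)
a, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # "train" by residual least squares

c = 1.0 + sum(a[j - 1] * ts**j for j in J)    # fitted time factor
exact = np.exp(-np.pi**2 * ts)
max_err = np.abs(c - exact).max()
```

Minimizing the equation residual at collocation points is the same unsupervised training signal these papers use, just with a least-squares solve in place of gradient descent on network weights.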
This dissertation considers Heston’s stochastic volatility model, and demonstrates how the calibration map from quoted implied volatilities to model parameters can be effectively learned using an ANN, and explores the possibility of approximating the leverage function using a series of ANNs.
• Computer Science
SSRN Electronic Journal
• 2019
This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options within the setting of interest rate term structure models. This aims to
• Computer Science
Journal of Mathematics in Industry
• 2019
The rapid on-line learning of implied volatility by ANNs, in combination with the use of an adapted parallel global optimization method, tackles the computation bottleneck and provides a fast and reliable technique for calibrating model parameters while avoiding, as much as possible, getting stuck in local minima.
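The "learn the calibration map" idea can be sketched with a toy parametric surface (our own stand-in, not the Heston model): implied vol is generated as sigma(K, T; theta) = theta0 + theta1*(K-1)^2 + theta2*sqrt(T). Since this toy surface is linear in theta, a linear regression from quoted surface values back to parameters recovers the calibration map that an ANN would learn in the genuinely nonlinear case.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed quote grid of strikes and maturities (illustrative values).
K = np.array([0.9, 1.0, 1.1, 1.2])
T = np.array([0.25, 0.5, 1.0])
KK, TT = np.meshgrid(K, T)

def surface(theta):
    """Toy implied-vol surface: level + smile + term structure."""
    t0, t1, t2 = theta
    return (t0 + t1 * (KK - 1.0) ** 2 + t2 * np.sqrt(TT)).ravel()

# Training set: random model parameters -> vol surfaces (forward map).
n = 500
thetas = rng.uniform([0.1, 0.0, 0.0], [0.3, 0.5, 0.1], (n, 3))
V = np.array([surface(th) for th in thetas])      # n x 12 surfaces

# Learn the inverse (calibration) map surface -> parameters; linear least
# squares stands in for the ANN used in the paper.
W, *_ = np.linalg.lstsq(V, thetas, rcond=None)

# Calibrate an unseen quoted surface.
theta_true = np.array([0.2, 0.3, 0.05])
theta_hat = surface(theta_true) @ W
```

Once the map is learned offline, calibration is a single cheap evaluation instead of an iterative optimization, which is the computational bottleneck these papers target.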
• D. Bloch
• Economics
SSRN Electronic Journal
• 2019
Some of the existing methods that use neural networks for pricing market and model prices, for calibration, and for exotic option pricing are reviewed; the feasibility of these methods is discussed, problems are highlighted, and alternative solutions are proposed.
• Economics
Technological Forecasting and Social Change
• 2020
This paper proposes a machine-learning method to price arithmetic and geometric average options accurately and, in particular, quickly; the method is verified by empirical applications as well as numerical experiments.
• D. Bloch
• Economics
SSRN Electronic Journal
• 2019
A pricing model is tied to its ability to capture the dynamics of the spot price process; its misspecification will lead to pricing and hedging errors. Parametric pricing formula depends on the

## References

Showing 1-10 of 29 references

A framework within which machine learning may be used for finance, with specific application to option pricing, is summarized, and a fully-connected feed-forward deep learning neural network is trained to reproduce the Black and Scholes (1973) option pricing formula to a high degree of accuracy.
• Computer Science
Quantitative Finance
• 2018
It is illustrated that for many classical problems, the price of extra speed is some loss of accuracy, but this reduced accuracy is often well within reasonable limits and hence very acceptable from a practical point of view.
• Computer Science
ICML
• 2019
This work proves why stochastic gradient descent can find global minima on the training objective of DNNs in polynomial time and implies an equivalence between over-parameterized neural networks and the neural tangent kernel (NTK) in the finite (and polynomial) width setting.
• Computer Science
NIPS
• 2017
A convergence analysis for SGD is provided on a rich subset of two-layer feedforward networks with ReLU activations, characterized by a special structure called "identity mapping"; it proves that if the input follows a Gaussian distribution, then with the standard $O(1/\sqrt{d})$ initialization of the weights, SGD converges to the global minimum in a polynomial number of steps.
• Computer Science
IEEE Transactions on Information Theory
• 2019
It is shown that with quadratic activations, the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics.
• Computer Science
ArXiv
• 2016
It is proved that for an MNN with one hidden layer, the training error is zero at every differentiable local minimum, for almost every dataset and dropout-like noise realization; the result is extended to the case of more than one hidden layer.
• Computer Science
BMVC
• 2016
This paper conducts a detailed experimental study on the architecture of ResNet blocks and proposes a novel architecture where the depth of residual networks is decreased and the width is increased; the resulting network structures, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.
• Computer Science
AISTATS
• 2010
The objective here is to understand better why standard gradient descent from random initialization does so poorly with deep neural networks, in order to explain recent relative successes and help design better algorithms in the future.
• Mathematics
• 2003
We derive a form of the partial integro-differential equation (PIDE) for pricing American options under variance gamma (VG) process. We then develop a numerical algorithm to solve for values of
• Computer Science
Neural Computation
• 2019
With a locally induced structure on deep nonlinear neural networks, the values of local minima of neural networks are theoretically proven to be no worse than the globally optimal values of corresponding classical machine learning models.