Title: Multi-Level Monte Carlo
Description: An implementation of MLMC (Multi-Level Monte Carlo), Giles (2008) <doi:10.1287/opre.1070.0496>, Heinrich (1998) <doi:10.1006/jcom.1998.0471>, for R. This package builds on the original 'Matlab' and 'C++' implementations by Mike Giles to provide a full MLMC driver and example level samplers. Multi-core parallel sampling of levels is built in.
Authors: Louis Aslett [cre, aut, trl], Mike Giles [ctb], Tigran Nagapetyan [ctb], Sebastian Vollmer [ctb]
Maintainer: Louis Aslett <[email protected]>
License: GPL-2
Version: 2.1.1
Built: 2024-11-11 18:18:07 UTC
Source: https://github.com/louisaslett/mlmc
Financial options based on scalar geometric Brownian motion, similar to Mike Giles' MCQMC06 paper, Giles (2008), using a Milstein discretisation.
mcqmc06_l(l, N, option)
l: the level to be simulated.

N: the number of samples to be computed.

option: the option type, between 1 and 5. The options are:
  1 = European call;
  2 = Asian call;
  3 = lookback call;
  4 = digital call;
  5 = barrier call.
This function is based on GPL-2 C++ code by Mike Giles.
A named list containing:

sums: a vector of length six, (sum(Y), sum(Y^2), sum(Y^3), sum(Y^4), sum(P), sum(P^2)), where Y are iid simulations with expectation E[P_0] when l = 0 and expectation E[P_l - P_{l-1}] when l > 0, and P are iid simulations with expectation E[P_l]. Note that only the first two components of this are used by the main mlmc() driver; the full vector is used by mlmc.test() for convergence tests etc.

cost: a scalar with the total cost of the paths simulated, computed as N * 2^l for level l.
Louis Aslett <[email protected]>
Mike Giles <[email protected]>
Giles, M. (2008) 'Improved Multilevel Monte Carlo Convergence using the Milstein Scheme', in A. Keller, S. Heinrich, and H. Niederreiter (eds) Monte Carlo and Quasi-Monte Carlo Methods 2006. Berlin, Heidelberg: Springer, pp. 343–358. Available at: doi:10.1007/978-3-540-74496-2_20.
# These are similar to the MLMC tests for the MCQMC06 paper
# using a Milstein discretisation with 2^l timesteps on level l
#
# The figures are slightly different due to:
# -- change in MSE split
# -- change in cost calculation
# -- different random number generation
# -- switch to S_0=100
#
# Note the following takes quite a while to run, for a toy example see after
# this block.
N0 <- 200   # initial samples on coarse levels
Lmin <- 2   # minimum refinement level
Lmax <- 10  # maximum refinement level

test.res <- list()
for(option in 1:5) {
  if(option == 1) {
    cat("\n ---- Computing European call ---- \n")
    N <- 20000  # samples for convergence tests
    L <- 8      # levels for convergence tests
    Eps <- c(0.005, 0.01, 0.02, 0.05, 0.1)
  } else if(option == 2) {
    cat("\n ---- Computing Asian call ---- \n")
    N <- 20000  # samples for convergence tests
    L <- 8      # levels for convergence tests
    Eps <- c(0.005, 0.01, 0.02, 0.05, 0.1)
  } else if(option == 3) {
    cat("\n ---- Computing lookback call ---- \n")
    N <- 20000  # samples for convergence tests
    L <- 10     # levels for convergence tests
    Eps <- c(0.005, 0.01, 0.02, 0.05, 0.1)
  } else if(option == 4) {
    cat("\n ---- Computing digital call ---- \n")
    N <- 200000 # samples for convergence tests
    L <- 8      # levels for convergence tests
    Eps <- c(0.01, 0.02, 0.05, 0.1, 0.2)
  } else if(option == 5) {
    cat("\n ---- Computing barrier call ---- \n")
    N <- 200000 # samples for convergence tests
    L <- 8      # levels for convergence tests
    Eps <- c(0.005, 0.01, 0.02, 0.05, 0.1)
  }

  test.res[[option]] <- mlmc.test(mcqmc06_l, N, L, N0, Eps, Lmin, Lmax,
                                  option = option)

  # print exact analytic value, based on S0=K
  T <- 1
  r <- 0.05
  sig <- 0.2
  K <- 100
  B <- 0.85*K

  k <- 0.5*sig^2/r
  d1 <- (r+0.5*sig^2)*T / (sig*sqrt(T))
  d2 <- (r-0.5*sig^2)*T / (sig*sqrt(T))
  d3 <- (2*log(B/K) + (r+0.5*sig^2)*T) / (sig*sqrt(T))
  d4 <- (2*log(B/K) + (r-0.5*sig^2)*T) / (sig*sqrt(T))

  if(option == 1) {
    val <- K*( pnorm(d1) - exp(-r*T)*pnorm(d2) )
  } else if(option == 2) {
    val <- NA
  } else if(option == 3) {
    val <- K*( pnorm(d1) - pnorm(-d1)*k -
               exp(-r*T)*(pnorm(d2) - pnorm(d2)*k) )
  } else if(option == 4) {
    val <- K*exp(-r*T)*pnorm(d2)
  } else if(option == 5) {
    val <- K*( pnorm(d1) - exp(-r*T)*pnorm(d2) -
               ((K/B)^(1-1/k))*((B^2)/(K^2)*pnorm(d3) - exp(-r*T)*pnorm(d4)) )
  }

  if(is.na(val)) {
    cat(sprintf("\n Exact value unknown, MLMC value: %f \n",
                test.res[[option]]$P[1]))
  } else {
    cat(sprintf("\n Exact value: %f, MLMC value: %f \n",
                val, test.res[[option]]$P[1]))
  }

  # plot results
  plot(test.res[[option]])
}

# The level sampler can be called directly to retrieve the relevant level sums:
mcqmc06_l(l = 7, N = 10, option = 1)
This function is the Multi-level Monte Carlo driver, which will sample from the levels of a user-specified function.
mlmc(Lmin, Lmax, N0, eps, mlmc_l, alpha = NA, beta = NA, gamma = NA,
     parallel = NA, ...)
Lmin: the minimum level of refinement. Must be >= 2.

Lmax: the maximum level of refinement. Must be >= Lmin.

N0: initial number of samples which are used for the first 3 levels and for any subsequent levels which are automatically added. Must be > 0.

eps: the target accuracy of the estimate (root mean square error). Must be > 0.

mlmc_l: a user supplied function which provides the estimate for level l. It should return a named list containing one element named sums and a second named cost; see the example level samplers opre_l() and mcqmc06_l() (and their source code) for the required format.

alpha: the weak error, O(2^(-alpha*l)). Must be > 0 if specified. If NA, alpha will be estimated.

beta: the variance, O(2^(-beta*l)). Must be > 0 if specified. If NA, beta will be estimated.

gamma: the sample cost, O(2^(gamma*l)). Must be > 0 if specified. If NA, gamma will be estimated.

parallel: if an integer is supplied, R will fork that many parallel processes and compute each level estimate in parallel.

...: additional arguments which are passed on when the user supplied mlmc_l function is called.
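The sums/cost contract expected of mlmc_l can be illustrated with a short sketch. The sampler below is hypothetical (the name my_gbm_l, the geometric Brownian motion model, and all parameter values are illustrative, not package defaults); it returns the six-component sums vector and the cost in the documented format, using an Euler-Maruyama scheme with 2^l timesteps and coupled fine/coarse Brownian increments:

```r
# Hypothetical level sampler illustrating the required return format:
# a named list with elements `sums` and `cost`. It targets E[S_T] for
# geometric Brownian motion dS = r*S dt + sig*S dW, discretised by
# Euler-Maruyama with 2^l timesteps on level l.
my_gbm_l <- function(l, N) {
  M <- 2                            # refinement factor between levels
  T <- 1; r <- 0.05; sig <- 0.2; S0 <- 100

  nf <- M^l                         # fine timesteps
  hf <- T / nf                      # fine stepsize

  Sf <- rep(S0, N)                  # fine-path state
  Sc <- rep(S0, N)                  # coarse-path state

  if(l == 0) {
    dWf <- sqrt(hf) * rnorm(N)
    Sf <- Sf + r*Sf*hf + sig*Sf*dWf
  } else {
    nc <- nf / M                    # coarse timesteps
    hc <- T / nc                    # coarse stepsize
    for(n in 1:nc) {
      dWc <- rep(0, N)
      for(m in 1:M) {               # M fine steps per coarse step,
        dWf <- sqrt(hf) * rnorm(N)  # sharing the same Brownian noise
        dWc <- dWf + dWc
        Sf <- Sf + r*Sf*hf + sig*Sf*dWf
      }
      Sc <- Sc + r*Sc*hc + sig*Sc*dWc
    }
  }

  Pf <- Sf
  Pc <- if(l == 0) rep(0, N) else Sc
  Y  <- Pf - Pc                     # level-l correction samples

  list(sums = c(sum(Y), sum(Y^2), sum(Y^3), sum(Y^4), sum(Pf), sum(Pf^2)),
       cost = N * nf)
}

# It could then be passed to the driver, e.g. mlmc(2, 10, 1000, 0.1, my_gbm_l)
```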
The Multilevel Monte Carlo method originated in the works of Giles (2008) and Heinrich (1998).

Consider a sequence P_0, P_1, ..., which approximates P_L with increasing accuracy, but also increasing cost. We have the simple identity

  E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],

and therefore we can use the following unbiased estimator for E[P_L]:

  N_0^{-1} sum_{i=1}^{N_0} P_0^{(0,i)} + sum_{l=1}^{L} N_l^{-1} sum_{i=1}^{N_l} ( P_l^{(l,i)} - P_{l-1}^{(l,i)} ),

where N_l samples are produced at level l. The inclusion of the level l in the superscript (l,i) indicates that the samples used at each level of correction are independent.

Set C_0 and V_0 to be the cost and variance of one sample of P_0, and C_l and V_l to be the cost and variance of one sample of P_l - P_{l-1}. Then the overall cost and variance of the multilevel estimator are sum_{l=0}^{L} C_l N_l and sum_{l=0}^{L} V_l N_l^{-1}, respectively.

The idea behind the method is that, provided the product C_l V_l decreases with l, i.e. the cost increases with level slower than the variance decreases, one can achieve significant computational savings, which can be formalised as in Theorem 1 of Giles (2015).
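Under this cost/variance model, the classical allocation from Giles (2008) chooses N_l proportional to sqrt(V_l / C_l), minimising total cost for a fixed overall estimator variance of eps^2 / 2. A small numerical sketch (the per-level variances and costs here are made-up illustrative values):

```r
# Cost-optimal MLMC sample sizes: minimising sum(C_l * N_l) subject to
# sum(V_l / N_l) = eps^2 / 2 gives
#   N_l = ceiling( 2 * eps^-2 * sqrt(V_l / C_l) * sum_l sqrt(V_l * C_l) )
Vl  <- c(1.0, 0.25, 0.06, 0.015)  # hypothetical per-level variances V_l
Cl  <- 2^(0:3)                    # hypothetical per-level costs C_l
eps <- 0.01                       # target root mean square error

Nl <- ceiling(2 * sqrt(Vl / Cl) * sum(sqrt(Vl * Cl)) / eps^2)
Nl  # decreasing: most samples on the cheap, high-variance coarse levels
```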
For further information on multilevel Monte Carlo methods, see the webpage https://people.maths.ox.ac.uk/gilesm/mlmc_community.html which lists the research groups working in the area, and their main publications.
This function is based on GPL-2 'Matlab' code by Mike Giles.
A named list containing:
P
The MLMC estimate;
Nl
A vector of the number of samples performed on each level;
Cl
Per sample cost at each level.
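As a usage sketch (the call arguments here mirror the package examples), these components can be inspected directly, and the total cost recovered from Nl and Cl:

```r
res <- mlmc(2, 10, 1000, 0.01, mcqmc06_l, option = 1)

res$P                 # the MLMC estimate
res$Nl                # number of samples performed on each level
sum(res$Nl * res$Cl)  # total cost, sum over levels of C_l * N_l
```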
Louis Aslett <[email protected]>
Mike Giles <[email protected]>
Tigran Nagapetyan <[email protected]>
Giles, M.B. (2008) 'Multilevel Monte Carlo Path Simulation', Operations Research, 56(3), pp. 607–617. Available at: doi:10.1287/opre.1070.0496.
Giles, M.B. (2015) 'Multilevel Monte Carlo methods', Acta Numerica, 24, pp. 259–328. Available at: doi:10.1017/S096249291500001X.
Heinrich, S. (1998) 'Monte Carlo Complexity of Global Solution of Integral Equations', Journal of Complexity, 14(2), pp. 151–175. Available at: doi:10.1006/jcom.1998.0471.
mlmc(2, 6, 1000, 0.01, opre_l, option = 1)
mlmc(2, 10, 1000, 0.01, mcqmc06_l, option = 1)
Computes a suite of diagnostic values for an MLMC estimation problem.
mlmc.test(mlmc_l, N, L, N0, eps.v, Lmin, Lmax, alpha = NA, beta = NA,
          gamma = NA, parallel = NA, silent = FALSE, ...)
mlmc_l: a user supplied function which provides the estimate for level l. It should return a named list containing one element named sums and a second named cost; see the example level samplers opre_l() and mcqmc06_l() (and their source code) for the required format.

N: number of samples to use in the convergence tests, kurtosis, and telescoping sum check.

L: number of levels to use in the convergence tests, kurtosis, and telescoping sum check.

N0: initial number of samples which are used for the first 3 levels and for any subsequent levels which are automatically added in the complexity tests. Must be > 0.

eps.v: a vector of one or more target accuracies for the complexity tests. Must all be > 0.

Lmin: the minimum level of refinement for the complexity tests. Must be >= 2.

Lmax: the maximum level of refinement for the complexity tests. Must be >= Lmin.

alpha: the weak error, O(2^(-alpha*l)). Must be > 0 if specified. If NA, alpha will be estimated.

beta: the variance, O(2^(-beta*l)). Must be > 0 if specified. If NA, beta will be estimated.

gamma: the sample cost, O(2^(gamma*l)). Must be > 0 if specified. If NA, gamma will be estimated.

parallel: if an integer is supplied, R will fork that many parallel processes and compute each level estimate in parallel.

silent: set to TRUE to suppress running output (identical output can still be printed by printing the return result).

...: additional arguments which are passed on when the user supplied mlmc_l function is called.
See one of the example level sampler functions (e.g. opre_l()) for example usage.
This function is based on GPL-2 'Matlab' code by Mike Giles.
An mlmc.test object which contains all the computed diagnostic values. This object can be printed or plotted (see plot.mlmc.test).
Louis Aslett <[email protected]>
Mike Giles <[email protected]>
Tigran Nagapetyan <[email protected]>
# Example calls with realistic arguments
# Financial options using an Euler-Maruyama discretisation
tst <- mlmc.test(opre_l, N = 2000000, L = 5, N0 = 1000,
                 eps.v = c(0.005, 0.01, 0.02, 0.05, 0.1),
                 Lmin = 2, Lmax = 6, option = 1)
tst
plot(tst)

# Financial options using a Milstein discretisation
tst <- mlmc.test(mcqmc06_l, N = 20000, L = 8, N0 = 200,
                 eps.v = c(0.005, 0.01, 0.02, 0.05, 0.1),
                 Lmin = 2, Lmax = 10, option = 1)
tst
plot(tst)

# Toy versions for CRAN tests
tst <- mlmc.test(opre_l, N = 10000, L = 5, N0 = 1000,
                 eps.v = c(0.025, 0.1),
                 Lmin = 2, Lmax = 6, option = 1)

tst <- mlmc.test(mcqmc06_l, N = 10000, L = 8, N0 = 1000,
                 eps.v = c(0.025, 0.1),
                 Lmin = 2, Lmax = 10, option = 1)
Financial options based on scalar geometric Brownian motion and Heston models, similar to Mike Giles' original 2008 Operations Research paper, Giles (2008), using an Euler-Maruyama discretisation.
opre_l(l, N, option)
l: the level to be simulated.

N: the number of samples to be computed.

option: the option type, between 1 and 5. The options are:
  1 = European call;
  2 = Asian call;
  3 = lookback call;
  4 = digital call;
  5 = Heston model.
This function is based on GPL-2 'Matlab' code by Mike Giles.
A named list containing:

sums: a vector of length six, (sum(Y), sum(Y^2), sum(Y^3), sum(Y^4), sum(P), sum(P^2)), where Y are iid simulations with expectation E[P_0] when l = 0 and expectation E[P_l - P_{l-1}] when l > 0, and P are iid simulations with expectation E[P_l]. Note that only the first two components of this are used by the main mlmc() driver; the full vector is used by mlmc.test() for convergence tests etc.

cost: a scalar with the total cost of the paths simulated, computed as N * 4^l for level l.
Louis Aslett <[email protected]>
Mike Giles <[email protected]>
Tigran Nagapetyan <[email protected]>
Giles, M.B. (2008) 'Multilevel Monte Carlo Path Simulation', Operations Research, 56(3), pp. 607–617. Available at: doi:10.1287/opre.1070.0496.
# These are similar to the MLMC tests for the original
# 2008 Operations Research paper, using an Euler-Maruyama
# discretisation with 4^l timesteps on level l.
#
# The differences are:
# -- the plots do not have the extrapolation results
# -- two plots are log_2 rather than log_4
# -- the new MLMC driver is a little different
# -- switch to X_0=100 instead of X_0=1
#
# Note the following takes quite a while to run, for a toy example see after
# this block.
N0 <- 1000  # initial samples on coarse levels
Lmin <- 2   # minimum refinement level
Lmax <- 6   # maximum refinement level

test.res <- list()
for(option in 1:5) {
  if(option == 1) {
    cat("\n ---- Computing European call ---- \n")
    N <- 1000000 # samples for convergence tests
    L <- 5       # levels for convergence tests
    Eps <- c(0.005, 0.01, 0.02, 0.05, 0.1)
  } else if(option == 2) {
    cat("\n ---- Computing Asian call ---- \n")
    N <- 1000000 # samples for convergence tests
    L <- 5       # levels for convergence tests
    Eps <- c(0.005, 0.01, 0.02, 0.05, 0.1)
  } else if(option == 3) {
    cat("\n ---- Computing lookback call ---- \n")
    N <- 1000000 # samples for convergence tests
    L <- 5       # levels for convergence tests
    Eps <- c(0.01, 0.02, 0.05, 0.1, 0.2)
  } else if(option == 4) {
    cat("\n ---- Computing digital call ---- \n")
    N <- 4000000 # samples for convergence tests
    L <- 5       # levels for convergence tests
    Eps <- c(0.02, 0.05, 0.1, 0.2, 0.5)
  } else if(option == 5) {
    cat("\n ---- Computing Heston model ---- \n")
    N <- 2000000 # samples for convergence tests
    L <- 5       # levels for convergence tests
    Eps <- c(0.005, 0.01, 0.02, 0.05, 0.1)
  }

  test.res[[option]] <- mlmc.test(opre_l, N, L, N0, Eps, Lmin, Lmax,
                                  option = option)

  # print exact analytic value, based on S0=K
  T <- 1
  r <- 0.05
  sig <- 0.2
  K <- 100

  k <- 0.5*sig^2/r
  d1 <- (r+0.5*sig^2)*T / (sig*sqrt(T))
  d2 <- (r-0.5*sig^2)*T / (sig*sqrt(T))

  if(option == 1) {
    val <- K*( pnorm(d1) - exp(-r*T)*pnorm(d2) )
  } else if(option == 2) {
    val <- NA
  } else if(option == 3) {
    val <- K*( pnorm(d1) - pnorm(-d1)*k -
               exp(-r*T)*(pnorm(d2) - pnorm(d2)*k) )
  } else if(option == 4) {
    val <- K*exp(-r*T)*pnorm(d2)
  } else if(option == 5) {
    val <- NA
  }

  if(is.na(val)) {
    cat(sprintf("\n Exact value unknown, MLMC value: %f \n",
                test.res[[option]]$P[1]))
  } else {
    cat(sprintf("\n Exact value: %f, MLMC value: %f \n",
                val, test.res[[option]]$P[1]))
  }

  # plot results
  plot(test.res[[option]])
}

# The level sampler can be called directly to retrieve the relevant level sums:
opre_l(l = 7, N = 10, option = 1)
Plot an mlmc.test object

Produces diagnostic plots on the result of an mlmc.test function call.
## S3 method for class 'mlmc.test'
plot(x, which = "all", cols = NA, ...)
x: an mlmc.test object.

which: a vector of strings specifying which plots to produce; the default "all" produces all available diagnostic plots.

cols: the number of columns across to plot, overriding the default value.

...: additional arguments which are passed on to plotting functions.
Most of the plots produced are relatively self-explanatory. However, the consistency and kurtosis plots in particular may require some background. It is highly recommended to refer to Section 3.3 of Giles (2015), where the rationale for these diagnostic plots is addressed in full detail.
No return value, called for side effects.
Louis Aslett <[email protected]>
Giles, M.B. (2015) 'Multilevel Monte Carlo methods', Acta Numerica, 24, pp. 259–328. Available at: doi:10.1017/S096249291500001X.
tst <- mlmc.test(opre_l, N = 2000000, L = 5, N0 = 1000,
                 eps.v = c(0.005, 0.01, 0.02, 0.05, 0.1),
                 Lmin = 2, Lmax = 6, option = 1)
tst
plot(tst)