Sequential Monte Carlo Methods in Options Pricing

By: Jacinth Grace Leong Mei En

Supervisor: Dr Jasra Ajay

ST4199 Honours Project in Statistics

Department of Statistics and Applied Probability

National University of Singapore 2018/2019

Contents

Acknowledgements

Summary

Preliminaries & Literature Review

Options and Financial Markets

Black-Scholes Model

European Call Options

Barrier Options

Monte Carlo Methods

Conventions and Overview

Importance Sampling

Sequential Monte Carlo Methods

Monte Carlo Methods

Properties of Estimate

European Call Options

Evaluation of Results

Barrier Options

Evaluation of Results

Importance Sampling

Selection of the Importance Function

Importance Sampling for Barrier Options

Properties of Estimate

Monte Carlo and Importance Sampling – A Comparison

Sequential Monte Carlo Methods

Sequential Importance Sampling (SIS)

Formulation

SMC Samplers

Formulation

Algorithm

SMC Sampler for Barriers

Conclusion

References

Acknowledgements

Firstly, I would like to express my sincerest gratitude to my supervisor, Dr Jasra Ajay, for having me as his honours student and for his patience and guidance throughout the period of this project.

Secondly, I would like to thank my family for their support and many sacrifices throughout my course of study.

Thirdly, my friends – for pushing me and motivating me. It was tiring to juggle the many commitments, but they kept me going on days I felt I hit a wall, whether they know it or not.

Last but not least, I thank the faculty members in NUS' Department of Statistics and Applied Probability, for helping to build the foundation necessary to complete this paper, and for the challenging yet rewarding experience.

This project has been a good reminder of why I chose to major in Statistics, and I hope to put everything I have learnt here to use in the future.

Summary

This paper follows the approximation of option prices in finance using Monte Carlo methods (MC). With the proliferation of computational power, Bayesian computation has become increasingly popular across industries. The aim of this paper is to justify the use of Monte Carlo simulation in option pricing, specifically importance sampling methods and Sequential Monte Carlo methods, which are a form of importance sampling. Algorithms and results in this paper were produced in R.

Chapter 1 serves as an overview as well as a literature review of past work done on this topic, as well as a brief overview of the methods used throughout this paper.

Chapter 2 goes through MC methods and applies the standard MC method to vanilla call options and barrier options.

Chapter 3 looks at importance sampling for barrier options and applies the method proposed to barrier options.

Chapter 4 details SMC methods and applies them to barrier options, before chapter 5 concludes the paper and proposes possible future directions in the field.

Preliminaries & Literature Review

Options and Financial Markets

Options are a form of financial derivative that gives the buyer the right, but not the obligation, to buy or sell an underlying asset for a strike price at a specified date. The underlying asset can be a variety of financial instruments – foreign exchange, equities, fixed income and even commodities – though the terms stock and underlying will be used interchangeably from here on for convenience. Options are seen as a powerful hedging tool, and are also used for speculation. Hedging helps to reduce risk exposure, as holders can exploit negative correlations in payoffs to protect their positions. Speculation through options gives investors a less costly way of placing bets on the direction they believe the market is likely to move. Options are traded on either organized exchanges, such as the Chicago Board Options Exchange (CBOE), or over-the-counter markets.

Options exist in many different forms, but this paper does not serve to shed light on the numerous types that exist. The MC methods explored will be applied to (1) European Call options, and (2) Barrier Options.

Option prices can be expressed as the expected value of its discounted payoffs. Under the assumptions of a complete market and a constant rate of interest, the value of an option with expiration T has the form

$$e^{-rT}\,\mathbb{E}\left[f\left(\{S_t\}_{0\le t\le T}\right)\right]$$
where $f$ is the non-negative payoff function and the expectation is taken with respect to (w.r.t.) the risk-neutral measure. The payoff function differs for different types of options, depending on the option's attributes.

[12] gives a good overview of options as well as some of the basic concepts behind the derivation of option prices. This paper applies Monte Carlo methods to European call options and European barrier options, specifically down and out barriers.

Black Scholes Model

The ubiquitous Black-Scholes Model is a mathematical model of financial derivative markets. Economists Fischer Black and Myron Scholes first came up with the partial differential equation (PDE)

$$\frac{\partial V}{\partial t}+\frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}+rS\frac{\partial V}{\partial S}-rV=0$$
where $V$ is the price of the option as a function of the stock price $S$ at time $t$, $r$ is the risk-free rate, and $\sigma$ is the volatility of the stock. There are underlying assumptions to this model, and work has been done to relax these assumptions over time. Most approaches use the PDE to compute the expected value of the discounted terminal payoff under a risk-neutral measure. Throughout this paper, the expected present value payoffs of the options use Black-Scholes assumptions on the dynamics of the underlying stock.

European Call Options

European call options are a form of vanilla option with a maturity $T$ at which the holder has the right to exercise the option and buy the underlying, henceforth called the stock, at a price $K$, the strike price. The payoff to the holder would then be
$$(S_T-K)^+=\max(0,\ S_T-K)$$
Intuitively, this means the payoff to the holder is either $0$ or $S_T-K$. If the stock's price at maturity exceeds the strike price, the holder's payoff is positive. If $S_T\le K$, the holder would choose not to exercise the option, resulting in a zero payoff.

In pricing European call options, the Black-Scholes model reduces to

$$C_0 = S_0\,\Phi(d_1) - Ke^{-rT}\,\Phi(d_2)$$
$$d_1 = \frac{\ln(S_0/K) + (r+\tfrac{\sigma^2}{2})T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}$$
where $C_0$ is the current call option value and $\Phi(d)$ is the cumulative standard normal distribution, $\Phi(d) = P(Z \le d)$. Following the Black-Scholes model's assumptions, all assets follow correlated geometric Brownian motion processes with constant volatilities over the life of the option, and the risk-free rate is constant.
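The closed form above can be checked numerically. Below is a minimal Python sketch (the paper's own results were produced in R); `bs_call` and `norm_cdf` are illustrative names, not code from the paper:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF, written via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, T, r, sigma):
    # Black-Scholes call price: C0 = S0*Phi(d1) - K*exp(-rT)*Phi(d2).
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = bs_call(100, 110, 1.0, 0.015, 0.3)  # Table 2 reports 8.679 for these inputs
```

The same function is used later as the benchmark against which the Monte Carlo estimates are compared.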

Barrier Options

                 Zero payout if H is crossed         Can only be exercised if the stock
                 before T (option ceases to exist)   price crosses H before T
H < S0           Down and Out                        Down and In
H > S0           Up and Out                          Up and In

Table 1: An overview of the types of barrier options

Barrier options are a form of exotic option that is weakly path-dependent; the payoff is determined by whether or not the price of the stock crosses a barrier before expiration. Our analysis will focus on European down and out barrier options.

From the underlying Black Scholes Model, we have the estimate

$$e^{-rT}\,\mathbb{E}\left[(S_T-K)^+\prod_{k=1}^{m} I_{(a,b)}(S_{t_k})\right] = e^{-rT}\int (s_T-K)^+\prod_{k=1}^{m} I_{(a,b)}(s_{t_k})\, p(s_{t_k}\mid s_{t_{k-1}})\, ds_{t_1}\cdots ds_{t_m}$$
as the present value of the expected barrier option price. Over the lifetime of the option, it will be knocked out if the underlying price goes below the lower barrier $a$ or above the upper barrier $b$, and the resulting payoff would be zero. In the case of the barrier option we are observing, $b=\infty$. $p(s_{t_k}\mid s_{t_{k-1}})$ represents the probability that the underlying's price does not cross the barrier at time interval $k$ given its price one time step ago, $P(s_{t_k}\in(a,b)\mid s_{t_{k-1}})$. The probability term is included in the integral as barrier options are path-dependent, and the joint density of the path factorizes into these one-step transition densities.

When there is a high probability of knock-out, the standard Monte Carlo estimate would have a high variance, since a larger number of simulated paths would have zero payoffs. [2] introduced a continuity correction into continuously monitored barrier options in order to price discretely monitored barrier options; the literature does so by moving the barrier by a factor between time intervals.

Monte Carlo Methods

Monte Carlo methods have gained popularity as a computational tool to evaluate the volume of a set by interpreting its volume as an expectation. One of the first works on using Monte Carlo methods to price options was by Boyle [7], a good early review of using Monte Carlo methods to price derivatives. Numerous works have been done on the evaluation of option prices using Monte Carlo following this [1, 10, 14, 15]. The principle of MC methods lies in multiple simulations of the underlying model, averaging over all the simulations to get an unbiased estimate that converges to the true value as the number of simulations increases. After a finite number of simulations, we are able to get information on the likely magnitude of the error in the estimate from the Central Limit Theorem [1].

MC methods branch out into three problem classes – optimization, integration and generating draws from a probability distribution. Chapters 3 and 5 of [11] provide a good understanding of MC methods.

In this paper, in order to value the options, we determine stochastic processes for the underlying asset before simulating these paths. In the pricing of many financial derivatives, problems may arise when evaluating the expected prices of the underlying asset for complex financial instruments, making the calculation of their estimates intractable. Monte Carlo methods provide a powerful and flexible approach in dealing with complex problems and their advantages are more distinct in higher dimensions (increased complexity of underlying problem).

In classical Monte Carlo integration, a generic problem would be evaluating the integral

$$\mathbb{E}_f[h(X)] = \int_{S} h(x)f(x)\,dx$$
where $h(X)$ is an arbitrary function and $f(x)$ is a probability density function, with $\int f(x)\,dx = 1$.

For the estimate, a sample (X1,…, Xn) is drawn independently from the density f and the estimate is calculated using

$$\hat{h} = \frac{1}{N}\sum_{i=1}^{N} h(X_i)$$
By the Strong Law of Large Numbers, $\hat{h}$ converges almost surely to $\mathbb{E}_f[h(X)]$.

$$\mathrm{var}_f[h(X)] = \int_{S}\left(h(x)-\mathbb{E}_f[h(X)]\right)^2 f(x)\,dx$$
When $h^2$ has a finite expectation under $f$, the variance can be estimated from the samples using
$$\hat{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(h(x_i)-\hat{h}\right)^2$$
For large $N$, the $N-1$ in the above formula can be replaced with $N$, and
$$\frac{\hat{h}-\mathbb{E}_f[h(X)]}{\hat{\sigma}/\sqrt{N}}$$
is approximately standard normal. Robert & Casella, Monte Carlo Statistical Methods (2004) provides insight into the variety of Monte Carlo methods available. The estimates for the standard MC, IS and SMC methods in this paper are unbiased.
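The generic estimator and its standard error can be sketched in a few lines of Python (illustrative only; the paper's computations were done in R), here applied to the toy integrand $h(x) = x^2$ under a standard normal $f$, for which $\mathbb{E}_f[h(X)] = 1$:

```python
import math
import random

def mc_estimate(h, sample, N, seed=0):
    # Classical Monte Carlo: average h(X_i) over N IID draws X_i ~ f.
    # The sample variance of h(X) gives the standard error of the mean.
    rng = random.Random(seed)
    hs = [h(sample(rng)) for _ in range(N)]
    est = sum(hs) / N
    var = sum((v - est) ** 2 for v in hs) / (N - 1)
    return est, math.sqrt(var / N)

# E_f[X^2] = 1 for X ~ N(0, 1).
est, se = mc_estimate(lambda x: x * x, lambda rng: rng.gauss(0.0, 1.0), 100_000)
```

The returned standard error shrinks like $1/\sqrt{N}$, matching the CLT statement above.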

Conventions and Overview

Financial asset prices can be defined as a stochastic process $\{S_t\}_{t\in[0,T]}$, defined on a probability space $(\Omega, \mathcal{F}, P)$. $S_0$ is the initial price of the underlying asset, while $T$ is the terminal time (expiration date) of the option.

From the Black-Scholes PDE, a stock’s risk-neutral dynamics is described using the following stochastic differential equation (SDE):

$$\frac{dS_t}{S_t} = r\,dt + \sigma\,dW_t$$
where $W_t$ is a standard Brownian motion process, so $W_t \sim N(0, t)$.

$dS_t/S_t$ can be interpreted as the returns on the underlying asset. By Itô's formula (Kailath, 1971), the SDE reduces to

$$S_T = S_0\exp\left(\left(r-\tfrac{1}{2}\sigma^2\right)T + \sigma W_T\right) = S_0\exp\left(\left(r-\tfrac{1}{2}\sigma^2\right)T + \sigma\sqrt{T}\,Z\right)$$
whereby the geometric Brownian motion follows a lognormal density. Let $Z\sim N(0,1)$; then $\sqrt{T}\,Z \sim N(0,T)$.
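This lognormal terminal law is straightforward to sample. A Python sketch (names are illustrative; the paper's code was in R): under the risk-neutral measure the simulated mean of $S_T$ should sit near $S_0 e^{rT}$.

```python
import math
import random

def terminal_price(S0, r, sigma, T, rng):
    # S_T = S0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z), Z ~ N(0, 1).
    Z = rng.gauss(0.0, 1.0)
    return S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * Z)

rng = random.Random(1)
prices = [terminal_price(100.0, 0.015, 0.3, 1.0, rng) for _ in range(100_000)]
mean_ST = sum(prices) / len(prices)  # should be close to 100 * exp(0.015)
```

This sampler is the building block for every Monte Carlo estimate in chapter 2.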

Importance Sampling

In using standard Monte Carlo methods, the estimate can be subject to high variability in certain cases, creating a need to look into variance reduction techniques. Importance sampling (IS) is a well-known Monte Carlo method that seeks to reduce the variance in the estimate by allowing more "important" outcomes to have greater sway on the estimate. Another way of viewing importance sampling is as observing the expectation under one probability measure through another, via a likelihood ratio or Radon-Nikodym derivative [24]. This idea is applicable in finance as an estimation of prices as expectations under a martingale measure. One of the first applications of importance sampling to the pricing of barrier options is [15].

Other variance reduction techniques that have been employed in the pricing of derivatives include stratification, control variates, antithetic variates and quasi-Monte Carlo methods; see chapter 4 of [1] for such techniques and their applications in finance. These variance reduction techniques often aim to reduce the MC variance of the estimate.

In this paper, we look at importance sampling [8], as well as Sequential Monte Carlo methods (SMC) [3], with the goal of reducing the variance in weights from the IS estimate. When looking at regular IS estimates, the inability to acquire the estimate recursively leads to increased computational complexity as the number of time steps increases. More work has been done on utilizing importance sampling in financial engineering, though the work done on applying SMC methods to option pricing is limited.

Sequential Monte Carlo Methods

Sequential Monte Carlo (SMC) methods, also known as particle filters (the terms will be used interchangeably throughout this paper), are a form of Bayesian filtering methods built on the idea of IS. SMC methods have been applied to a wide range of sequential inference problems across different fields; see [5]. There are different types of SMC methods, such as Rao-Blackwellised particle filters, which are concerned with conditionally Gaussian models, and bootstrap filters, which utilize the dynamic model as the importance distribution. This paper focuses on SMC samplers, which involve sequential importance sampling and resampling in the estimation of the target distribution. Mathematically, SMC methods can be regarded as a form of Feynman-Kac models. Traditionally, Kalman filters allow users to recursively generate a sequence of posterior distributions in order to get an analytical solution; however, a caveat is that the data have to follow a linear Gaussian state-space model. SMC methods are a class of sequentially computed methods that generalize the Kalman filter and Hidden Markov Model filter to non-linear, non-Gaussian state-space models. SMC methods approximate the sequence of probability distributions using a large set of random samples, equivalently named particles. These particles are propagated over time through importance sampling and resampling. Theoretically, and under weak assumptions, as the number of particles tends to infinity, these particle approximations converge towards the sequence of probability distributions almost surely. In practice, the feasibility of sampling these particles comes into play, making it necessary to design efficient sampling and resampling strategies so particles are sampled from regions of high probability mass.

The first particle filter was introduced by [19], and the proof of consistency was subsequently given in [20]. [5] provides a comprehensive overview of SMC and its developments through the years. Work has also been done in search of more efficient SMC methods; [17] uses block sampling strategies to improve the efficiency of SMC methods, without having to perform any local Monte Carlo integration.

Monte Carlo Methods

In the approximation of financial asset prices, the central issue is often accurately estimating stochastic differential equations by discrete-time processes. While analytical solutions exist for many pricing problems in finance, exploring numerical methods such as MC methods is beneficial for intractable problems. Additionally, numerical methods are becoming more practical as improvements in computer systems increase their viability.

Properties of Estimate

Suppose we have an estimate of the option price as

$$\bar{C}_N = \frac{1}{N}\sum_{i=1}^{N} f\big(S_T^{(i)}\big)$$

and suppose that the true value is denoted $C$. By the Strong Law of Large Numbers, $\bar{C}_N \to C$ as $N\to\infty$. This property shows that the estimate is consistent.

By the central limit theorem, $\sqrt{N}(\bar{C}_N-C)/\sigma_f$, the standardized estimator, converges in distribution to the standard normal distribution as the number of samples, $N$, increases.

From this result, $\bar{C}_N - C$ is approximately $N(0,\ \sigma_f^2/N)$. The $O(1/\sqrt{N})$ convergence rate puts Monte Carlo methods at an advantage, especially when dealing with multi-dimensional integrals. Moreover, this intuitively provides us with the fact that, in comparing two estimators, all else being equal, the estimator with the lower variance is the superior estimator [1]. It is also noted that the Monte Carlo estimate is unbiased, that is, $\mathbb{E}[\bar{C}_N] = C$.

European Call Options

The present value of the payoff presented in section 2.1 would then have to be multiplied by a discount factor e-rT, where r is the continuously compounded interest rate. The expected present value of payoff would then be

$$e^{-rT}\,\mathbb{E}\left[(S_T-K)^+\right]$$
and the MC estimate of the call option is therefore
$$\frac{e^{-rT}}{N}\sum_{i=1}^{N}\big(S_T^{(i)}-K\big)^+$$
where $N$ is the number of simulations and $S_T^{(1)},\dots,S_T^{(N)}$ are independent and identically distributed samples.
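The estimator above translates directly into code. Here is an illustrative Python version (the paper's results were produced in R), run on the first parameter set of Table 2, whose Black-Scholes value is 8.679:

```python
import math
import random

def mc_call(S0, K, T, r, sigma, N, seed=0):
    # Monte Carlo estimate: exp(-rT)/N * sum of (S_T^(i) - K)^+,
    # with S_T drawn from the GBM terminal (lognormal) law.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        Z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * Z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / N

est = mc_call(100, 110, 1.0, 0.015, 0.3, 100_000)
```

With $N = 10^5$ the standard error is roughly 0.06 (cf. Table 2), so the estimate lands within a few hundredths of the closed form.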

It is noted that Monte Carlo is not superior to other methods in this example; the simulation of the vanilla call option is to convince ourselves of the accuracy of Monte Carlo methods before proceeding to more complex problems. We have performed the simulation of the estimated call option price for various values of $N$, $K$, $S_0$, $T$, $r$ and $\sigma$, comparing our estimate to the Black-Scholes value in section 1.1.2.

Evaluation of Results

S0    K    T   r      σ    N        Black-Scholes   Monte Carlo   Standard Error
100   110  1   0.015  0.3  1,000    8.679           9.276         0.5805
                           10,000                   8.765         0.1836
                           100,000                  8.674         0.0576
100   110  1   0.01   0.3  1,000    8.498           9.086         0.5750
                           10,000                   8.582         0.1819
                           100,000                  8.491         0.0571
100   90   1   0.01   0.3  1,000    17.54           18.401        0.7763
                           10,000                   17.619        0.2441
                           100,000                  17.519        0.0768
100   90   1   0.01   0.2  1,000    14.19           14.795        0.5276
                           10,000                   14.244        0.1654
                           100,000                  14.173        0.0521

Table 2: Black-Scholes and Monte Carlo estimates for different values of S0, K, T, r, σ and N.

As mentioned in section 2.3, the rate of convergence of the MC estimate is $O(1/\sqrt{N})$ by the CLT.

This means the standard error of the estimate tends to zero as $N$ tends to infinity. For example, increasing $N$ in Table 2 from 1,000 to 10,000 reduces the standard error by a factor of approximately $\sqrt{10}\approx 3.16$. Table 2 shows the variability in the Monte Carlo estimate and its standard error, $\sigma_f/\sqrt{N}$, for varying parameters and sample sizes. The relationships between the option prices and the MC estimates follow their relationship with the Black-Scholes price. The standard error for each increment of $N$ by a factor of 10 follows the convergence rate stated above.

Figure 1: Variation of Monte Carlo estimate with varying number of iterations N.

Figure 1 shows the Monte Carlo estimate, with parameters $S_0=100$, $K=120$, $T=1$, $r=0.015$, $\sigma=0.3$, converging to the Black-Scholes value with increased iterations; visually, the estimate stabilizes around the Black-Scholes value as the number of iterations increases.

Barrier Options

Now that we are convinced of the accuracy of Monte Carlo methods in the approximation of integrals, we will move on to barrier options. The MC estimate for the down and out barrier option is

$$\hat{\theta}_{MC} = \frac{e^{-rT}}{N}\sum_{i=1}^{N}\big(S_T^{(i)}-K\big)^+\prod_{k=1}^{m} I_{(a,b)}\big(S_k^{(i)}\big)$$
In order to calculate $S_{t_k}^{(i)}$, we have to discretize the time interval $[0, T]$ into $m$ uniform subintervals (or time steps), each of length $\Delta t = T/m$, where each grid point is represented as $t_i = i\Delta t$ for $i = 0, 1, \dots, m$. For convenience, let $S_i = S_{t_i}$ herein. From equation (3), we get
$$S_{n+1} = S_n \exp\left(\left(r-\tfrac{1}{2}\sigma^2\right)\Delta t + \sigma\sqrt{\Delta t}\,Z_n\right), \qquad n = 0, 1, \dots, m-1$$
where $Z_n \sim N(0,1)$. Starting from the initial price $S_0$, the approximate discretized version of the dynamics moves the random vector forward. In our simulation, the moment the underlying's price crosses the barrier, the path is abandoned to save computational time since the payoff is zero.
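The discretized scheme with early path abandonment can be sketched as follows in Python (illustrative; the paper's code was in R), using the first parameter set of Table 3:

```python
import math
import random

def mc_down_and_out(S0, K, a, T, m, r, sigma, N, seed=0):
    # Discretize [0, T] into m steps; a path is abandoned (payoff zero)
    # the moment the underlying falls to the lower barrier a.
    rng = random.Random(seed)
    dt = T / m
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total, knocked = 0.0, 0
    for _ in range(N):
        S, alive = S0, True
        for _ in range(m):
            S *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if S <= a:                     # barrier breached: knock-out
                alive, knocked = False, knocked + 1
                break
        if alive:
            total += max(S - K, 0.0)
    return math.exp(-r * T) * total / N, knocked / N

price, ko_ratio = mc_down_and_out(100, 110, 50, 1.0, 100, 0.015, 0.3, 20_000)
```

The second return value is the knock-out ratio reported alongside the estimates in Table 3.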

Evaluation of Results

S0    K    a   T  m    r      σ    N        Monte Carlo Estimate   Standard Error   Knock-out ratio
100   110  50  1  100  0.015  0.3  1,000    7.683                  0.512            0.0200
                                   10,000   8.713                  0.181            0.0227
                                   100,000  8.775                  0.0584           0.0214
100   110  90  1  100  0.015  0.3  1,000    6.008                  0.777            0.01
                                   10,000   6.361                  0.267            0.0092
                                   100,000  6.663                  0.0846           0.00911
100   110  90  1  252  0.015  0.3  1,000    5.296                  0.468            0.729
                                   10,000   6.464                  0.171            0.7208
                                   100,000  6.513                  0.0548           0.7248
100   90   85  1  252  0.01   0.2  1,000    12.001                 0.495            0.425
                                   10,000   13.092                 0.167            0.4102
                                   100,000  13.142                 0.0536           0.41253

Table 3: Barrier options – standard Monte Carlo estimates for different values of S0, K, a, T, m, r, σ and N. Knock-out ratio = (# knocked-out paths)/N.

The results in Table 3 show the estimates of the down and out barrier option, using similar $S_0$, $K$, $T$, $r$, $\sigma$ and $N$ as the results in section 2.3.1. In general, barrier options are less costly than vanilla options even when the barrier is far from $S_0$, which aligns with the intuitive reasoning that the potential payoff could be zero given a non-zero probability of crossing the barrier. When the barrier is near $S_0$ or the number of discretized steps is increased, the option is less costly.

When simulating the underlying’s paths, the entire path is abandoned and the payoff is set to zero the moment the underlying hits below the barrier. This is to make the code more efficient since the option is extinguished when the barrier is hit.

Importance Sampling

Suppose we start with the problem of estimating a parameter $\theta$,
$$\theta = \mathbb{E}_f[\varphi(X)] = \int_E \varphi(x)f(x)\,dx = \int_E \varphi(x)\,\frac{f(x)}{g(x)}\,g(x)\,dx = \mathbb{E}_g\!\left[\varphi(X)\,\frac{f(X)}{g(X)}\right] = \mathbb{E}_g[\varphi(X)\,w(X)]$$
where $f$ and $g$ are pdfs on the domain of the measurable space $E$; $f$ is the target distribution and $g$ is the importance sampling distribution.

The final expression above signifies the expectation w.r.t. the importance sampling distribution $g(x)$ of the function $\varphi(X)w(X)$, a function of the random variable $X$. $w(X)$ is introduced into the expectation to compensate for the bias from introducing the importance distribution. However, in practice $f$ is usually only known point-wise up to a normalizing constant $Z$, and it is necessary to normalize the importance weights.

The importance sampling estimator is then

$$\hat{\theta}_{IS} = \frac{\frac{1}{N}\sum_{i=1}^{N}\varphi(X_i)\,w(X_i)}{\frac{1}{N}\sum_{j=1}^{N}w(X_j)} = \sum_{i=1}^{N}\varphi(X_i)\,\overline{W}(X_i)$$
where $X_i$, $i=1,\dots,N$, are IID samples from $g$ and $\overline{W}(X_i)$ are the normalized importance weights,
$$\overline{W}(X_i) = \frac{w(X_i)}{\sum_{j=1}^{N}w(X_j)}$$
and the weight $w(X_i) = \frac{f(X_i)}{g(X_i)}$ is the likelihood ratio, or the Radon-Nikodym derivative, evaluated at $X_i$. The normalized IS estimate was used here as it is difficult to derive the normalizing constants of $f(x)$ and $g(x)$ in practice. It is noted that the un-normalized version of the above is an unbiased estimator of $\theta$:

$$\mathbb{E}_g\!\left[\frac{1}{N}\sum_{i=1}^{N}\varphi(Y_i)\,\frac{f(Y_i)}{g(Y_i)}\right] = \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_g\!\left[\varphi(Y_i)\,\frac{f(Y_i)}{g(Y_i)}\right] = \frac{1}{N}\cdot N\theta = \theta$$
However, the normalized importance sampling estimate is usually biased,

$$\mathrm{bias}\big(\hat{\theta}_{IS}\big) = \mathbb{E}\big[\hat{\theta}_{IS}\big] - \theta \neq 0$$
but asymptotically, and under weak assumptions, the strong law of large numbers and CLT still hold, i.e. $\hat{\theta}_{IS} \to \theta$ as $N\to\infty$, and the error $\hat{\theta}_{IS}-\theta$ is $O(1/\sqrt{N})$ as $N\to\infty$, respectively. The bias matters little in practice since the rate of convergence under the CLT is independent of the dimension of the integrand.
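The self-normalized estimator can be sketched in a few lines of Python (illustrative, not the paper's R code), here on a toy problem where the target is $N(0,1)$, the proposal is the wider $N(0, 2^2)$, and $\mathbb{E}_f[X^2] = 1$:

```python
import math
import random

def snis(phi, f_pdf, g_pdf, g_sample, N, seed=0):
    # Self-normalized importance sampling: draw X_i ~ g, weight by f/g,
    # then normalize the weights so they sum to one before averaging phi.
    rng = random.Random(seed)
    xs = [g_sample(rng) for _ in range(N)]
    ws = [f_pdf(x) / g_pdf(x) for x in xs]
    total = sum(ws)
    return sum(w * phi(x) for w, x in zip(ws, xs)) / total

f = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)        # N(0, 1)
g = lambda x: math.exp(-x * x / 8.0) / (2.0 * math.sqrt(2.0 * math.pi))  # N(0, 4)
est = snis(lambda x: x * x, f, g, lambda rng: rng.gauss(0.0, 2.0), 100_000)
```

The $O(1/N)$ bias of the normalized estimator is invisible at this sample size, consistent with the asymptotics above.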

Therefore, the key to variance reduction when employing importance sampling is the selection of the proposal density $g(x)$. Specifically, sampling from the proposal distribution should be easy, and the estimator should have a reduced variance for a reasonable estimate of the target distribution.

Selection of the Importance Function

Efficiency of IS estimates is reliant on the importance function, as shown in section 3. An inappropriate importance function could cause the resulting estimate's variance to explode, while an appropriate one could reduce this variance. When pricing options, $\varphi(X)$ usually represents the discounted payoff, which is non-negative. Denoting the variance-minimizing importance density as $g^*(x)$,

$$g^*(x) = \frac{\varphi(x)f(x)}{\int \varphi(z)f(z)\,dz} \propto \varphi(x)f(x)$$
will return a zero-variance IS estimate. However, the integral in the denominator, which is the normalizing constant, is the value that needs to be estimated, and we are back at our initial problem.

This result may not yield an importance function, but sheds some light on the type of function that should be selected. When selecting the importance function, one should seek to sample proportional to $\varphi(x)f(x)$. In the case of options pricing, the importance function should then be a function of the discounted payoff multiplied by its probability density.

The variance of $\varphi(X)\frac{f(X)}{g(X)}$ with respect to the density $g(x)$ is given by
$$\mathrm{Var}_g\!\left[\varphi(X)\frac{f(X)}{g(X)}\right] = \mathbb{E}_g\!\left[\left(\varphi(X)\frac{f(X)}{g(X)}\right)^2\right] - \left(\mathbb{E}_g\!\left[\varphi(X)\frac{f(X)}{g(X)}\right]\right)^2 = \int \frac{\varphi^2(x)f^2(x)}{g(x)}\,dx - \theta^2 = \int \varphi^2(x)f(x)\,\frac{f(x)}{g(x)}\,dx - \theta^2 = \mathbb{E}_f\!\left[\varphi^2(X)\,\frac{f(X)}{g(X)}\right] - \theta^2$$
while the variance of $\varphi(X)$ with respect to the density $f(x)$ is given by
$$\mathrm{Var}_f[\varphi(X)] = \mathbb{E}_f[\varphi^2(X)] - \big(\mathbb{E}_f[\varphi(X)]\big)^2 = \mathbb{E}_f[\varphi^2(X)] - \theta^2$$
From the two equations above, it is clear that in order to reduce the variance of our original estimator, we need $\frac{f(x)}{g(x)} \le 1$. However, as noted in page 202 of [13], it is possible for $\mathrm{Var}_g\!\left[\varphi(X)\frac{f(X)}{g(X)}\right]$ to become infinite even if the above condition is fulfilled.

Taking
$$\mathrm{Var}_g\!\left[\varphi(X)\frac{f(X)}{g(X)}\right] - \mathrm{Var}_f[\varphi(X)] = \mathbb{E}_f\!\left[\varphi^2(X)\,\frac{f(X)}{g(X)}\right] - \mathbb{E}_f[\varphi^2(X)],$$
there would be a reduction in variance when selecting the importance distribution $g(x)$ such that
$$f(x) > g(x) \text{ where } \varphi^2(x)f(x) \text{ is small}, \qquad f(x) < g(x) \text{ where } \varphi^2(x)f(x) \text{ is large.}$$

Importance Sampling for Barrier Options

From the results in section 2.4.1, it is clear that despite the ability to estimate the price of barrier options, the estimate suffers from high variability, especially when the probability of being knocked out is high. This is due to the paths that get knocked out making the average payoff of the surviving paths large relative to the price.

The variance due to the probability of knock-out in section 2.4 could be eliminated if it were possible to use the conditional distribution of the underlying given that it does not cross the barrier over its lifetime. However, it is not possible to sample conditional on the underlying surviving. Instead of using the conditional distribution given final survival, the technique used in this paper is the conditional distribution given one-step survival, so the path is not knocked out at each individual step. This idea was motivated by [8]. While conditioning on one-step survival allows all simulated paths to avoid being knocked out, it is still necessary for us to multiply the final estimate by a likelihood ratio to produce an unbiased estimate.

From equation (10), in order to sample Sn+1 unconditionally, we have

$$S_{n+1} = S_n\exp\left(\left(r-\tfrac{1}{2}\sigma^2\right)\Delta t + \sigma\sqrt{\Delta t}\,\Phi^{-1}(U)\right)$$
where $\Phi(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-y^2/2}\,dy$ is the standard normal distribution function, $\Phi^{-1}$ its inverse function, and $U$ is uniform on $(0,1)$.

By sampling conditional on one-step survival,

$$U = 1 - p(S_n) + V\,p(S_n)$$
where, for a lower barrier $a$ and upper barrier $b$,
$$p(S_n) = P\big(S_{n+1}\in(a,b)\mid S_n = s_n\big) = P\!\left(Z\in\left(\frac{\ln(a/S_n)-(r-\tfrac{1}{2}\sigma^2)\Delta t}{\sigma\sqrt{\Delta t}},\ \frac{\ln(b/S_n)-(r-\tfrac{1}{2}\sigma^2)\Delta t}{\sigma\sqrt{\Delta t}}\right)\ \middle|\ S_n\right)$$
$$= \Phi\!\left(\frac{\ln(b/S_n)-(r-\tfrac{1}{2}\sigma^2)\Delta t}{\sigma\sqrt{\Delta t}}\right) - \Phi\!\left(\frac{\ln(a/S_n)-(r-\tfrac{1}{2}\sigma^2)\Delta t}{\sigma\sqrt{\Delta t}}\right) = \Phi\!\left(\frac{\ln(S_n/a)+(r-\tfrac{1}{2}\sigma^2)\Delta t}{\sigma\sqrt{\Delta t}}\right)$$
since $b = \infty$ for the upper barrier in this case.

In the above expression, $V$ and $U$ are uniformly distributed; $U$ is restricted to the range of values for which the underlying is not knocked out at the current time step, given that it survived the previous one.
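The one-step survival draw can be sketched as follows in Python (the paper's code was in R; `one_step_survival` is an illustrative name, and `statistics.NormalDist` stands in for the normal cdf and its inverse):

```python
import math
import random
from statistics import NormalDist

def one_step_survival(S_n, a, r, sigma, dt, rng):
    # p(S_n): probability the next step stays above the lower barrier a
    # (upper barrier b = infinity), then a draw of S_{n+1} conditioned
    # on not crossing the barrier, via U = 1 - p + V*p.
    nd = NormalDist()
    p = nd.cdf((math.log(S_n / a) + (r - 0.5 * sigma ** 2) * dt)
               / (sigma * math.sqrt(dt)))
    V = rng.random()
    U = 1.0 - p + V * p                  # uniform on the surviving region
    Z = nd.inv_cdf(U)
    S_next = S_n * math.exp((r - 0.5 * sigma ** 2) * dt
                            + sigma * math.sqrt(dt) * Z)
    return S_next, p                     # p feeds the likelihood ratio

rng = random.Random(0)
steps = [one_step_survival(95.0, 90.0, 0.015, 0.3, 1.0 / 252, rng)
         for _ in range(1000)]
```

Every draw lands above the barrier by construction; the survival probabilities $p(S_n)$ are accumulated into the likelihood ratio described next.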

This IS procedure has weights
$$w = \prod_{k=1}^{m}\int_{(a_k,\,b_k)} p(s_k\mid s_{k-1})\,ds_k$$
where an unbiased estimate of the weights can be used in place of the actual value, as noted in [8]. In this importance sampling example, it is not necessary to multiply the terminal payoff by the indicator function since the sampling procedure conditions on one-step survival. Instead, it is multiplied by a likelihood ratio to compensate for the bias induced by avoiding zero-payoff paths. The estimate for the down and out barrier option under this scheme is
$$\hat{\theta}_{IS} = \frac{e^{-rT}}{N}\sum_{i=1}^{N}\big(S_T^{(i)}-K\big)^+\,L_m^{(i)}$$
where
$$L_m^{(i)} = \prod_{k=0}^{m-1} p\big(S_k^{(i)}\big)$$

Properties of Estimate

The importance sampling estimate is unbiased. In the estimate laid out in section 3.1, $L_m = \frac{f(x)}{g(x)} \le 1$, so
$$\mathrm{Var}\big(\hat{\theta}_{IS}\big) = \mathbb{E}_f\!\left[\varphi^2(X)\,\frac{f(X)}{g(X)}\right] - \theta^2 \;\le\; \mathbb{E}_f\!\left[\varphi^2(X)\right] - \theta^2 = \mathrm{Var}\big(\hat{\theta}_{MC}\big),$$
which shows that the variance of the importance sampling estimate is less than or equal to the variance of the standard MC estimate. The inequality would be strict if
$$\mathbb{E}\!\left[\prod_{k=1}^{m} I_{(a,b)}(S_k)\right] < 1 \quad \text{and} \quad \mathbb{E}\!\left[(S_T-K)^+\right] > 0,$$
that is, if there is a non-zero probability of knock-out and a positive expected payoff.

A problem with importance sampling, as noted in much of the literature, is that the weights degenerate as the number of time steps increases. A measure of the variance of the weights, termed the effective sample size (ESS), was first introduced in [16].

$$ESS_n = \frac{1}{\sum_{i=1}^{N}\big(\overline{W}_n^{(i)}\big)^2} = \frac{\big(\sum_{i=1}^{N} w_n^{(i)}\big)^2}{\sum_{i=1}^{N}\big(w_n^{(i)}\big)^2}$$
for all $n = 1,\dots,m$, where $\overline{W}_n^{(i)}$ are the normalized weights. The ESS is an indication of the number of 'important' samples. An ESS closer to $N$ is indicative of an IS estimate with a greater number of 'important' samples, and $ESS = N$ indicates that the weights are equally balanced and all $N$ particles are contributing to the estimator. A smaller ESS is indicative of degeneracy in the weights.

Table 5 shows the ESS for various IS estimates for the down and out barrier option. Indeed, we see that the ESS decreases as $m$ increases, which is a well-known fact [5]. Besides the ESS, other measures of the variance of the weights include the coefficient of variation (CV) [16] and the Shannon entropy (ENT).

$$CV_n = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(N\overline{W}_n^{(i)}-1\big)^2} = \sqrt{\frac{N}{ESS_n}-1}, \qquad 0 \le CV_n \le \sqrt{N-1}$$
$$ENT_n = -\sum_{i=1}^{N}\overline{W}_n^{(i)}\log_2 \overline{W}_n^{(i)}, \qquad 0 \le ENT_n \le \log_2 N$$
When all the weights are equal, $CV_n = 0$ and $ENT_n = \log_2 N$; the weights are concentrated on one particle when $CV_n = \sqrt{N-1}$ and $ENT_n = 0$. The three measures are representative of the instability in the weights, and can be used interchangeably when setting up the resampling scheme in chapter 4.
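The three diagnostics follow directly from the normalized weights; a short Python sketch (illustrative, not the paper's R code) computes all three, checked against the equal-weights case where $ESS = N$, $CV = 0$ and $ENT = \log_2 N$:

```python
import math

def weight_diagnostics(w):
    # w: unnormalized importance weights; returns (ESS, CV, ENT).
    total = sum(w)
    W = [x / total for x in w]                    # normalized weights
    N = len(W)
    ess = 1.0 / sum(x * x for x in W)
    cv = math.sqrt(N / ess - 1.0)                 # CV_n = sqrt(N/ESS_n - 1)
    ent = -sum(x * math.log2(x) for x in W if x > 0.0)
    return ess, cv, ent

# Equal weights: ESS = N, CV = 0, ENT = log2(N).
ess, cv, ent = weight_diagnostics([1.0] * 8)
```

In the resampling schemes of chapter 4, any one of these quantities can serve as the trigger.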

Monte Carlo and Importance Sampling – A Comparison

S0    K    a   T  m    r      σ    N       MC Est   IS Est   Standard Err   ESS
100   110  50  1  100  0.015  0.3  1,000   7.683    8.973    0.592          981.90
                                   10,000  8.713    9.119    0.190          9804.73
100   110  90  1  100  0.015  0.3  1,000   6.008    6.623    0.5306         350.68
                                   10,000  6.361    6.922    0.1683         3762.92
100   110  90  1  252  0.015  0.3  1,000   5.296    6.304    0.535          285.65
                                   10,000  6.464    6.462    0.168          3147.2
100   90   85  1  252  0.01   0.2  1,000   12.001   12.772   0.536          595.09
                                   10,000  13.092   13.051   0.164          6162.20

Table 4: IS estimates for different parameters, with their standard errors and ESS

The IS estimate is unbiased and has a smaller variance than the standard MC estimate in certain cases. However, in the application to barrier options, considering the increased computational cost of the IS method, a small reduction in the variance of the estimate may not be a worthwhile trade-off.

m     5          10         15         20         25
ESS   39121.96   28369.97   21865.31   17649.91   14594.87
(K = S0 = 50, Δt = 0.5, r = 0.015, σ = 0.4, N = 50,000, barrier (a, b) = (40, ∞))

m     5          10         15         20         25
ESS   38393.98   27463.73   21156.90   17071.19   14181.7
(K = S0 = 50, Δt = 0.5, r = 0.015, σ = 0.4, N = 50,000, barrier (a, b) = (45, ∞))

Table 5: ESS for different barrier values

In the method adopted, the IS estimate reduces the Monte Carlo variance of the estimate by removing the possibility of paths with zero payoff. However, IS fails to deal with the variance of the normalized weights. In table X, the IS was not run for $N = 10^5$, unlike the standard MC estimate, due to computational inefficiencies.

Sequential Monte Carlo Methods

As seen in section 3.3, IS estimates face the issue of weight degeneracy. Utilizing the SMC method entails resampling the asset values that would be rejected for hitting the barrier from the asset samples that do not breach it.

In the discussion of the paper in [18], Rousset & Doucet commented that knowing the exact weight is not necessary; an unbiased positive estimate would allow the SMC estimate to be consistent.

We have a measurable space $(E, \mathcal{E})$ and a sequence of probability measures $\{f_j\}_{j\in T}$, the target distributions, where $T = \{1, \dots, m\}$; $j$ represents the time step obtained from discretizing the time interval as in section 2.4. Let each $f_j(dx)$ admit a density $f_j(x)$ w.r.t. a $\sigma$-finite measure $dx$. Let $E_j$ be the support of $f_j$, so $\mathrm{supp}(f_j) = E_j = \{x\in E: f_j(x) > 0\}$. Generally, the goal of SMC samplers is to sample from $\{f_j\}_{j\in T}$ sequentially, where the distributions are all defined on a common measurable space; that is, sampling $N$ particles from $f_1$, followed by $f_2$, up till $f_m$. The idea is then to sample $N$ weighted particles $(x_j^{(i)}, \overline{W}_j^{(i)})$ at each time step, for $j = 1, \dots, m$ and $i = 1, \dots, N$, where $\overline{W}_j^{(i)}$ represents the normalized weight of particle $i$ at time step $j$. The empirical distribution of the $N$ particles converges asymptotically to the target $f_j$:

$$\sum_{i=1}^{N}\overline{W}_j^{(i)}\,\varphi\big(x_j^{(i)}\big) \;\to\; \mathbb{E}_{f_j}[\varphi(X)]$$
almost surely as $N\to\infty$, for any $f_j$-integrable function $\varphi: E\to\mathbb{R}$, where
$$\mathbb{E}_{f_j}[\varphi(X)] = \int_E \varphi(x)\,f_j(x)\,dx.$$
The particles are propagated through time using a Markov kernel $K_j: E\times\mathcal{E}\to[0,1]$, where $K_j(x, x')$ is the associated density of the kernel. The propagation of the particles is descriptive of the sequential importance sampling setting. SIS still suffers from weight degeneracy, and a way to alleviate this problem is through resampling. The combination of SIS and resampling is also known as SMC samplers. Below is a brief overview of an SMC sampler.

[Flowchart: each of the $N = 10$ particles $(x_t^{(i)}, w_t^{(i)})$ at time $t$ is predicted forward, weights are calculated, the particles are resampled to uniform weights $1/N$, and the set is updated to $(x_{t+1}^{(i)}, w_{t+1}^{(i)})$.]

Figure 2: Flowchart of a SMC sampler at time = t, sampling N = 10 particles.

At a given time $t$, particles are generated from the transition density. The weights (likelihood ratios) of each particle are then evaluated. The particles are then resampled (with replacement) according to the calculated weights, and their weights are reset to $1/N$. The values are then propagated based on the model, and finally the particles are updated. By resampling, the particles are directed towards regions of higher probability mass.
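These predict–weight–resample–update steps can be sketched in a few lines (a Python illustration; the random-walk transition, Gaussian likelihood, and observation y are hypothetical stand-ins for the model densities):

```python
import math
import random

random.seed(2)
N = 10  # as in Figure 2

# particles at time t (uniform weights after a previous resample)
particles = [random.gauss(0, 1) for _ in range(N)]

# Predict: propagate each particle through an (illustrative) transition density
particles = [x + random.gauss(0, 0.5) for x in particles]

# Calculate weights: likelihood of a hypothetical observation y
y = 1.0
ws = [math.exp(-(y - x) ** 2 / 2) for x in particles]
total = sum(ws)
ws = [w / total for w in ws]

# Resample with replacement according to the weights, then reset them to 1/N
particles = random.choices(particles, weights=ws, k=N)
ws = [1.0 / N] * N
print(len(particles))
```

After the resampling step, duplicated particles sit in the higher-probability regions, and the uniform weights make the set ready for the next prediction.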

Sequential Importance Sampling

In standard IS, the weights must be recalculated at each iteration, so the computational demand grows with the number of time steps. Taking this and the weight degeneracy into consideration, one might see the need to consider alternatives to IS.

Sequential importance sampling (SIS) is a sequential version of classical Bayesian importance sampling. Under IS, the motivation is to approximate the target density using a weighted empirical measure. SIS instead approximates the target sequentially, targeting a sequence of bridging distributions defined on spaces of increasing dimension.

Returning to the problem in section 3 of estimating $\mathbb{E}_f[\varphi(X)]$ for the target distribution $f$, with the IS distribution introduced as $g(x)$,

$$g_m(x_{1:m}) = g_{m-1}(x_{1:m-1})\, g_m(x_m \mid x_{1:m-1}) = g_0(x_0)\, g_1(x_1 \mid x_0) \cdots g_m(x_m \mid x_{1:m-1}) = g_0(x_0) \prod_{i=1}^{m} g_i(x_i \mid x_{1:i-1}).$$

If $X_{1:j-1}^{(i)} \sim g_{j-1}(x_{1:j-1})$, then sampling from the conditional density $X_j^{(i)} \mid X_{1:j-1}^{(i)} \sim g_j(x_j \mid x_{1:j-1}^{(i)})$ gives $X_{1:j}^{(i)} \sim g_j(x_{1:j})$. Intuitively, samples are drawn sequentially. First, draw $x_0$ from a proposal distribution $g_0$ that is reasonably close to the target distribution $f$. Subsequently, sample $x_j$ conditional on $x_{j-1}$ from $g_j(x_j \mid x_{j-1})$ for $j = 1, \dots, m$.

The unnormalized importance weights are recursively updated according to

$$w_m(x_{1:m}) = \frac{f_m(x_{1:m})}{g_m(x_{1:m})} = w_{m-1}(x_{1:m-1})\, \frac{f_m(x_{1:m})}{f_{m-1}(x_{1:m-1})\, g_m(x_m \mid x_{1:m-1})} = w_{m-1}(x_{1:m-1})\, \alpha_m(x_{1:m}) = \prod_{j=1}^{m} \alpha_j(x_{1:j})$$

where

$$\alpha_m(x_{1:m}) = \frac{f_m(x_{1:m})}{f_{m-1}(x_{1:m-1})\, g_m(x_m \mid x_{1:m-1})}$$ is the incremental importance weight. With each iteration, $\alpha_m(x_{1:m})$ is evaluated and then the importance weight is updated. A computational advantage of this method over IS lies here, since the past trajectories $\{x_{0:m-2}^{(i)}\}_{i=1}^{N}$ are not involved in the calculation.

SIS is more computationally efficient than IS, though it still faces the problem of weight degeneracy as the number of time steps increases [16]. As time increases, the probability mass concentrates on a single particle, whose normalized weight converges to 1 while the normalized weights of the other particles converge to zero.
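This degeneracy is easy to reproduce numerically. In the sketch below (Python, with an illustrative toy target and proposal, not the thesis' model), the effective sample size $\mathrm{ESS} = 1/\sum_i (w^{(i)})^2$ collapses far below $N$ after a few dozen pure SIS steps:

```python
import math
import random

random.seed(3)
N = 500           # particles
steps = 50        # pure SIS, no resampling

def log_norm_pdf(x, mu, sigma):
    return -((x - mu) ** 2) / (2 * sigma**2) - math.log(sigma * math.sqrt(2 * math.pi))

xs = [random.gauss(0, 1) for _ in range(N)]
log_w = [0.0] * N

for _ in range(steps):
    for i in range(N):
        x_new = random.gauss(xs[i], 2)      # proposal g_j = N(x_{j-1}, 2^2)
        # incremental log-weight: log f_j - log g_j, target transition N(x_{j-1}, 1)
        log_w[i] += log_norm_pdf(x_new, xs[i], 1) - log_norm_pdf(x_new, xs[i], 2)
        xs[i] = x_new

# normalize in log-space and compute the effective sample size
m_log = max(log_w)
w = [math.exp(lw - m_log) for lw in log_w]
total = sum(w)
w = [x / total for x in w]
ess = 1.0 / sum(x * x for x in w)
print(f"ESS after {steps} SIS steps: {ess:.1f} of N = {N}")
```

A handful of particles end up carrying essentially all the mass, which is exactly the behaviour resampling is designed to correct.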

Formulation

This section applies the idea in [9] to the example of barrier options.

Let the Markov kernel be $g_j(x_j \mid x_{1:j-1}^{(i)})$ as in section 3.2. Then

$$f(s_{1:m}) \propto (S_T - K)^{\epsilon} \prod_{j=1}^{m} I_{(a,b)}(s_j)\, p(s_j \mid s_{j-1}) = \frac{\gamma_m(s_{1:m})}{Z_m},$$ where

$$Z_m = \int \gamma_m(s_{1:m})\, ds_{1:m}$$ is the normalizing constant, and $\epsilon \in (0,1)$ is termed the temperature constant.

The estimate for the down and out barrier call option is therefore

$$e^{-rT} Z_m\, \mathbb{E}_f\!\left[\frac{(S_T - K)^{+}}{(S_T - K)^{\epsilon}}\right].$$ In the approximation of barrier option prices, the goal remains the same as in chapter 3: estimating $\theta = \mathbb{E}_f[\varphi(X)] = \int \varphi(x)\, f(x)\, dx$.

The option price is estimated using

$$\int e^{-rT} (S_T - K)^{+} \prod_{k=1}^{m} I_{(a,b)}(s_{t_k})\, p(s_{t_k} \mid s_{t_{k-1}})\, \frac{1}{\gamma_m(s_{1:m})} \frac{\gamma_m(s_{1:m})}{Z_m} Z_m \, ds_1 \cdots ds_m = e^{-rT} Z_m \int \frac{(S_T - K)^{+} \prod_{k=1}^{m} I_{(a,b)}(s_{t_k})\, p(s_{t_k} \mid s_{t_{k-1}})}{\gamma_m(s_{1:m})}\, f_m(s_{1:m})\, ds_1 \cdots ds_m = Z_m e^{-rT}\, \mathbb{E}_f\!\left[\frac{(S_T - K)^{+}}{(S_T - K)^{\epsilon}}\right].$$

The choice of importance distribution is key to the efficiency of SIS as well. Since the weights are updated recursively, they will still degenerate as the number of discretized time steps increases. This lays the groundwork for the SMC sampler method used in section 4.2. The SIS method is not applied here, as it is only an intermediate step towards the following section and still faces the problem of weight degeneracy.
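For reference, the quantity these estimators target — the discretely monitored down-and-out call price — can be benchmarked with the crude Monte Carlo scheme of chapter 3. Below is a Python sketch under GBM with illustrative (hypothetical) parameters, not the thesis' tables:

```python
import math
import random

random.seed(4)

# Hypothetical parameters: down-and-out call, lower barrier a, GBM dynamics
S0, K, r, sigma, T, a = 50.0, 50.0, 0.05, 0.2, 1.0, 45.0
m = 25                    # monitoring dates
dt = T / m
N = 20_000

def discounted_payoff():
    s = S0
    for _ in range(m):
        z = random.gauss(0, 1)
        s *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        if s <= a:        # knocked out at the barrier
            return 0.0
    return math.exp(-r * T) * max(s - K, 0.0)

price = sum(discounted_payoff() for _ in range(N)) / N
print(round(price, 2))
```

The knocked-out paths contribute exact zeros, which is the source of the Monte Carlo variance that the weighted estimators above are designed to remove.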

SMC Samplers

SMC samplers are built upon the foundation of SIS, with an additional resampling step. This method was introduced in [19] to deal with the weight degeneracy problem. With each increase in the time step, the variance of the unnormalized weights $w_n(X_{1:n}^{(i)})$ tends to increase, and all the mass becomes focused on a few random particles.

The goal of SMC is to reset the approximation by discarding particles with low normalized importance weights and resampling from particles with higher weights. This prevents the weights from permanently degenerating, as is the case for SIS. [19] proposed resampling at every time step; however, it has been shown that this introduces additional variance to the algorithm [5].

This paper will explore resampling based on a dynamic schedule. Resampling at every time step introduces additional Monte Carlo variation, and is neither necessary nor computationally efficient. Under a dynamic schedule, resampling is triggered only when the instability in the weights crosses a threshold beyond which it would inflate the variance of the estimate. See [21] for several such resampling schemes, and [23] for a comparison of the efficiency of some resampling schemes in terms of the Monte Carlo variation. The algorithm presented in 4.2.2 satisfies the condition

$$\mathbb{E}\!\left[N_j^{\#}(i) \mid x_{0:j}^{(i)}\right] = N w_j^{(i)},$$ where $N_j^{\#}(i)$ is the number of duplicates of particle $i$ at time $j$. Consequently, this condition allows the particle filter to be consistent and asymptotically normal.
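Multinomial resampling — drawing $N$ indices with replacement from the normalized weights — satisfies this condition, since each duplicate count is Binomial$(N, w^{(i)})$. A quick empirical check in Python with illustrative weights:

```python
import random
from collections import Counter

random.seed(7)
N = 5
weights = [0.5, 0.2, 0.15, 0.1, 0.05]   # illustrative normalized weights
runs = 100_000

# Average the duplicate counts N_j^#(i) over many independent resamplings
avg = [0.0] * N
for _ in range(runs):
    counts = Counter(random.choices(range(N), weights=weights, k=N))
    for i in range(N):
        avg[i] += counts[i]
avg = [a / runs for a in avg]

expected = [N * w for w in weights]     # E[N^#(i)] = N w^(i)
print([round(a, 2) for a in avg])
print([round(e, 2) for e in expected])
```

The averaged counts match $N w^{(i)}$ to Monte Carlo accuracy, which is the unbiasedness property the condition demands.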

Formulation

Applying the SMC algorithm to the method in 3.2, the weights at each time step are recursively calculated using

$$w_j^{(i)} = w_{j-1}^{(i)}\, p(S_j)$$ before calculating the ESS of that time step. When the ESS falls below the threshold, the particles are sampled with replacement according to their normalized weights, which fulfils the condition in the previous section; the weights are then reset to $1/N$ whenever resampling occurs. The complexity of the code is $O(n)$. Simulating the expected option price using SMC is less complex than with IS, owing to the recursive calculation of the weights.
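A Python sketch of this scheme (the thesis' code is in R, and all parameters here are illustrative): particles are propagated conditional on one-step survival as in 3.2, the one-step survival probability multiplies the weight, and resampling is triggered when the ESS falls below a fixed fraction of $N$, with the average weight at resampling times carried forward as an estimate of the normalizing constant:

```python
import math
import random
from statistics import NormalDist

random.seed(5)
nd = NormalDist()

# Illustrative (hypothetical) parameters: down-and-out call under GBM
S0, K, r, sigma, T, a = 50.0, 50.0, 0.05, 0.2, 1.0, 45.0
m, N = 25, 10_000
threshold = 0.5                      # resample when ESS < threshold * N
dt = T / m
mu = (r - 0.5 * sigma**2) * dt
vol = sigma * math.sqrt(dt)

s = [S0] * N                         # particle asset values
w = [1.0] * N                        # unnormalized weights
log_z = 0.0                          # log of accumulated weight averages

for _ in range(m):
    for i in range(N):
        # one-step survival: S_j > a  <=>  Z > z_star
        z_star = (math.log(a / s[i]) - mu) / vol
        p = 1.0 - nd.cdf(z_star)     # survival probability p(S_j)
        z = nd.inv_cdf(nd.cdf(z_star) + random.random() * p)  # Z | Z > z_star
        s[i] *= math.exp(mu + vol * z)
        w[i] *= p                    # w_j = w_{j-1} * p(S_j)
    total = sum(w)
    ess = total * total / sum(x * x for x in w)
    if ess < threshold * N:          # dynamic resampling schedule
        log_z += math.log(total / N)
        s = random.choices(s, weights=w, k=N)
        w = [1.0] * N

price = math.exp(-r * T + log_z) * sum(
    wi * max(si - K, 0.0) for wi, si in zip(w, s)) / N
print(round(price, 2))
```

No particle is ever knocked out, and the resampling step keeps the weight variance in check; the product of average weights restores the correct normalization of the price estimate.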

Algorithm

The algorithms for the SMC sampler that resamples at every step and the sampler that resamples based on a dynamic schedule are similar, though the dynamic schedule includes an if-else condition based on the pre-set threshold for the ESS.

Generic SMC Algorithm with Selective Resampling

At time k = 0, for i = 1 to N:

Draw $x_0^{(i)} \sim q_0(x_0)$ and set $w_0^{(i)} = \dfrac{\gamma(x_0^{(i)})}{q_0(x_0^{(i)})}$.

For k = 1 to T−1:

1: Draw $x_k^{(i)} \sim q_k(x_k \mid x_{k-1}^{(i)})$.

2: Evaluate the importance weights:

$$w_k^{*(i)} \propto w_{k-1}^{*(i)}\, \frac{g(y_k \mid x_k^{(i)})\, f(x_k^{(i)} \mid x_{k-1}^{(i)})}{q_k(x_k^{(i)} \mid x_{1:k-1}^{(i)}, y_k)}$$

3: Normalize the importance weights and compute the ESS:

$$w_k^{(i)} = \frac{w_k^{*(i)}}{\sum_{j=1}^{N} w_k^{*(j)}}, \qquad \mathrm{ESS} = \frac{1}{\sum_{i=1}^{N} \left(w_k^{(i)}\right)^2}$$

4: If ESS > threshold, take $x_{0:k}^{(i)} \leftarrow (x_{0:k-1}^{(i)}, x_k^{(i)})$ and keep $w_k^{(i)}$ unchanged; else resample $N$ particles with replacement using probabilities $\{w_k^{(i)}\}_{i=1}^{N}$ and set $w_k^{(i)} = 1/N$.
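A runnable version of this algorithm is sketched below in Python, assuming a toy linear-Gaussian state-space model and the bootstrap proposal $q_k(x_k \mid x_{k-1})$ equal to the model transition density, so the weight update reduces to the likelihood term (all model choices are illustrative, not the thesis' application):

```python
import math
import random

random.seed(6)
N, T = 2_000, 30
threshold = N / 2

# Toy model: x_k = 0.9 x_{k-1} + N(0,1), observation y_k = x_k + N(0,1)
true_x, ys = 0.0, []
for _ in range(T):
    true_x = 0.9 * true_x + random.gauss(0, 1)
    ys.append(true_x + random.gauss(0, 1))

xs = [random.gauss(0, 1) for _ in range(N)]
ws = [1.0 / N] * N

for y in ys:
    # 1: draw x_k^(i) from the (bootstrap) proposal
    xs = [0.9 * x + random.gauss(0, 1) for x in xs]
    # 2: weight update reduces to the likelihood g(y_k | x_k^(i))
    ws = [w * math.exp(-(y - x) ** 2 / 2) for w, x in zip(ws, xs)]
    # 3: normalize and compute the ESS
    total = sum(ws)
    ws = [w / total for w in ws]
    ess = 1.0 / sum(w * w for w in ws)
    # 4: resample only when the ESS falls below the threshold
    if ess < threshold:
        xs = random.choices(xs, weights=ws, k=N)
        ws = [1.0 / N] * N

# filtered mean at the final step tracks the last observation
x_hat = sum(w * x for w, x in zip(ws, xs))
print(round(x_hat, 2))
```

The if–else in step 4 is the only difference from a sampler that resamples at every step, yet it is what keeps the extra resampling variance out of the estimate.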

SMC Sampler for Barriers

Since the IS distribution was conditional on one-step survival, improvements in the variance of the weights are likely to produce greater decreases in the MC variance; moreover, the goal of resampling was to deal with the weight degeneracy. This section evaluates the efficacy of the resulting estimates, while taking into consideration the computational requirements of the method.

Figure 3: Variance of weights for parameters used in Table 5, with lower barrier 45.

The combination of SIS and resampling applied to the method in 3.2 has reaped computational benefits in our estimate. The estimate in Figure 3 was run 20 times independently, with $N = 5.0 \times 10^5$ particles for every run. The mean CPU time taken for the 20 runs was ≈0.349 s. For the same parameters and number of runs, performing resampling at every step is marginally slower at ≈0.433 s.

Figure 4: m=252, with other parameters the same as in Figure 3

The number of time steps $m$ was increased from 25 in figure 3 to 252 in figure 4. Over 5 independent runs with the parameters in figure 4, the mean run time was ≈15.6 s, showing that the computational efficiency decreases as the number of time intervals increases.

A treatment of the theoretical properties of particle filters can be found in chapters 2 and 3 of [5]. Owing to the condition in 4.2, particle filters are (1) consistent and (2) asymptotically normal. [23] gives a treatment of some resampling schemes and reviews some of their theoretical properties as well. [1] and [10] also show that the estimate for the SMC method is unbiased.

Conclusion and Future Directions

This paper has served to consider Monte Carlo methods in options pricing and to convince ourselves of the accuracy of these numerical methods. In looking at standard Monte Carlo methods, the variance in the estimate prompted us to look at importance sampling which has proven to be a conceptually attractive idea.

Yet when applied to the case of barrier options by sampling conditional on survival of the option, it was shown that the trade-off was not worthwhile, and the weights degenerate as time increases. This led to the exploration of SMC methods, in particular resampling schemes with a pre-specified threshold, which proved computationally efficient and capable of dealing with the weight degeneracy. While SMC methods have proven beneficial in the applications considered, there is no guarantee that they outperform other Monte Carlo methods. The application of SMC methods to finance remains limited, though it may be an interesting avenue to explore for the pricing of financial instruments.

Due to time constraints, the application of SMC methods to other forms of options was not explored in this paper, though [3] explored its application to Asian options.

References

1. Glasserman, P. (2003). Monte Carlo Methods in Financial Engineering. Columbia: Springer.

2. Broadie, M., Glasserman, P., & Kou, S. G. (1997). A continuity correction for discrete barrier options. Math. Finance, 325-348.

3. Jasra, A., & Del Moral, P. (2011). Sequential Monte Carlo Methods for Option Pricing. Stochastic Analysis & Applications, 29, 292-316.

4. Çınlar, E. (2011). Probability and Stochastics. NY: Springer.

5. Doucet, A., Freitas, N., & Gordon, N. (2001). Sequential Monte Carlo Methods in Practice. NY: Springer.

6. Barndorff-Nielsen, O. E., & Shephard, N. (2001). Non-Gaussian OU based models and some of their uses in financial economics. Journal of the Royal Statistical Society, 63, 167-241.

7. Boyle, P. P. (1977). Options: A Monte Carlo Approach. Journal of Financial Economics, 323-338.

8. Glasserman, P., & Staum, J. (2001). Conditioning on One-Step Survival for Barrier Option Simulations. Operations Research, 49, 923-937.

9. Jasra, A., & Doucet, A. (2009). Sequential Monte Carlo Methods for Diffusion Processes. Proceedings: Mathematical, Physical and Engineering Sciences, 465 (2112), 3709-3727.

10. Creal, D. (2012). A Survey of Sequential Monte Carlo Methods for Economics and Finance. Econometric Reviews, 31 (3), 245-296.

11. Robert, C. P., & Casella, G. (2004). Monte Carlo Statistical Methods (2nd ed.). New York: Springer.

12. Hull, J. C. (2018). Options, Futures and Other Derivatives (9th ed.). Toronto: Pearson.

13. Ross, S. M. (2013). Simulation (5th ed.). California: Elsevier.

14. Gobet, E. (2009). Advanced Monte Carlo methods for barrier and related exotic options. Handbook of Numerical Analysis, 497-528.

15. Boyle, P., Broadie, M., & Glasserman, P. (1997). Monte Carlo Methods for Security Pricing. Journal of Economic Dynamics and Control, 21, 1267-1321.

16. Kong, A., Liu, J. S., & Wong, W. (1994). Sequential Imputations and Bayesian missing data problems. Journal of the American Statistical Association, 89, 278-288.

17. Doucet, A., Briers, M., & Sénécal, S. (2006). Efficient Block Sampling Strategies for Sequential Monte Carlo Methods. Journal of Computational and Graphical Statistics, 15 (3), 693-711.

18. Beskos, A., Papaspiliopoulos, O., Roberts, G. O., & Fearnhead, P. (2006). Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes. J. R. Statist. Soc. B, 333-382.

19. Gordon, N. J., Salmond, D. J., & Smith, A. F. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings Part F: Radar and Signal Processing, 140 (2), 107-113.

20. Del Moral, P. (1996). Nonlinear Filtering: Interacting Particle Resolution. Markov Processes and Related Fields, 2 (4), 555-580.

21. Liu, J. S., & Chen, R. (1998). Sequential Monte Carlo Methods for Dynamic Systems. Journal of the American Statistical Association, 93 (445), 1032-1044.

22. Del Moral, P., Doucet, A., & Jasra, A. (2006). Sequential Monte Carlo Samplers. J. R. Statist. Soc. B, 68 (3), 411-436.

23. Douc, R., Cappé, O., & Moulines, E. (2005). Comparison of Resampling Schemes for Particle Filtering. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (pp. 64-69). Croatia: ISPA.

24. Kailath, T. (1971). The Structure of Radon-Nikodym Derivatives with Respect to Wiener and Related Measures. The Annals of Mathematical Statistics, 42 (3), 1054-1067.