Week 5 lecture 1


Transcript of Week 5 lecture 1

Page 1: Week 5 lecture 1


Monte Carlo Simulation

Page 2: Week 5 lecture 1


Monte Carlo Simulation and Options

When used to value European stock options, Monte Carlo simulation involves the following steps (a code sketch follows the list):

1. Simulate one path for the stock price in a risk-neutral world
2. Calculate the payoff from the stock option
3. Repeat steps 1 and 2 many times to get many sample payoffs
4. Calculate the mean payoff
5. Discount the mean payoff at the risk-free rate to get an estimate of the value of the option
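A minimal sketch of these five steps in Python/NumPy, pricing a European call. All parameter values (S0, K, r, sigma, T, N) are illustrative, not taken from the lecture; for a European option only the terminal stock price matters, so each "path" reduces to a single draw.

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters
N = 100_000                                          # number of simulated paths

rng = np.random.default_rng(0)
eps = rng.standard_normal(N)                         # samples from N(0,1)

# Step 1: simulate terminal stock prices in a risk-neutral world
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)

# Step 2: payoff from the stock option on each path
payoffs = np.maximum(ST - K, 0.0)

# Steps 3-5: mean payoff, discounted at the risk-free rate
price = np.exp(-r * T) * payoffs.mean()
print(f"Monte Carlo estimate: {price:.4f}")
```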

Page 3: Week 5 lecture 1


Sampling Stock Price Movements

- In a risk-neutral world the process for a stock price is

  dS = μ S dt + σ S dz

- We can simulate a path by choosing timesteps of length Δt and using the discrete version of this:

  ΔS = μ S Δt + σ S ε √Δt

  where ε is a random sample from N(0,1)

Page 4: Week 5 lecture 1


A More Accurate Approach

Use

  d ln S = (μ − σ²/2) dt + σ dz

The discrete version of this is

  ln S(t + Δt) − ln S(t) = (μ − σ²/2) Δt + σ ε √Δt

or

  S(t + Δt) = S(t) exp[(μ − σ²/2) Δt + σ ε √Δt]
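A sketch contrasting the two discretizations, with the same random draws used for both; the drift, volatility, horizon, and step count below are illustrative. The scheme of the previous slide has a discretization error that shrinks with Δt, while the lognormal scheme here is exact in distribution for any step length.

```python
import numpy as np

S0, mu, sigma, T, steps = 100.0, 0.05, 0.2, 1.0, 252   # illustrative parameters
dt = T / steps

rng = np.random.default_rng(1)
eps = rng.standard_normal(steps)

S_euler = np.empty(steps + 1); S_euler[0] = S0
S_exact = np.empty(steps + 1); S_exact[0] = S0
for i in range(steps):
    # Discrete scheme of the previous slide: dS = mu*S*dt + sigma*S*eps*sqrt(dt)
    S_euler[i + 1] = (S_euler[i]
                      + mu * S_euler[i] * dt
                      + sigma * S_euler[i] * eps[i] * np.sqrt(dt))
    # Exact lognormal scheme: S(t+dt) = S(t)*exp((mu - sigma^2/2)*dt + sigma*eps*sqrt(dt))
    S_exact[i + 1] = S_exact[i] * np.exp((mu - 0.5 * sigma**2) * dt
                                         + sigma * eps[i] * np.sqrt(dt))

print(S_euler[-1], S_exact[-1])   # terminal values are close but not identical
```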

Page 5: Week 5 lecture 1


Sampling from Normal Distribution

- One simple way to obtain a sample from N(0,1) is to generate 12 random numbers between 0.0 and 1.0, take the sum, and subtract 6.0
- In Excel, =NORMSINV(RAND()) gives a random sample from N(0,1)
- In Matlab, 'randn' generates random samples from N(0,1)
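A sketch of the first recipe next to a library routine, using NumPy in place of Excel or Matlab (the RNG seed is arbitrary). The sum of 12 independent U(0,1) variables has mean 6 and variance 12 × 1/12 = 1, so subtracting 6 gives an approximately N(0,1) sample.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sum of 12 uniforms minus 6: mean 0, variance 1, approximately normal
approx_normal = rng.random(12).sum() - 6.0

# Library routine, the analogue of Matlab's randn or Excel's NORMSINV(RAND())
exact_normal = rng.standard_normal()

print(approx_normal, exact_normal)
```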

Page 6: Week 5 lecture 1


Standard Errors

- When N is sufficiently large, the price estimate has a normal distribution with the following parameters:
  - Mean: the true price of the contract
  - Standard deviation: ω/√N, with ω the standard deviation of the discounted payoffs and N the number of simulated paths
- ω/√N is called the "standard error" of the estimated price

Page 7: Week 5 lecture 1


Confidence Intervals

- The standard error of the estimate of the option price is the standard deviation of the discounted payoffs given by the simulation trials, divided by the square root of the number of observations.
- Remember: the estimate for the price is the sample mean of the sample of generated prices
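A sketch of the standard error and a 95% confidence interval computed from a sample of discounted payoffs. The payoff sample here is placeholder random data standing in for the output of a pricing simulation; only the mean/standard-error arithmetic is the point.

```python
import numpy as np

rng = np.random.default_rng(3)
discounted_payoffs = rng.exponential(5.0, size=100_000)    # placeholder data

N = discounted_payoffs.size
price_estimate = discounted_payoffs.mean()                 # sample mean
std_error = discounted_payoffs.std(ddof=1) / np.sqrt(N)    # omega / sqrt(N)

ci_low = price_estimate - 1.96 * std_error                 # 95% confidence interval
ci_high = price_estimate + 1.96 * std_error
print(f"{price_estimate:.4f}  ({ci_low:.4f}, {ci_high:.4f})")
```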

Page 8: Week 5 lecture 1


Extension

When a derivative depends on several underlying variables, we can simulate paths for each of them in a risk-neutral world to calculate the values for the derivative.

Page 9: Week 5 lecture 1


To Obtain 2 Correlated Normal Samples

- Obtain independent (uncorrelated) normal samples x1 and x2
- We get two series ε1 and ε2 with correlation ρ as follows:

  ε1 = x1
  ε2 = ρ x1 + √(1 − ρ²) x2
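A sketch of this recipe with an illustrative correlation of 0.6; the empirical correlation of the generated series should come out close to ρ.

```python
import numpy as np

rho = 0.6                                    # illustrative correlation
rng = np.random.default_rng(4)
x1 = rng.standard_normal(100_000)            # independent N(0,1) samples
x2 = rng.standard_normal(100_000)

eps1 = x1
eps2 = rho * x1 + np.sqrt(1.0 - rho**2) * x2

print(np.corrcoef(eps1, eps2)[0, 1])         # close to 0.6
```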

Page 10: Week 5 lecture 1


Cholesky Decomposition

- Obtain n independent (uncorrelated) normal samples x1, ..., xn
- We want to obtain n series ε1, ..., εn with correlation matrix ρ:

  ρ = | 1     ρ12   ...   ρ1n |
      | ρ21   1     ...   ρ2n |
      | ...   ...   ...   ... |
      | ρn1   ρn2   ...   1   |

- We first calculate the Cholesky decomposition A of ρ, a lower triangular matrix such that A Aᵀ = ρ:

  A = | a11   0     ...   0   |
      | a21   a22   ...   0   |
      | ...   ...   ...   ... |
      | an1   an2   ...   ann |

Page 11: Week 5 lecture 1


Cholesky Decomposition (cont'd)

- The n series ε1, ..., εn with correlation matrix ρ are then obtained as:

  | ε1  |       | x1  |
  | ... |  = A  | ... |
  | εn  |       | xn  |

- Remark: A only exists if ρ is indeed a correlation matrix
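A sketch using NumPy's built-in Cholesky factorization for an illustrative 3x3 correlation matrix; the empirical correlation matrix of the generated series should approximately reproduce ρ.

```python
import numpy as np

rho = np.array([[1.0, 0.5, 0.2],             # illustrative correlation matrix
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])

A = np.linalg.cholesky(rho)                  # lower triangular, A @ A.T == rho

rng = np.random.default_rng(5)
x = rng.standard_normal((3, 100_000))        # independent N(0,1) samples x1..xn
eps = A @ x                                  # correlated series eps1..epsn

print(np.corrcoef(eps))                      # approximately reproduces rho
```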

Page 12: Week 5 lecture 1


Cholesky Decomposition (cont'd)

Special case

- If ρ is a 2x2 matrix:

  ρ = | 1   ρ |
      | ρ   1 |

- Then

  A = | 1   0          |
      | ρ   √(1 − ρ²)  |

Page 13: Week 5 lecture 1


Application of Monte Carlo Simulation

- Monte Carlo simulation can deal with path-dependent options, and options with complex payoffs.
- It can easily be used to price options dependent on several underlying state variables.

BUT

- It cannot easily deal with American-style options.

Page 14: Week 5 lecture 1


Determining Greek Letters

For Δ (a code sketch follows):

1. Make a small change to the asset price
2. Carry out the simulation again using the same random number streams
3. Estimate Δ as the change in the option price divided by the change in the asset price

Proceed in a similar manner for other Greek letters
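A sketch of this procedure for the Delta of a European call: the same array of random draws is reused for the bumped and unbumped runs. Parameter values and the bump size h are illustrative.

```python
import numpy as np

K, r, sigma, T, N = 100.0, 0.05, 0.2, 1.0, 200_000   # illustrative parameters

def mc_call_price(S0, eps):
    """Monte Carlo price of a European call for spot S0 and given normal draws."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

eps = np.random.default_rng(6).standard_normal(N)    # same random numbers for both runs
S0, h = 100.0, 0.01                                  # small change to the asset price

delta = (mc_call_price(S0 + h, eps) - mc_call_price(S0, eps)) / h
print(delta)                                         # roughly the Black-Scholes delta
```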

Page 15: Week 5 lecture 1


Variance Reduction Techniques

1. Antithetic variable technique
2. Control variate technique
3. Importance sampling
4. Stratified sampling
5. Moment matching

Page 16: Week 5 lecture 1


1. Antithetic variable technique

- Given the random series ε1, ..., εn used to generate one discounted payoff f1, we use the series −ε1, ..., −εn to generate a second discounted payoff f2.
- The average of the two discounted payoffs is used as a single estimate f:

  f = (f1 + f2) / 2

- The number of independent estimates N is given by the number of pairs of paths
- The standard error is given by s_f / √N, where s_f is the standard deviation of the f estimates
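A sketch of the antithetic technique for a European call: each pair (ε, −ε) yields one averaged estimate, and the standard error is computed from those averages. Parameters are illustrative.

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0    # illustrative parameters
N_pairs = 100_000                                     # number of (eps, -eps) pairs

def discounted_payoff(eps):
    """Discounted call payoff for each terminal price generated from eps."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

eps = np.random.default_rng(7).standard_normal(N_pairs)
f = 0.5 * (discounted_payoff(eps) + discounted_payoff(-eps))   # one estimate per pair

price = f.mean()
std_error = f.std(ddof=1) / np.sqrt(N_pairs)          # s_f / sqrt(N)
print(price, std_error)
```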

Page 17: Week 5 lecture 1


- Why does it work?
- Each time we have drawn a random sample of ε's that is unusually high, the series −ε is unusually low, and vice versa.
- So the two series compensate each other.

Page 18: Week 5 lecture 1


2. Control variate technique

- Suppose we want to obtain the price f_A of a derivative A.
- Assume that there is another derivative B, similar to A, for which we have an analytic expression for the price.
- We generate price estimates f*_A and f*_B using the same ε's.
- The price estimate f_A for A is then given by:

  f_A = f*_A − f*_B + f_B

  where f_B is the known true price of B calculated analytically.
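A sketch of this formula where, for simplicity, derivative B is taken to be the asset itself: the discounted terminal stock price has the known true value S0 under the risk-neutral measure. This choice of B and the parameter values are assumptions for illustration, not part of the slide.

```python
import numpy as np

S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.2, 1.0, 100_000   # illustrative

eps = np.random.default_rng(8).standard_normal(N)                # same eps's for A and B
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)

f_A_star = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()   # simulated price of A (call)
f_B_star = np.exp(-r * T) * ST.mean()                        # simulated price of B (the asset)
f_B = S0                                                     # known true price of B

f_A = f_A_star - f_B_star + f_B                              # control variate estimate
print(f_A)
```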

Page 19: Week 5 lecture 1


- Why does it work?
- From the following expression for f_A:

  f_A = f*_A + (f_B − f*_B)

- One sees that this technique adds the term f_B − f*_B to the simulated price f*_A for the derivative A.
- f_B − f*_B is the difference between the known true price of derivative B and its simulated price. It picks up, and corrects for, any overestimation or underestimation.

Page 20: Week 5 lecture 1


3. Moment matching

- For each path we store all the εi's.
- We calculate the mean m and the standard deviation s of the sample of εi's.
- We define a new series of εi's as:

  εi* = (εi − m) / s

- This way, the mean of the series of εi's used to generate the path is exactly zero, and its standard deviation is exactly one.
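A sketch of this rescaling on a sample of normal draws; after matching, the sample mean is exactly 0 and the sample standard deviation is exactly 1, up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(9)
eps = rng.standard_normal(1_000)   # raw sample of N(0,1) draws

m = eps.mean()                     # sample mean
s = eps.std()                      # sample standard deviation
eps_star = (eps - m) / s           # eps*_i = (eps_i - m) / s

print(eps_star.mean(), eps_star.std())   # 0.0 and 1.0 up to rounding
```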

Page 21: Week 5 lecture 1


4. Stratified sampling

- We typically can generate a series of εi's as follows:

  εi = Φ⁻¹(ui)

  with Φ the cumulative normal distribution function and the ui's drawn from a uniform random distribution on (0,1).
- For any sample of M εi's, the ui's will never be distributed perfectly uniformly.
- We can achieve this by generating the εi's as:

  εi = Φ⁻¹( (i − 0.5) / M ),   i = 1, ..., M
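A sketch of the stratified recipe using SciPy's inverse cumulative normal (norm.ppf); M is illustrative. Each of the M equally likely strata of (0,1) contributes exactly one point, taken at its midpoint.

```python
import numpy as np
from scipy.stats import norm

M = 1_000
i = np.arange(1, M + 1)
eps = norm.ppf((i - 0.5) / M)      # eps_i = Phi^{-1}((i - 0.5) / M)

print(eps.mean(), eps.std())       # mean exactly 0 by symmetry, std close to 1
```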


Page 23: Week 5 lecture 1


5. Quasi-Random Sequences

- With stratified sampling, we need to determine the number M of εi's at the start, and stick to it.
- If we set M = 100,000, but only use the first 90,000 ui's, we will be missing all of the 10% biggest values.
- Quasi-random sequences are series of ui's that are spread uniformly over (0,1), just as with stratified sampling.
- But extra values for the ui's are always set such that they fill in the gaps left between the previous values.
- Quasi-random sequences are generated using equations. They aren't random at all; they just appear to be so.

Page 24: Week 5 lecture 1


Example of quasi-random numbers: the Sobol sequence in two dimensions
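The slide's figure is not reproduced in the transcript; as a sketch, SciPy's quasi-Monte Carlo module can generate the kind of two-dimensional Sobol points such a picture shows. The point count is illustrative.

```python
from scipy.stats import qmc

# Deterministic (unscrambled) Sobol points in [0, 1)^2 -- generated by
# equations, not random at all, but spread uniformly with no large gaps.
sampler = qmc.Sobol(d=2, scramble=False)
points = sampler.random_base2(m=8)   # 2**8 = 256 two-dimensional points

print(points[:5])                    # first few low-discrepancy points
```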