STOCHASTIC MODELS
LECTURE 1: MARKOV CHAINS

Nan Chen
MSc Program in Financial Engineering
The Chinese University of Hong Kong (Shenzhen)
Sept. 9, 2015


Outline

1. Introduction
2. Chapman-Kolmogorov Equations
3. Classification of States
4. The Gambler's Ruin Problem

1.1 INTRODUCTION

What is a Stochastic Process?

• A stochastic process is a collection of random variables that are indexed by time. Usually, we denote it by {X_n, n = 0, 1, 2, ...} (in discrete time) or {X(t), t >= 0} (in continuous time).

• Examples:
– Daily average temperature on CUHK-SZ campus
– Real-time stock price of Google

Motivation of Markov Chains

• Stochastic processes are widely used to characterize the temporal relationship between random variables.

• The simplest model would be one in which the random variables X_0, X_1, X_2, ... are independent of each other.

• But such a model may not provide a reasonable approximation to financial markets.

What is a Markov Chain?

• Let {X_n, n = 0, 1, 2, ...} be a stochastic process that takes on a finite or countable number of possible states. We call it a Markov chain if the conditional distribution of X_{n+1} depends on the past observations only through X_n. Namely,

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)

for all states i_0, ..., i_{n-1}, i, j and all n >= 0.

The Markovian Property

• It can be shown that the definition of Markov chains is equivalent to stating that

P(X_{n+1} = j, X_{n-1} = i_{n-1}, ..., X_0 = i_0 | X_n = i) = P(X_{n+1} = j | X_n = i) · P(X_{n-1} = i_{n-1}, ..., X_0 = i_0 | X_n = i).

• In words, given the current state of the process, its future and historical movements are independent.

Financial Relevance: Efficient Market Hypothesis

• The Markovian property turns out to be highly relevant to financial modeling in light of one of the most profound theories in the history of modern finance: the efficient market hypothesis (EMH).

• It states that market information, such as the information reflected in past price records or published in the financial press, must be absorbed and reflected quickly in the stock price.

More about EMH: a Thought Experiment

• Let us start with the following thought experiment: assume that Prof. Chen had invented a formula which we could use to predict the movements of Google's stock price very accurately.

• What would happen if this formula were unveiled to the public?

More about EMH: a Thought Experiment

• Suppose that it predicted that Google's stock price would rise dramatically in three days to US$700 from US$650.
– The prediction would induce a great wave of immediate buy orders.
– Huge demand for Google's stock would push its price to jump to $700 immediately.
– The formula fails!

• A true story: Edward Thorp and the Black-Scholes formula.

Implication of Efficient Market Hypothesis

• One implication of EMH is that given the current stock price, knowing its history will help very little in predicting its future.

• Therefore, we should use Markov processes to model the dynamics of financial variables.

Transition Matrix

• In this lecture, we only consider time-homogeneous Markov chains; that is, the transition probabilities P(X_{n+1} = j | X_n = i) are independent of the time index n.

• Denote P_ij = P(X_{n+1} = j | X_n = i). We can then use the matrix P = (P_ij), whose (i, j) entry is the probability of moving from state i to state j in one step, to characterize the process.

Transition Matrix

• The transition matrix of a Markov chain must be a stochastic matrix:
– P_ij >= 0 for all i and j;
– Σ_j P_ij = 1 for every i (each row sums to one).
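These two conditions are easy to verify programmatically; a small sketch (the function name is our own, not from the lecture):

```python
def is_stochastic(P, tol=1e-9):
    """Check that P is a valid transition (stochastic) matrix:
    all entries nonnegative and every row sums to 1."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

P_good = [[0.7, 0.3], [0.5, 0.5]]
P_bad  = [[0.7, 0.4], [0.5, 0.5]]   # first row sums to 1.1
print(is_stochastic(P_good))  # True
print(is_stochastic(P_bad))   # False
```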

Example I: Forecasting the Weather

• Suppose that the chance of rain tomorrow in Shenzhen depends on previous weather conditions only through whether or not it is raining today.

• Assume that if it rains today, then it will rain tomorrow with probability 70%; and if it does not rain today, then it will rain tomorrow with prob. 50%.

• How can we use a Markov chain to model it?
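One way to encode this chain is sketched below in Python; the encoding (state 0 = rain, state 1 = no rain) is our own choice for illustration:

```python
import random

# States: 0 = rain, 1 = no rain. Row i is tomorrow's distribution
# given today's state.
P = [
    [0.7, 0.3],  # if it rains today: rain tomorrow w.p. 0.7
    [0.5, 0.5],  # if it is dry today: rain tomorrow w.p. 0.5
]

def step(state, rng=random):
    """Sample tomorrow's weather given today's state."""
    return 0 if rng.random() < P[state][0] else 1

# Simulate 10 days of weather starting from a rainy day.
random.seed(0)
path = [0]
for _ in range(10):
    path.append(step(path[-1]))
print(path)
```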

Example II: 1-dimensional Random Walk

• A Markov chain whose state space is given by the integers i = 0, ±1, ±2, ... is said to be a random walk if, for some number 0 < p < 1,

P_{i,i+1} = p = 1 − P_{i,i−1} for all i.

• We say the random walk is symmetric if p = 1/2, and asymmetric if p ≠ 1/2.
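A short simulation sketch of such a walk (parameter names are our own):

```python
import random

def random_walk(n_steps, p=0.5, start=0, seed=1):
    """Simulate a random walk on the integers: from state i,
    move to i+1 w.p. p and to i-1 w.p. 1-p."""
    rng = random.Random(seed)
    x = start
    path = [x]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

walk = random_walk(1000, p=0.5)
print(walk[-1])  # terminal position after 1000 steps
```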

1.2 CHAPMAN-KOLMOGOROV EQUATIONS

The Chapman-Kolmogorov Equations

• The CK equations provide a method for computing the n-step transition probabilities P^(n)_ij = P(X_n = j | X_0 = i) of a Markov chain.

• They state that

P^(n+m)_ij = Σ_k P^(n)_ik P^(m)_kj for all n, m >= 0 and all i, j;

or in matrix form, P^(n+m) = P^(n) P^(m), so that the n-step transition matrix is simply the nth power of the one-step transition matrix: P^(n) = P^n.

Example III: Rain Probability

• Reconsider the situation in Example I. Given that it is raining today, what is the probability that it will rain four days from today?
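By the Chapman-Kolmogorov equations, the answer is the (rain, rain) entry of the fourth power of the transition matrix. A minimal pure-Python sketch:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-th power of P by repeated multiplication (n >= 1)."""
    R = P
    for _ in range(n - 1):
        R = mat_mul(R, P)
    return R

P = [[0.7, 0.3],   # state 0 = rain
     [0.5, 0.5]]   # state 1 = no rain

P4 = mat_pow(P, 4)
print(round(P4[0][0], 4))  # P(rain in 4 days | rain today) = 0.6256
```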

Example IV: Urn and Balls

• An urn always contains 2 balls. Ball colors are red and blue. At each stage a ball is randomly chosen and then replaced by a new ball, which with prob. 80% is the same color, and with prob. 20% is the opposite color.

• If initially both balls are red, find the probability that the fifth ball selected is red.
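One way to answer this numerically is to take the number of red balls in the urn as the state (0, 1, or 2), iterate the distribution four steps, and then average the chance of drawing red. A sketch, with the transition rows worked out from the 80%/20% replacement rule:

```python
# States = number of red balls in the urn: 0, 1, or 2.
# A ball is drawn uniformly at random and replaced by a new ball
# of the same color w.p. 0.8 or the opposite color w.p. 0.2.
P = [
    [0.8, 0.2, 0.0],  # 0 red: the drawn ball is blue
    [0.1, 0.8, 0.1],  # 1 red: drawn red or blue w.p. 1/2 each
    [0.0, 0.2, 0.8],  # 2 red: the drawn ball is red
]

def step_dist(v, P):
    """One step of the chain applied to a distribution row-vector v."""
    n = len(P)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

v = [0.0, 0.0, 1.0]        # initially both balls are red
for _ in range(4):         # distribution just before the fifth draw
    v = step_dist(v, P)

# P(fifth ball drawn is red) = sum over states of P(state) * (#red / 2)
answer = v[1] * 0.5 + v[2]
print(round(answer, 4))  # 0.7048
```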

1.3 STATE CLASSIFICATION

Asymptotic Behavior of Markov Chains

• It is frequently of interest to find the asymptotic behavior of P^(n)_ij as n → ∞.

• One may expect that the influence of the initial state recedes in time and that, consequently, as n → ∞, P^(n)_ij approaches a limit which is independent of the initial state i.

• In order to analyze this issue precisely, we need to introduce some principles for classifying the states of a Markov chain.

Example V

• Consider a Markov chain consisting of the four states 0, 1, 2, 3 and having the transition probability matrix given on the slide. [Matrix not reproduced in this transcript.]

• Intuitively, which state is the least probable after 1,000 steps?

Accessibility and Communication

• State j is said to be accessible from state i if P^(n)_ij > 0 for some n >= 0.
– In the previous example, state 3 is accessible from state 2.
– But state 2 is not accessible from state 3.

• Two states i and j are said to communicate if each is accessible from the other. We write i ↔ j.
– States 0 and 1 communicate in the previous example.

Simple Properties of Communication

• The relation of communication satisfies the following three properties:
– State i communicates with itself;
– If state i communicates with state j, then state j communicates with state i;
– If state i communicates with state j, and state j communicates with state k, then state i communicates with state k.

State Classes

• Two states that communicate are said to be in the same class.

• It is an easy consequence of the three properties on the last slide that any two classes are either identical or disjoint. In other words, communication divides the state space into a number of separate classes.

• In the previous example, we have three classes: {0, 1}, {2}, and {3}.

Example VI: Irreducible Markov Chain

• Consider the Markov chain consisting of the three states 0, 1, 2 and having the transition probability matrix given on the slide. How many classes does it contain?

• The Markov chain is said to be irreducible if there is only one class.
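Communicating classes, and hence irreducibility, can be computed mechanically from the transition matrix by taking the transitive closure of the accessibility relation. A sketch, using a hypothetical 3-state matrix since the slide's matrix is not reproduced in this transcript:

```python
def classes(P):
    """Partition the states of transition matrix P into
    communicating classes via reachability through positive entries."""
    n = len(P)
    # reach[i][j] = True iff j is accessible from i
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                      # Floyd-Warshall style closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    seen, result = set(), []
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            seen |= cls
            result.append(sorted(cls))
    return result

# A 3-state chain (hypothetical entries) in which every state
# reaches every other state, so the chain is irreducible:
P = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
print(classes(P))  # a single class => irreducible
```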

Recurrence and Transience

• Consider an arbitrary state i in a generic Markov chain. Define

f^(n)_ii = P(X_n = i, X_k ≠ i for k = 1, ..., n − 1 | X_0 = i).

In other words, f^(n)_ii represents the probability that, starting from state i, the first return to state i occurs at the nth step.

• Let f_ii = Σ_{n=1}^∞ f^(n)_ii.

Recurrence and Transience (Continued)

• We say a state i is recurrent if f_ii = 1. That is to say, a state is recurrent if and only if, starting from this state, the probability of returning to it after some finite length of time is 100%.

• It is easy to argue that if a state is recurrent, then, starting from this state, the Markov chain will return to it again and again; in fact, infinitely often.

Recurrence and Transience (Continued)

• A non-recurrent state is said to be transient; i.e., a transient state i satisfies f_ii < 1.

• Starting from a transient state i,
– the process will never again revisit the state with probability 1 − f_ii;
– the process will revisit the state just once with probability f_ii(1 − f_ii);
– the process will revisit the state just twice with probability f_ii^2(1 − f_ii);
– ……

Recurrence and Transience (Continued)

• From the above two definitions, we can easily see the following two conclusions:
– A transient state will only be visited a finite number of times.
– In a finite-state Markov chain, not all states can be transient.

• In Example V, states 0, 1, and 3 are recurrent, and state 2 is transient.

One Commonly Used Criterion of Recurrence

• Theorem: A state i is recurrent if and only if Σ_{n=1}^∞ P^(n)_ii = ∞.

• You may refer to Example 4.18 in Ross to see an application of this criterion: a proof that the one-dimensional symmetric random walk is recurrent.
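A numerical illustration of the criterion for the one-dimensional random walk, where P^(2n)_00 = C(2n, n) (p(1−p))^n: the partial sums keep growing when p = 1/2 and converge when p ≠ 1/2. A sketch (the iterative form avoids computing huge binomial coefficients directly):

```python
def return_prob_sum(p, n_terms):
    """Partial sum of the return probabilities P_00^(2n) of a 1-D
    random walk with up-probability p, where
    P_00^(2n) = C(2n, n) * (p*(1-p))**n.
    Computed iteratively in floating point."""
    term, total = 1.0, 0.0
    for n in range(1, n_terms + 1):
        # C(2n, n) = C(2n-2, n-1) * 2*(2n-1)/n
        term *= 2.0 * (2 * n - 1) / n * p * (1 - p)
        total += term
    return total

# Symmetric walk: partial sums keep growing (series diverges),
# consistent with recurrence.
print(return_prob_sum(0.5, 4000) - return_prob_sum(0.5, 1000))
# Asymmetric walk: the tail is negligible (series converges),
# consistent with transience.
print(return_prob_sum(0.6, 4000) - return_prob_sum(0.6, 1000))
```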

Recurrence as a Class Property

• Theorem: If state i is recurrent, and state i communicates with state j, then state j is recurrent.

• Two conclusions can be drawn from the theorem:
– Transience is also a class property.
– All states of a finite irreducible Markov chain are recurrent.

Example VII

• Consider the Markov chain consisting of the states 0, 1, 2, 3 and having the transition probability matrix given on the slide. Determine which states are transient and which are recurrent.

Example VIII

• Discuss the recurrence property of a one-dimensional random walk.

• Conclusion:
– The symmetric random walk is recurrent;
– The asymmetric random walk is not.

1.4 THE GAMBLER’S RUIN PROBLEM

The Gambler’s Ruin Problem

• Consider a gambler who at each play of the game has probability p of winning one dollar and probability q = 1 − p of losing one dollar. Assuming that successive plays of the game are independent, what is the probability that, starting with i dollars, the gambler's fortune will reach N dollars before he is ruined (i.e., before his fortune reaches 0)?

Markov Description of the Model

• If we let X_n denote the player's fortune at time n, then the process {X_n} is a Markov chain with transition probabilities
– P_00 = P_NN = 1;
– P_{i,i+1} = p = 1 − P_{i,i−1}, i = 1, 2, ..., N − 1.

• The Markov chain has three classes: {0}, {1, 2, ..., N − 1}, and {N}.

Solution

• Let P_i, i = 0, 1, ..., N, be the probability that, starting with i dollars, the gambler's fortune will eventually reach N.

• By conditioning on the outcome of the initial play, we obtain

P_i = p P_{i+1} + q P_{i−1}, i = 1, 2, ..., N − 1,

with boundary conditions P_0 = 0 and P_N = 1.

Solution (Continued)

• Hence, using p + q = 1 to rewrite the recursion as P_{i+1} − P_i = (q/p)(P_i − P_{i−1}), we obtain from the preceding slide that
– P_2 − P_1 = (q/p)(P_1 − P_0) = (q/p) P_1;
– P_3 − P_2 = (q/p)(P_2 − P_1) = (q/p)^2 P_1;
– ……
– P_i − P_{i−1} = (q/p)^{i−1} P_1.

Solution (Continued)

• Adding all the equalities up, we obtain P_i = P_1 [1 − (q/p)^i] / [1 − (q/p)], and the boundary condition P_N = 1 then gives

P_i = [1 − (q/p)^i] / [1 − (q/p)^N] if p ≠ 1/2; P_i = i/N if p = 1/2.

Solution (Continued)

• Note that, as N → ∞,

P_i → 1 − (q/p)^i if p > 1/2, and P_i → 0 if p <= 1/2.

• Thus, if p > 1/2, there is a positive probability that the gambler's fortune will increase indefinitely; while if p <= 1/2, the gambler will, with probability 1, be ruined against an infinitely rich adversary (say, a casino).
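The closed-form answer P_i = [1 − (q/p)^i] / [1 − (q/p)^N] (with P_i = i/N when p = 1/2) can be checked against a Monte Carlo simulation; a sketch with our own function names:

```python
import random

def ruin_prob_win(i, N, p):
    """Probability of reaching N before 0, starting from i dollars,
    using the closed-form gambler's ruin solution."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                     # q/p
    return (1 - r ** i) / (1 - r ** N)

def simulate_win(i, N, p, n_trials=100_000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        x = i
        while 0 < x < N:
            x += 1 if rng.random() < p else -1
        wins += (x == N)
    return wins / n_trials

exact = ruin_prob_win(5, 10, 0.4)
approx = simulate_win(5, 10, 0.4)
print(round(exact, 4), round(approx, 4))  # the two agree closely
```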

Homework Assignments

• Read Ross Chapter 4.1, 4.2, 4.3, and 4.5 (you may ignore 4.5.3).

• Answer Questions:
– Exercises 2, 3, 5, 6 (Page 261, Ross)
– Exercises 13, 14 (Page 262, Ross)
– Exercises 56, 57, 58 (Page 270, Ross)
– (Optional, Extra Bonus) Exercise 59 (Page 270, Ross)