Hidden Markov Model
Observations: $O_1, O_2, O_3, \ldots, O_t$
States in time: $q_1, q_2, q_3, \ldots, q_t$
All states: $s_1, s_2, \ldots, s_N$

[Figure: two states $S_i$ and $S_j$ linked by the transition probabilities $a_{ij}$ and $a_{ji}$]
Hidden Markov Model (Cont’d)
Discrete Markov Model:
$$P(q_t = s_j \mid q_{t-1} = s_i, q_{t-2} = s_k, \ldots) = P(q_t = s_j \mid q_{t-1} = s_i)$$
Degree 1 (first-order) Markov Model
Hidden Markov Model (Cont’d)
$$a_{ij} = P(q_t = s_j \mid q_{t-1} = s_i)$$
$a_{ij}$: transition probability from $S_i$ to $S_j$, $\quad 1 \le i, j \le N$
Discrete Markov Model Example
$S_1$: The weather is rainy
$S_2$: The weather is cloudy
$S_3$: The weather is sunny

$$A = \{a_{ij}\} = \begin{pmatrix} 0.4 & 0.3 & 0.3 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.1 & 0.8 \end{pmatrix}$$

(rows: today is rainy / cloudy / sunny; columns: tomorrow is rainy / cloudy / sunny)
Hidden Markov Model Example (Cont’d)
Question 1: What is the probability of the sequence Sunny-Sunny-Sunny-Rainy-Rainy-Sunny-Cloudy-Cloudy?

State sequence: $(q_1, q_2, \ldots, q_8) = (s_3, s_3, s_3, s_1, s_1, s_3, s_2, s_2)$

$$P = \pi_3 \, a_{33} \, a_{33} \, a_{31} \, a_{11} \, a_{13} \, a_{32} \, a_{22} = \tfrac{1}{3}(0.8)(0.8)(0.1)(0.4)(0.3)(0.1)(0.6) = 1.536 \times 10^{-4}$$

(taking a uniform initial distribution, so $\pi_3 = 1/3$, which is what the stated answer implies)
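The product above can be checked numerically; the sketch below is a minimal verification, assuming the uniform initial distribution ($\pi_i = 1/3$) noted above.

```python
# Worked check of Question 1 (assumes a uniform initial distribution).
import numpy as np

A = np.array([[0.4, 0.3, 0.3],   # rainy  -> rainy, cloudy, sunny
              [0.2, 0.6, 0.2],   # cloudy -> rainy, cloudy, sunny
              [0.1, 0.1, 0.8]])  # sunny  -> rainy, cloudy, sunny
pi = np.array([1/3, 1/3, 1/3])   # assumed uniform initial distribution

path = [2, 2, 2, 0, 0, 2, 1, 1]  # s3 s3 s3 s1 s1 s3 s2 s2 (0-indexed)

p = pi[path[0]]
for prev, cur in zip(path, path[1:]):
    p *= A[prev, cur]             # multiply the transition probabilities
print(p)                          # 0.0001536 = 1.536e-4
```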
Hidden Markov Model Example (Cont’d)
Question 2: What is the probability of staying in state $S_i$ for exactly $d$ days?

$$\pi_i = P(q_1 = s_i), \quad 1 \le i \le N \quad \text{(the probability of being in state } i \text{ at time } t = 1\text{)}$$

$$P(\underbrace{s_i, s_i, \ldots, s_i}_{d \text{ days}}, \, s_{j \neq i}) = \pi_i \, a_{ii}^{\,d-1} (1 - a_{ii}) = \pi_i \, P_i(d)$$
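Since $P_i(d) = a_{ii}^{\,d-1}(1 - a_{ii})$ is geometric in $d$, the expected number of days spent in state $i$ follows directly (a short derivation, using the standard mean of a geometric distribution):

$$\bar{d}_i = \sum_{d=1}^{\infty} d \, P_i(d) = \sum_{d=1}^{\infty} d \, a_{ii}^{\,d-1} (1 - a_{ii}) = \frac{1}{1 - a_{ii}}$$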
Discrete Density HMM Components
$N$: number of states
$M$: number of outputs (observation symbols)
$A$ ($N \times N$): state transition probability matrix
$B$ ($N \times M$): output occurrence probability in each state
$\pi$ ($1 \times N$): initial state probability

$\lambda = (A, B, \pi)$: set of HMM parameters
Three Basic HMM Problems

Recognition Problem: Given an HMM $\lambda$ and a sequence of observations $O$, what is the probability $P(O \mid \lambda)$?

State Decoding Problem: Given a model $\lambda$ and a sequence of observations $O$, what is the most likely state sequence in the model that produced the observations?

Training Problem: Given a model $\lambda$ and a sequence of observations $O$, how should we adjust the model parameters in order to maximize $P(O \mid \lambda)$?
First Problem Solution
$$P(O \mid q, \lambda) = \prod_{t=1}^{T} P(O_t \mid q_t, \lambda) = \prod_{t=1}^{T} b_{q_t}(O_t)$$

$$P(q \mid \lambda) = \pi_{q_1} \, a_{q_1 q_2} \, a_{q_2 q_3} \cdots a_{q_{T-1} q_T}$$

We know that:
$$P(x, y) = P(x \mid y) \, P(y)$$
and
$$P(x, y \mid z) = P(x \mid y, z) \, P(y \mid z)$$
First Problem Solution (Cont’d)
$$P(O, q \mid \lambda) = P(O \mid q, \lambda) \, P(q \mid \lambda)$$

$$P(O, q \mid \lambda) = \pi_{q_1} b_{q_1}(O_1) \, a_{q_1 q_2} b_{q_2}(O_2) \cdots a_{q_{T-1} q_T} b_{q_T}(O_T)$$

$$P(O \mid \lambda) = \sum_{q} P(O, q \mid \lambda) = \sum_{q_1, q_2, \ldots, q_T} \pi_{q_1} b_{q_1}(O_1) \, a_{q_1 q_2} b_{q_2}(O_2) \cdots a_{q_{T-1} q_T} b_{q_T}(O_T)$$

Computation order: $O(2T \cdot N^T)$
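A brute-force evaluation of this sum makes the $O(2T \cdot N^T)$ cost concrete. The sketch below is illustrative only; the function name and argument conventions (`pi`: initial distribution, `A`: $N \times N$ transition matrix, `B`: $N \times M$ emission matrix, `O`: list of symbol indices) are this example's assumptions, not the slides'.

```python
# P(O|lambda) by explicit enumeration of all N**T state sequences.
import itertools
import numpy as np

def p_obs_brute_force(O, pi, A, B):
    N, T = A.shape[0], len(O)
    total = 0.0
    for q in itertools.product(range(N), repeat=T):   # every state path
        p = pi[q[0]] * B[q[0], O[0]]                  # pi_q1 * b_q1(O_1)
        for t in range(1, T):
            p *= A[q[t - 1], q[t]] * B[q[t], O[t]]    # a * b terms
        total += p
    return total
```

This is only feasible for tiny $N$ and $T$; the forward procedure below brings the cost down to $O(N^2 T)$.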
Forward Backward Approach
Computing $\alpha_t(i)$:
$$\alpha_t(i) = P(O_1, O_2, \ldots, O_t, q_t = i \mid \lambda)$$

1) Initialization:
$$\alpha_1(i) = \pi_i \, b_i(O_1), \quad 1 \le i \le N$$
Forward Backward Approach (Cont’d)
2) Induction:
$$\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) \, a_{ij} \right] b_j(O_{t+1}), \quad 1 \le t \le T-1, \quad 1 \le j \le N$$

3) Termination:
$$P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$$

Computation order: $O(N^2 T)$
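A minimal sketch of the three steps above, under the same assumed array conventions as the brute-force version:

```python
# Forward algorithm: alpha has shape (T, N).
import numpy as np

def forward(O, pi, A, B):
    T, N = len(O), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                  # 1) initialization
    for t in range(T - 1):                      # 2) induction
        alpha[t + 1] = (alpha[t] @ A) * B[:, O[t + 1]]
    return alpha, alpha[-1].sum()               # 3) termination: P(O|lambda)
```

For small problems its return value can be checked against `p_obs_brute_force` above.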
Backward Variable
$$\beta_t(i) = P(O_{t+1}, O_{t+2}, \ldots, O_T \mid q_t = i, \lambda)$$

1) Initialization:
$$\beta_T(i) = 1, \quad 1 \le i \le N$$

2) Induction:
$$\beta_t(i) = \sum_{j=1}^{N} a_{ij} \, b_j(O_{t+1}) \, \beta_{t+1}(j), \qquad t = T-1, T-2, \ldots, 1, \quad 1 \le i \le N$$
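The corresponding sketch for the backward recursion (same conventions):

```python
# Backward algorithm: beta has shape (T, N).
import numpy as np

def backward(O, A, B):
    T, N = len(O), A.shape[0]
    beta = np.ones((T, N))            # 1) initialization: beta_T(i) = 1
    for t in range(T - 2, -1, -1):    # 2) induction, t = T-1, ..., 1
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    return beta
```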
Second Problem Solution
Finding the most likely state sequence
$$\gamma_t(i) = P(q_t = i \mid O, \lambda) = \frac{P(O, q_t = i \mid \lambda)}{P(O \mid \lambda)} = \frac{P(O, q_t = i \mid \lambda)}{\sum_{i=1}^{N} P(O, q_t = i \mid \lambda)} = \frac{\alpha_t(i) \, \beta_t(i)}{\sum_{i=1}^{N} \alpha_t(i) \, \beta_t(i)}$$

Individually most likely state:
$$q_t^* = \arg\max_{1 \le i \le N} \left[ \gamma_t(i) \right], \quad 1 \le t \le T$$
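Combining the two recursions gives $\gamma_t(i)$ and the individually most likely states; a sketch reusing the hypothetical `forward` and `backward` functions from the earlier sketches:

```python
# gamma_t(i) and q*_t = argmax_i gamma_t(i) for every t.
import numpy as np

def posterior_states(O, pi, A, B):
    alpha, _ = forward(O, pi, A, B)
    beta = backward(O, A, B)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # normalize over states
    return gamma, gamma.argmax(axis=1)
```

Note that this path of individually most likely states need not be a valid sequence at all (it can chain zero-probability transitions), which motivates the Viterbi algorithm below.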
Viterbi Algorithm
Define:
$$\delta_t(i) = \max_{q_1, q_2, \ldots, q_{t-1}} P\left[ q_1, q_2, \ldots, q_{t-1}, q_t = i, O_1, O_2, \ldots, O_t \mid \lambda \right], \quad 1 \le i \le N$$

$\delta_t(i)$ is the probability of the most likely state sequence that ends in state $i$ at time $t$ and accounts for the first $t$ observations.
Viterbi Algorithm (Cont’d)
1) Initialization:
$$\delta_1(i) = \pi_i \, b_i(O_1), \quad 1 \le i \le N, \qquad \psi_1(i) = 0$$

The recursion then takes the form
$$\delta_{t+1}(j) = \left[ \max_{i} \delta_t(i) \, a_{ij} \right] \cdot b_j(O_{t+1})$$

$\psi_t(j)$ records the most likely state at time $t-1$ on the best path ending in state $j$ at time $t$.
Viterbi Algorithm (Cont’d)
2) Recursion:
$$\delta_t(j) = \max_{1 \le i \le N} \left[ \delta_{t-1}(i) \, a_{ij} \right] b_j(O_t)$$
$$\psi_t(j) = \arg\max_{1 \le i \le N} \left[ \delta_{t-1}(i) \, a_{ij} \right]$$
$$2 \le t \le T, \quad 1 \le j \le N$$
Viterbi Algorithm (Cont’d)
3) Termination:
$$p^* = \max_{1 \le i \le N} \left[ \delta_T(i) \right], \qquad q_T^* = \arg\max_{1 \le i \le N} \left[ \delta_T(i) \right]$$

4) Backtracking:
$$q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, T-2, \ldots, 1$$
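A sketch of all four steps under the same assumed conventions (the `scores` intermediate is this example's, not the slides'):

```python
# Viterbi decoding: returns the best state path and its probability.
import numpy as np

def viterbi(O, pi, A, B):
    T, N = len(O), A.shape[0]
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, O[0]]                    # 1) initialization
    for t in range(1, T):                          # 2) recursion
        scores = delta[t - 1][:, None] * A         # scores[i, j] = delta_{t-1}(i) * a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, O[t]]
    q = np.zeros(T, dtype=int)
    p_star = delta[-1].max()                       # 3) termination
    q[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                 # 4) backtracking
        q[t] = psi[t + 1][q[t + 1]]
    return q, p_star
```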
Third Problem Solution
Parameter estimation using the Baum-Welch or Expectation-Maximization (EM) approach
Define:
$$\xi_t(i, j) = P(q_t = i, q_{t+1} = j \mid O, \lambda) = \frac{P(O, q_t = i, q_{t+1} = j \mid \lambda)}{P(O \mid \lambda)} = \frac{\alpha_t(i) \, a_{ij} \, b_j(O_{t+1}) \, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i) \, a_{ij} \, b_j(O_{t+1}) \, \beta_{t+1}(j)}$$
Third Problem Solution (Cont’d)
$$\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i, j)$$

$\sum_{t=1}^{T-1} \gamma_t(i)$ : expected number of jumps (transitions) out of state $i$

$\sum_{t=1}^{T-1} \xi_t(i, j)$ : expected number of jumps from state $i$ to state $j$
Third Problem Solution (Cont’d)
$$\bar{\pi}_i = \gamma_1(i)$$

$$\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$$

$$\bar{b}_j(k) = \frac{\sum_{t=1,\ O_t = V_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$$
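One full reestimation pass implementing $\bar{\pi}_i$, $\bar{a}_{ij}$, and $\bar{b}_j(k)$, reusing the hypothetical `forward`/`backward` sketches from earlier (a sketch; production code would add per-step scaling to avoid numerical underflow):

```python
# Single Baum-Welch (EM) update of (pi, A, B) from one observation sequence.
import numpy as np

def baum_welch_step(O, pi, A, B):
    T, N, M = len(O), A.shape[0], B.shape[1]
    alpha, p_obs = forward(O, pi, A, B)
    beta = backward(O, A, B)

    # xi[t, i, j] = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j) / P(O|lambda)
    xi = (alpha[:-1, :, None] * A[None] *
          (B[:, O[1:]].T * beta[1:])[:, None, :]) / p_obs
    gamma = alpha * beta / p_obs                  # gamma[t, i]

    new_pi = gamma[0]                             # pi_i = gamma_1(i)
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros((N, M))
    for k in range(M):                            # sum gamma over t with O_t = V_k
        new_B[:, k] = gamma[np.array(O) == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```

Iterating this step never decreases $P(O \mid \lambda)$, which is the guarantee formalized by the Baum auxiliary function below.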
Baum Auxiliary Function
$$Q(\lambda \mid \lambda') = \sum_{q} P(O, q \mid \lambda') \, \log P(O, q \mid \lambda)$$

If $Q(\lambda \mid \lambda') \ge Q(\lambda' \mid \lambda')$, then $P(O \mid \lambda) \ge P(O \mid \lambda')$.

Iterating this reestimation therefore converges to a local optimum of the likelihood.
Restrictions Of Reestimation Formulas
$$\sum_{i=1}^{N} \bar{\pi}_i = 1$$

$$\sum_{j=1}^{N} \bar{a}_{ij} = 1, \quad 1 \le i \le N$$

$$\sum_{k=1}^{M} \bar{b}_j(k) = 1, \quad 1 \le j \le N$$
Continuous Observation Density
Instead of the discrete output probabilities $b_j(k) = P(O_t = V_k \mid q_t = j)$, we now have the values of a PDF:

$$b_j(O_t) = \sum_{k=1}^{M} C_{jk} \, \mathcal{N}(O_t, \mu_{jk}, \Sigma_{jk}), \qquad \int b_j(O_t) \, dO_t = 1$$

$C_{jk}$ : mixture coefficients; $\mu_{jk}$ : mean; $\Sigma_{jk}$ : covariance
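Evaluating this mixture density for a single state is a weighted sum of Gaussians; a minimal sketch where SciPy's `multivariate_normal.pdf` plays the role of $\mathcal{N}(O_t, \mu_{jk}, \Sigma_{jk})$ (the argument shapes are this example's assumptions):

```python
# b_j(o) = sum_k C_jk * N(o; mu_jk, Sigma_jk) for one state j.
from scipy.stats import multivariate_normal

def b_j(o, C_j, mu_j, Sigma_j):
    # C_j: (M,), mu_j: (M, K), Sigma_j: (M, K, K)
    return sum(C_j[k] * multivariate_normal.pdf(o, mu_j[k], Sigma_j[k])
               for k in range(len(C_j)))
```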
Continuous Observation Density
Mixture in HMM
Dominant mixture:
$$b_j(O_t) \approx \max_{k} \left[ C_{jk} \, \mathcal{N}(O_t, \mu_{jk}, \Sigma_{jk}) \right]$$

[Figure: states $S_1$, $S_2$, $S_3$, each with four mixture components M1|j, M2|j, M3|j, M4|j]
Continuous Observation Density (Cont’d)
Model parameters:
$$\lambda = (\pi, A, C, \mu, \Sigma)$$
with dimensions $1 \times N$, $N \times N$, $N \times M$, $N \times M \times K$, and $N \times M \times K \times K$ respectively.

$N$ : number of states
$M$ : number of mixtures in each state
$K$ : dimension of the observation vector
Continuous Observation Density (Cont’d)
$$\bar{C}_{jk} = \frac{\sum_{t=1}^{T} \gamma_t(j, k)}{\sum_{t=1}^{T} \sum_{k=1}^{M} \gamma_t(j, k)}$$

$$\bar{\mu}_{jk} = \frac{\sum_{t=1}^{T} \gamma_t(j, k) \, O_t}{\sum_{t=1}^{T} \gamma_t(j, k)}$$
Continuous Observation Density (Cont’d)
$$\bar{\Sigma}_{jk} = \frac{\sum_{t=1}^{T} \gamma_t(j, k) \, (O_t - \mu_{jk})(O_t - \mu_{jk})'}{\sum_{t=1}^{T} \gamma_t(j, k)}$$

$\gamma_t(j, k)$ : probability of being in state $j$ with the $k$'th mixture at time $t$
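Given the weights $\gamma_t(j,k)$ for one state-mixture pair, the $\bar{\mu}_{jk}$ and $\bar{\Sigma}_{jk}$ formulas above are weighted averages; a sketch (assumed shapes: `O` is a $(T, K)$ observation array, `w` a $(T,)$ vector holding $\gamma_t(j, k)$):

```python
# Weighted mean and covariance update for one (j, k) pair.
import numpy as np

def update_mu_sigma(O, w):
    mu = (w[:, None] * O).sum(axis=0) / w.sum()
    diff = O - mu                                  # (T, K) residuals
    Sigma = (w[:, None, None] *
             diff[:, :, None] * diff[:, None, :]).sum(axis=0) / w.sum()
    return mu, Sigma
```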
State Duration Modeling
[Figure: states $S_i$ and $S_j$ connected by $a_{ij}$ and $a_{ji}$, each with a self-loop]

Probability of staying $d$ times in state $i$:
$$P_i(d) = a_{ii}^{\,d-1} (1 - a_{ii})$$
State Duration Modeling (Cont’d)
HMM with explicit duration:

[Figure: states $S_i$ and $S_j$ carry duration distributions $P_i(d)$ and $P_j(d)$ and are connected by $a_{ij}$ and $a_{ji}$; explicit durations replace the self-transitions]
State Duration Modeling (Cont’d)
Generating an observation sequence from an HMM with state duration:
– Selecting the initial state $q_1 = i$ using the $\pi_i$'s
– Selecting the duration $d_1$ using $P_{q_1}(d_1)$
– Selecting the observation sequence $O_1, O_2, \ldots, O_{d_1}$ using $b_{q_1}(O_1, O_2, \ldots, O_{d_1})$; in practice we assume the following independence:
$$b_{q_1}(O_1, O_2, \ldots, O_{d_1}) = \prod_{t=1}^{d_1} b_{q_1}(O_t)$$
– Selecting the next state $q_2 = j$ using the transition probabilities $a_{q_1 q_2}$, under the additional constraint $a_{q_1 q_1} = 0$ (no self-transitions, since durations are explicit); see the sketch after this list.
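A sketch of this generative procedure; the samplers `sample_d` (drawing $d$ from $P_i(d)$) and `emit` (drawing one observation from $b_i$) are hypothetical stand-ins supplied by the caller:

```python
# Generate roughly T observations from an explicit-duration HMM.
import numpy as np

rng = np.random.default_rng(0)

def sample_duration_hmm(pi, A, sample_d, emit, T):
    obs = []
    i = rng.choice(len(pi), p=pi)              # initial state from pi
    while len(obs) < T:
        d = sample_d(i)                        # duration from P_i(d)
        obs.extend(emit(i) for _ in range(d))  # d independent emissions
        i = rng.choice(len(A), p=A[i])         # next state; A[i, i] = 0
    return obs[:T]
```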
Training In HMM
Maximum Likelihood (ML)
Maximum Mutual Information (MMI)
Minimum Discrimination Information (MDI)
Training In HMM
Maximum Likelihood (ML)

[Figure: the observation sequence $O$ is scored against every model, producing $P(O \mid \lambda_1), P(O \mid \lambda_2), P(O \mid \lambda_3), \ldots, P(O \mid \lambda_V)$]

$$P^* = \max_{1 \le r \le V} \left[ P(O \mid \lambda_r) \right]$$
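ML recognition then reduces to an argmax over per-model likelihoods; a sketch reusing the hypothetical `forward` function (here `models` is an assumed list of $(\pi, A, B)$ triples):

```python
# Pick the model under which the observation sequence is most likely.
import numpy as np

def recognize(O, models):
    scores = [forward(O, pi, A, B)[1] for (pi, A, B) in models]
    return int(np.argmax(scores)), max(scores)
```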
Training In HMM (Cont’d)
Maximum Mutual Information (MMI)

Mutual information:
$$I(O, \lambda_v) = \log \frac{P(O, \lambda_v)}{P(O) \, P(\lambda_v)}$$

$$I(O, \lambda_v) = \log P(O \mid \lambda_v) - \log \sum_{w=1}^{V} P(O \mid \lambda_w) \, P(\lambda_w)$$

$\lambda = \{\lambda_v\}, \quad 1 \le v \le V$
Training In HMM (Cont'd)
Minimum Discrimination Information (MDI)
$$I(Q : P_\lambda) = \int q(o) \, \log \frac{q(o)}{P(o \mid \lambda)} \, do$$

Observation: $O = (O_1, O_2, \ldots, O_T)$
Autocorrelation: $R = (R_1, R_2, \ldots, R_T)$

$$\nu(R, P_\lambda) = \inf_{Q \in \mathcal{Q}(R)} I(Q : P_\lambda)$$

where $\mathcal{Q}(R)$ is the set of distributions consistent with the autocorrelation $R$.