Moderate Deviations for Joint Source-Channel Coding of Systems With Markovian Memory
Vincent Y. F. Tan
Department of Electrical and Computer Engineering, Department of Mathematics,
National University of Singapore
Joint work with Shun Watanabe and Masahito Hayashi
July 2, 2014
Vincent Y. F. Tan (NUS) Moderate Deviations JSCC ISIT 2014 1 / 17
Joint Source Channel Coding
Block diagram: $s^k \to f \to x^n \to W^n \to y^n \to \varphi \to \hat{s}^k$
Consider the fundamental limits of JSCC
Consider Markov sources and additive Markov channels as well as discrete memoryless channels
The bandwidth expansion ratio $r_n$ satisfies the moderate deviations condition, i.e.,
$$r_n = \frac{k}{n} = \frac{C(W)}{H(\mathbf{P}_S)} - \varepsilon_n$$
where $\varepsilon_n \to 0$ and $n\varepsilon_n^2 \to \infty$.
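As a concrete numeric illustration of this scaling (my own sketch, not from the talk), a sequence such as $\varepsilon_n = n^{-1/3}$ sits strictly between the central-limit scaling $a/\sqrt{n}$ and a constant gap: it vanishes while $n\varepsilon_n^2 = n^{1/3}$ diverges.

```python
import math

# Moderate-deviations scaling (toy illustration, not from the talk):
# eps_n = n^(-1/3) satisfies eps_n -> 0 while n * eps_n^2 = n^(1/3) -> infinity.
def eps(n):
    return n ** (-1.0 / 3.0)

# tabulate (n, eps_n, n * eps_n^2) for a few blocklengths
vals = [(n, eps(n), n * eps(n) ** 2) for n in (10**3, 10**6, 10**9)]
```

By contrast, $\varepsilon_n = a/\sqrt{n}$ would keep $n\varepsilon_n^2 = a^2$ constant, which is the dispersion (CLT) regime rather than the moderate deviations regime.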
Related work in Joint Source Channel Coding
Error exponents where $\varepsilon_n = \varepsilon > 0$
1 Gallager [Problem 5.16 in book] proposed an exponential upper bound on the probability of error
2 Csiszár (1980) derived bounds on the error exponent which are shown to be tight in some rate regime using the method of types
3 Zhong-Alajaji-Campbell (2006-2007) related the Csiszár approach to the Gallager approach and extended it to stationary, ergodic Markov (SEM) sources and additive SEM channels
Dispersion where $\varepsilon_n = a/\sqrt{n}$ for some $a \in \mathbb{R}$
1 Wang-Ingber-Kochman (2011) derived the dispersion of JSCC using types
2 Kostina-Verdú (2012) derived non-asymptotic bounds and the dispersion of JSCC for more general settings
Problem Setup
Finite alphabets $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{S}$
Source $\mathbf{P}_S = \{P_{S^k} \in \mathcal{P}(\mathcal{S}^k)\}_{k=1}^\infty$
Channel $\mathbf{W} = \{W^n : \mathcal{X}^n \to \mathcal{Y}^n\}_{n=1}^\infty$
Bandwidth expansion ratio $r_n = k/n$
Encoder $f_k : \mathcal{S}^k \to \mathcal{X}^n$ and decoder $\varphi_n : \mathcal{Y}^n \to \mathcal{S}^k$
Error probability
$$e(f_k, \varphi_n) := \sum_{s \in \mathcal{S}^k} P_{S^k}(s)\, W^n\big(\mathcal{Y}^n \setminus \varphi_n^{-1}(s) \,\big|\, f_k(s)\big).$$
Minimum error probability
$$e(f_k^*, \varphi_n^*) = \min_{f_k, \varphi_n} e(f_k, \varphi_n).$$
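To make the error-probability definition concrete, here is a minimal sketch (my own toy example, not from the talk): a one-bit source sent over a single use of a binary symmetric channel with the identity encoder and decoder, for which the double sum collapses to the crossover probability.

```python
# Toy evaluation of e(f, phi) for k = n = 1 (illustrative example, not from the talk):
# a 1-bit source over a BSC(p) with identity encoder f and decoder phi.
P_S = {0: 0.6, 1: 0.4}          # source distribution on S = {0, 1}
p = 0.1                          # BSC crossover probability
W = {(y, x): (1 - p) if y == x else p for x in (0, 1) for y in (0, 1)}
f = {0: 0, 1: 1}                 # encoder f : S -> X
phi = {0: 0, 1: 1}               # decoder phi : Y -> S

def error_prob(P_S, W, f, phi):
    # sum over source symbols s and channel outputs y NOT decoded back to s,
    # i.e. y in Y \ phi^{-1}(s), weighted by P_S(s) * W(y | f(s))
    return sum(P_S[s] * W[(y, f[s])]
               for s in P_S for y in (0, 1) if phi[y] != s)
```

Here both source symbols are decoded incorrectly exactly when the channel flips the bit, so `error_prob` returns $p$ regardless of the source distribution.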
Channel and Source Models
$\mathbf{W}$ is a DMC with unique capacity-achieving input distribution, so
$$W^n(\mathbf{y}|\mathbf{x}) = \prod_{i=1}^n W(y_i|x_i).$$
$\mathbf{W}$ is a discrete channel with Markov additive noise and alphabet $\mathcal{X} = \mathcal{Y} = \{0, 1, \ldots, b-1\}$ s.t.
$$Y_i = X_i \oplus Z_i \pmod{b}$$
Noise $\mathbf{P}_Z$ represents a stationary, ergodic, Markov (SEM) source
$$P_{Z^n}(\mathbf{z}) = P_{Z_1}(z_1) \prod_{i=2}^n \Gamma_Z(z_i|z_{i-1})$$
$\mathbf{P}_S$ is a SEM source
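A small simulation makes the channel model tangible (my own sketch; the kernel $\Gamma_Z$ values and the uniform initial distribution $P_{Z_1}$ are toy assumptions, not from the talk): draw a Markov noise chain and add it to the input modulo $b$.

```python
import random

# Toy simulation of the additive channel Y_i = X_i + Z_i mod b with Markov
# noise Z drawn from a transition kernel Gamma_Z(z_i | z_{i-1}).
# (Kernel values and uniform P_{Z_1} are illustrative assumptions.)
b = 2
Gamma_Z = {0: [0.9, 0.1], 1: [0.3, 0.7]}   # Gamma_Z[z_prev][z_next]

def sample_noise(n, rng):
    # z_1 ~ P_{Z_1} (uniform here), then z_i ~ Gamma_Z(. | z_{i-1})
    z = [rng.choices(range(b), weights=[0.5, 0.5])[0]]
    for _ in range(n - 1):
        z.append(rng.choices(range(b), weights=Gamma_Z[z[-1]])[0])
    return z

def channel(x, rng):
    z = sample_noise(len(x), rng)
    return [(xi + zi) % b for xi, zi in zip(x, z)]

rng = random.Random(0)
y = channel([0, 1, 1, 0, 1], rng)
```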
Information Quantities for the SEM Source
Cumulant generating function of any SEM source $\mathbf{P}_S$:
$$\phi(\rho; \mathbf{P}_S) := \log \sum_{s,s'} P_{S,\rho}(s')\, \Gamma_S(s|s')^{1-\rho},$$
where $P_{S,\rho}(s)$ is the normalized eigenvector corresponding to the Perron-Frobenius eigenvalue of the tilted transition matrix $\Gamma_{S,\rho}(s|s')$.
For any SEM source $\mathbf{P}_S$, the entropy and varentropy are
$$H(\mathbf{P}_S) = \sum_{s'} \bar{P}(s') \sum_s \Gamma_S(s|s') \log\frac{1}{\Gamma_S(s|s')}, \quad\text{and}\quad V(\mathbf{P}_S) = \phi''(0; \mathbf{P}_S),$$
where $\bar{P}$ is the stationary distribution of $\Gamma_S$
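These quantities can be checked numerically: $\phi(\rho)$ equals the log Perron-Frobenius eigenvalue of the tilted matrix, and finite differences at $\rho = 0$ recover the entropy rate and varentropy. The sketch below (my own, for a toy 2-state chain where the $2\times 2$ eigenvalue has a closed form) verifies $\phi(0) = 0$ and $\phi'(0) = H(\mathbf{P}_S)$.

```python
import math

# phi(rho) = log of the Perron-Frobenius eigenvalue of the elementwise-tilted
# matrix Gamma(s|s')^(1-rho); phi'(0) ~= entropy rate, phi''(0) ~= varentropy.
# (Toy 2-state chain; transition values are illustrative assumptions.)
G = [[0.9, 0.3],
     [0.1, 0.7]]   # G[s][s_prev], columns sum to 1 (column-stochastic)

def stationary(G):
    # stationary distribution of a 2-state column-stochastic matrix
    a, c = G[1][0], G[0][1]           # 0->1 and 1->0 transition probabilities
    return [c / (a + c), a / (a + c)]

def phi(rho):
    # largest eigenvalue of the tilted 2x2 matrix, via trace/determinant
    M = [[G[s][sp] ** (1.0 - rho) for sp in (0, 1)] for s in (0, 1)]
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.log((tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0)

pi = stationary(G)
H = sum(pi[sp] * G[s][sp] * math.log(1.0 / G[s][sp]) for s in (0, 1) for sp in (0, 1))
h = 1e-4
H_num = (phi(h) - phi(-h)) / (2 * h)              # central difference ~= phi'(0)
V_num = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2  # ~= phi''(0) = varentropy
```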
Information Quantities for the Channel
For a DMC, the following are the capacity and dispersion
$$C(W) = \max_P I(P, W), \quad\text{and}\quad V(W) = \sum_x P^*(x)\, \mathrm{var}\left(\log\frac{W(Y|x)}{P^*W(Y)}\right).$$
For the discrete channel with SEM noise,
$$C(\mathbf{W}) = \log b - H(\mathbf{P}_Z), \quad\text{and}\quad V(\mathbf{W}) = V(\mathbf{P}_Z).$$
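In the memoryless special case of the additive-noise model (my own illustration; the noise values are toy assumptions), the capacity is $\log b - H(P_Z)$ and the dispersion is the varentropy of the noise, $\mathrm{Var}[-\log P_Z(Z)]$, which for binary noise matches the familiar BSC dispersion $p(1-p)\log^2\frac{1-p}{p}$.

```python
import math

# Memoryless special case (toy illustration): additive channel on {0,...,b-1}
# with i.i.d. noise P_Z.  C = log b - H(P_Z); V = Var[-log P_Z(Z)].
b = 2
P_Z = [0.9, 0.1]                      # toy noise distribution (BSC with p = 0.1)

H_Z = sum(q * math.log(1.0 / q) for q in P_Z)
C = math.log(b) - H_Z                 # capacity in nats
mean = sum(q * (-math.log(q)) for q in P_Z)        # = H_Z
V = sum(q * (-math.log(q) - mean) ** 2 for q in P_Z)  # varentropy = dispersion
```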
Main Result
We assume that
$$r_n = \frac{k}{n} = \frac{C(W)}{H(\mathbf{P}_S)} - \varepsilon_n \quad\text{where } \varepsilon_n \to 0, \text{ and } n\varepsilon_n^2 \to \infty.$$
Theorem (Moderate Deviations for JSC Coding)
Let $\mathbf{P}_S$ be a SEM source and $\mathbf{W}$ be a DMC or additive channel with SEM noise. Then
$$\lim_{n\to\infty} -\frac{1}{n\varepsilon_n^2} \log e(f_k^*, \varphi_n^*) = \frac{1}{2V(\mathbf{W}, \mathbf{P}_S)}$$
where the JSC dispersion is
$$V(\mathbf{W}, \mathbf{P}_S) := \frac{1}{H(\mathbf{P}_S)^2}\left[V(\mathbf{W}) + \frac{C(\mathbf{W})}{H(\mathbf{P}_S)} V(\mathbf{P}_S)\right].$$
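Plugging toy numbers into the JSC dispersion formula (my own illustrative values, not from the talk) shows how the source and channel quantities combine:

```python
# JSC dispersion for illustrative (made-up) source/channel quantities:
# V_jsc = (V_W + (C_W / H_S) * V_S) / H_S^2
H_S, V_S = 0.4, 0.3     # toy source entropy rate and varentropy (nats)
C_W, V_W = 0.37, 0.43   # toy channel capacity and dispersion (nats)

V_jsc = (V_W + (C_W / H_S) * V_S) / H_S ** 2
# the theorem then says -log e(f*, phi*) ~ n * eps_n^2 / (2 * V_jsc)
```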
A Separation Architecture
Block diagram: $s^k \to f_k^{(s)} \to m \to f_k^{(c)} \to x^n \to W^n \to y^n \to \varphi_n^{(c)} \to \hat{m} \to \varphi_n^{(s)} \to \hat{s}^k$
Consider the MD regime for a DMS-DMC pair when we are forced to design a source code and a channel code separately
Optimum error probability [Csiszár (1980)] satisfies
$$e\big(f_k^{(s)}, f_k^{(c)}, \varphi_n^{(s)}, \varphi_n^{(c)}\big) \le \exp\left(-n \sup_{R \ge 0} \min\left\{r_n E_s\Big(\frac{R}{r_n}\Big),\, E_c(R)\right\}\right)$$
where $E_s(\cdot)$ and $E_c(\cdot)$ are the source and channel reliability functions resp.
$R$ is the rate of the digital interface $m$
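The separation exponent can be evaluated numerically (my own sketch; the quadratic stand-ins for $E_s$ and $E_c$ are assumptions valid only near $H$ and $C$, and the parameter values are toys): grid-search the interface rate $R$ and compare with the closed form obtained by balancing the two exponents.

```python
import math

# Toy evaluation of sup_R min{ r*E_s(R/r), E_c(R) } with quadratic stand-in
# reliability functions (illustrative assumption; valid only near H and C).
H, C, V_S, V_W = 0.4, 0.37, 0.3, 0.43
r = C / H - 0.01                      # bandwidth ratio slightly below C/H

def E_s(R):   # source reliability function near R = H (quadratic stand-in)
    return max(R - H, 0.0) ** 2 / (2 * V_S)

def E_c(R):   # channel reliability function near R = C (quadratic stand-in)
    return max(C - R, 0.0) ** 2 / (2 * V_W)

# grid search over R between H*r (source exponent 0) and C (channel exponent 0)
best = max(min(r * E_s(R / r), E_c(R))
           for R in [H * r + i * (C - H * r) / 10000 for i in range(10001)])
```

Balancing the two quadratics analytically gives $\Delta^2 / (2(\sqrt{r V_S} + \sqrt{V_W})^2)$ with $\Delta = C - rH$, which the grid search reproduces.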
Loss Due to Separation
Consider the approximations
$$E_s(H(\mathbf{P}_S) + \xi) = \frac{\xi^2}{2V(\mathbf{P}_S)} + O(\xi^3), \qquad E_c(C(\mathbf{W}) - \xi) = \frac{\xi^2}{2V(\mathbf{W})} + O(\xi^3).$$
Then we can show by optimizing $\xi$ that
$$\liminf_{n\to\infty} -\frac{1}{n\varepsilon_n^2} \log e\big(f_k^{(s)}, f_k^{(c)}, \varphi_n^{(s)}, \varphi_n^{(c)}\big) \ge \frac{1}{2V_{\mathrm{sep}}(\mathbf{W}, \mathbf{P}_S)}$$
where
$$V_{\mathrm{sep}}(\mathbf{W}, \mathbf{P}_S) = V(\mathbf{W}, \mathbf{P}_S) + \frac{2\sqrt{\dfrac{C(\mathbf{W})}{H(\mathbf{P}_S)} V(\mathbf{W}) V(\mathbf{P}_S)}}{H(\mathbf{P}_S)^2}$$
Final term is the cost of separation in the MD regime
Compare to Wang-Ingber-Kochman (2011) and Kostina-Verdú (2013) for the dispersion analogue of the loss
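The cross term has a clean origin that can be checked numerically (my own sketch with toy parameters): balancing two quadratic exponents $\xi^2/(2a)$ and $(\Delta - \xi)^2/(2b)$ gives $\Delta^2 / (2(\sqrt{a} + \sqrt{b})^2)$, and expanding $(\sqrt{a} + \sqrt{b})^2 = a + b + 2\sqrt{ab}$ produces exactly the extra $2\sqrt{\cdot}$ separation-loss term.

```python
import math

# Numerical check (toy values) of the optimization behind V_sep:
#   max_xi min{ xi^2/(2a), (delta - xi)^2/(2b) } = delta^2 / (2*(sqrt(a)+sqrt(b))^2)
# and (sqrt(a)+sqrt(b))^2 = a + b + 2*sqrt(a*b), i.e. the cross term is the loss.
a, b, delta = 0.3, 0.43, 1.0

grid_best = max(min(x ** 2 / (2 * a), (delta - x) ** 2 / (2 * b))
                for x in [i * delta / 100000 for i in range(100001)])
analytic = delta ** 2 / (2 * (math.sqrt(a) + math.sqrt(b)) ** 2)
```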
Achievability Proof for SEM source + DMC
Now we show how to prove
$$\lim_{n\to\infty} -\frac{1}{n\varepsilon_n^2} \log e(f_k^*, \varphi_n^*) = \frac{1}{2V(\mathbf{W}, \mathbf{P}_S)}$$
where the JSC dispersion is
$$V(\mathbf{W}, \mathbf{P}_S) := \frac{1}{H(\mathbf{P}_S)^2}\left[V(\mathbf{W}) + \frac{C(\mathbf{W})}{H(\mathbf{P}_S)} V(\mathbf{P}_S)\right].$$
By Problem 5.16 in Gallager's book, there exists a $(k, n)$-JSC code s.t.
$$e(f_k, \varphi_n) \le \left(\sum_{s^k} P_{S^k}(s^k)^{\frac{1}{1+\tau}}\right)^{1+\tau} \sum_{y^n}\left(\sum_{x^n} P^n(x^n)\, W^n(y^n|x^n)^{\frac{1}{1+\tau}}\right)^{1+\tau}$$
for any $0 \le \tau \le 1$ and $P \in \mathcal{P}(\mathcal{X})$.
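For an i.i.d. source and a memoryless channel both factors tensorize, so the bound can be evaluated directly (my own illustration with a toy source and a BSC; all parameter values are assumptions): the first factor grows like the source Rényi term while the second decays like $\exp(-nE_o)$, and below the JSCC limit the product decays.

```python
import math

# Evaluating the Gallager-type JSCC bound above for an i.i.d. source and a BSC
# (toy memoryless illustration):
#   bound = (sum_s P_S(s)^{1/(1+tau)})^{k(1+tau)} * exp(-n * E_o(tau, P, W))
P_S = [0.8, 0.2]               # per-letter source distribution (toy)
p = 0.05                       # BSC crossover probability
P_X = [0.5, 0.5]               # uniform input is capacity-achieving for the BSC
W = [[1 - p, p], [p, 1 - p]]   # W[x][y]

def source_factor(k, tau):
    return sum(q ** (1.0 / (1 + tau)) for q in P_S) ** (k * (1 + tau))

def E_o(tau):
    return -math.log(sum(
        sum(P_X[x] * W[x][y] ** (1.0 / (1 + tau)) for x in (0, 1)) ** (1 + tau)
        for y in (0, 1)))

k, n, tau = 50, 100, 0.2       # r = k/n = 0.5, below C(W)/H(P_S) here
bound = source_factor(k, tau) * math.exp(-n * E_o(tau))
```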
Rényi Entropy and Gallager’s function
Using the reparametrization $\tau = -\theta/(1+\theta)$, we obtain
$$e(f_k, \varphi_n) \le \exp\left(-\frac{\theta(k-1)H_{1+\theta}(\mathbf{P}_S) - \delta(\theta)}{1+\theta}\right) \exp\big(-n E_o(\tau, P, W)\big)$$
where $\delta(\theta) = \mathrm{const}$ [bound by Watanabe-Hayashi (2013)]
The Rényi entropy is
$$H_{1+\theta}(\mathbf{P}_S) := -\frac{1}{\theta} \log \sum_{s,s'} P_{S,\theta}(s')\, \Gamma_{S,\theta}(s|s')$$
Gallager's channel function
$$E_o(\tau, P, W) := -\log \sum_y \left(\sum_x P(x)\, W(y|x)^{\frac{1}{1+\tau}}\right)^{1+\tau}$$
Behavior of Rényi entropy and Gallager function
In the neighborhood of $\theta, \tau = 0$, we have
$$\theta H_{1+\theta}(\mathbf{P}_S) = \theta H(\mathbf{P}_S) - \frac{\theta^2}{2} V(\mathbf{P}_S) + O(\theta^3)$$
$$E_o(\tau, P, W) = \tau I(P, W) - \frac{\tau^2}{2} V(P, W) + O(\tau^3)$$
Plug these into the Gallager bound and optimize $\tau$
$$\tau = \frac{\varepsilon_n H(\mathbf{P}_S)}{V(\mathbf{W}) + \frac{C(\mathbf{W})}{H(\mathbf{P}_S)} V(\mathbf{P}_S)}$$
to obtain
$$e(f_k, \varphi_n) \le \exp\left(-n\left[\frac{\varepsilon_n^2}{2V(\mathbf{W}, \mathbf{P}_S)} + O(\varepsilon_n^3)\right]\right)$$
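The Taylor expansion of $E_o$ can be checked by finite differences (my own sketch for a BSC at the uniform, capacity-achieving input; parameter values are toys): the first derivative at $\tau = 0$ recovers the mutual information (here the capacity) and the second derivative recovers $-V$.

```python
import math

# Finite-difference check (toy BSC) of E_o(tau) = tau*C - tau^2/2 * V + O(tau^3)
# at the uniform input, for which I(P, W) = C(W) and V(P, W) = V(W).
p = 0.05
P_X = [0.5, 0.5]
W = [[1 - p, p], [p, 1 - p]]   # W[x][y]

def E_o(tau):
    return -math.log(sum(
        sum(P_X[x] * W[x][y] ** (1.0 / (1 + tau)) for x in (0, 1)) ** (1 + tau)
        for y in (0, 1)))

C = math.log(2) - (-(1 - p) * math.log(1 - p) - p * math.log(p))  # BSC capacity
V = p * (1 - p) * math.log((1 - p) / p) ** 2                      # BSC dispersion
h = 1e-3
dE = (E_o(h) - E_o(-h)) / (2 * h)                 # ~= E_o'(0) = C
d2E = (E_o(h) - 2 * E_o(0.0) + E_o(-h)) / h ** 2  # ~= E_o''(0) = -V
```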
Converse Proof for SEM source + DMC: Source Part
$k_{ij}(s^k)$: number of transitions from $i$ to $j$ in $s^k$, with the cyclic convention
$[k_{ij}(s^k)/k]_{i,j}$ is the Markov type [Davisson et al. (1981), Natarajan (1985)] of $s^k$
Type class $T_P^k$; set of all Markov types $\mathcal{P}_k^{(2)}(\mathcal{S})$; restricted Markov type class
$$T_P^k(i,j) := \{s^k \in T_P^k : s_1 = i, s_k = j\}$$
$$\mathcal{P}_k^{(2)}(\mathcal{S}; R) := \{(i,j,P) \in \mathcal{S}^2 \times \mathcal{P}_k^{(2)}(\mathcal{S}) : |T_P^k(i,j)| \le 2^{kR}\}$$
Lemma (Source coding)
Let $P_e^*(M, \mathbf{P}_S)$ be the smallest error for source $\mathbf{P}_S$ with code size $\le M$. Then
$$P_e^*\big(|\mathcal{S}|(k+1)^{|\mathcal{S}|^2} 2^{kR}, P_{S^k}\big) \le \sum_{\substack{(i,j,P) \notin \mathcal{P}_k^{(2)}(\mathcal{S}; R) \\ T_P^k(i,j) \ne \emptyset}} \; \sum_{s^k \in T_P^k(i,j)} P_{S^k}(s^k)$$
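The cyclic convention is easy to illustrate (my own small example): the last symbol is followed by the first, so the empirical pair counts form a joint distribution whose two marginals coincide.

```python
# Markov type of a sequence with the CYCLIC convention (toy example):
# k_ij counts transitions i -> j in s^k, including a wrap-around transition
# from the last symbol back to the first.
def markov_type(s, alphabet):
    k = len(s)
    counts = {(i, j): 0 for i in alphabet for j in alphabet}
    for t in range(k):
        counts[(s[t], s[(t + 1) % k])] += 1   # index (t+1) mod k: cyclic wrap
    return {ij: c / k for ij, c in counts.items()}

T = markov_type([0, 0, 1, 1, 0], [0, 1])
```

Thanks to the wrap-around count, the type is a bona fide joint distribution with equal row and column marginals, which is what makes the method-of-types machinery work for Markov chains.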
Converse Proof: Joint Error + Source Error Approx
Let $P_e^*(M, \mathbf{W})$ be the smallest error for channel $\mathbf{W}$ with $M$ codewords.
Lemma (Source-channel coding)
For any $R \ge 0$, we have
$$e(f_k^*, \varphi_n^*) \ge P_e^*\big(|\mathcal{S}|(k+1)^{|\mathcal{S}|^2} 2^{kR}, P_{S^k}\big) \cdot P_e^*(2^{kR}, W^n)$$
First term represents the source error; second term the channel error
Lemma (Approximation of source exponent)
Let $R = H(\mathbf{P}_S) + \delta\varepsilon_n$. For small $\xi > 0$ and $n \gg 1$,
$$-\log P_e^*\big(|\mathcal{S}|(k+1)^{|\mathcal{S}|^2} 2^{kR}, P_{S^k}\big) \le \frac{(\delta + \xi)^2}{2V(\mathbf{P}_S)}\, k\varepsilon_n^2$$
Converse Proof: Channel Error Approx
By Wolfowitz's strong converse and Haroutunian's sphere-packing bound [cf. Altug-Wagner (2014)], we have:
Lemma (Approximation of channel exponent)
For every $\gamma > 0$ and $\psi < \infty$, for all $n$ large enough,
$$-\log P_e^*(M, W^n) \le \frac{n}{1-\gamma}\left[E\left(\frac{\log M}{n} - \frac{\psi}{\sqrt{n}}\right) + \frac{1}{n}\right]$$
In the rest of the proof, we choose
$$\log M = k\big(H(\mathbf{P}_S) + \delta\varepsilon_n\big)$$
Plug the approximations of the source and channel exponents into the source-channel bound
Optimize the constant $\delta$ and take $\xi, \gamma \downarrow 0$ and we are done
Remarks and Conclusion
Easy to extend the analysis to discrete channels with SEM noise using ideas from Watanabe-Hayashi (2013)
Considered a 2-class partition into $\mathcal{P}_k^{(2)}(\mathcal{S}; R)$ and its complement; cf. Tauste-Campo et al. (2014), Shkel-Tan-Draper (2014)
Zhong-Alajaji-Campbell (2007) considered a finer partition into $(k+1)^{|\mathcal{S}|^2}$ source type classes, which, if used here, would require
$$\frac{n\varepsilon_n^2}{\log n} \to \infty,$$
a weaker result.
Future work:
CLT regime for Markov JSC
Lossy JSC