
A relaying graph and special strong product for zero-error problems in primitive relay channels

Meysam Asadi∗, Kenneth Palacio-Baus∗†, Natasha Devroye∗
∗University of Illinois at Chicago, {masadi, kpalac2, devroye}@uic.edu

†University of Cuenca, Ecuador

Abstract—A primitive relay channel (PRC) has one source (S) communicating a message to one destination (D) with the help of a relay (R). The link between R and D is considered to be noiseless, of finite capacity, and parallel to the link between S and (R,D). Prior work has established, for any fixed number of channel uses, the minimal R-D link rate needed so that the overall S-D message rate equals the zero-error single-input multiple-output outer bound (Problem 1). The zero-error relaying scheme was expressed as a coloring of a carefully defined "relaying compression graph". It is shown here that this relaying compression graph for n channel uses is not obtained as a strong product from its n = 1 instance. Here we define a new graph, the "primitive relaying graph", and a new "special strong product" such that the n-channel use primitive relaying graph corresponds to the n-fold special strong product of the n = 1 graph. We show how the solution to Problem 1 can be obtained from this new primitive relaying graph directly. Further study of this primitive relaying graph has the potential to highlight the structure of optimal codes for zero-error relaying.

I. INTRODUCTION AND CONTRIBUTION

A primitive relay channel (X, p(y, yR|x), Y × Y_R, k) (PRC), shown in Fig. 1, consists of a family of conditional probability mass functions p(y, yR|x) relating inputs x ∈ X to outputs y ∈ Y and yR ∈ Y_R at the destination and relay respectively, and an out-of-band link between the relay and destination able to support up to k ∈ R+ bits/channel use.

Quantities of interest may then be the maximal number of codewords (the message rate) that can be reliably communicated for a given R-D link rate k (the relay rate), or the minimal relay rate needed to transmit at a desired message rate (provided the desired message rate is feasible at all). When the R-D link rate k is large enough, the relay can forward its entire observation to the destination terminal. Thus, the primitive relay channel effectively turns into a point-to-point channel with a single input and two outputs, say (X, p(y, yR|x), Y × Y_R), for which we have finite n (or n-shot, where n is the number of channel uses) and asymptotic expressions (though they cannot currently be calculated / evaluated) for the zero-error capacity. This capacity is an upper bound to the message rate achievable for any finite relay rate. The question (Problem 1) posed and solved in [1] is, for fixed number of channel uses n, the minimal relay rate needed to ensure that the overall message rate achieves the capacity of the single-input multiple-output (SIMO) channel (X, p(y, yR|x), Y × Y_R).

The work of the authors was partially funded by NSF under awards 1645381 and 1053933. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the NSF.

The small-error version of this question was considered in [2], [3], [4] and remains open. Recent work by Wu and Özgür [5] has shown that, in general, the cut-set bound is loose for PRCs.

Contributions. The zero-error relaying scheme solving Problem 1 was expressed in [1] as a coloring of a carefully defined "relaying compression graph". It is shown here that this relaying compression graph for n channel uses cannot be obtained by a strong product from its n = 1 instance (shown in Example 3, once the required notation is in place). In seeking to understand the structure of zero-error relaying, one question is whether a graph characterizing this form of zero-error relaying admits any strong product form at all. We answer this in the positive here through the definition of a new graph, the "primitive relaying graph" (PRG), and a new "special strong product" such that the n-channel use PRG corresponds to the n-fold special strong product of the n = 1 PRG. We show how the solution to Problem 1 expressed as in [1] can be obtained from this new primitive relaying graph. For binary ||X|| = ||Y|| = ||Y_R|| we solve Problem 1 exactly. We provide several examples of how to use the newly proposed PRG and the special strong product. We note that due to space constraints, all proofs are relegated to the Appendix in the extended version of this manuscript, available at https://www.ece.uic.edu/bin/view/Devroye/Publications.

[Fig. 1 depicts the PRC: the source S sends X^n; the relay R observes Y_R^n and forwards W_R := h(Y_R^n) ∈ {1, · · · , ||W_R||} to the destination D over a perfect out-of-band link of capacity k; the destination observes Y^n and decodes X̂ := g(Y^n, W_R), where X ∈ X ⊆ X^n.]

Fig. 1. An n-shot protocol (n, X, h, g) for zero-error communication over a PRC: codebook X, relaying function h, decoding function g.

II. PROBLEM DEFINITION

An n-shot protocol for n ≥ 1 channel uses, denoted by (n, X, h, g), for communication over the PRC (X, p(y, yR|x), Y × Y_R, k) shown in Figure 1 comprises a codebook X ⊆ X^n, a relaying function h : Y_R^n → W_R, and a decoding function g : Y^n × W_R → X.

When a conditional joint pmf p(y, yR|x) with support X and output Y × Y_R is restricted to input K, we denote its induced conditional pmf, support, and output by p_K(y, yR|x), K and Y|_K × Y_R|_K respectively.

Let R_z^{(n)} := (1/n) log ||X|| be the message rate, and r_z^{(n)} := (1/n) log ||W_R|| be the relay rate. Then, a rate pair (R_z^{(n)}, r_z^{(n)}) is said to be achievable if first, zero-error communication is attainable, i.e. Pr[g(Y^n, W_R) ≠ X | X sent] = 0 for all X ∈ X, and second, there exists an n-shot protocol (n, X, h, g) such that the relaying function uses less than k bits per channel use on average, i.e. r_z^{(n)} := (1/n) log ||W_R|| ≤ k. For n channel uses, we set C_z^{(n)}(k) to be the maximum R_z^{(n)} such that there exists an achievable (R_z^{(n)}, r_z^{(n)}) pair (i.e. for which r_z^{(n)} ≤ k). The zero-error capacity of the relay channel at R-D link rate k, defined as C_z(k), is the supremum over n of C_z^{(n)}(k).

Some notation. We call one usage of the channel "one-shot", and this is governed by p(y, yR|x). If we use the channel twice, we use the notation p(y1 y2, yR1 yR2 | x1 x2) = p(y1, yR1 | x1) · p(y2, yR2 | x2) to denote two channel uses.

A graph G(V, E) consists of a set V of vertices (sometimes denoted as V(G)) or nodes together with a set E of edges (sometimes denoted as E(G)), which are two-element subsets of V. Two nodes connected by an edge are called adjacent. We will usually drop the V, E indices in G(V, E). Let ≅ denote the isomorphism relation between two graphs. The adjacency matrix A of a graph G is a matrix with rows and columns labeled by graph vertices, with a 1 or 0 in position (vi, vj) indicating whether vi and vj are adjacent or not. Our convention is to put 0's on the diagonal (nodes are not adjacent to themselves). The strong product G ⊠ H of two graphs G and H is defined as the graph with vertex set V(G ⊠ H) = V(G) × V(H), in which two distinct vertices (g, h) and (g', h') are adjacent iff g is adjacent or equal to g' in G and h is adjacent or equal to h' in H. G^{⊠n} denotes the strong product of n copies of G.

A. Problem 1: Minimal R-D link rate r_z^{*(n)} needed to achieve the SIMO bound for a fixed n

For fixed number of channel uses n, by giving the destination the relay output y_R^n, one obtains the single-input multiple-output outer bound SIMO^{(n)} on C_z^{(n)}(k):

$$ \mathrm{SIMO}^{(n)} := \log \sqrt[n]{\alpha\big([G_{X|Y,Y_R}]^{\boxtimes n}\big)}, $$

where the confusability graph G_{X|Y,Y_R} has as vertices the input alphabet X and an edge between x ≠ x' if there exists a (y, yR) ∈ Y × Y_R such that p(y, yR|x) · p(y, yR|x') > 0. Clearly, if k ≥ log ||Y_R|| this bound is achievable by simply having the relay forward the received Y_R^n sequence perfectly over the out-of-band link. An interesting question is how small this link rate k may be so as to hit this SIMO^{(n)} upper bound, i.e. to find r_z^{*(n)} defined as:

Definition 1 (The n-shot minimum R-D link rate r_z^{*(n)} to achieve the n-shot SIMO bound).

$$ r_z^{*(n)} := \min\{r_z : C_z^{(n)}(r_z) = \mathrm{SIMO}^{(n)}\}. \qquad (1) $$

This was solved for fixed n in [1], where r_z^{*(n)} is exactly characterized as an n-letter extension of the one-shot "color-and-forward" scheme, outlined next.

Problem 1 can be re-stated and solved as follows. For the PRC (X, p(y, yR|x), Y × Y_R, log ||Y_R||) (where the R-D link rate is initially set to log ||Y_R|| for notational convenience), let K be a maximal independent set of the confusability graph G_{X|Y,Y_R}. Note that K need not be unique.
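As a small, concrete illustration (ours, not from [1]; the helper names and the (x, y, yR) support-set encoding are assumptions), the following Python sketch builds the one-shot confusability graph G_{X|Y,Y_R} and enumerates its maximal independent sets by brute force; the support set below is one reading of the pmf of Example 1, for which the two sets {1, 3} and {2, 3} indeed come out.

```python
# Sketch: one-shot confusability graph G_{X|Y,YR} and its maximal independent
# sets, by brute force.  `support` holds the (x, y, yR) triples with
# p(y, yR | x) > 0; the instance below is one reading of Example 1's pmf.
from itertools import combinations

def confusability_edges(X, Y, YR, support):
    """x ~ x' iff some (y, yR) is possible under both inputs."""
    return {(x, xp) for x, xp in combinations(sorted(X), 2)
            if any((x, y, yR) in support and (xp, y, yR) in support
                   for y in Y for yR in YR)}

def maximal_independent_sets(X, edges):
    """All maximal independent sets (exhaustive; small graphs only)."""
    def ok(S):
        return all((a, b) not in edges and (b, a) not in edges
                   for a, b in combinations(S, 2))
    ind = [set(S) for r in range(1, len(X) + 1)
           for S in combinations(sorted(X), r) if ok(S)]
    return [S for S in ind if not any(S < T for T in ind)]

support = {(1, 1, 1), (1, 1, 2), (1, 2, 2),    # x = 1
           (2, 1, 2), (2, 2, 3),               # x = 2
           (3, 1, 3), (3, 2, 1)}               # x = 3
E = confusability_edges({1, 2, 3}, {1, 2}, {1, 2, 3}, support)
print(E, maximal_independent_sets({1, 2, 3}, E))   # {(1, 2)}  [{1, 3}, {2, 3}]
```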

Question. Find the minimum cardinality relaying function h : Y_R|_K → W_R (and its cardinality) such that (Y, h(Y_R)) can distinguish X without error, namely, for each (y, h(yR)) with positive probability, there exists at most one x ∈ K for which p(y, h(yR)|x) > 0. This is a reformulation of the problem of finding r_z^{*(1)} for the PRC (K, p_K(y, yR|x), Y|_K × Y_R|_K, log ||Y_R||). To answer this, we construct a new graph, the relaying compression graph G_R^{(1)}|_K, for which a coloring provides the optimal relaying scheme. G_R^{(1)}|_K has:
• Vertices: Y_R|_K
• Edges: vertices yR ≠ y'R, both in Y_R|_K, share an edge when ∃x ∈ K and ∃x' ∈ K, x ≠ x', such that p(y, yR|x) > 0 and p(y, y'R|x') > 0 for some y ∈ Y|_K.

Answer: Coloring of the relaying compression graph G_R^{(1)}|_K. Color the graph at its chromatic number and let h(yR) be one of the corresponding minimal colorings. We term this processing and forwarding at the relay as "color-and-forward" relaying. It is easy to see how G_R^{(1)}|_K may be extended to n channel uses to obtain G_R^{(n)}|_{K^{(n)}}. Then, [1] showed:

Theorem 1. (Chen, Devroye [1]) For the PRC,

$$ r_z^{*(n)} = \min_{K^{(n)}:\ K^{(n)} \text{ is a max. ind. set of } G_{X|Y,Y_R}^{\boxtimes n}} \ \frac{1}{n}\log \chi\big(G_R^{(n)}|_{K^{(n)}}\big), \qquad (2) $$

where χ(G_R^{(n)}|_{K^{(n)}}) is the chromatic number of graph G_R^{(n)}|_{K^{(n)}}, constructed via the algorithm described in the Answer above with restricted input / codebook K^{(n)}.
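The one-shot color-and-forward step can be mimicked in a few lines. The sketch below (hypothetical helper names, small alphabets only, same (x, y, yR) support encoding as above) builds the compression graph restricted to an independent set K and finds an exact minimum coloring by exhaustive search; the relaying function is then h(yR) = color of yR.

```python
# Sketch: relaying compression graph G_R^(1)|_K and a minimum coloring
# (exhaustive search, so only for tiny graphs).
from itertools import product

def compression_edges(K, Y, YR, support):
    """y_R ~ y_R' iff two distinct inputs in K can produce them with a common y."""
    verts = {yR for yR in YR
             if any((x, y, yR) in support for x in K for y in Y)}
    edges = {(a, b) for a in verts for b in verts if a < b and any(
        x != xp and (x, y, a) in support and (xp, y, b) in support
        for x in K for xp in K for y in Y)}
    return verts, edges

def min_coloring(verts, edges):
    """First proper coloring found at the smallest number of colors."""
    vs = sorted(verts)
    for k in range(1, len(vs) + 1):
        for asg in product(range(k), repeat=len(vs)):
            col = dict(zip(vs, asg))
            if all(col[u] != col[v] for u, v in edges):
                return col          # color-and-forward: h(y_R) := col[y_R]
    return {}
```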

III. PRIMITIVE RELAYING GRAPH

The main contribution is the introduction of the "primitive relaying graph" (PRG), and a new "special strong product" (SSP) that allows the graph G_R^{(n)} needed to describe the optimal relaying scheme (optimal in the sense of Problem 1) to be computed recursively.

In [1], given an independent set K of confusability graph G_{X|Y,Y_R}, the relay compression graph G_R^{(1)}|_K places an edge between vertices yR ≠ y'R, both in Y_R|_K, when ∃x ∈ K and ∃x' ∈ K, x ≠ x', such that p(y, yR|x) > 0 and p(y, y'R|x') > 0 for some y ∈ Y|_K. For n = 2, an edge between vertices yR1 yR2 ≠ y'R1 y'R2, both in Y_R^{(2)}|_{K^{(2)}}, exists in G_R^{(2)}|_{K^{(2)}} when ∃ x1 x2 ∈ K^{(2)} and ∃ x'1 x'2 ∈ K^{(2)}, x1 x2 ≠ x'1 x'2, such that both p(y1 y2, yR1 yR2 | x1 x2) > 0 and p(y1 y2, y'R1 y'R2 | x'1 x'2) > 0 for some y1 y2 ∈ Y^2|_{K^{(2)}}. The question is whether G_R^{(2)} can be obtained from a strong product of G_R^{(1)} with itself. Example 3 shows that this is not possible even when the confusability graph is edge-free, the main reason being that the regular strong product of the compression graph ignores the delicate interaction between the yR's and the channel inputs (it is the inputs we want to distinguish at the destination, not the yR's).

A. Graph Definition

From the PRC (X, p(y, yR|x), Y × Y_R, k), we introduce the Primitive Relaying Graph (PRG), denoted by G_{X,Y_R|Y}. The vertices of G_{X,Y_R|Y} consist of a set V ⊆ X × Y_R of pairs v = (x, yR) corresponding to a given input x ∈ X and a relay channel output yR ∈ Y_R such that p(y, yR|x) > 0 for some y ∈ Y. Let Vx and V_{yR} denote the sets of all the x-coordinates and yR-coordinates of V, respectively. The edge set E consists of pairs of vertices e = (v, v') ≜ (x, yR)(x', y'R) connected by an edge in the PRG, and is given by E = Es ∪ Ed ∪ Ec, where Es, Ed, Ec are mutually disjoint sets defined as:

1) Solid edge set Es ≜ {(x, yR)(x', y'R) : x ≠ x', yR ≠ y'R and ∃y : p(y|x, yR) · p(y|x', y'R) > 0}. A solid edge occurs whenever two different channel inputs x, x' produce different relay outputs yR, y'R for some y. These yR's cannot be compressed.
2) Dotted edge set Ed ≜ {(x, yR)(x, y'R) : yR ≠ y'R and ∃y : p(y|x, yR) · p(y|x, y'R) > 0}. This type of edge arises when, for a single channel input x and channel output y, there are two different relay outputs yR and y'R. These yR's could in theory be compressed / assigned the same color for a single channel use, but must be distinguished because, over multiple channel uses, a dotted edge combined with a confusable or solid edge may lead to a solid edge, in which the yR's must be distinguished.
3) Confusable edge set Ec ≜ {(x, yR)(x', yR) : x ≠ x' and ∃y : p(y|x, yR) · p(y|x', yR) > 0}. A confusable edge corresponds to the case of two different channel inputs x, x' that produce the same relay output yR for the same channel output y. These x's cannot be distinguished even when y and yR are both available. These edges will yield the confusability graph G_{X|Y,Y_R}.

Our notation for the graph G_{X,Y_R|Y} is reminiscent of the confusability graph notation in point-to-point channels, G_{X|Y}, where an edge between two vertices x ≠ x' in X exists if there exists a y : p(y|x) p(y|x') > 0 (i.e. two inputs are confusable). In the above, we have similar notions for confusable (x, yR) pairs, but since the destination, who has access to y, cares only about recovering x and not necessarily yR (or only to the extent that it may help in recovering x), different types of edges are relevant. To the best of our knowledge, this type of graph has not been considered before.
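A sketch of this edge classification (again using the assumed (x, y, yR) support-set encoding; the function name is ours):

```python
# Sketch: PRG vertex set and the three edge sets of Section III-A.
from itertools import combinations

def prg(Y, support):
    """Return V, E_s, E_d, E_c of the PRG G_{X,YR|Y}."""
    V = {(x, yR) for (x, y, yR) in support}
    Es, Ed, Ec = set(), set(), set()
    for (x, yR), (xp, yRp) in combinations(sorted(V), 2):
        # some destination output y possible under both pairs?
        if not any((x, y, yR) in support and (xp, y, yRp) in support for y in Y):
            continue
        if x != xp and yR != yRp:
            Es.add(((x, yR), (xp, yRp)))     # solid
        elif x == xp and yR != yRp:
            Ed.add(((x, yR), (xp, yRp)))     # dotted
        else:                                # x != xp and yR == yRp
            Ec.add(((x, yR), (xp, yRp)))     # confusable
    return V, Es, Ed, Ec
```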

B. Special Strong Product

Due to the presence of several types of edges in the PRG, we introduce a new graph product, which we term the "special strong product" (SSP) and denote by ⊛, for two PRGs G and H (G ⊛ H), as follows (following the graph product definitions of [6]):
• V(G ⊛ H) is the Cartesian product of V(G) and V(H).
• For any PRG G, there is an incidence function δ_G : V(G) × V(G) → {∆, s, d, c, 0} defined as

$$ \delta_G\big((x_1, y_{R_1}), (x_2, y_{R_2})\big) = \begin{cases} \Delta & \text{if } (x_1, y_{R_1}) = (x_2, y_{R_2}) \\ s & \text{if } (x_1, y_{R_1})(x_2, y_{R_2}) \in E_s \\ d & \text{if } (x_1, y_{R_1})(x_2, y_{R_2}) \in E_d \\ c & \text{if } (x_1, y_{R_1})(x_2, y_{R_2}) \in E_c \\ 0 & \text{otherwise.} \end{cases} $$

The incidence function between two vertices indicates what type of edge exists between them (if any).
• The SSP multiplication table for the ⊛ operation is:

⊛ | ∆ s d c 0
∆ | ∆ s d c 0
s | s s s s 0
d | d s d s 0
c | c s s c 0
0 | 0 0 0 0 0

Notice that ∆ behaves like a multiplicative identity.
• If (x1, yR1), (x'1, y'R1) ∈ V(G) and (x2, yR2), (x'2, y'R2) ∈ V(H), then the incidence function for G ⊛ H is

$$ \delta_{G \circledast H}\big((x_1 x_2, y_{R_1} y_{R_2}), (x'_1 x'_2, y'_{R_1} y'_{R_2})\big) := \delta_G\big((x_1, y_{R_1}), (x'_1, y'_{R_1})\big) \circledast \delta_H\big((x_2, y_{R_2}), (x'_2, y'_{R_2})\big). $$

This incidence function indicates the type of edge of G ⊛ H: 0 indicates no edge, s a solid, d a dotted, and c a confusable edge.

All finite graphs can be represented by their adjacency matrix. Usually, this matrix consists of binary elements. However, given the distinct types of edges defined for the PRG, the adjacency matrix of a PRG must also carry this information. We do this by allowing elements of the form 0, s, d, or c, to indicate no, solid, dotted, or confusable edges, respectively. Moreover, the elements on the diagonal are all zeros by convention, and given that this is an undirected graph, the adjacency matrix is symmetric. An example of an adjacency matrix of a PRG can be seen in Equation (5). We denote the adjacency matrix of G_{X^n, Y_R^n | Y^n} by A_{G_{X^n, Y_R^n | Y^n}}.

Properties of SSP. It is easy to verify that the SSP is commutative, associative, distributive over a disjoint union, and that K is a unit for the SSP if |V(K)| = 1. In addition, the SSP reduces to the regular strong product when all edges are of the "s" type (in Es), i.e. if Ed(G) = Ec(G) = Ed(H) = Ec(H) = ∅, then G ⊛ H ≅ G ⊠ H.
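These properties rest on the multiplication table itself; the following sketch (with 'D' standing for ∆) brute-forces the table's commutativity, associativity, and the identity role of ∆, which underlie the graph-level statements above.

```python
# Sketch: brute-force check of the SSP table ('D' stands for Delta).
TABLE = {('D', 'D'): 'D', ('D', 's'): 's', ('D', 'd'): 'd', ('D', 'c'): 'c',
         ('D', '0'): '0', ('s', 's'): 's', ('s', 'd'): 's', ('s', 'c'): 's',
         ('s', '0'): '0', ('d', 'd'): 'd', ('d', 'c'): 's', ('d', '0'): '0',
         ('c', 'c'): 'c', ('c', '0'): '0', ('0', '0'): '0'}

def ssp(a, b):
    """Table lookup, filled in symmetrically."""
    return TABLE.get((a, b)) or TABLE[(b, a)]

elems = ['D', 's', 'd', 'c', '0']
assert all(ssp(a, b) == ssp(b, a) for a in elems for b in elems)   # commutative
assert all(ssp(ssp(a, b), c) == ssp(a, ssp(b, c))
           for a in elems for b in elems for c in elems)           # associative
assert all(ssp('D', a) == a for a in elems)                        # Delta = identity
print("SSP table: commutative, associative, identity Delta")
```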

C. PRC related subgraphs and operations

We show how G_{X^n|Y^n,Y_R^n} and G_R^{(n)} from [1] may be obtained from the PRG and its SSP. Given PRG G_{X,Y_R|Y}(V, E), consider the subgraph Hc defined as V(Hc) = V(G_{X,Y_R|Y}) and E(Hc) = Ec(G_{X,Y_R|Y}) ⊆ E(G_{X,Y_R|Y}). Then one can show the following, which can be extended to any n.

Lemma 2. Graph Hc is px-homomorphic to G_{X|Y,Y_R}, where px : V(G_{X,Y_R|Y}) → Vx is the projection px(x, yR) = x.

For an independent set K ⊆ Vx(G_{X|Y,Y_R}), let Hs|K denote the subgraph with V(Hs|K) = {(x, yR) ∈ V : x ∈ K} and E(Hs|K) = {(x, yR)(x', y'R) ∈ Es : x ∈ K}.

Lemma 3. Given independent set K for confusability graph G_{X|Y,Y_R}, graph Hs|K is p_{yR}-homomorphic to the relaying compression graph G_R|_K, where p_{yR} : V → V_{yR} is the projection p_{yR}(x, yR) = yR.

Lemma 4. Given PRG G_{X,Y_R|Y}, for any n ≥ 2,

$$ G_{X^n, Y_R^n | Y^n} = G_{X,Y_R|Y}^{\circledast n}, \qquad (3) $$

where G_{X,Y_R|Y}^{⊛n} denotes the n-fold special strong product of PRG G_{X,Y_R|Y} and mimics the notation G^{⊠n} for the conventional n-fold strong product of G. For adjacency matrices:

$$ A^{(n)}_{G_{X,Y_R|Y}} = \big(A^{(1)}_{G_{X,Y_R|Y}} + \Delta I\big) \otimes \big(A^{(n-1)}_{G_{X,Y_R|Y}} + \Delta I\big) - \Delta I, \qquad (4) $$

where A^{(n)}_G denotes the adjacency matrix of the n-fold SSP of G. The above shows that A_{G_{X^n, Y_R^n | Y^n}} = A^{(n)}_{G_{X,Y_R|Y}}.

We note that in the Kronecker product in (4), multiplication follows the SSP multiplication table. We also note that if A_Γ and A_Φ are the adjacency matrices of graphs Γ and Φ, then the adjacency matrix of the (regular) strong product may be obtained from these as (A_Γ + I) ⊗ (A_Φ + I) − I. Our special strong product is of a similar form, with the identity matrix part changed to ∆I (for ∆ defined in the incidence function) and multiplication replaced by the SSP multiplication table.
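A sketch of the recursion in (4), with the table written as a compact rule (0 absorbs, ∆ acts as the identity, equal edge types reproduce themselves, and any two distinct types among s, d, c multiply to s); the function names are ours, not the paper's.

```python
# Sketch of recursion (4): Kronecker-style product of PRG adjacency matrices
# with entries in {'D','s','d','c','0'} ('D' = Delta), entrywise multiplication
# following the SSP table.
def ssp(a, b):
    """Compact SSP rule: 0 absorbs, Delta is identity, equal types persist,
    any two distinct types among s, d, c give s."""
    if a == '0' or b == '0':
        return '0'
    if a == 'D':
        return b
    if b == 'D':
        return a
    return a if a == b else 's'

def plus_delta_I(A):
    """Put Delta on the (zero) diagonal of a PRG adjacency matrix."""
    return [['D' if i == j else A[i][j] for j in range(len(A))]
            for i in range(len(A))]

def ssp_kron(A, B):
    """(A + Delta I) (x) (B + Delta I) - Delta I, as in (4)."""
    Ad, Bd, m = plus_delta_I(A), plus_delta_I(B), len(B)
    n = len(A) * m
    C = [[ssp(Ad[i // m][j // m], Bd[i % m][j % m]) for j in range(n)]
         for i in range(n)]
    for i in range(n):
        C[i][i] = '0'    # subtracting Delta I returns the diagonal to 0
    return C
```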

Example 1. Given the conditional probability mass function p(y|x, yR) shown in Fig. 2, we obtain the PRG G_{X,Y_R|Y} using the definition in Section III-A. This example then shows how this PRG can be used to obtain G_{X|Y,Y_R} (Lemma 2) and G_{Y_R|K_i} (Lemma 3), needed to solve Problem 1 as in [1] and Theorem 1. Here, there are two independent sets of G_{X|Y,Y_R}: K1 = {1, 3} and K2 = {2, 3}. From these, Lemma 3 and the PRG allow one to obtain the two compression graphs G_{Y_R|K_i}, for i = 1, 2, as in Fig. 2.

[Fig. 2 is a composite figure collecting, for the channel of Example 1: the conditional pmf support table s(p(y, yR|x)) with X = {1, 2, 3}, Y_R = {1, 2, 3}, Y = {1, 2}; the PRG G_{X,Y_R|Y} built from it via Section III-A, on vertices (1,1), (1,2), (2,2), (2,3), (3,1), (3,3) with solid, dotted (d) and confusable (c) edges; the subgraph Hc and the confusability graph G_{X|Y,Y_R} obtained via Lemma 2; and the subgraphs Hs|Ki with the associated relaying compression graphs G_R|Ki obtained via Lemma 3 for K1 = {1, 3} and K2 = {2, 3}.]

Fig. 2. Construction of the PRG G_{X,Y_R|Y} from a probability mass function p(y, yR|x) for a primitive relay channel, and how to obtain graphs Hc and G_{X|Y,Y_R} (Lemma 2) and graphs Hs|Ki and associated G_R|Ki (Lemma 3).

Example 2. This example shows how to use Lemma 4 to obtain the PRG iteratively for any n. Consider the channel transition probability matrix presented in Table I. The adjacency matrix for the PRG G_{X,Y_R|Y} is (5).

TABLE I
CONDITIONAL PROBABILITY MASS FUNCTION: p(y|x, yR), WHERE s(p(y|x, yR)) = ∗ WHEN p(y|x, yR) > 0 AND 0 ELSE.

s(p(y, yR|x)):        X = 1                X = 2
                 yR = 1  2  3  4      yR = 1  2  3  4
      y = 1:          ∗  0  ∗  0           0  ∗  0  ∗
      y = 2:          0  0  0  0           0  0  0  0
      y = 3:          ∗  0  ∗  0           0  ∗  0  ∗

$$ A^{(1)}_{G_{X,Y_R|Y}} = \begin{array}{c|cccc} & (1,1) & (1,3) & (2,2) & (2,4) \\ \hline (1,1) & 0 & d & s & s \\ (1,3) & d & 0 & s & s \\ (2,2) & s & s & 0 & d \\ (2,4) & s & s & d & 0 \end{array} \qquad (5) $$

Then, A^{(n)}_{G_{X,Y_R|Y}} can be computed for any n, according to Lemma 4, and in particular Equation (4). We use the short notation A' to refer to (A^{(n-1)}_{G_{X,Y_R|Y}} + ∆I). For this example, given the adjacency matrix (5), A^{(n)}_{G_{X,Y_R|Y}} is given by

$$ A^{(n)}_{G_{X,Y_R|Y}} = \begin{bmatrix} A' & dA' & sA' & sA' \\ dA' & A' & sA' & sA' \\ sA' & sA' & A' & dA' \\ sA' & sA' & dA' & A' \end{bmatrix} - \Delta I \qquad (6) $$

We notice in the above that only diagonal elements are subtracted (with the convention that s − 0 = s, c − 0 = c, d − 0 = d and ∆ − ∆ = 0), and that the diagonal of the above matrix is indeed zero.
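For concreteness, a self-contained rerun of the earlier ssp_kron sketch on the matrix in (5) (our symbol encoding, with 'D' for ∆ and vertex order (1,1), (1,3), (2,2), (2,4)):

```python
# Self-contained rerun of recursion (4)/(6) on the matrix (5).
def ssp(a, b):
    if a == '0' or b == '0':
        return '0'
    if a == 'D':
        return b
    if b == 'D':
        return a
    return a if a == b else 's'

def next_A(A1, Aprev):
    """One step of (4): (A1 + Delta I) (x) (Aprev + Delta I) - Delta I."""
    diag = lambda M: [['D' if i == j else M[i][j] for j in range(len(M))]
                      for i in range(len(M))]
    A1d, Apd, m = diag(A1), diag(Aprev), len(Aprev)
    n = len(A1) * m
    C = [[ssp(A1d[i // m][j // m], Apd[i % m][j % m]) for j in range(n)]
         for i in range(n)]
    for i in range(n):
        C[i][i] = '0'
    return C

A1 = [['0', 'd', 's', 's'],
      ['d', '0', 's', 's'],
      ['s', 's', '0', 'd'],
      ['s', 's', 'd', '0']]
A2 = next_A(A1, A1)      # 16 x 16; its 4 x 4 blocks follow the pattern in (6)
print(len(A2), A2[0][:8])
```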

Once the PRG graph is obtained for a particular n, it is possible to generate the confusability graph G_{X^n|Y^n,Y_R^n} and all the corresponding compression graphs for each independent set Ki found in this graph. The benefit comes from having a recursive, compact way to build up the n-fold graphs needed. As a future potential benefit still under exploration, we believe this graph and SSP may point out the structure of optimal relaying / compression schemes.

IV. USING THE PRG TO SOLVE FOR r_z^{*(n)}

In this section we show how the SSP can be used to obtain the n-shot relaying compression graph G_R^{(n)}|_K for any independent set K^{(n)} when n ≥ 1, hence solving Problem 1 more easily than in [1] due to the smaller, recursively defined PRG needed to compute the quantities in Corollary 1.

As an aside before tackling general alphabets, for binary primitive relay channels where ||X|| = ||Y_R|| = ||Y|| = 2, we are able to show the following. A binary PRC is called trivial if α(G_{X^n|Y^n,Y_R^n}) = 1 (the capacity of the PRC is zero) or α(G_{X^n|Y^n}) = 2^n (relaying is useless). The following proposition and the definition of non-trivial PRCs imply that, for binary alphabets and any n, the relay is either not used at all, or simply forwards its received signal, in order to be optimal in the sense of Problem 1.

Proposition 5. For any non-trivial binary PRC, no compression is possible, i.e. r_z^{*(n)} = 1 for any n.


Algorithm 1 SSP Algorithm for obtaining r_z^{*(n)}

1: Input: p(y|yR, x)
2: Output: r_z^{*(i)} and R_z^{(i)} for 1 ≤ i ≤ n.
3: Derive PRG G_{X,Y_R|Y} corresponding to p(y|yR, x)
4: for (i = 1 to i ≤ n) do
5:   Initialize r_z^{(i)} = i log |Y_R| and R_z^{(i)} = log |X|.
6:   Compute the i-fold SSP (G_{X,Y_R|Y})^{⊛i}
7:   Obtain G_{X^i|Y^i,Y_R^i} = p_x(Hc(G_{X,Y_R|Y})^{⊛i})
8:   for each max. ind. set K^{(i)} of G_{X^i|Y^i,Y_R^i} do
9:     Compute G_R^{(i)}|_{K^{(i)}} = p_{y_R}(Hs(G_{K^{(i)},Y_R|Y})^{⊛i})
10:    if (1/i) log χ(G_R^{(i)}|_{K^{(i)}}) < r_z^{(i)} then
11:      r_z^{(i)} = (1/i) log χ(G_R^{(i)}|_{K^{(i)}}), R_z^{(i)} = (1/i) log |K^{(i)}|.
12:    end if
13:  end for
14: end for

Algorithm 1 shows a greedy approach to obtain r_z^{*(n)} for any finite n. It mimics that in [1], but instead of computing G_{X^i|Y^i,Y_R^i} and G_R^{(i)}|_{K^{(i)}} directly, they are obtained more easily from projections of the PRG (via Lemmas 2 and 3). The benefit is that G_{X,Y_R|Y}^{⊛i} is obtained as a special strong product, whereas G_R^{(i)}|_{K^{(i)}} must be obtained directly from the channel for each i and for each independent set K^{(i)} of G_{X^i|Y^i,Y_R^i}. The computation of G_{X,Y_R|Y}^{⊛i} requires about the same computational complexity as a conventional strong product and may be iteratively computed. For any i, G_{X^i|Y^i,Y_R^i} = p_x(Hc(G_{X,Y_R|Y})^{⊛i}) and G_R^{(i)}|_{K^{(i)}} = p_{y_R}(Hs(G_{K^{(i)},Y_R|Y})^{⊛i}) may be obtained from G_{X,Y_R|Y}^{⊛i} by simple projections. The complexity is thus reduced, and the structure of the problem is simplified.

We next obtain a lower bound on r_z^{*(n)} directly from the PRG, avoiding the need to repeatedly project.

Proposition 6. Given an independent set K^{(n)} of G_{X^n|Y^n,Y_R^n}, let Hs(G_{K^{(n)},Y_R|Y})^{⊛n} be the PRG restricted to input set K^{(n)}. Then, r_z^{*(n)} ≥ (1/n) log χ(Hs(G_{K^{(n)},Y_R|Y})^{⊛n}).

Example 3. This example illustrates a channel for which r_z^{*(2)} < r_z^{*(1)}, and for which the relaying compression graph G_R|_K ⊠ G_R|_K is a strict subgraph of G_R^{(2)}|_{K×K}. This illustrates the need for a new special strong product and that the relaying compression graph does not obey the (regular) strong product. Instead, we can use the method presented here to compute the SSP of the PRG, and then generate the compression graph for any n using homomorphic projections, as in Section III-C.

Consider the channel transition probability matrix and its PRG shown in Fig. 3. Note that dashed boundaries have been drawn to identify node-pairs with the same inputs. Since there are no confusable edges, the maximal independent set of G_{X|Y,Y_R} is the input set itself, K = {1, 2}. We can construct G_R|_K by performing the homomorphic projection as in Section III-C and obtain the pentagon-shaped compression graph, which is minimally colored using 3 colors, and consequently, r_z^{*(1)} = log2(3) ≈ 1.5850 bits.

Fig. 3. Conditional probability mass function and its associated PRG.

Moreover, to illustrate how this method can be used for n ≥ 1, consider n = 2. In particular, K^{(2)} = K × K is the maximal independent set of G_{X^2|Y^2,Y_R^2}. The compression graph G_R^{(2)}|_{K×K}, next to its corresponding adjacency matrix (black squares denote the presence of an edge, white the absence of an edge), is shown in Fig. 4.

Fig. 4. Compression graph G_R^{(2)}|_{K×K} for n = 2 and its adjacency matrix (center), and the adjacency matrix of the traditional strong product (right).

In particular, coloring G_R^{(2)}|_{K×K} requires 7 colors, and so r_z^{*(2)} = (1/2) log2(7) ≈ 1.4037 bits. Next, we can visually observe that G_R|_K ⊠ G_R|_K ⊂ G_R^{(2)}|_{K×K}. It is clear that there are several edges that are not captured by simply performing the strong product, motivating the SSP developed.
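The two rates quoted in this example can be checked numerically:

```python
# Quick arithmetic check of the values in Example 3.
from math import log2
print(log2(3), 0.5 * log2(7))   # ~1.585 and ~1.4037, so r*(2) < r*(1)
```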

V. CONCLUSION

The new primitive relaying graph and special strong product allow one to iteratively obtain the graphs needed to solve Problem 1. This amounts to computational savings (though we cannot get around needing to compute maximal independent sets and minimal colorings in general), but also yields further insight into the structure of relaying schemes that are optimal in the sense of Problem 1. In ongoing work, we have been able to use the PRG to compute a lower bound on a complementary problem, finding C_z^{(n)}(k) for a fixed R-D link rate k.

REFERENCES

[1] Y. Chen and N. Devroye, "Zero-error relaying for primitive relay channels," IEEE Trans. Inf. Theory, vol. 63, no. 12, pp. 7708–7715, Dec. 2017.

[2] T. Cover, Open Problems in Communication and Computation. Springer-Verlag, 1987, ch. The capacity of the relay channel, pp. 72–73.

[3] Y. Kim, "Coding techniques for primitive relay channels," in Proc. Forty-Fifth Annual Allerton Conf. Commun., Contr. Comput., 2007.

[4] X. Wu, L. P. Barnes, and A. Özgür, "Cover's open problem: "The capacity of the relay channel"," https://arxiv.org/abs/1701.02043, 2017.

[5] X. Wu and A. Özgür, "Cut-set bound is loose for Gaussian relay networks," arXiv:1608.08540v2, 2016.

[6] R. Hammack, W. Imrich, and S. Klavžar, Handbook of Product Graphs, 2nd ed. Boca Raton, FL, USA: CRC Press, Inc., 2011.


APPENDIX

A. Proof of Lemma 2

Proof. By definition of Hc, an edge e ∈ E(Hc) means that there exist x, x', yR such that e := (x, yR)(x', yR) ∈ Ec. Hence, there exists y ∈ Y such that

$$ p(y|x, y_R) \cdot p(y|x', y_R) > 0. \qquad (7) $$

Using inequality (7) and the definition of px : Hc → G_{X|Y,Y_R}, we see that e ∈ E(Hc) implies

(px(x, yR), px(x', yR)) = (x, x') ∈ E(G_{X|Y,Y_R}).

Thus, Hc is px-homomorphic to G_{X|Y,Y_R}.

B. Proof of Lemma 3

Proof. By definition of Hs|K, given independent set K, an edge e ∈ E(Hs|K) means that there exist x, x', yR, y'R such that e := (x, yR)(x', y'R) ∈ Es. Hence, there exists a y ∈ Y such that p(y|x, yR) · p(y|x', y'R) > 0. Using this and the definition of p_{yR} : Hs|K → G_R|_K, we see that e ∈ E(Hs|K) implies (p_{yR}(x, yR), p_{yR}(x', y'R)) = (yR, y'R) ∈ E(G_R|_K). Thus, given K, graph Hs|K is p_{yR}-homomorphic to G_R|_K.

C. Proof of Lemma 4

We prove (3) for n = 2. The proof for larger n follows analogously.

Let E^{(2)} denote the edge set of the two-shot PRG. We need to show that every edge in E^{(2)} = E_s^{(2)} ∪ E_d^{(2)} ∪ E_c^{(2)} is generated from the SSP between elements in E^{(1)} = E_s^{(1)} ∪ E_d^{(1)} ∪ E_c^{(1)}, and vice versa, that any edge created by the SSP of two elements in E^{(1)} lies in E^{(2)}. Since E_s^{(2)}, E_d^{(2)} and E_c^{(2)} are disjoint, we will check the proof for each edge set separately, and note that all statements are bi-directional, or if-and-only-if statements.

Edge in E_c^{(2)}: Note that, similar to the confusable edge set definition E_c^{(1)}, (x1 x2, yR1 yR2)(x'1 x'2, yR1 yR2) ∈ E_c^{(2)} if x1 x2 ≠ x'1 x'2 and there exists a y1 y2 such that

$$ p(y_1 y_2 | x_1 x_2, y_{R_1} y_{R_2}) \cdot p(y_1 y_2 | x'_1 x'_2, y_{R_1} y_{R_2}) > 0. \qquad (8) $$

Equation (8) can be further factored as

$$ p(y_1 | x_1, y_{R_1}) \cdot p(y_2 | x_2, y_{R_2}) > 0, \qquad p(y_1 | x'_1, y_{R_1}) \cdot p(y_2 | x'_2, y_{R_2}) > 0. \qquad (9) $$

Since x1 x2 ≠ x'1 x'2, the confusable edge (x1 x2, yR1 yR2)(x'1 x'2, yR1 yR2) ∈ E_c^{(2)} is obtained in three possible ways. i) When x1 = x'1 and x2 ≠ x'2: this case happens when a vertex (x1, yR1) ∈ V^{(1)} (because p(y1|x1, yR1) > 0)¹ is combined (in the sense of multiplication using the SSP table) with a confusable edge (x2, yR2)(x'2, yR2) ∈ E_c^{(1)} (because p(y2|x2, yR2) · p(y2|x'2, yR2) > 0). Using the right-hand side of equation (3) and the SSP multiplication table, this case corresponds to ∆ ⊛ c = c. ii) When x1 ≠ x'1 and x2 = x'2: this case is similar to the previous one and is equivalent to c ⊛ ∆ = c. iii) When x1 ≠ x'1 and x2 ≠ x'2: this case happens by combining two confusable edges (xi, yRi)(x'i, yRi) ∈ E_c^{(1)} for both i = 1, 2 (because p(yi|xi, yRi) · p(yi|x'i, yRi) > 0). Using the right-hand side of equation (3) and the SSP multiplication table, this case corresponds to c ⊛ c = c.

¹For the sake of simplicity, a vertex alone in this computation can be considered as an edge of type ∆.

Edge in E_d^{(2)}: Note that (x1 x2, yR1 yR2)(x1 x2, y'R1 y'R2) ∈ E_d^{(2)} if yR1 yR2 ≠ y'R1 y'R2 and there exists a y1 y2 such that

$$ p(y_1 y_2 | x_1 x_2, y_{R_1} y_{R_2}) \cdot p(y_1 y_2 | x_1 x_2, y'_{R_1} y'_{R_2}) > 0. \qquad (10) $$

Using the definition of Ed, and a similar approach as above (for E_c^{(2)} edges), we can show that an edge in E_d^{(2)} is obtained by one of the three SSP multiplications (∆ ⊛ d, d ⊛ ∆, or d ⊛ d).

Edge in E(2)s : Note that (x1x2, yR1

yR2)(x′1x

′2, y′R1y′R2

) ∈E

(2)s if yR1

yR26= y′R1

y′R2, x1x2 6= x′1x

′2 and there exists a

y1y2 such that

p(y1y2|x1x2, yR1yR2

) · p(y1y2|x′1x′2, y′R1y′R2

) > 0 (11)

Given x1x2 6= x′1x′2 and yR1

yR26= y′R1

y′R2, there are 9

different possibilities that might occur depending on whether(x1, x

′1), (x2, x

′2), (yR1 , y

′R1

) and (yR2 , y′R2

) are equal or not.Most of these cases can be checked in a way similar tothose for the dotted and confusable edges (last 2 cases). Oneinteresting case is when x1 = x′1, x2 6= x′2 and yR1

6=yR2

, y′R1= y′R2

. This case implies (x1, yR1)(x1, y

′R1

) ∈ E(1)d

and (x2, yR2)(x′2, yR2

) ∈ E(1)c . Using the SSP multiplication

table it is clear that d� c = s. The other cases can be derivedsimilarly.

Adjacency matrix equivalency: The second part of theproof follows from the fact that the adjacency matrix of the(regular) strong product is obtained as (AΓ +I)⊗(A∆ +I)−I.Our special strong product is of a very similar form, withthe identity matrix part to ∆I and multiplication replaced bythe SSP multiplication table. This result follows directly fromthe standard strong product adjacency matrix equivalency.Consider two PRG’s G and H , with PRG-adjacency matricesAG and AH respectively. The matrix AG + ∆I correspondsto the PRG adjacency or equality matrix of G, whose entry(AG +∆I)u,v , takes a specific value depending on the type ofadjacency of nodes u and v, that is, values from {s, d, c, 0},and takes value of ∆ for u = v. The same follows for theelements (AH + ∆I)u′,v′ .Now, note that the Kronecker product (AG+∆I)⊗(AH +∆I)forms a matrix whose entries at ((u, u′), (v, v′)) will take val-ues from {∆, s, d, c, 0} following the SSP multiplication table,depending on the values of (AG+∆I)u,v and (AH +∆I)u′,v′ .This translates into entries that take values from {s, d, c} ifvertices u, v have some type of adjacency, and vertices u′, v′

have some type of adjacency, and value of ∆ if they arecorrespondingly equal. In the other cases (no adjacency) theentries are zero. Note moreover, that we still need to subtractthe term ∆I from the product (AG+∆I)⊗(AH+∆I), since thediagonal entries of this matrix ((u, u′), (u, u′)) contains entriesof value ∆ representing the relations u = u and u′ = u′. Thisyields the expression in (4).
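
Since the argument above is purely mechanical, it can be turned into a short computation. The sketch below is not the authors' code: the SSP multiplication table is reconstructed from the edge-type case analysis in this proof (e.g. $\Delta \boxtimes c = c$, $d \boxtimes c = s$), with $\Delta$ written as `'D'`; the authoritative table is the one defined in the main text. It builds the $n$-shot PRG adjacency matrix from the one-shot matrix via the identity $(A_G + \Delta I)\otimes(A_H + \Delta I) - \Delta I$.

```python
# Minimal sketch: n-fold special strong product (SSP) of a one-shot PRG adjacency matrix.
# Symbols: 'D' = Delta (equality), 's' = solid, 'd' = dotted, 'c' = confusable, '0' = no edge.
SSP = {}
def _set(a, b, val):
    SSP[(a, b)] = SSP[(b, a)] = val

for t in "Dsdc0":
    _set("0", t, "0")          # no edge in a factor (and vertices unequal) -> no edge
    _set("D", t, t)            # equality in a factor leaves the other factor's edge type unchanged
_set("s", "s", "s"); _set("s", "d", "s"); _set("s", "c", "s")
_set("d", "d", "d"); _set("d", "c", "s"); _set("c", "c", "c")

def add_delta_I(A):
    """A + Delta*I: place 'D' on the diagonal of a PRG adjacency matrix."""
    return [["D" if i == j else A[i][j] for j in range(len(A))] for i in range(len(A))]

def ssp_kron(A, B):
    """Kronecker-style product of two symbol matrices, multiplied entrywise via the SSP table."""
    m = len(B)
    N = len(A) * m
    return [[SSP[(A[i // m][j // m], B[i % m][j % m])] for j in range(N)] for i in range(N)]

def prg_adjacency(A1, n):
    """Adjacency matrix of the n-shot PRG: ((A1 + DI) x ... x (A1 + DI)) - DI."""
    A = A1
    for _ in range(n - 1):
        full = ssp_kron(add_delta_I(A1), add_delta_I(A))
        A = [["0" if i == j else full[i][j] for j in range(len(full))]   # subtract Delta*I
             for i in range(len(full))]
    return A
```

Applied to a one-shot PRG on the vertex set $\{(x, y_R)\}$, `prg_adjacency(A1, n)` returns a matrix whose block structure is exactly the recursive structure used in the argument above.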


D. Proof of Proposition 5

Figures 5(a) and (b) show two trivial binary channels, while Figs. 5(c)–(f) show representatives of the non-trivial channels used in the proof below.

[Fig. 5: support tables $s(p(y, y_R|x))$ of the six binary PRCs, with rows $y \in \{1, 2\}$ and columns $y_R \in \{1, 2\}$, listed for $X = 1$ and $X = 2$:
(a) $X=1$: [∗ ∗; 0 0], $X=2$: [∗ 0; 0 ∗];  (b) $X=1$: [∗ ∗; 0 0], $X=2$: [0 0; ∗ ∗];
(c) $X=1$: [∗ 0; 0 0], $X=2$: [0 ∗; b c];  (d) $X=1$: [∗ 0; ∗ 0], $X=2$: [0 a; 0 b];
(e) $X=1$: [∗ 0; 0 ∗], $X=2$: [0 a; b 0];  (f) $X=1$: [∗ 0; ∗ ∗], $X=2$: [0 ∗; 0 0].]

Fig. 5. Binary PRCs. Trivial: (a) $\alpha(G(X^n|Y^n, Y_R^n)) = 1$, (b) $\alpha(G(X^n|Y^n)) = 2^n$. Non-trivial: (c) case $N^*_0 = 1$, (d) and (e) case $N^*_0 = 2$, and (f) case $N^*_0 = 3$; channels (c)–(f) are non-trivial provided at least one of the entries $a$, $b$, or $c$ is nonzero.

Proof. Among the $2^8$ binary PRC channels, most are trivial. To identify the non-trivial channels, restrict the number of $*$'s in the support of one of the inputs (say $x = 1$), and find all non-trivial channels for that case. Let $N^*_0$ be the number of non-zero elements of $s(p(y, y_R|x = 1))$. The non-trivial channels can then be partitioned as follows:
• Case $N^*_0 = 1$: among the $\binom{4}{1} \cdot 2^4$ possible channels, only $\binom{4}{1} \cdot 2^2$ are non-trivial, and all are equivalent to the channel in Fig. 5(c).
• Case $N^*_0 = 2$: among the $\binom{4}{2} \cdot 2^4$ possible channels, only 12 are non-trivial, and all are equivalent to one of the channels in Fig. 5(d) or 5(e).
• Case $N^*_0 = 3$: among the $\binom{4}{3} \cdot 2^4$ possible channels, only 4 are non-trivial, and all are equivalent to the channel in Fig. 5(f).
For every non-trivial channel, it is possible to show that $G_{R|X}$ is fully connected; the Proposition then follows from [1, Lemma 8]. For example, for channels (c) and (f) in Fig. 5, using the method explained in [1], the row corresponding to $y = 1$ implies that node $y_R = 1$ is connected to node $y_R = 2$ in $G^{(1)}_{R|X}$. Thus $G^{(1)}_{R|X}$ is fully connected, and since $G_{X|Y,Y_R}$ is edge-free for any non-trivial binary PRC, it follows that $r^*_z = \log \|\mathcal{Y}_R\| = 1$ [1, Lemma 8]. For the binary PRCs (d) and (e) in Fig. 5, $G^{(1)}_{R|X}$ is fully connected if at least one of the entries $a$ or $b$ is nonzero, and the same result is obtained from [1, Lemma 8]. Thus, no compression is possible.
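
The counts in the three cases above can be checked by brute force. The following enumeration sketch (not from the paper) runs over all binary support patterns and applies the triviality conditions exactly as defined above; under these assumptions it should reproduce the counts 16, 12 and 4.

```python
# Enumerate binary PRCs with |X| = |Y| = |YR| = 2. A channel is described by two 2x2
# support blocks S[y][yR] in {0, 1} (1 marks a '*' entry of s(p(y, yR | x))). A channel is
# trivial if the inputs are confusable given (y, yR), or not confusable given y alone.
from itertools import product

def confusable_given_y_yR(S1, S2):
    # x = 1 and x = 2 share a nonzero entry at some common (y, yR)
    return any(S1[y][r] and S2[y][r] for y in range(2) for r in range(2))

def confusable_given_y(S1, S2):
    # x = 1 and x = 2 both have a nonzero entry in some common output row y
    return any(any(S1[y]) and any(S2[y]) for y in range(2))

counts = {1: 0, 2: 0, 3: 0}
for bits1 in product((0, 1), repeat=4):
    S1 = [list(bits1[:2]), list(bits1[2:])]
    N = sum(bits1)
    if N not in counts:          # the proof only partitions the cases N*_0 = 1, 2, 3
        continue
    for bits2 in product((0, 1), repeat=4):
        if not any(bits2):       # x = 2 must have a nonempty support
            continue
        S2 = [list(bits2[:2]), list(bits2[2:])]
        if (not confusable_given_y_yR(S1, S2)) and confusable_given_y(S1, S2):
            counts[N] += 1

print(counts)   # expected: {1: 16, 2: 12, 3: 4}, matching the three cases of the proof
```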

E. Proof of Proposition 6

Proof. The vertices of $G_{R|K}$ are obtained by mapping all vertices of $H_s(G_{K, Y_R|Y})$ with the same $y_R$ to a single vertex $y_R$. There exists an edge between $y_R$ and $y_R'$ if there exists at least one edge $(x, y_R)(x', y_R')$ in $H_s(G_{K, Y_R|Y})$. Let $\Psi$ be a minimal vertex coloring of the compression graph $G_{R|K^{(n)}}$. Then $\Psi$ can be written as $\Psi : V(G_{R|K^{(n)}}) \to \{1, 2, \dots, 2^{n r_z^{(n)}}\}$. Given $\Psi$, we can derive a vertex coloring $\phi$ of $H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n}$, where $\phi : V(H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n}) \to \{1, 2, \dots, 2^{n r_z^{(n)}}\}$ and $\phi(x^n, y_R^n) = \Psi(y_R^n)$ for all $(x^n, y_R^n) \in V(H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n})$.

In what follows we show that $\phi$ is a valid coloring of $H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n}$. Given two distinct vertices $(x, y_R), (x', y_R') \in V(H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n})$, assume that $(x, y_R)(x', y_R') \in E(H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n})$; then the definition of $E(H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n})$ implies that $y_R \neq y_R'$ and $(y_R, y_R') \in E(G_{R|K^{(n)}})$. Thus $\Psi(y_R) \neq \Psi(y_R')$, and thereby $\phi((x, y_R)) \neq \phi((x', y_R'))$. Because only solid edges exist in $H_s|_K^{(n)}$, $\phi$ is a valid coloring, though not necessarily a minimal one. The main reason is that there might be nodes $(x, y_R)$ and $(x', y_R) \in V(H_s(G_{K^{(n)}, Y_R|Y})^{\boxtimes n})$ to which all minimal vertex colorings assign different colors, while they both project to the same node $y_R$ of $G_{R|K^{(n)}}$ and therefore receive the same color under $\phi$.
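
The lifting step above is simple enough to state as a check. The sketch below (hypothetical data structures, not code from the paper) colors each product-graph vertex by the color of its $y_R^n$ part and verifies properness over a given list of solid edges.

```python
# Illustration of the lifting argument: a proper coloring psi of G_{R|K^(n)} (vertices are
# relay sequences y_R^n) induces a proper coloring phi of the solid-edge graph, since every
# solid edge projects onto an edge of G_{R|K^(n)}.
def lift_coloring(psi, solid_edges):
    """psi: dict mapping y_R^n -> color; solid_edges: iterable of ((x, yR), (xp, yRp)) pairs."""
    phi = lambda v: psi[v[1]]      # color a vertex (x^n, y_R^n) by the color of y_R^n
    for u, v in solid_edges:
        # a solid edge has u[1] != v[1] and (u[1], v[1]) an edge of G_{R|K^(n)},
        # so a proper psi already assigns these vertices different colors
        assert phi(u) != phi(v), "psi is not a proper coloring of the compression graph"
    return phi
```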