
A User's Guide to Compressed Sensing for Communications Systems

Kazunori Hayashi Graduate School of Informatics

Kyoto University kazunori@i.kyoto-u.ac.jp

Wireless Signal Processing & Networking Workshop: Emerging Wireless Technologies

2

Survey paper on compressed sensing:

K. Hayashi, M. Nagahara, and T. Tanaka, "A User's Guide to Compressed Sensing for Communications Systems," IEICE Trans. Commun., vol. E96-B, no. 3, pp. 685-712, Mar. 2013.

3

Agenda

• A Brief Introduction to Compressed Sensing
  • Warming-up
  • Linear Equations Review
  • Problem Setting
  • Algorithms

• Applications to Communications Systems
  • Channel Estimation
  • Wireless Sensor Network
  • Network Tomography
  • Spectrum Sensing
  • RFID

4

A Brief Introduction to Compressed Sensing

5

Warming-up Exercise (1)

- Linear simultaneous equations:
  x + y + z = 3
  x − z = 0
  2x − y + z = 2

x = 1, y = 1, z = 1

Unique solution

(# of Eqs. = # of unknowns)

6

Warming-up Exercise (2)

- Linear simultaneous equations:
  x + y + z = 3
  x − z = 0
  2x − y + z = 2
  x + y = 1

No solution

(# of Eqs. > # of unknowns)

7

Warming-up Exercise (3)

- Linear simultaneous equations:
  x + y + z = 3
  x − z = 0
  2x − y + z = 2
  x + y = 2

x = 1, y = 1, z = 1

Unique solution

(# of Eqs. > # of unknowns)

8

Warming-up Exercise (4)

- Linear simultaneous equations:
  x + y + z = 3
  x − z = 0

x = t, y = −2t + 3, z = t  (t: arbitrary)

Infinitely many solutions

(# of Eqs. < # of unknowns)

9

Linear Equations in Engineering Problems

y = Ax

- Answering "No solution" or "Non-unique solutions" is not enough

Sensing matrix: A ∈ R^(m×n)
Unknown vector: x ∈ R^n
Measurement vector: y ∈ R^m

- Measurement may include noise: y = Ax + v (v: noise)

10

Linear Equations Review: Conventional approach (n = m, noise-free)

y = Ax
Sensing matrix: A ∈ R^(m×n)
Unknown vector: x ∈ R^n
Measurement vector: y ∈ R^m

- Unique solution (if A is not singular):
  x = A^(−1) y

Corresponds to warming-up (1)

11

Linear Equations Review: Conventional approach (n < m, noise-free)

y = Ax
Sensing matrix: A ∈ R^(m×n)
Unknown vector: x ∈ R^n
Measurement vector: y ∈ R^m

- Unique solution (if A is of full column rank):
  x = (A^T A)^(−1) A^T y

Corresponds to warming-up (3)

12

Linear Equations Review: Conventional approach (n > m, noise-free)

y = Ax
Sensing matrix: A ∈ R^(m×n)
Unknown vector: x ∈ R^n
Measurement vector: y ∈ R^m

- Non-unique solutions

Corresponds to warming-up (4)

- Minimum-norm (MN) solution:
  x_MN = argmin_x ||x||₂² subject to Ax = y
       = A^T (A A^T)^(−1) y

Select x which has the minimum ℓ2-norm

13

Linear Equations Review: Conventional approach (n < m, noisy)

y = Ax
Sensing matrix: A ∈ R^(m×n)
Unknown vector: x ∈ R^n
Measurement vector: y ∈ R^m

- No solution in general (y is not included in the image of A due to noise)

Corresponds to warming-up (2)

- Least-squares (LS) solution:
  x_LS = argmin_x ||Ax − y||₂² = (A^T A)^(−1) A^T y

Select x which minimizes the squared error between Ax and y as the solution

14

Linear Equations Review: Conventional approach (n > m, noisy)

y = Ax
Sensing matrix: A ∈ R^(m×n)
Unknown vector: x ∈ R^n
Measurement vector: y ∈ R^m

- Non-unique solutions

- Regularized LS solution:
  x_rLS = argmin_x ( ||Ax − y||₂² + λ||x||₂² )
        = (λI + A^T A)^(−1) A^T y
  (λ: regularization parameter)
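As a quick numerical check of the conventional solutions above, the following NumPy sketch computes the LS, MN, and regularized LS estimates for small random problems; the matrix sizes, noise level, and λ are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Overdetermined, noisy case (n < m): least-squares solution
    m, n = 8, 3
    A = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    x_ls = np.linalg.inv(A.T @ A) @ A.T @ y          # (A^T A)^(-1) A^T y

    # Underdetermined, noise-free case (n > m): minimum-norm solution
    m2, n2 = 3, 8
    A2 = rng.standard_normal((m2, n2))
    y2 = A2 @ rng.standard_normal(n2)
    x_mn = A2.T @ np.linalg.inv(A2 @ A2.T) @ y2      # A^T (A A^T)^(-1) y

    # Underdetermined, noisy case (n > m): regularized LS solution
    lam = 0.1
    x_rls = np.linalg.inv(lam * np.eye(n2) + A2.T @ A2) @ A2.T @ y2

    print(x_ls, x_mn, x_rls)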

15

Problem Setting of Compressed Sensing

y = Ax
Sensing matrix (non-adaptive): A ∈ R^(m×n)
Unknown vector: x ∈ R^n
Measurement vector: y ∈ R^m

Linear equations of y = Ax with the prior information that the true solution is sparse

Underdetermined: n > m
Sparse unknown vector: ||x||₀ ≪ n
(||x||₀: # of nonzero elements of x)

16

Signal Recovery via ℓ0 Optimization

- Natural and straightforward approach:
  x_ℓ0 = argmin_x ||x||₀ subject to Ax = y

- If m > ||x||₀, the true solution is almost always obtained
- NP-hard in general

17

Signal Recovery via ℓ1 Optimization: Replace ||x||₀ with ||x||₁ = Σ_{i=1}^{n} |x_i|

  x_ℓ1 = argmin_x ||x||₁ subject to Ax = y

- It can be posed as a linear programming (LP) problem
- The true solution can be obtained even when n > m, under some conditions

If A satisfies the RIP (restricted isometry property), only O(k log(n/k)) measurements (k: # of nonzero elements of x) will be enough for exact recovery
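The LP formulation above (basis pursuit) can be written down directly with SciPy; this is a minimal sketch, assuming NumPy and SciPy are available, with arbitrary problem sizes. The standard trick splits x into nonnegative parts, x = u − v, so that ||x||₁ = 1^T (u + v).

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    n, m, k = 50, 20, 3

    A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true                                   # noise-free measurements

    # Basis pursuit as an LP: min 1^T (u + v)  s.t.  A(u - v) = y,  u, v >= 0
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * n))
    x_hat = res.x[:n] - res.x[n:]

    print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))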

18

Intuitive Illustrations of Sparse Signal Recovery

ℓ1 recovery:
  x_ℓ1 = argmin_x ||x||₁ subject to Ax = y

MN solution:
  x_MN = argmin_x ||x||₂² subject to Ax = y

[Figure: geometric comparison of the two solutions on the constraint set {x : Ax = y}]

19

Reconstruction with Noisy Measurement (1/2)

Constrained ℓ1 reconstruction:
  x_ℓ1 = argmin_x ||x||₁ subject to ||Ax − y||₂ ≤ ε   (ε > 0)

- Natural constraint for bounded noise
- A certain guarantee of signal recovery can be given for Gaussian noise

20

Reconstruction with Noisy Measurement (2/2)

ℓ1-ℓ2 optimization:
  x_ℓ1-ℓ2 = argmin_x ( (1/2)||Ax − y||₂² + λ||x||₁ )

- Equivalent to the constrained ℓ1 reconstruction for a certain choice of λ
- Analogy to the regularized LS problem:
  x_rLS = argmin_x ( ||Ax − y||₂² + λ||x||₂² )
- LARS (least angle regression) algorithm
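The deck does not prescribe a particular solver for the ℓ1-ℓ2 problem; one common option is the iterative soft-thresholding algorithm (ISTA), sketched below in NumPy with an arbitrary step size, λ, and problem size.

    import numpy as np

    def ista(A, y, lam, n_iter=500):
        """Minimize (1/2)||Ax - y||_2^2 + lam * ||x||_1 by iterative soft-thresholding."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L      # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    # toy example: sparse x, random A, small noise
    rng = np.random.default_rng(2)
    n, m, k = 100, 40, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    print("recovered support:", np.nonzero(np.abs(ista(A, y, lam=0.05)) > 0.1)[0])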

21

Equivalent approaches to ℓ1-ℓ2 optimization

- Maximum a posteriori (MAP) estimator for AWGN with a Laplacian prior ∝ exp(−λ||x||₁)

- Lasso (least absolute shrinkage and selection operator):
  x_Lasso = argmin_x ||Ax − y||₂² subject to ||x||₁ ≤ t   (t > 0)

22

Algorithms for Compressed Sensing

Sample program (Scilab):
http://www.msys.sys.i.kyoto-u.ac.jp/csdemo.html
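Besides convex optimization, greedy algorithms are widely used for compressed sensing. The following NumPy sketch of orthogonal matching pursuit (OMP) is only illustrative and is unrelated to the Scilab sample linked above.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily select k columns of A to explain y."""
        residual, support = y.copy(), []
        x = np.zeros(A.shape[1])
        for _ in range(k):
            # pick the column most correlated with the current residual
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            # least-squares fit on the selected support, then update the residual
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x

    rng = np.random.default_rng(3)
    n, m, k = 80, 30, 4
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    print(np.allclose(omp(A, A @ x_true, k), x_true, atol=1e-6))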

23

Applications to Communications Systems

24

Key issues in applications

•  Sparsity of the signals of interest
   - Is it natural to assume that they are sparse?
   - In which domain?

•  Linear measurement of the signals
   - Can we formulate the problem with linear equations?
   - Is it beneficial to reduce the number of equations?

25

Channel Estimation

[Figure: transmitter, multipath channel, receiver]

Estimation of the channel impulse response using Rx. signals

Convolution channel:
  Tx. signal: s_n
  Channel impulse response: h_0, ..., h_M
  Noise: v_n
  Rx. signal: y_n = Σ_{m=0}^{M} h_m s_{n−m} + v_n

26

Channel Estimation

Known pilot signal: s = [s_1, ..., s_P]^T ∈ R^P

- Received signal vector:
  y = [y_1, ..., y_{P−M}]^T = Hs + v

Channel matrix ((P − M) × P):

  H = [ h_M ... h_0   0  ...  0
         0   h_M ... h_0 ...  0
        ...       ...        ...
         0  ...  0   h_M ... h_0 ]

White noise: v = [v_1, ..., v_{P−M}]^T

27

Channel Estimation

- Received signal vector (rewritten):
  y = Sh + v

  S = [ s_{M+1}  s_M      ...  s_1
        s_{M+2}  s_{M+1}  ...  s_2
          ...      ...         ...
        s_P      s_{P−1}  ...  s_{P−M} ]    ((P − M) × (M + 1))

  h = [h_0, ..., h_M]^T ∈ R^(M+1)

- Conventional approaches:
  LS solution (P − M ≥ M + 1): h = (S^T S)^(−1) S^T y
  MN solution (P − M < M + 1): h = S^T (S S^T)^(−1) y

28

Channel Estimation

- Sparsity of the wireless channel impulse response: the wireless channel is discrete (multipath) in nature, and this feature becomes apparent as the bandwidth increases.

- Estimation via compressed sensing:
  h = argmin_h ||h||₁ subject to ||Sh − y||₂ ≤ ε
  S: sensing matrix (random, Toeplitz)
  h: unknown sparse channel impulse response

Merit: Reduction of pilot signals (improvement of spectral efficiency)
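A small end-to-end sketch of the CS-based channel estimation above, assuming the convolution model y = Sh + v with a random ±1 pilot sequence; the pilot length, delay spread, sparsity, and the iterative soft-thresholding solver are illustrative choices, not part of the original slides.

    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(4)

    # Sparse channel of length M+1 with only k nonzero taps
    M, P, k = 39, 60, 3
    h_true = np.zeros(M + 1)
    h_true[rng.choice(M + 1, k, replace=False)] = rng.standard_normal(k)

    # Random +/-1 pilot and the (P-M) x (M+1) Toeplitz pilot matrix S
    s = rng.choice([-1.0, 1.0], size=P)
    S = toeplitz(s[M:], s[M::-1])        # first row: [s_{M+1}, s_M, ..., s_1]
    y = S @ h_true + 0.01 * rng.standard_normal(P - M)

    # l1-l2 recovery of h by iterative soft-thresholding
    lam, L = 0.05, np.linalg.norm(S, 2) ** 2
    h = np.zeros(M + 1)
    for _ in range(1000):
        z = h - S.T @ (S @ h - y) / L
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    print("estimated taps:", np.nonzero(np.abs(h) > 0.1)[0],
          "true taps:", np.nonzero(h_true)[0])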

29

Wireless Sensor Network (Single-Hop)

- Tx. signal at time i at the j-th node: A_{i,j} s_j
  s_j: measurement data
  A_{i,j}: random coefficient (generated from the node ID)

- Rx. signal at time i at the fusion center:
  y_i = Σ_{j=1}^{n} A_{i,j} s_j + v_i

  y = [y_1, ..., y_m]^T = As + v

30

Wireless Sensor Network (Single-Hop)

- Sparsity of measurement data:
  s might be sparse in a certain representation with basis Φ (due to spatial correlation of the data): s = Φx
  (Φ: invertible transformation matrix, typically a DFT, DCT, wavelet, etc.)

- Data recovery via compressed sensing:
  x = argmin_x ||x||₁ subject to ||AΦx − y||₂ ≤ ε
  AΦ: sensing matrix
  x: unknown sparse vector

Merit: Reduction of data collection time
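A minimal sketch of the single-hop scheme above, assuming the node data are sparse in a DCT basis Φ and the fusion center applies basis pursuit to the effective matrix AΦ; the sizes and the choice of a ±1 measurement matrix are illustrative.

    import numpy as np
    from scipy.fft import idct
    from scipy.optimize import linprog

    rng = np.random.default_rng(5)
    n, m, k = 64, 24, 4

    # Node readings s are sparse in the DCT basis: s = Phi x with k nonzero entries in x
    Phi = idct(np.eye(n), axis=0, norm="ortho")      # columns: orthonormal DCT basis vectors
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    s = Phi @ x_true

    # Fusion center receives m random linear combinations of the node data
    A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
    y = A @ s

    # Noise-free recovery: basis pursuit on the effective sensing matrix A Phi
    B = A @ Phi
    res = linprog(np.ones(2 * n), A_eq=np.hstack([B, -B]), b_eq=y)
    x_hat = res.x[:n] - res.x[n:]
    print("data reconstruction error:", np.max(np.abs(Phi @ x_hat - s)))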

31

Wireless Sensor Network (Multi-Hop)

- Rx. signal at the sink node in the i-th multi-hop communication:
  y_i = Σ_{j∈P_i} A_{i,j} s_j

  P_i: index set of the nodes included in the i-th multi-hop communication
  A_{i,j}: random coefficient of the j-th node in the i-th multi-hop communication (A_{i,j} = 0 if j ∉ P_i)

y = AΦx (the sensing matrix depends on the routing algorithm)

Merit: Reduction of # of communications (power consumption)

32

Network Tomography

[Figure: a network of routers and links connecting a source terminal to a destination terminal along a path]

- Link-level parameter estimation based on end-to-end measurements
- End-to-end performance evaluation based on link-level measurements

33

Delay Tomography (Link Delay Estimation)

- Delay at link e_j: x_j

- Total delay in the i-th path:
  y_i = Σ_{j∈P_i} A_{i,j} x_j

  P_i: set of link indices included in the i-th path
  A_{i,j} = 1 if j ∈ P_i, 0 otherwise

y = Ax (A: routing matrix)

34

Delay Tomography (Link Delay Estimation)

- Sparsity of link delay: in a typical network, only a limited number of links have a large delay, due to node failures or other reasons

- Estimation via compressed sensing:
  x = argmin_x ||x||₁ subject to Ax = y
  A: sensing matrix (routing matrix)
  x: (approximately) sparse delay vector

Merit: Reduction of # of probe packets
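The following sketch illustrates the delay-tomography setup with a randomly generated 0/1 routing matrix (in practice the matrix is fixed by the network's paths). Since link delays are nonnegative, adding an explicit x ≥ 0 constraint turns the ℓ1 problem into a plain LP, min 1^T x subject to Ax = y.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(6)
    n_links, n_paths, k = 60, 25, 3

    # Hypothetical 0/1 routing matrix: each path traverses a random subset of links
    A = (rng.random((n_paths, n_links)) < 0.3).astype(float)

    # Only a few congested links contribute a large delay
    x_true = np.zeros(n_links)
    x_true[rng.choice(n_links, k, replace=False)] = rng.uniform(5.0, 10.0, k)
    y = A @ x_true                       # end-to-end path delays

    # l1 minimization with x >= 0:  min 1^T x  s.t.  Ax = y, x >= 0
    res = linprog(np.ones(n_links), A_eq=A, b_eq=y)
    print("max estimation error:", np.max(np.abs(res.x - x_true)))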

35

Loss Tomography (Link Loss Probability Estimation)

- Packet loss probability at link e_j: p_j
- Packet loss probability on the i-th path: q_i
- Packet success probability on the i-th path:
  1 − q_i = Π_{j∈P_i} (1 − p_j)

Taking the logarithm:
  −log(1 − q_i) = −log( Π_{j∈P_i} (1 − p_j) ) = −Σ_{j∈P_i} log(1 − p_j)

With x_j = −log(1 − p_j) and y_i = −log(1 − q_i):
  y_i = Σ_{j∈P_i} A_{i,j} x_j,  i.e.,  y = Ax
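A tiny numerical check of the linearization above, with made-up link loss probabilities and a random 0/1 routing matrix:

    import numpy as np

    rng = np.random.default_rng(7)
    n_links, n_paths = 20, 8

    A = (rng.random((n_paths, n_links)) < 0.25).astype(float)   # 0/1 routing matrix
    p = rng.uniform(0.0, 0.05, n_links)                          # link loss probabilities

    # Path loss probabilities from the multiplicative model
    q = 1.0 - np.prod(np.where(A > 0, 1.0 - p, 1.0), axis=1)

    # The log transform turns the products into the linear model y = A x
    x = -np.log(1.0 - p)
    y = -np.log(1.0 - q)
    print(np.allclose(y, A @ x))        # True: exactly linear in x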

36

Spectrum Sensing

- Rx. signal (continuous time): r(t), t ∈ [0, nT_s]
  (T_s: inverse of the Nyquist rate)

- Rx. signal with Nyquist-rate sampling: r_t = [r(T_s), ..., r(nT_s)]^T

- Rx. signal with (discrete-time) linear measurement:
  y_t = Ψ r_t
  (Ψ: m × n sensing matrix; m < n corresponds to sub-Nyquist-rate sampling)

37

Spectrum Sensing

- Sparsity of the occupied spectrum: the primary system uses only a limited portion of the whole band

- Frequency-domain representation: r_f = D r_t (D: DFT matrix), so y_t = Ψ D^H r_f

- Spectrum sensing via compressed sensing:
  Ψ D^H: sensing matrix
  r_f: sparse spectrum vector

Merit: Reduction of sampling rate
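A minimal sketch of sub-Nyquist spectrum sensing along the lines above: the signal occupies a few DFT bins, Ψ keeps a random subset of the Nyquist-rate samples, and r_f is recovered with a complex-valued soft-thresholding iteration. The bin count, measurement ratio, λ, and detection threshold are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    n, m = 128, 40

    # Frequency-sparse signal: a few occupied DFT bins with unit magnitude
    D = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT matrix, r_f = D r_t
    bins = rng.choice(n, 4, replace=False)
    r_f = np.zeros(n, dtype=complex)
    r_f[bins] = np.exp(2j * np.pi * rng.random(4))
    r_t = D.conj().T @ r_f                           # Nyquist-rate time-domain samples

    # Sub-Nyquist measurement: keep m randomly chosen samples (rows of the identity)
    Psi = np.eye(n)[rng.choice(n, m, replace=False)]
    y_t = Psi @ r_t

    # Recover r_f from  y_t = Psi D^H r_f  by complex ISTA on the l1-l2 cost
    B = Psi @ D.conj().T
    L = np.linalg.norm(B, 2) ** 2
    lam = 0.01
    x = np.zeros(n, dtype=complex)
    for _ in range(2000):
        z = x - B.conj().T @ (B @ x - y_t) / L
        x = z * np.maximum(1.0 - (lam / L) / np.maximum(np.abs(z), 1e-12), 0.0)

    print("detected bins:", np.sort(np.nonzero(np.abs(x) > 0.3)[0]))
    print("occupied bins:", np.sort(bins))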

38

Spectrum Sensing

Sparsity of the spectrum edge

- Edge spectrum: z_s = Γ D Φ_s r_t
  (Φ_s: smoothing function)

  Γ = [  1   0  ...  ...  0
        −1   1  ...       0
         ...  ...  ...   ...
         0  ...   0  −1   1 ]

- Measurement model: y_t = Ψ (Γ D Φ_s)^(−1) z_s

39

RFID

[Figure: reader and tags]

- Each tag sends its own ID by reflecting the wireless signal from the reader
- Conventionally, collision-avoidance protocols are used

40

RFID Tag Detection

- ID of tag j: a_j (j = 1, ..., n)

- Received signal at the reader:
  y = Σ_{j∈P} a_j = [a_1, ..., a_n] x = Ax

  P: index set of the tags in the reader's range
  x_j = 1 if j ∈ P, 0 otherwise

41

RFID Tag Detection

- Sparsity of RFID tags: the number of tags in the reader's range is much smaller than the size of the whole ID space (similar to CS-based MUD in the previous talk)

- Tag detection via compressed sensing:
  x = argmin_x ||x||₁ subject to Ax = y
  A: sensing matrix (ID matrix)
  x: unknown sparse vector

Merit: Reduction of detection time
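A minimal sketch of the tag-detection idea above, assuming each tag's ID waveform is a random ±1 signature (the signature design and lengths are illustrative, not from the slides). Because the indicator entries x_j lie in {0, 1}, the ℓ1 problem can be written as an LP with box constraints.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(9)
    n_tags, sig_len, present = 200, 60, 3

    # Hypothetical ID waveforms: each tag reflects a random +/-1 signature (columns of A)
    A = rng.choice([-1.0, 1.0], size=(sig_len, n_tags))

    # Indicator vector: x_j = 1 for tags in the reader's range, 0 otherwise
    x_true = np.zeros(n_tags)
    x_true[rng.choice(n_tags, present, replace=False)] = 1.0
    y = A @ x_true                       # superposition of the reflected IDs

    # l1 recovery as an LP: min 1^T x  s.t.  Ax = y,  0 <= x <= 1
    res = linprog(np.ones(n_tags), A_eq=A, b_eq=y, bounds=[(0, 1)] * n_tags)
    print("detected tags:", np.sort(np.nonzero(res.x > 0.5)[0]))
    print("tags in range:", np.sort(np.nonzero(x_true)[0]))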