Perspectives on Information Causality


Page 1: Perspectives on  Information Causality

Perspectives on Information Causality

Tony Short, University of Cambridge

(with Sabri Al-Safi – PRA 84, 042323 (2011))

Page 2: Perspectives on  Information Causality

Overview

Comparing the information-theoretic capabilities of quantum theory with other possible theories can help us: Understand why nature is quantum Hone our intuitions about quantum applications

Surprisingly, despite entanglement, quantum theory is no better than classical for some non-local tasks.... Why? Non-local computation [Linden et al., 2007] Guess your neighbour’s input [Almeida et al. 2010] Information causality [Pawlowski et al. 2009]

Page 3: Perspectives on  Information Causality

The CHSH game

What correlations P(ab|xy) are achievable given certain resources? What is the maximum success probability p in this game?

Setup: Alice and Bob share resources but cannot communicate during the game. Alice receives a random x ∈ {0,1} and outputs a ∈ {0,1}; Bob receives a random y ∈ {0,1} and outputs b ∈ {0,1}.

Task: a ⊕ b = xy

Page 4: Perspectives on  Information Causality

Local (classical): P(a,b|x,y) = Σ_λ q_λ P_λ(a|x) P_λ(b|y)
p_C ≤ 3/4   (Bell's Theorem - CHSH inequality)

Quantum: P(a,b|x,y) = Tr( (P^x_a ⊗ P^y_b) ρ )
p_Q ≤ (2 + √2)/4   (Tsirelson's bound)

General (box-world): only the non-signalling conditions, i.e. Σ_a P(a,b|x,y) is independent of x, and Σ_b P(a,b|x,y) is independent of y
p_G ≤ 1   (PR-boxes)
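As a quick illustration (not part of the original slides), the classical and quantum values can be checked numerically; the sketch below assumes the standard optimal quantum strategy, a maximally entangled state with the usual CHSH measurement angles.

    # Illustrative check of p_C and p_Q for the CHSH game (not from the slides).
    import itertools
    import numpy as np

    def wins(a, b, x, y):
        return (a ^ b) == (x & y)

    # Best deterministic classical strategy: enumerate all output assignments a(x), b(y).
    p_C = max(
        np.mean([wins(ax[x], by[y], x, y) for x in (0, 1) for y in (0, 1)])
        for ax in itertools.product((0, 1), repeat=2)
        for by in itertools.product((0, 1), repeat=2)
    )

    # Standard quantum strategy: the state (|00> + |11>)/sqrt(2), measurement
    # angles 0, pi/2 for Alice and pi/4, -pi/4 for Bob.
    phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    obs = lambda t: np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])
    A = {0: obs(0.0), 1: obs(np.pi / 2)}
    B = {0: obs(np.pi / 4), 1: obs(-np.pi / 4)}
    p_Q = np.mean([
        0.5 * (1 + (-1) ** (x & y) * (phi @ np.kron(A[x], B[y]) @ phi))
        for x in (0, 1) for y in (0, 1)
    ])

    print(p_C, p_Q)   # 0.75 and ~0.8536 = (2 + sqrt(2))/4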

Page 5: Perspectives on  Information Causality

PR-box correlations [Popescu, Rohrlich (1994)] are the optimal non-signalling correlations (p = 1):

P_PR(ab|xy) = 1/2 if a ⊕ b = xy, and 0 otherwise

Problem: Is there a good, physical intuition behind p_Q ≤ (2 + √2)/4 ?
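For comparison, a PR box wins the CHSH game every time; a minimal simulation (my illustration, not from the slides):

    import random

    def pr_box(x, y):
        a = random.randint(0, 1)   # Alice's outcome is uniformly random (non-signalling marginals)
        b = a ^ (x & y)            # chosen so that a XOR b = xy on every run
        return a, b

    trials = 10_000
    score = 0
    for _ in range(trials):
        x, y = random.randint(0, 1), random.randint(0, 1)
        a, b = pr_box(x, y)
        score += (a ^ b) == (x & y)
    print(score / trials)          # 1.0, i.e. p = 1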

Page 6: Perspectives on  Information Causality

Information Causality

Information causality relates to a particular communication task [Pawlowski et al, Nature 461, 1101 (2009)]

Setup: Alice holds N random bits x1 ... xN and may send Bob m classical bits. Bob receives a random y ∈ {1,...,N} and outputs b_y, his best guess of x_y.

Task: maximize J = Σ_{y=1}^{N} I(x_y : b_y)

Page 7: Perspectives on  Information Causality

I(X:Y) is the classical mutual information:

I(X:Y) = H(X) + H(Y) − H(XY),   where H(X) = −Σ_x p(x) log₂ p(x)

The Information Causality principle states that

J ≡ Σ_{y=1}^{N} I(x_y : b_y) ≤ m

Physical intuition: The total information that Bob can extract about Alice's N bits must be no greater than the m bits Alice sends him. However, note that Bob only guesses 1 bit in each game.

The bound on J can easily be saturated: Alice simply sends Bob the first m bits of x.
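As a sanity check (not on the slide), here is a small sketch computing J exactly for that trivial strategy, with the mutual information built from the formula above: for y ≤ m Bob knows x_y perfectly, for y > m his guess is independent of x_y.

    import numpy as np

    def mutual_information(joint):
        """I(X:Y) = H(X) + H(Y) - H(XY) in bits, for a 2x2 joint probability table."""
        joint = np.asarray(joint, dtype=float)
        H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return H(joint.sum(axis=1)) + H(joint.sum(axis=0)) - H(joint.ravel())

    N, m = 5, 2
    perfect = np.array([[0.5, 0.0], [0.0, 0.5]])   # b_y = x_y exactly (y <= m)
    random_guess = np.full((2, 2), 0.25)           # b_y independent of x_y (y > m)
    J = sum(mutual_information(perfect if y < m else random_guess) for y in range(N))
    print(J)   # 2.0 = m, saturating J <= m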

Page 8: Perspectives on  Information Causality

Information Causality is obeyed in quantum theory and classical theory, and in any theory in which a ‘good’ measure of mutual information can be defined (see later)

Information Causality can be violated by general non-signalling correlations. E.g. one can achieve J = N >> m = 1 using the correlations

P(ab|x,y) = 1/2 if a ⊕ b = x_y, and 0 otherwise

Information Causality can be violated using any correlations which violate Tsirelson's bound for the CHSH game (when N = 2^n, m = 1):

p > (2 + √2)/4  ⟹  J > m
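A minimal simulation of that violation (my sketch, assuming the box above): Alice feeds her whole string into the box and sends the single bit a; Bob feeds in y and outputs a ⊕ b, recovering x_y perfectly for every y, so each I(x_y : b_y) = 1 and J = N.

    import random

    def box(x_bits, y):
        a = random.randint(0, 1)    # Alice's outcome: uniformly random
        b = a ^ x_bits[y]           # correlated so that a XOR b = x_y
        return a, b

    N, trials, correct = 8, 10_000, 0
    for _ in range(trials):
        x_bits = [random.randint(0, 1) for _ in range(N)]
        y = random.randrange(N)
        a, b = box(x_bits, y)              # one use of the shared box
        correct += ((a ^ b) == x_bits[y])  # Bob outputs a XOR b; Alice sent only a
    print(correct / trials)                # 1.0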

Page 9: Perspectives on  Information Causality

Hence

Information Causality ⟹ Tsirelson's bound

Furthermore, it can even generate part of the curved surface of quantum correlations [Allcock, Brunner, Pawlowski, Scarani 2009]

But why is this particular task and figure of merit J so important? What about probability of success in the game?

Given that J is a strange non-linear function of the probabilities, how does it yield such clean bounds on quantum correlations? Is mutual information central to quantum theory?

Page 10: Perspectives on  Information Causality

I.C. - A probabilistic perspective

If we use probability of success in the Information Causality game, quantum theory can do better than classical

Setup: As before, Alice holds N random bits x1 ... xN and sends Bob m classical bits; Bob receives a random y ∈ {1,...,N} and outputs b_y, his best guess of x_y.

Task: maximize Prob(b_y = x_y)

Page 11: Perspectives on  Information Causality

When m=1, N=2, maximum success probabilities are the same as for the CHSH game

The m=1 case for general N has been studied as ‘Random access coding’ [Ambainis et al 2008, Pawlowski & Zukowski 2010]

For m = 1, N = 2:  p_C = 3/4,  p_Q = (2 + √2)/4,  p_G = 1  (as for CHSH)

For general N (with m = 1):  p_Q ≤ (1/2)(1 + 1/√N)   (known to be tight for N = 2^k 3^j)

Page 12: Perspectives on  Information Causality

Furthermore, J = Σ_y I(x_y : b_y) and the success probability are not monotonically related. E.g. for N = 2, m = 1:

Strategy 1: Alice sends x1 with a bit of noise: J = 1 − ε, p = 3/4 − ε′

Strategy 2: Alice sends either x1 or x2 perfectly, based on a random bit shared with Bob: J ≈ 0.38, p = 3/4
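A quick numerical check of these two strategies (my illustration; the noise level ε used for Strategy 1 is an arbitrary choice):

    import numpy as np

    h2 = lambda p: 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)

    eps = 0.1                       # assumed noise level for Strategy 1
    J1 = 1 - h2(eps)                # I(x1:b1); I(x2:b2) = 0
    p1 = ((1 - eps) + 0.5) / 2      # average over y = 1, 2

    # Strategy 2: averaged over the shared bit, each b_y is x_y sent through a
    # binary symmetric channel with flip probability 1/4.
    J2 = 2 * (1 - h2(0.25))
    p2 = 0.75

    print(J1, p1)   # ~0.53, 0.70 for eps = 0.1; J -> 1, p -> 3/4 as eps -> 0
    print(J2, p2)   # ~0.377, 0.75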

What is the relation between bounds on J and on the success probability, and how do these relate to Tsirelson’s bound?

Page 13: Perspectives on  Information Causality

Define p_y as the probability of success when Bob is given y, and the corresponding bias E_y = 2p_y − 1

When proving Tsirelson's bound from Information Causality, the crucial step uses a quadratic bound on the binary entropy:

I(x_y : b_y) ≥ 1 − H(p_y) ≥ E_y² / (2 ln 2)

When m = 1, Information Causality (J ≤ 1) therefore implies

Σ_{y=1}^{N} E_y² ≤ 2 ln 2

Can we derive a ‘quadratic bias bound’ like this directly?
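The quadratic entropy bound used here, 1 − H(p) ≥ (2p − 1)²/(2 ln 2), is easy to confirm numerically (illustration only):

    import numpy as np

    p = np.linspace(1e-9, 1 - 1e-9, 100_001)
    H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    rhs = (2 * p - 1) ** 2 / (2 * np.log(2))
    print(np.all(1 - H >= rhs - 1e-12))   # True: the bound holds across the grid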

Page 14: Perspectives on  Information Causality

Information Causality as a non-local game

It is helpful to consider a non-local version of the Information Causality game. This is at least as hard as the previous version with m = 1 (as Alice can send the message a, and Bob can output b_y = a ⊕ b).

Setup: Alice holds N random bits x1 ... xN and outputs a bit a; Bob receives a random y ∈ {1,...,N} and outputs a bit b. No message is sent.

Task: a ⊕ b = x_y

Page 15: Perspectives on  Information Causality

For any quantum strategy (shared state |ψ⟩, with ±1-valued observables Â_x for Alice on input string x and B̂_y for Bob on input y), the bias for input y is

E_y = 2p_y − 1 = (1/2^N) Σ_x (−1)^{x_y} ⟨ψ| Â_x ⊗ B̂_y |ψ⟩

Using similar techniques to those in the non-local computation paper [Linden et al (2007)] we define

|φ_y⟩ = (1/2^N) Σ_x (−1)^{x_y} (Â_x ⊗ 1) |ψ⟩

and note that E_y = ⟨ψ| (1 ⊗ B̂_y) |φ_y⟩, so E_y² ≤ ⟨φ_y|φ_y⟩. Extending the sum to all 2^N sign functions (−1)^{x·s} and using Â_x² = 1 gives Σ_s ⟨φ_s|φ_s⟩ = 1, and hence Σ_y ⟨φ_y|φ_y⟩ ≤ 1.

Page 16: Perspectives on  Information Causality

Hence we obtain the quantum bound

Σ_{y=1}^{N} E_y² ≤ 1

This is easily saturated classically (a = x_1, b = 0). With this figure of merit, quantum theory is no better than classical. Yet with general non-signalling correlations the sum can equal N.

It is stronger than the bound given by Information Causality (Σ_y E_y² ≤ 2 ln 2).

Furthermore, any set of biases E_y satisfying Σ_y E_y² ≤ 1 is quantum realizable. This bound therefore characterizes the achievable set of biases more comprehensively than Information Causality.
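As a small check (not on the slides) of the two extreme cases mentioned above: the classical strategy a = x_1, b = 0 gives E_1 = 1 and E_y = 0 otherwise, while a box with a ⊕ b = x_y on every run would give every E_y = 1.

    import itertools

    N = 4
    biases = []
    for y in range(N):
        # Classical strategy: a = x_1, b = 0, so a XOR b = x_1 always.
        wins = sum((x[0] ^ 0) == x[y] for x in itertools.product((0, 1), repeat=N))
        biases.append(2 * wins / 2**N - 1)
    print(sum(E**2 for E in biases))        # 1.0: the classical strategy saturates the bound
    print(sum(1.0**2 for _ in range(N)))    # N: the box-world value (every E_y = 1)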

Page 17: Perspectives on  Information Causality

When we set all E_y equal, then E_y = 1/√N, and we achieve

As this non-local game is at least as hard as the original, we can achieve the previously known upper bound on the success probability of the (m=1) Information Causality game for all N.

We can easily extend the proof to get quadratic bounds for a more general class of inner product games

p_Q = (1/2)(1 + 1/√N)

Page 18: Perspectives on  Information Causality

Inner product game (with Bob’s input having any distribution)

When Bob’s bit string is restricted to contain a single 1, this implies the Information Causality result. When N=1, it yields Tsirelson’s bound, and the stronger quadratic version [Uffink 2002]

Setup: Alice holds N random bits x1 ... xN and outputs a; Bob holds N bits y1 ... yN (drawn from any distribution) and outputs b.

Task: a ⊕ b = x·y

Quantum bound: Σ_y E_y² ≤ 1 (summed over all of Bob's inputs y)

Page 19: Perspectives on  Information Causality

Summary of probabilistic perspective

The form of the mutual information does not seem crucial in deriving Tsirelson’s bound from Information Causality.

Instead, quadratic bias bounds seem to naturally characterise quantum correlations. The inner product game with figure of merit Σ_y E_y² is another task for which quantum theory is no better than classical, but which slightly stronger correlations help with.

Page 20: Perspectives on  Information Causality

I.C. - An entropic perspective

The key role of the mutual information is in deriving Information Causality. The bound J ≤ m follows from the existence of a mutual information I(X:Y) for all systems X, Y, satisfying:

1. Symmetry: I(X:Y) = I(Y:X)
2. Consistency: I(X:Y) = the classical mutual information when X, Y are classical
3. Data Processing: I(X:Y) ≥ I(X:T(Y)) for any transformation T
4. Chain Rule: I(XY:Z) − I(X:Z) = I(Y:XZ) − I(X:Y)

(Plus the existence of some natural transformations)
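As a sanity check (my illustration), the classical Shannon mutual information does satisfy the chain-rule identity above; the sketch below verifies it on random three-bit joint distributions.

    import numpy as np

    rng = np.random.default_rng(0)
    H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))   # Shannon entropy in bits

    def marginal(joint, keep):
        drop = tuple(i for i in range(joint.ndim) if i not in keep)
        return joint.sum(axis=drop) if drop else joint

    def I(joint, left, right):
        """Mutual information between the axis-groups `left` and `right`."""
        return (H(marginal(joint, left)) + H(marginal(joint, right))
                - H(marginal(joint, left + right)))

    X, Y, Z = (0,), (1,), (2,)
    for _ in range(5):
        p = rng.random((2, 2, 2))
        p /= p.sum()                          # random joint distribution of (X, Y, Z)
        lhs = I(p, X + Y, Z) - I(p, X, Z)     # both sides equal I(Y:Z|X)
        rhs = I(p, Y, X + Z) - I(p, X, Y)
        print(np.isclose(lhs, rhs))           # True every time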

Page 21: Perspectives on  Information Causality

But mutual information is a complicated quantity (two arguments), and this list of properties is quite extensive.

Instead, we can derive Information Causality from the existence of an entropy H(X), defined for all systems X in the theory, satisfying just 2 conditions:

1. Consistency: H(X) = the Shannon entropy when X is classical
2. Local Evolution: ΔH(XY) ≥ ΔH(X) + ΔH(Y) for any local transformations on X and Y

The intuition behind the 2nd condition is that local transformations can destroy but not create correlations, generally leading to more uncertainty than their local effect.
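For classical systems this condition can be checked directly (my illustration, not from the talk): applying a random stochastic map to X alone never decreases H(XY) by more than it decreases H(X), while H(Y) is unchanged.

    import numpy as np

    rng = np.random.default_rng(1)
    H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))

    for _ in range(5):
        p_xy = rng.random((3, 3)); p_xy /= p_xy.sum()   # joint distribution of (X, Y)
        T = rng.random((3, 3)); T /= T.sum(axis=0)      # column-stochastic map acting on X
        q_xy = T @ p_xy                                 # local transformation of X only
        dH_XY = H(q_xy) - H(p_xy)
        dH_X = H(q_xy.sum(axis=1)) - H(p_xy.sum(axis=1))
        dH_Y = H(q_xy.sum(axis=0)) - H(p_xy.sum(axis=0))   # = 0: Y's marginal is untouched
        print(dH_XY >= dH_X + dH_Y - 1e-12)                # True every time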

Page 22: Perspectives on  Information Causality

To derive information causality, we can use H to construct a measure of mutual information I(X:Y)=H(X)+H(Y) – H(XY), then use the original proof. The desired properties of I(X:Y) follow simply

1. Symmetry: trivial
2. Consistency: from consistency of H(X)
3. Data Processing: equivalent to Local Evolution of H(X)
4. Chain Rule: trivial

Hence, Information Causality holds in any theory which admits a ‘good’ measure of entropy, i.e. one which obeys Consistency and Local Evolution. The Shannon and von Neumann entropies are both ‘good’.

Page 23: Perspectives on  Information Causality

We can prove that any `good’ entropy shares the following standard properties of the Shannon and Von Neumann entropies:

Subadditivity: H(XY) ≤ H(X) + H(Y)
Strong subadditivity: H(X1X2|Y) ≤ H(X1|Y) + H(X2|Y)
Classical positivity: H(X|Y) ≥ 0 whenever X is classical

(where we have defined the conditional entropy H(X|Y) = H(XY) − H(Y))
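These three properties are easy to confirm for the Shannon entropy on random classical distributions (illustration only, not part of the talk):

    import numpy as np

    rng = np.random.default_rng(2)
    H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))

    for _ in range(5):
        p = rng.random((2, 2, 2)); p /= p.sum()       # joint distribution of (X1, X2, Y)
        H_X1, H_X2 = H(p.sum(axis=(1, 2))), H(p.sum(axis=(0, 2)))
        H_X1X2, H_Y = H(p.sum(axis=2)), H(p.sum(axis=(0, 1)))
        H_X1Y, H_X2Y, H_all = H(p.sum(axis=1)), H(p.sum(axis=0)), H(p)
        print(H_X1X2 <= H_X1 + H_X2 + 1e-12,          # subadditivity
              H_all + H_Y <= H_X1Y + H_X2Y + 1e-12,   # strong subadditivity (conditional form)
              H_all - H_X1Y >= -1e-12)                # classical positivity: H(X2|X1,Y) >= 0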

Instead of proceeding via the mutual information, we can use these relations to derive information causality directly.

Page 24: Perspectives on  Information Causality

This actually allows us to prove a slight generalisation of Information Causality:

Σ_{y=1}^{N} H(x_y | b_y) ≥ H(x) − m

This generalized form of Information Causality makes no assumptions about the distribution on Alice's inputs x1 ... xN.

The intuition here is that the uncertainty Bob has about Alice's bits at the end of the game must be at least the original uncertainty about her inputs minus the information carried by the m-bit message.
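For example (a check, not on the slide): if Alice's inputs are uniform and she simply sends her first m bits, then H(x_y|b_y) = 0 for y ≤ m and H(x_y|b_y) = 1 for y > m, so Σ_y H(x_y|b_y) = N − m = H(x) − m, saturating this generalized bound.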

Page 25: Perspectives on  Information Causality

Entropy in general probabilistic theories

We can define an entropy operationally in any theory [Short, Wehner / Barrett et al. / Kimura et al. (2010) ]

Measurement entropy: H(X) is the minimal Shannon entropy of the outputs for a fine-grained measurement on X

Decomposition entropy: H(X) is the minimal Shannon entropy of the coefficients when X is written as a mixture of pure states.

These both obey consistency, and give the von Neumann entropy for quantum theory. However, for many theories they violate local evolution.
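As a small illustration of the first definition in the quantum case (my sketch, with an arbitrarily chosen qubit state): scanning projective measurements of a qubit, the minimal outcome entropy is attained in the eigenbasis and reproduces the von Neumann entropy.

    import numpy as np

    H = lambda p: float(-np.sum(p[p > 1e-12] * np.log2(p[p > 1e-12])))

    rho = np.array([[0.8, 0.3], [0.3, 0.2]])             # an example qubit density matrix
    von_neumann = H(np.linalg.eigvalsh(rho))

    outcome_entropies = []
    for theta in np.linspace(0, np.pi, 721):              # scan projective measurement bases
        v0 = np.array([np.cos(theta), np.sin(theta)])
        v1 = np.array([-np.sin(theta), np.cos(theta)])
        probs = np.array([v0 @ rho @ v0, v1 @ rho @ v1])  # outcome distribution in this basis
        outcome_entropies.append(H(probs))

    print(von_neumann, min(outcome_entropies))            # minimum over bases ~ von Neumann entropy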

Page 26: Perspectives on  Information Causality

Entropy and Tsirelson’s bound (also in Dahlsten et al. 2011)

Finally note that due to information causality,

Existence of a ‘good’ entropy ⟹ Tsirelson's bound

The existence of a `good’ measure of entropy seems like a very general property, yet remarkably it leads to a very specific piece of quantum structure.

This also means that no ‘good’ measure of entropy exists in physical theories more nonlocal than Tsirelson’s bound, (such as box-world, which admits all non-signalling correlations).

Page 27: Perspectives on  Information Causality

Summary and open questions

Quantum theory satisfies and saturates a simple quadratic bias bound Σ_y E_y² ≤ 1 for the Inner Product and Information Causality games, which generalises Tsirelson's bound. Can we find other similar quadratic bounds?

The existence of a ‘good’ measure of entropy in a theory (satisfying just 2 properties) is sufficient to derive information causality and Tsirelson’s bound. Is quantum theory the most general case with such an entropy? Is there a connection to thermodynamics?