On worst-case to average-case reductions for NP


Danny Gutfreund (Harvard), Ronen Shaltiel (Haifa U.), and Amnon Ta-Shma (Tel-Aviv U.)

Negative results

Thm [BT, FF]: If PH does not collapse, then there is no non-adaptive reduction from solving SAT to solving (L,U), for any L in NP.

Reduction = Turing reduction

A computational task A reduces to a computational task B if there exists an efficient oracle machine R such that, for every oracle O, if O solves B then R^O solves A.
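As an illustration (a sketch with hypothetical names, not anything from the talk), a reduction can be pictured as an efficient procedure that receives the oracle O as an opaque callable: it may query O, but it never inspects O's code. This sketch happens to be non-adaptive, since all queries are fixed before any answer arrives.

from typing import Callable, List

# Sketch with hypothetical names. An oracle O answers instances of task B;
# the reduction R solves task A while treating O as a black box.
Oracle = Callable[[str], bool]

def R(x: str, O: Oracle) -> bool:
    """Solve task A on input x, given black-box access to a solver O for B."""
    # Non-adaptive flavor: all queries are determined before any answer arrives.
    queries: List[str] = [make_query(x, i) for i in range(3)]
    answers = [O(q) for q in queries]
    return postprocess(x, answers)

def make_query(x: str, i: int) -> str:
    # Hypothetical transformation of A-instances into B-instances.
    return f"{x}#{i}"

def postprocess(x: str, answers: List[bool]) -> bool:
    # Hypothetical combination of the oracle's answers.
    return any(answers)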

The [BT] result generalizes

Thm: If PH does not collapse*, then there is no non-adaptive reduction from solving SAT to solving (SAT,D), where D is any distribution samplable in quasi-polynomial time.

Search to decision reduction

R^B is the search algorithm for SAT, using B as a decision algorithm for SAT.

Note: this can be made a non-adaptive reduction.
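Here is a minimal sketch of the textbook search-to-decision self-reduction for SAT (the adaptive version; per the note above, a non-adaptive variant also exists). The CNF encoding and helper names are my own, not from the talk; the oracle B is assumed to treat an empty formula as satisfiable and a formula containing an empty clause as unsatisfiable.

from typing import Callable, Dict, List, Optional

# A CNF formula is a list of clauses; a clause is a list of signed variable
# indices, e.g. [[1, -2], [2, 3]] means (x1 OR NOT x2) AND (x2 OR x3).
CNF = List[List[int]]
DecisionOracle = Callable[[CNF], bool]  # B: "is this formula satisfiable?"

def fix_var(phi: CNF, var: int, value: bool) -> CNF:
    """Substitute x_var := value and simplify the formula."""
    satisfied = var if value else -var
    out = []
    for clause in phi:
        if satisfied in clause:
            continue  # clause already satisfied; drop it
        out.append([lit for lit in clause if abs(lit) != var])
    return out  # an empty clause in `out` means "unsatisfiable"

def search_sat(phi: CNF, num_vars: int, B: DecisionOracle) -> Optional[Dict[int, bool]]:
    """Find a satisfying assignment for phi using B only as a decision oracle."""
    if not B(phi):
        return None
    assignment: Dict[int, bool] = {}
    for v in range(1, num_vars + 1):
        # Adaptive queries: each fixed variable changes the next query.
        value = B(fix_var(phi, v, True))
        phi = fix_var(phi, v, value)
        assignment[v] = value
    return assignment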

GST

Thm [GST]: There exists a distribution D samplable in quasi-polynomial time such that, if B_SAT is a probabilistic polynomial-time algorithm solving (SAT,D) well on the average, then R^{B_SAT} solves SAT.

In other words

There is a reduction from solving SAT to solving (SAT,D), where D is some distribution samplable in quasi-polynomial time.
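The slides leave "well on the average" informal; one standard way to formalize it (my choice of formalization, not a quote from the talk) is:

% B solves (SAT, D) well on the average with error \delta if,
% for all large enough n,
\Pr_{x \sim D_n}\big[\, B(x) = \mathrm{SAT}(x) \,\big] \;\ge\; 1 - \delta(n).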

Reductions again

When we say "reduction" we mean two things:

- We mean that R has black-box access to O (the algorithm that solves B).
- We mean that R^O is correct whenever O is.

More on reductions

The first condition tells us that the reduction itself does not need to know how the algorithm solving B actually operates.

The second condition tells us that the correctness proof does not need to know how that algorithm operates.

These are two separate issues!

Class Reduction

A computational task A C-reduces to a computational task B if there exists an efficient oracle machine R such that, for every oracle O in C, if O solves B then R^O solves A.
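The only change from the plain Turing reduction is the quantifier over oracles; side by side (a paraphrase of the two definitions above):

% reduction:         \exists R \;\forall O:        \; O \text{ solves } B \;\Rightarrow\; R^{O} \text{ solves } A
% C-class reduction: \exists R \;\forall O \in C:  \; O \text{ solves } B \;\Rightarrow\; R^{O} \text{ solves } A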

We saw

If PH does not collapse*:

- One cannot achieve the GST reduction with a non-adaptive reduction, but
- One can achieve the GST reduction with a non-adaptive, BPP-class reduction.

So

Now that we have no negative results to stop us, can we make progress on the worst-case to average-case problem for NP?

The cryptographic goal

Prove a polynomial-time reduction from SAT to (L,D), for some L in NP and some polynomially samplable D.

[IL]

If (L,D) is average-case hard for some L in NP and some samplable D, then (L',U) is average-case hard for some L' in NP.
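In symbols (paraphrasing the statement above; U denotes the uniform distribution):

\big(\exists L \in \mathrm{NP},\ \text{samplable } D:\ (L, D) \text{ is avg-hard}\big)
\;\Longrightarrow\;
\big(\exists L' \in \mathrm{NP}:\ (L', U) \text{ is avg-hard}\big)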

A more modest goal

Prove a polynomial-time class reduction from SAT to (L,U), for L in NTime(t(n)).

- t(n) = n^c: the cryptographic setting
- t(n) = super-poly(n): the complexity setting

NOT KNOWN for any sub-exponential t(n).

Have vs. Want

Have: a polynomial-time class reduction from SAT to (SAT,D), for D samplable in super-polynomial time.

Want: a polynomial-time class reduction from SAT to (L,U), for L in ÑP (nondeterministic quasi-polynomial time).

Idea: use [IL]

SAT not in BPP
⇒ (SAT,D) not in AvgBPP, where D is samplable in super-polynomial time [GST]
⇒ (L,U) not in AvgBPP, where L is decidable in super-polynomial nondeterministic time [IL]

Problem

The reduction time depends on the complexity of D, so this is not useful: we only get an algorithm for (SAT,D) that takes more resources than D itself, and such an algorithm does not contradict [GST].

The main theorem

Under a weak derandomization assumption:

Thm: There exists L in ÑP s.t.

NP ⊄ BPP ⇒ (L,U) ∉ Avg_{1/2−o(1)}BPP

The Assumption in detail

For every c and every probabilistic polynomial-time algorithm A using n^c coins, there exists a probabilistic polynomial-time algorithm A' using only n coins, s.t. for every samplable distribution D:

Pr_{x ∼ D} [ |A(x) − A'(x)| ≥ 1/10 ] ≤ 1/10
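Reading A(x) as A's acceptance probability over its internal coins (my reading; the slide leaves this implicit), the assumption says:

% \forall c \;\; \forall A \ (\text{prob. poly-time, } n^c \text{ coins})
% \;\; \exists A' \ (\text{prob. poly-time, } n \text{ coins})
% \text{ s.t. for every samplable } D:
\Pr_{x \sim D}\Big[\;\big|\Pr_r[A(x,r)=1] - \Pr_{r'}[A'(x,r')=1]\big| \;\ge\; \tfrac{1}{10}\;\Big] \;\le\; \tfrac{1}{10}.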

The proof

In spite of all, let us use [IL].

Observation: the complexity of the reduction can be made to depend on the number of coins of D, and not on the running time of D.

The new language depends on the running time of D.

The main idea

While we do not know how to save on time, we believe we can save on random coins.

Use the derandomization assumption to reduce the number of coins of D.
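As a toy picture of "saving on coins" (hypothetical names throughout; the talk gets the coin-efficient substitute from the derandomization assumption, not from an explicit generator), the point is to replace a sampler that uses n^c random bits with one that uses only n:

import random
from typing import Callable, List

Bits = List[int]
Sampler = Callable[[Bits], str]  # D: random coins -> sampled instance

def few_coin_sample(D: Sampler, stretch: Callable[[Bits], Bits], n: int) -> str:
    """Hypothetical sketch: sample (approximately) from D using only n coins.
    `stretch` stands in for whatever coin-efficient substitute the
    derandomization assumption provides, mapping n bits to the n^c bits
    that the original sampler D expects."""
    short_coins = [random.randrange(2) for _ in range(n)]
    long_coins = stretch(short_coins)  # n bits -> n^c bits
    return D(long_coins)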

The reality

It works, but it takes effort. We need to derandomize procedures that output non-Boolean values, something we usually do not know how to do.

This forces us to go back to [GST] and modify the proof to get the derandomized version.

Summary

Negative results showed that there are no non-adaptive worst-case to average-case reductions (unless PH collapses).

We show class reductions exist, where regular reductions are ruled out.

Can we now solve the complexity version of the worst-case to average-case reduction for NP?