On Uniform Amplification of Hardness in NP, Luca Trevisan, STOC 05. Paper Review Presented by Hai Xu.
Posted: 20-Dec-2015
Uniform Algorithm
The success probability of a uniform algorithm is averaged over random inputs and over the internal coin tosses of the algorithm.
Amplification of Hardness
Starting from a problem that is known (or assumed) to be hard on average in a weak sense, we can define a related new problem which is hard on average in the strongest possible sense.
Yao’s XOR Lemma
From a Boolean function f: {0,1}^n → {0,1}, we define a new function f^k(x_1, …, x_k) = f(x_1) ⊕ … ⊕ f(x_k).
Yao’s XOR Lemma says that if every circuit of size ≤ S makes at least a δ fraction of errors in computing f(x) for a random x, then every circuit of size ≤ S·poly(δε/k) makes at least a 1/2 − ε fraction of errors in computing f^k(), where ε is roughly Ω((1 − δ)^k).
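As an illustrative sketch only (the lemma itself is a statement about circuit size, not independent errors): if we model an approximate computation of f as erring independently with probability δ on each call, then the XOR of k calls errs with probability (1 − (1 − 2δ)^k)/2, which drifts toward 1/2 as k grows. A minimal Python sketch, where the choice of f and the independent-error model are assumptions for illustration:

```python
# Sketch of the XOR construction f^k and the independent-errors heuristic
# behind hardness amplification. The real XOR Lemma bounds circuit size;
# this only illustrates how a per-call error delta drifts toward 1/2.

def xor_k(f, xs):
    """f^k(x1,...,xk) = f(x1) XOR ... XOR f(xk)."""
    acc = 0
    for x in xs:
        acc ^= f(x)
    return acc

def xor_error(delta, k):
    """Error probability of XOR-ing k answers that each err
    independently with probability delta: (1 - (1-2*delta)**k) / 2."""
    return (1 - (1 - 2 * delta) ** k) / 2

# Example: a 10%-error oracle, XOR-ed over 50 inputs, is nearly useless.
print(round(xor_error(0.10, 1), 4))   # 0.1
print(round(xor_error(0.10, 50), 4))  # 0.5
```

Note that the error approaches 1/2 exponentially fast in k, which is why ε in the lemma behaves like (1 − δ)^k up to constants.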
Amplification of Hardness in NP
Want to prove: if L is a language in NP such that every efficient algorithm (or small family of circuits) errs on at least a 1/poly(n) fraction of inputs of length n, then there is a language L’ also in NP such that every efficient algorithm (or small circuit) errs on a 1/2 − 1/n^{Ω(1)} fraction of inputs.
Yao’s XOR Lemma cannot prove this directly
Previous Work
O’Donnell proved that for every balanced Boolean function f: {0,1}^n → {0,1} and parameters ε, δ > 0, there is an integer k = poly(1/ε, 1/δ) and a monotone function g: {0,1}^k → {0,1} with the following property: if every circuit of size S makes at least a δ fraction of errors in computing f(x) given x, then every circuit of size S·poly(ε, δ) makes at least a 1/2 − ε fraction of errors in computing f_{g,k}(x_1, …, x_k) = g(f(x_1), …, f(x_k)) given (x_1, …, x_k). Equivalently, if some circuit of size S computes f_{g,k} with fewer than a 1/2 − ε fraction of errors, then there is a circuit of size poly(1/ε, 1/δ)·S that makes at most a δ fraction of errors in computing f(x).
Balanced Problems
This proof only works for balanced decision problems, i.e., problems where a random instance of a given length n is a YES instance with probability 1/2 and a NO instance with probability 1/2.
Some Improvement
For balanced problems, Dr. O’Donnell proved amplification of hardness from 1 − 1/poly(n) to 1/2 + 1/n^{1/2−ε}.
For general problems, he proved amplification of hardness from 1 − 1/poly(n) to 1/2 + 1/n^{1/3−ε}.
For balanced problems, Healy et al. improved the amplification range from 1 − 1/poly(n) to 1/2 + 1/poly(n).
Dr. Trevisan’s Previous Contribution
In FOCS 03, Dr. Trevisan proved: if for every language L in NP there is a probabilistic poly-time algorithm that accepts with probability ≥ 3/4 + 1/(log n)^α on inputs of length n, then this range can be extended to 1 − 1/p(n). He improved the amplification range from 1 − 1/(log n)^α to 3/4 + 1/(log n)^α, where α > 0 is a constant.
Major Contribution of This Paper
The uniform analysis of amplification of hardness using the majority function
Proved the lemma: if for every language L in NP there is a probabilistic poly-time algorithm that accepts with probability ≥ 1/2 + 1/(log n)^α on inputs of length n, then for every language in NP and every polynomial p, there is a probabilistic poly-time algorithm that succeeds with probability 1 − 1/p(n) on inputs of length n, where α > 0 is a constant.
Majority Function
If L ∈ NP has inputs of length n and f: {0,1}^n → {0,1} is its characteristic Boolean function, then for an odd integer k we define:
f^maj_k(x_1, …, x_k) := majority(f(x_1), …, f(x_k))
If f is balanced, then f^maj_k() is still a balanced function.
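A minimal Python sketch of the majority construction (the name `f_maj_k` and the toy choice of f as 2-bit parity are assumptions for illustration); it exhaustively checks that an odd-arity majority of a balanced f remains balanced:

```python
from itertools import product

def maj(bits):
    """Majority of an odd number of bits."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def f_maj_k(f, xs):
    """f^maj_k(x1,...,xk) = majority(f(x1),...,f(xk))."""
    return maj([f(x) for x in xs])

# Check: if f is balanced, f^maj_k stays balanced (odd k).
n, k = 2, 3
f = lambda x: x[0] ^ x[1]                 # parity: balanced on {0,1}^2
inputs = list(product([0, 1], repeat=n))  # all 4 inputs of length n
tuples = list(product(inputs, repeat=k))  # all 4^3 = 64 input k-tuples
ones = sum(f_maj_k(f, xs) for xs in tuples)
print(ones, len(tuples))  # 32 64 -> exactly half are YES instances
```

Since each f(x_i) is an unbiased bit when f is balanced, the majority of an odd number of them is 1 on exactly half of the tuples, matching the claim above.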
Proof Main Idea I
If, for every problem in NP, there is an efficient algorithm that solves the problem on a 1 − ε fraction of inputs of length n, then, for every problem in NP, there is an efficient algorithm that solves the problem on a 1 − 1/p(n) fraction of inputs of length n, using a small amount of non-uniformity.
Proof Main Idea II
Based on the balanced function f(), a new function F: {0,1}^{O(n)} → {0,1} is introduced. If an efficient algorithm can solve F on a 1 − ε fraction of inputs, then there is an efficient algorithm that can solve f on a 1 − 1/n^t fraction of inputs (ε is a positive constant). To simplify the proof procedure, t is set to 1/5 in this paper.
Proof Main Idea III
For every search problem in NP and every polynomial p(), there is an efficient algorithm that solves the search problem on a 1 − 1/p(n) fraction of inputs of length n, using a small amount of non-uniformity.
Eliminating the non-uniformity
Detailed Proof of II
We want to prove: if L is in NP, L’ has input length bounded by a polynomial computable function l(n), and a circuit C’ solves L’ on an α ≥ 1 − ε fraction of inputs of length l(n), then another circuit C can solve L on at least a 1 − 2/n^{1/5} fraction of inputs of length n.
Parameter Settings
t = 1/5 and α > δ. Then we further define, recursively:
δ_0 = 1/n^{1/5}, f_0 = f
δ_{i+1} = δ_i^{7/8}
k_{i+1} = δ_i^{−2/7} (rounded to an odd integer)
f_{i+1} = (f_i)^{maj k_{i+1}}, with input length n_{i+1} = k_{i+1}·n_i
Proof
Solving for δ_i and k_i recursively, we obtain:
δ_i = δ_0^{(7/8)^i} and k_i = δ_0^{−(2/7)·(7/8)^{i−1}}, where δ_0 = 1/n^{1/5}
Let r be the largest index such that δ_r < α. Since δ_{r+1} = δ_r^{7/8} ≥ α, we could know δ_r ≥ α^{8/7}.
We also could know that the input length of f_r is:
n_r = n·k_1·k_2⋯k_r ≤ n·n^{(1/5)·(2/7)·(1 + 7/8 + (7/8)^2 + ⋯)} = n·n^{(1/5)·(16/7)} = n·n^{16/35}
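As a quick numeric sanity check of the series behind the n^{16/35} bound (assuming the recursion δ_{i+1} = δ_i^{7/8}, k_{i+1} = δ_i^{−2/7}, δ_0 = n^{−1/5}, as reconstructed from the slides): the exponent of n contributed by the product K = k_1⋯k_r is a geometric series converging to 16/35.

```python
# Exponent of n contributed by each k_i:
#   (1/5) * (2/7) * (7/8)**(i-1) = (2/35) * (7/8)**(i-1).
# Their sum is geometric and converges to (2/35) / (1 - 7/8) = 16/35.
terms = [(2 / 35) * (7 / 8) ** i for i in range(50)]
total = sum(terms)                   # exponent of n in K = k_1 * ... * k_50
limit = (2 / 35) / (1 - 7 / 8)       # closed form of the infinite series
print(round(total, 4), round(limit, 4))  # both close to 16/35 ≈ 0.4571
```

This is why K stays polynomially bounded even though the majority construction is applied r times.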
Proof cont’d
Based on the majority function, we can let:
f_r(x_1, …, x_K) = g(f(x_1), …, f(x_K)), where g() is the recursively composed majority function and K = k_1⋯k_r ≤ n^{16/35}
By recursively applying a lemma, we are able to obtain a circuit C_0 that agrees with f on at least a 1 − 1/n^{1/5} fraction of inputs.
Proof cont’d
Now we need to create another circuit C_r with input length nK, where K ≤ n^{16/35}. Then C_r should agree with f_r on a 1 − 1/(nK) fraction of inputs.
Furthermore, we could construct a circuit C which agrees with f on at least a 1 − 1/n^{1/5} fraction of inputs.
Finally, we conclude that C agrees with L on at least a (1 − 1/n^{1/5})·(1 − 1/n) ≥ 1 − 2/n^{1/5} fraction of inputs.
Conclusion
With this proof, we are able to amplify a small fraction of errors into a much larger fraction of errors.
It generalizes the amplification of hardness for problems in NP.
Input length is an important factor in determining the success probability.