Lower Bounds In Kernelization

why some problems are incompressible: On the Infeasibility of Obtaining Polynomial Kernels

Transcript of Lower Bounds In Kernelization

Page 1: Lower Bounds In Kernelization

why some problems are incompressible

On the Infeasibility of Obtaining Polynomial Kernels

Pages 2–3: Lower Bounds In Kernelization

PROBLEM KERNELS: what, why, how; and when there are no polynomial kernels.

Examples: satisfiability, disjoint factors, vertex-disjoint cycles.

Workarounds: many kernels.

Pages 4–5: Lower Bounds In Kernelization

There are people at a professor's party.

The professor would like to locate a group of six people who are popular, i.e., everyone is a friend of at least one of the six.

Being a busy man, he asks two of his students to find such a group.

The first grimaces and starts making a list of all (256 choose 6) possibilities.

The second knows that no one in the party has more than three friends. Academicians tend to be lonely.

Pages 6–11: Lower Bounds In Kernelization

There is a graph on n vertices. Is there a dominating set of size at most k? (Every vertex is a neighbor of at least one of the k vertices.)

The problem is NP–complete, but is trivial on “large” graphs of bounded degree, as you can say NO whenever n > kb.
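To make the bounded-degree shortcut concrete, here is a minimal sketch in Python; the function name and the adjacency-dict input are illustrative. Note that a vertex of degree at most b dominates itself and at most b neighbours, so the sketch rejects when n > k(b + 1), a slightly more careful threshold than the slide's n > kb.

def kernelize_bounded_degree_dominating_set(adj, k, b):
    """adj: dict mapping each vertex to the set of its neighbours (max degree b)."""
    n = len(adj)
    # k vertices dominate at most k * (b + 1) vertices (each covers itself
    # and at most b neighbours), so larger graphs are NO-instances.
    if n > k * (b + 1):
        return {0: set(), 1: set()}, 0   # a constant-size NO-instance
    return adj, k                        # n <= k * (b + 1): size bounded in k for fixed b

Instances that survive the test have at most k(b + 1) vertices, which is the linear kernel mentioned on Page 34.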

Page 12: Lower Bounds In Kernelization

Pre-processing is a humble strategy for coping with hard problems, almost universally employed.

Mike Fellows

Page 13: Lower Bounds In Kernelization

We would like to talk about the “preprocessing complexity” of a problem.

To what extent may we simplify/compress a problem before we begin to solve it?

Note that any compression routine has to run efficiently to look attractive.

Page 14: Lower Bounds In Kernelization

This makes it unreasonable for any NP–complete problem to admit compression algorithms.

However, (some) NP–complete problems can be compressed.

We extend our notion (and notation) to understand them.

Page 15: Lower Bounds In Kernelization

Notation

We denote a parameterized problem as a pair (Q, κ) consisting of a classical problem Q ⊆ {0, 1}∗ and a parameterization κ : {0, 1}∗ → N.

Page 16: Lower Bounds In Kernelization

Data reduction involves pruning down a large input into an equivalent, significantly smaller object, quickly.

Pages 17–24: Lower Bounds In Kernelization

A kernelization procedure is a function f : {0, 1}∗ × N → {0, 1}∗ × N such that, for all (x, k) with |x| = n:

(f(x), k′) ∈ L iff (x, k) ∈ L,

|f(x)| ≤ g(k),

and f is polytime computable.

Pages 25–27: Lower Bounds In Kernelization

Theorem
A problem admits a kernel if, and only if, it is fixed–parameter tractable.

Definition
A parameterized problem L is fixed-parameter tractable if there exists an algorithm that decides in f(k) · n^{O(1)} time whether (x, k) ∈ L, where n := |x|, k := κ(x), and f is a computable function that does not depend on n.

Pages 28–30: Lower Bounds In Kernelization

Theorem
A problem admits a kernel if, and only if, it is fixed–parameter tractable.

Given a kernel, an FPT algorithm is immediate (even brute force on the kernel will lead to such an algorithm).

On the other hand, an FPT runtime of f(k) · n^c gives us an f(k)–sized kernel. We run the algorithm for n^{c+1} steps and either have a trivial kernel if the algorithm stops, or else

n^{c+1} < f(k) · n^c, that is, n < f(k),

so the instance itself is a kernel of size less than f(k).
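A minimal sketch of this direction of the argument, assuming a hypothetical step-bounded runner fpt_decide(instance, k, step_budget) that returns True or False if it finishes within the budget and None otherwise; yes_instance and no_instance stand for fixed constant-size equivalent instances.

def kernel_from_fpt(instance, k, n, c, fpt_decide, yes_instance, no_instance):
    # Run the f(k) * n**c algorithm for n**(c + 1) steps.
    answer = fpt_decide(instance, k, step_budget=n ** (c + 1))
    if answer is True:
        return yes_instance   # constant-size YES-instance: a trivial kernel
    if answer is False:
        return no_instance    # constant-size NO-instance: a trivial kernel
    # Budget exceeded, so n**(c + 1) < f(k) * n**c, i.e. n < f(k):
    # the input itself already has size bounded by f(k).
    return instance, k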

Pages 31–33: Lower Bounds In Kernelization

“Efficient” Kernelization

What is a reasonable notion of efficiency for kernelization? The smaller, the better.

In particular, polynomial–sized kernels are better than exponential–sized kernels.

Page 34: Lower Bounds In Kernelization

The problem of finding a Dominating Set of size k on graphs where the degree is bounded by b, parameterized by k, has a linear kernel. This is an example of a polynomial–sized kernel.

Page 35: Lower Bounds In Kernelization

PROBLEM KERNELS: what, why, how; and when there are no polynomial kernels.

Examples: satisfiability, disjoint factors, vertex-disjoint cycles.

Workarounds: many kernels.

Pages 36–39: Lower Bounds In Kernelization

A composition algorithm A for a problem is designed to act as a fast Boolean OR of multiple problem instances.

It receives as input a sequence of instances x = (x1, . . . , xt) with xi ∈ {0, 1}∗ for i ∈ [t], such that k1 = k2 = · · · = kt = k.

It produces as output a yes-instance with a small parameter, k′ = k^{O(1)}, if and only if at least one of the instances in the sequence is also a yes-instance.

Its running time is polynomial in Σi∈[t] |xi|.
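For some problems the composition is simply a disjoint union, as noted later in the deck for k–Leaf Out–Tree; k–Path (which appears in the concluding remarks) is the textbook example, since a path lives inside a single connected component. A minimal sketch under that assumption, with instances given as (number_of_vertices, edge_list) pairs and an illustrative function name:

def compose_by_disjoint_union(instances, k):
    """instances: list of (n_i, edges_i) pairs with vertices labelled 0..n_i-1.
    The disjoint union is a yes-instance iff some input is, and the parameter
    stays k, as the composition framework requires."""
    edges, offset = [], 0
    for n_i, edges_i in instances:
        edges.extend((u + offset, v + offset) for u, v in edges_i)
        offset += n_i
    return (offset, edges), k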

Pages 40–41: Lower Bounds In Kernelization

The Recipe for Hardness

Composition Algorithm + Polynomial Kernel
⇓
Distillation Algorithm
⇓
PH = Σ_3^p

Theorem. Let (P, k) be a compositional parameterized problem such that P is NP-complete. If P has a polynomial kernel, then P also has a distillation algorithm.

Pages 42–43: Lower Bounds In Kernelization

Transformations

Let (P, κ) and (Q, γ) be parameterized problems.

We say that there is a polynomial parameter transformation from P to Q if there exists a polynomial time computable function f : {0, 1}∗ → {0, 1}∗ and a polynomial p : N → N, such that, if f(x) = y, we have:

x ∈ P if and only if y ∈ Q, and

γ(y) ≤ p(κ(x)).

Pages 44–45: Lower Bounds In Kernelization

Theorem: Suppose P is NP-complete, and Q ∈ NP. If f is a polynomial time and parameter transformation from P to Q and Q has a polynomial kernel, then P has a polynomial kernel.

[Figure: an instance x of P is mapped by the PPT reduction to f(x) = y, an instance of Q; the kernelization of Q produces K(y); an NP–completeness reduction maps K(y) back to an instance z of P.]
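A minimal sketch of the three-step pipeline in the figure, with the ingredients assumed to be given as functions (ppt_P_to_Q, kernelize_Q and reduce_Q_to_P are placeholder names): each step runs in polynomial time, so the final instance of P has size polynomial in k.

def polynomial_kernel_for_P(x, k, ppt_P_to_Q, kernelize_Q, reduce_Q_to_P):
    y, k_y = ppt_P_to_Q(x, k)               # PPT reduction: k_y <= p(k)
    y_small, k_small = kernelize_Q(y, k_y)   # polynomial kernel for Q: size poly(k_y) = poly(k)
    z = reduce_Q_to_P((y_small, k_small))    # NP-completeness reduction back to P
    return z                                 # equivalent instance of P of size poly(k)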

Page 46: Lower Bounds In Kernelization

PROBLEM KERNELS: what, why, how; and when there are no polynomial kernels.

Examples: satisfiability, disjoint factors, vertex-disjoint cycles.

Workarounds: many kernels.

Page 47: Lower Bounds In Kernelization

Recall

A composition for a parameterized language (Q, κ) is required to “merge” instances

x1, x2, . . . , xt,

into a single instance x in polynomial time, such that κ(x) is polynomial in k := κ(xi) for any i.

The output of the algorithm belongs to (Q, κ) if, and only if, there exists at least one i ∈ [t] for which xi ∈ (Q, κ).

Pages 48–50: Lower Bounds In Kernelization

A Composition Tree

[Figure: the instances x1, x2, . . . , xt sit at the leaves of a complete binary tree; each internal node merges its two children via ρ, e.g. ρ(x1, x2), ρ(x3, x4), . . . , ρ(xt−1, xt), and the root merges the results a and b of its two subtrees as ρ(a, b).]

General Framework For Composition

Page 51: Lower Bounds In Kernelization

Most composition algorithms can be stated in terms of a single operation, ρ, that describes the dynamic programming over this complete binary tree on t leaves.
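A minimal sketch of this framework: instances are merged pairwise, level by level, by a problem-specific operation rho(left, right, level), which is assumed to be given (for Weighted Satisfiability it is the clause construction on the following pages, for Disjoint Factors the padding with fresh letters).

def compose_over_binary_tree(instances, rho):
    """Merge the instances bottom-up over a complete binary tree."""
    level, j = list(instances), 0
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])   # duplicating an instance preserves the Boolean OR
        level = [rho(level[i], level[i + 1], j) for i in range(0, len(level), 2)]
        j += 1
    return level[0]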

Pages 52–54: Lower Bounds In Kernelization

Weighted Satisfiability

Given a CNF formula φ, is there a satisfying assignment of weight at most k?

Parameter: b + k.

When the length of the longest clause is bounded by b, there is an easy branching algorithm with runtime O(b^k · p(n)).
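A minimal sketch of that branching algorithm, with clauses in DIMACS style (a positive integer v for the literal xv, -v for its negation) and an illustrative function name. A clause falsified by the current partial assignment (chosen variables true, everything else false) has all its negative literals on variables already set to true, so it can only be repaired through one of its at most b positive literals; the search tree therefore has branching factor at most b and depth at most k.

def weighted_sat_at_most_k(clauses, k, true_vars=frozenset()):
    """Is there an assignment setting at most k variables to true (all others
    false) that satisfies every clause? O(b^k * p(n)) for clause length <= b."""
    def satisfied(clause):
        return any((lit > 0 and lit in true_vars) or
                   (lit < 0 and -lit not in true_vars) for lit in clause)

    falsified = next((c for c in clauses if not satisfied(c)), None)
    if falsified is None:
        return True                  # true_vars plus the all-false default works
    if k == 0:
        return False
    # Branch on the at most b positive literals of the falsified clause.
    return any(weighted_sat_at_most_k(clauses, k - 1, true_vars | {lit})
               for lit in falsified if lit > 0)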

Pages 55–61: Lower Bounds In Kernelization

Input: α1, α2, . . . , αt, with κ(αi) = b + k and n := maxi∈[t] ni.

If t > b^k, then solve every αi individually. The total time is t · b^k · p(n) < t · t · p(n), which is polynomial in the total input size, so the composition can simply output a trivial yes- or no-instance.

If not, then t ≤ b^k, and this gives us a bound on the number of instances.

Page 62: Lower Bounds In Kernelization

Level 1

(β1 ∨ b0) ∧ . . . ∧ (βn ∨ b0)        (β′1 ∨ b̄0) ∧ . . . ∧ (β′m ∨ b̄0)

This is the scene at the leaves, where the βj are the clauses in αi for some i and the β′j are the clauses of αi+1.

Page 63: Lower Bounds In Kernelization

Level j

(β1 ∨ b) ∧ . . . ∧ (βn ∨ b)        (β′1 ∨ b̄) ∧ . . . ∧ (β′m ∨ b̄)

This is the scene at level j, where the βj are the clauses stored at the left child and the β′j are the clauses stored at the right child.

Page 64: Lower Bounds In Kernelization

(β1 ∨ b) ∧ (β2 ∨ b) ∧ . . . ∧ (βn ∨ b) ∧ (β′1 ∨ b̄) ∧ (β′2 ∨ b̄) ∧ . . . ∧ (β′m ∨ b̄)

(β1 ∨ b) ∧ . . . ∧ (βn ∨ b)        (β′1 ∨ b̄) ∧ . . . ∧ (β′m ∨ b̄)

Take the conjunction of the formulas stored at the child nodes.

Page 65: Lower Bounds In Kernelization

(β1 ∨ b ∨ bj) ∧ (β2 ∨ b ∨ bj) ∧ . . . ∧ (βn ∨ b ∨ bj) ∧ (β′1 ∨ b̄ ∨ bj) ∧ (β′2 ∨ b̄ ∨ bj) ∧ . . . ∧ (β′m ∨ b̄ ∨ bj)

(β1 ∨ b) ∧ . . . ∧ (βn ∨ b)        (β′1 ∨ b̄) ∧ . . . ∧ (β′m ∨ b̄)

Every clause additionally receives the literal bj if the parent is a “left child”.

Page 66: Lower Bounds In Kernelization

(β1 ∨ b ∨ b̄j) ∧ (β2 ∨ b ∨ b̄j) ∧ . . . ∧ (βn ∨ b ∨ b̄j) ∧ (β′1 ∨ b̄ ∨ b̄j) ∧ (β′2 ∨ b̄ ∨ b̄j) ∧ . . . ∧ (β′m ∨ b̄ ∨ b̄j)

(β1 ∨ b) ∧ . . . ∧ (βn ∨ b)        (β′1 ∨ b̄) ∧ . . . ∧ (β′m ∨ b̄)

Every clause additionally receives the literal b̄j if the parent is a “right child”.

Page 67: Lower Bounds In Kernelization

Adding a suffix to “control the weight”: the composed formula is

α ∧ (c̄0 ∨ b̄0) ∧ (c0 ∨ b0)
  ∧ (c̄1 ∨ b̄1) ∧ (c1 ∨ b1)
  ∧ . . .
  ∧ (c̄i ∨ b̄i) ∧ (ci ∨ bi)
  ∧ . . .
  ∧ (c̄l−1 ∨ b̄l−1) ∧ (cl−1 ∨ bl−1)

Page 68: Lower Bounds In Kernelization

Claim

The composed instance α has a satisfying assignment of weight 2k
⇐⇒
at least one of the input instances admits a satisfying assignment of weight k.

Page 69: Lower Bounds In Kernelization

Proof of Correctness

[Figure: a clause α of some input instance accumulates one control literal per level on its way up the tree: . . . α ∨ b0 . . . , then . . . α ∨ b0 ∨ b1 . . . , then . . . α ∨ b0 ∨ b1 ∨ b̄2 . . . , appearing in the composed formula as . . . ∧ (α ∨ (b0 ∨ b1 ∨ b̄2)) ∧ . . .]

Pages 70–80: Lower Bounds In Kernelization

Disjoint Factors

Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}.

A factor of a word w1 · · · wr ∈ Lk∗ is a substring wi · · · wj, with 1 ≤ i < j ≤ r, which starts and ends with the same letter. Disjoint factors do not overlap in the word.

Example: 123235443513

Are there k disjoint factors? That is, does the word have all the k factors, mutually disjoint?

Parameter: k

Pages 81–84: Lower Bounds In Kernelization

A k! · O(n) algorithm is immediate. A 2^k · p(n) algorithm can be obtained by dynamic programming.

Let t be the number of instances input to the composition algorithm. Again, the non-trivial case is when t < 2^k.

Let w1, w2, . . . , wt be words over Lk.
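A minimal sketch of the 2^k · p(n) dynamic programme mentioned above, written as a memoized recursion over (position, set of letters whose factor is still missing); the word is assumed to use single-character symbols '1', . . . , str(k), and the function name is illustrative.

from functools import lru_cache

def has_all_disjoint_factors(word, k):
    """Does word contain, for every letter in {1, ..., k}, a factor of that
    letter (length >= 2, same first and last symbol), all pairwise disjoint?"""
    letters = frozenset(str(a) for a in range(1, k + 1))   # single-character symbols assumed

    @lru_cache(maxsize=None)
    def solve(i, remaining):
        if not remaining:
            return True                      # every required factor has been placed
        if i >= len(word):
            return False
        if solve(i + 1, remaining):          # position i is not used by any factor
            return True
        a = word[i]
        if a in remaining:                   # try to start the factor for letter a here
            return any(word[j] == a and solve(j + 1, remaining - {a})
                       for j in range(i + 1, len(word)))
        return False

    return solve(0, letters)

There are at most |word| · 2^k states, which gives the 2^k · p(n) bound quoted above (this sketch pays an extra factor of |word| for trying every possible right endpoint).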

Page 85: Lower Bounds In Kernelization

Leaf Nodes

b0w1b0    b0w2b0    . . .    b0wt−1b0    b0wtb0

Page 86: Lower Bounds In Kernelization

Level j

bj bj−1 u bj−1 v bj−1 bj

with children bj−1 u bj−1 and bj−1 v bj−1.

Page 87: Lower Bounds In Kernelization

Claim

The composed word has all the 2k disjoint factors
⇐⇒
at least one of the input instances has all the k disjoint factors.

Pages 88–91: Lower Bounds In Kernelization

Proof of Correctness

[Figure: the composition tree for the input words p, q, r, s, w, x, y, z. The leaves are b0pb0, b0qb0, . . . , b0zb0; the level-1 nodes are words such as b1b0pb0qb0b1; the level-2 nodes are words such as b2b1b0pb0qb0b1b0rb0sb0b1b2; and the root is the composed word b2b1b0pb0qb0b1b0rb0sb0b1b2b1b0wb0xb0b1b0yb0zb0b1b2.]

Pages 92–96: Lower Bounds In Kernelization

Vertex-Disjoint Cycles
Input: G = (V, E)
Question: Are there k vertex–disjoint cycles?
Parameter: k.

Related problems: FVS (Feedback Vertex Set) has an O(k^2) kernel, and Edge–Disjoint Cycles has an O(k^2 log^2 k) kernel.

In contrast, Disjoint Factors transforms into Vertex–Disjoint Cycles in polynomial time.

Pages 97–98: Lower Bounds In Kernelization

[Figure: the graph Gw built from w = 1123343422: a path on the vertices v1, . . . , v10, whose positions carry the letters 1 1 2 3 3 4 3 4 2 2 of w, together with one extra vertex for each letter 1, 2, 3, 4.]

Disjoint Factors ≤ppt Disjoint Cycles

Claim: w has all k disjoint factors ⇐⇒ Gw has k vertex–disjoint cycles.
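A minimal sketch of the transformation, assuming the construction in the figure: one path vertex per position of w, plus one extra vertex per letter joined to every position carrying that letter. The parameter is unchanged, so this is a polynomial parameter transformation; the function name and the edge-list output are illustrative.

def disjoint_factors_to_disjoint_cycles(word, k):
    """Returns (number_of_vertices, edges, k). Path vertices are 0..len(word)-1;
    the extra vertex for letter a is len(word) + a - 1."""
    n = len(word)
    edges = [(i, i + 1) for i in range(n - 1)]            # the path v_1 ... v_n
    for a in range(1, k + 1):                             # single-character symbols assumed
        letter_vertex = n + a - 1
        edges += [(letter_vertex, i) for i in range(n) if word[i] == str(a)]
    return n + k, edges, k

Every cycle must visit at least one of the k extra vertices, so in a packing of k vertex-disjoint cycles each cycle uses exactly one of them; such a cycle consists of the letter vertex for some a, two positions carrying a, and the path segment between them, which is precisely a factor for a.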

Page 99: Lower Bounds In Kernelization

PROBLEM KERNELS: what, why, how; and when there are no polynomial kernels.

Examples: satisfiability, disjoint factors, vertex-disjoint cycles.

Workarounds: many kernels.

Pages 100–102: Lower Bounds In Kernelization

No polynomial kernel?

Look for the best exponential or subexponential kernels...

... or build many polynomial kernels.

Page 103: Lower Bounds In Kernelization

Many polynomial kernels give us better algorithms:

(k^2 choose k) ≈ 2^{k log k}   versus   p(n) · (ck choose k) ≈ p(n) · c^k

Pages 104–108: Lower Bounds In Kernelization

We say that a subdigraph T of a digraph D is an out–tree if T is an oriented tree with only one vertex r of in-degree zero (called the root). It is an out–branching if, in addition, it is spanning.

The Directed Maximum Leaf Out–Branching problem is to find an out-branching in a given digraph with the maximum number of leaves; its parameterized version, Directed k–Leaf Out–Branching, asks for an out-branching with at least k leaves.

Parameter: k

Pages 109–114: Lower Bounds In Kernelization

Rooted k–Leaf Out–Tree admits a kernel of size O(k^3), and so does Rooted k–Leaf Out–Branching.

k–Leaf Out–Tree does not admit a polynomial kernel (composition via disjoint union), and neither does k–Leaf Out–Branching (proof deferred).

However, each of them clearly admits n polynomial kernels: try all possible choices for the root, and apply the kernel for the rooted variant.
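A minimal sketch of the “many kernels” idea, assuming two hypothetical routines: rooted_kernelize, the known O(k^3) kernelization for the rooted problem, and solve_small, any exact solver that is only ever run on k^{O(1)}-sized instances.

def many_kernels_for_k_leaf_out_tree(digraph, k, rooted_kernelize, solve_small):
    """digraph: dict mapping each vertex to its set of out-neighbours.
    The unrooted problem is a YES-instance iff some rooted instance is."""
    for root in digraph:
        small = rooted_kernelize(digraph, root, k)   # size k^O(1), built in polynomial time
        if solve_small(small):                       # f(k) time on a small instance
            return True
    return False

The total time is n · (poly(n, k) + f(k)), which is why many polynomial kernels still buy a better algorithm even though no single polynomial kernel exists.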

Pages 115–117: Lower Bounds In Kernelization

Figure: The No Polynomial Kernel Argument for k–Leaf Out–Branching.

[Figure: starting from an instance (D, k) of k–Leaf Out–Tree, decompose it into n instances (D1, v1, k), . . . , (Dn, vn, k) of Rooted k–Leaf Out–Tree; apply the known kernelization to obtain (D′1, v1, k), . . . , (D′n, vn, k); reduce each, using NP–completeness, to an instance (Wi, bi) of k–Leaf Out–Branching on a nice willow graph; after simple modifications assume WLOG a common target (Wi, bmax); compose these into a single instance (D′, bmax + 1) of k–Leaf Out–Branching; apply the polynomial kernelization given by assumption to get (D′′, k′′); and finally apply an NP–completeness reduction from k–Leaf Out–Branching back to k–Leaf Out–Tree to obtain (D∗, k∗).]

Page 118: Lower Bounds In Kernelization

concluding remarks

A polynomial pseudo–kernelization K produces kernels whose size can be bounded by

h(k) · n^{1−ε}

where k is the parameter of the problem and h is a polynomial.

Pages 119–120: Lower Bounds In Kernelization

concluding remarks

Analogous to the composition framework, there are algorithms (called Linear OR) whose existence helps us rule out polynomial pseudo–kernels.

Most compositional algorithms can be extended to fit the definition of Linear OR.

Pages 121–123: Lower Bounds In Kernelization

concluding remarks

A strong kernel is one where the parameter of the kernelized instance is at most the parameter of the original instance.

Mechanisms for ruling out strong kernels are simpler and rely on self–reductions and the hypothesis that P ≠ NP.

Example: k–Path is not likely to admit a strong kernel.

Page 124: Lower Bounds In Kernelization

Open Problems

There are no known ways of inferring that a problem is unlikely to have a kernel of size k^c for some specific c.

Page 125: Lower Bounds In Kernelization

Open Problems

Can lower bound theorems be proved under some “well-believed” conjectures of parameterized complexity, for instance FPT ≠ XP, or FPT ≠ W[t] for some t ∈ N+?

Page 126: Lower Bounds In Kernelization

Open Problems

A lower bound framework for ruling out p(k) · f(l)–sized kernels for problems with two parameters (k, l) would be useful.

Page 127: Lower Bounds In Kernelization

Open Problems

“Many polynomial kernels” have been found only for the directed out-branching problem. It would be interesting to apply the technique to other problems that are not expected to have polynomial–sized kernels.

Page 128: Lower Bounds In Kernelization

FIN