CS4018 Formal Models of Computation
weeks 20-23
Computability and Complexity
Kees van Deemter
(partly based on lecture notes by Dirk Nikodem)
Third set of slides: Introduction to complexity
• Making algorithms more efficient
• Analysing algorithms using big-Oh notation. Function domination.
• The complexity of a problem
• The classes P, NP, NPC
Complexity
Famous book on this topic starts with three cartoons (employee in boss’ office):
– “I can’t find an efficient solution. I guess I’m too dumb.”
– “I can’t find an efficient solution. No efficient solution exists.”
– “I can’t find an efficient solution, but neither can all these famous people.”
(easily adapted to illustrate computability too)
M.R. Garey & D.S. Johnson (1979). “Computers and Intractability: A Guide to the Theory of NP-Completeness”. Freeman & Company, New York
• Complexity is a large and growing area of work
• Our treatment will be even more sketchy than that of computability
• First: analysing some algorithms; the difference between polynomial and non-polynomial algorithms
• Then an extended example: Natural Language Generation algorithms (Dale & Reiter paper, on the web)
Linking computability and complexity
• Computability: Can it be computed?
• Complexity: What is the “cost” of computing it? (time, memory)
• Can be measured in exact terms, e.g., processing time on a given computer; number of statements executed
• Measurements that are independent of platform are often preferable
First: Analysing algorithms
• A key distinction in complexity is between
1. the complexity of an algorithm
2. the complexity of a problem (i.e., “How complex is the most efficient algorithm that solves this problem?”)
• As for (1), let us look at the earlier algorithm for determining whether n is prime
Is n prime? (Old program)

program prime (input, output);
var n, i : integer;
begin
  read(n);
  i := 2;
  while (i < n) and (n mod i <> 0) do
    {n is not divisible by i}
    i := i + 1;
  if i = n then writeln('n is prime')
  else writeln('n is not prime')
end.
How long does this take?
Depends on input:
1. If 2 is a divisor of n then the loop is executed only once (best case)
2. If n is prime then the loop is executed n-1 times (worst case)
3. Generally, it takes longer for larger n
As for 1 and 2: we’re usually interested in worst-case performance
How might the program be speeded up?
Possibilities:
• If even numbers are treated separately, then we only have to try odd divisors. So: start with 3 and always add 2
• No need to try divisors higher than the square root of n
So: the while loop tests whether (i <= n div i)
The faster version:
program prime (input, output); var n, i : integer; begin read(n); {assume n > 3} if not odd(n) then writeln(false) else begin i := 3; while (i n div i) and (n mod i <> 0) do i := i+2; if i > n div i then writeln(“n is prime”)
else writeln(“n is not prime”); end; end.
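The same method can be sketched in Python (the function name is mine; unlike the Pascal version, this sketch also handles n <= 3):

```python
def is_prime(n):
    """Trial division, as in the faster Pascal version: treat even
    numbers separately, then try only odd divisors i with i <= n div i
    (i.e., no divisor above the square root of n is needed)."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False          # even n > 2: not prime
    i = 3
    while i <= n // i:        # same test as (i <= n div i)
        if n % i == 0:
            return False      # n is divisible by i
        i += 2                # try only odd divisors
    return True
```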
Effects of changes
• Effect of these two changes: the loop has to be executed at most about 1/2 n times
• Obviously, that’s better than n-1 times, since 1/2 n is always smaller than n-1 … for n>2
• More generally, we don’t worry about low n. If there are problems then they arise with high n.
• (We shall see later that the difference between 1/2 n and n-1 is only minor)
Comparing functions
• Typically, cost is different for different inputs, which makes measuring more difficult.
• Suppose we know what n is, and that we measure worst-case costs. Four algorithms execute a given loop 100n, 6n^2, n^3, or 2^n times respectively
• Which of these is cheapest? (E.g., check n=5, n=10, n=20)
Comparing functions
           n=5     n=10    n=20
100n       500     1000    2000
n^3        125     1000    8000
6n^2       150      600    2400
2^n         32     1024    ~10^6
Cheapest:  2^n     6n^2    100n
Costliest: 100n    2^n     2^n
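The table can be reproduced with a few lines of Python (a sketch; the labels are mine):

```python
# Worst-case loop counts for the four example cost functions.
costs = {
    "100n": lambda n: 100 * n,
    "n^3":  lambda n: n ** 3,
    "6n^2": lambda n: 6 * n * n,
    "2^n":  lambda n: 2 ** n,
}

for n in (5, 10, 20):
    row = {name: f(n) for name, f in costs.items()}
    cheapest = min(row, key=row.get)
    costliest = max(row, key=row.get)
    print(f"n={n}: {row}  cheapest={cheapest}  costliest={costliest}")
```

Note how the cheapest function changes as n grows, and how 2^n ends up costliest.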
Wanted:
• A yardstick for measuring the complexity of a function
Main complexity types:
1. Polynomial: 100n, n^3, 6n^2
(this includes linear: 100n)
2. Exponential: 2^n
Polynomials in more detail:
p(n) = a_r n^r + a_{r-1} n^{r-1} + … + a_0 n^0
(a term becomes empty if a_i = 0)
• Constants a_i are unimportant. E.g., for some m, for all n>m: n^3 > 2n^2
• A useful abstract notion: one function dominating another.
• “f is dominated by g” means: for some c and m, for all n>m: c*g(n) >= f(n)
(“There exists a constant c such that, for all n that are large enough, g(n) may be smaller than f(n), but only by that constant factor c”)
Some examples
1. g = n^3 dominates f = 2n^2, since g(n) >= f(n) for all n > 2
2. g = 2n^3 + 5n^2 dominates f = 2n^3, since g(n) >= f(n) for all n > 0. But the converse holds too!:
3. g = 2n^3 dominates f = 2n^3 + 5n^2, since there is a c with c*g(n) >= f(n) for all n > 0. To see this, consider that f = 2n^3 + 5n^2 <= 2n^3 + 5n^3 = 7n^3 = 3.5*g(n)
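Example 3 can be spot-checked numerically (a sketch; c = 3.5 is the constant from the bound 7n^3 = 3.5 * 2n^3):

```python
def f(n):
    return 2 * n**3 + 5 * n**2   # the dominated function

def g(n):
    return 2 * n**3              # the dominating function

c = 3.5
# c*g(n) >= f(n) should hold for all n > 0; check a range of values:
assert all(c * g(n) >= f(n) for n in range(1, 1000))
```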
Further terminology
• When g dominates f, we also write f ∈ O(g) (“f is big-Oh of g”, “f is in the order of g”)
• In the cases of interest, f ∈ O(g) and g ∈ O(f)
• We try to measure the complexity of functions, using simple functions as a yardstick
• Hence, the interesting case is f ∈ O(g) where g is as simple as can be. (In particular, simpler than f.)
For polynomial functions
p(n) = a_r n^r + a_{r-1} n^{r-1} + … + a_0 n^0
• Theorem: p(n) is always dominated by its largest non-empty term:
p(n) ∈ O(n^r)
• So, we “measure” p(n) using the largest relevant n^i
• For example,
Some polynomial functions
p(n) = 2n^3 + 5n^2
characterised by n^3 (also: cubic)
p(n) = n^2 + 500n - 1
characterised by n^2 (also: quadratic)
Formal notation using big-Oh:
• the first p is O(n^3)
• the second p is O(n^2)
Big-Oh
• An interestingly “imprecise” way of comparing functions, focussing on how they behave for large inputs
• N.B. one can calculate with big-Oh expressions in a precise way. E.g.,
O(n^3) + O(n^2) = O(n^3)
Calculate the complexity of an algorithm based on the complexity of its parts
Before analysing algorithms …
• Let’s look at the discussion in the lecture notes (under So What?)
1. Speed of computers doubles every year. Should we let some problems wait?
2. Hire cleverer programmers, to find more efficient algorithms?
3. Does this stuff have other applications?
4. Doesn’t dependence on computer and language make these calculations futile?
1. Speed of computers doubles every year. Should we let some problems wait?
• If running time grows very quickly (as the input size grows) then computer speed would have to grow very quickly as well to make a difference
• Suppose computers double in speed every year, growing at a rate of 2^n
• If running time grows at 2^n too, then you can handle an input that’s only 1 bigger next year (in the same time)
2. Hire cleverer programmers, to find more efficient algorithms?
• If your problem does not allow a faster solution then clever programming won’t help
• In other words: One would like to know about the complexity of a problem, rather than the complexity of a program
• That’s what real complexity theory is all about. It resembles computability. E.g.,
– Think about problems rather than programs– Reduce one problem to another
3. Does this stuff have other applications?
• Encryption is an example: modern methods rely on the fact that each non-prime integer x is the product of two integers y and z
– Given x and y, z is easy to calculate
– Given x alone, y and z take extremely long to calculate (when x is large)
Complexity theory has much to say about the safety of this encryption method
4. Doesn’t dependence on computer and language make these calculations futile?
• No: the speeds of different computational models differ only by a polynomial factor:
• If Model 1 takes time f(n) and Model 2 takes time g(n), then f(n) <= p(g(n)), for some polynomial function p
(please take our word for it)
Analysing more complex algorithms
• Let’s look at some sorting algorithms
Example 1
{Simple Sort; also Bubblesort}
for i := 1 to n do
  for j := i to n do
    if A[i] > A[j] then
      Exchange(A[i], A[j]);
Inner loop: the comparison is done n-i+1 times; therefore the total is
sum over i = 1..n of (n-i+1)
= ((n-1)+1) + ((n-2)+1) + … + ((n-n)+1)
= n + (n-1) + … + 1
The average term is 1/2 (n+1). There are n terms, so their sum is n * 1/2 (n+1) = 1/2 (n^2+n). That’s O(n^2).
This is a quadratic-time algorithm
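A Python sketch of the same sort, with a counter to confirm the 1/2 (n^2+n) comparison count (0-based indices, so the inner loop does n-i comparisons for i = 0..n-1, which sums to the same total):

```python
def simple_sort(a):
    """The Simple Sort from the slides, counting comparisons."""
    a = list(a)
    n = len(a)
    comparisons = 0
    for i in range(n):
        for j in range(i, n):
            comparisons += 1
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]   # Exchange(A[i], A[j])
    return a, comparisons
```

For n = 10 this performs 55 = 1/2 (100+10) comparisons, whatever the input: the cost does not depend on how sorted the input already is.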
Exponential algorithms
• For large n, 2^n > n^2
• For large enough inputs, exponential algorithms are slower than polynomial ones.
• Exponential algorithms are called intractable (sometimes: `unreasonable’ – a bad term)
• An example: calculating the n-th Fibonacci number. (Origins: calculating the speed of growth of a population of rabbits, 1202 CE)
• Fibonacci sequence defined:
fib(1) = fib(2) = 1
fib(n) = fib(n-1) + fib(n-2), if n > 2
• A direct implementation:
function fib(n)
  if n <= 2 then fib := 1
  else fib := fib(n-1) + fib(n-2)
This algorithm is exponential
calculating fib(4) involves
calculating fib(3) and fib(2), which involves
calculating fib(2) and fib(1) and fib(1), etc.
(figure: the tree of recursive calls for fib(4))
This algorithm is exponential
• Let fib(n) take T(n) time; then, for some a, b:
T(1) = T(2) = a
T(n) = b + T(n-1) + T(n-2)
• Hence T(n) > 2*T(n-2), so T(n) grows at least as fast as 2^(n/2)
An iterative algorithm (y = the n-th Fibonacci number)

if n = 0 then y := 0
else begin
  x := 0;          {x = fib(0)}
  y := 1;          {y = fib(1)}
  for i := 1 to n-1 do begin
    z := x + y;    {z is a buffer}
    x := y;        {x = fib(i)}
    y := z         {y = fib(i+1)}
  end
end
If n>0 then fib(n) is calculated in n-1 loops
After one loop: z=fib(0)+fib(1)=1, x=fib(1)=1, y=z=fib(2)=1
After n-1 loops, x=fib(n-1), y=fib(n)
If n>0 then fib(n) is calculated in n-1 loops
This means only n-1 additions: a polynomial algorithm, in fact a linear one!
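The iterative algorithm, transcribed into Python (a sketch, with the same variable roles as the pseudocode above):

```python
def fib_iter(n):
    """Iterative Fibonacci: n-1 additions, hence linear time."""
    if n == 0:
        return 0
    x, y = 0, 1              # x = fib(0), y = fib(1)
    for _ in range(n - 1):
        x, y = y, x + y      # x = fib(i), y = fib(i+1)
    return y
```

Inputs like n = 50, hopeless for the recursive version, are instant here.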
That’s what we’re looking for:polynomial algorithms
“Polynomial” has become synonymous with “tractable”
Question
• Can one assess the complexity of a problem (rather than the complexity of an algorithm)?
• In particular, are there problems that cannot be solved in polynomial time?
• The answer turns out to be subtle!
Example of a “probably-not-polynomial” problem
• Travelling salesman = TRAV
• Given n towns, and a number of roads between them, each with a length
• Problem: find the shortest path from start to start (if one exists), visiting every other town exactly once.
– Can be made precise using labelled directed graphs.
– Variants of the problem exist. (See lecture notes)
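A brute-force sketch for a small instance (the distance matrix and function name are mine; trying all (n-1)! orders of the other towns is what makes this approach exponential):

```python
from itertools import permutations

def shortest_tour(dist):
    """Try every order of towns 1..n-1, starting and ending at town 0;
    return the length of the shortest round trip."""
    n = len(dist)
    best = None
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best:
            best = length
    return best

# A made-up 4-town instance with symmetric distances:
dist = [
    [0, 1, 4, 2],
    [1, 0, 1, 5],
    [4, 1, 0, 1],
    [2, 5, 1, 0],
]
```

Here the best tour is 0 → 1 → 2 → 3 → 0, with length 1+1+1+2 = 5.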
Are we certain that TRAV has no polynomial algorithm?
• No. – No proof has been found!
• There do exist problems that are quite transparent, and yet no polynomial algorithm has been found for them.
• Everyone assumes that problems like TRAV don’t have a polynomial-time solution – But everyone could be wrong!
A different example of a “probably not polynomial” problem
• A bounded version of Post’s Correspondence Problem (PCP)
(PS Different `bounds’ are possible)
Viewed as a puzzle:
Can pieces be joined in such a way that
top row symbols = bottom row symbols?
(figure: two pieces, numbered 1 and 2; piece 1 has XX on top and XXX on the bottom, piece 2 has OXX on top and O on the bottom)
Reminder of PCP:
• Given n pairs of finite strings
{ p1 = <s1^1, s1^2>, …, pn = <sn^1, sn^2> }
• Does there exist a sequence of pairs (possibly with iterations)
<pi1, …, pik>, such that
First(pi1) + … + First(pik) = Second(pi1) + … + Second(pik)?
Bounded PCP
• Given n pairs of finite strings
{ p1 = <s1^1, s1^2>, …, pn = <sn^1, sn^2> }
• Let K <= n
• Does there exist a sequence of pairs (possibly with iterations) <pi1, …, piK>, such that
First(pi1) + … + First(piK) = Second(pi1) + … + Second(piK)?
• Going through all n*(n-1)*(n-2)*…*(n-K) possible orders does seem the only option
• We have to be cautious: someone might have thought the same about fib …
• Complexity theory: Reduce from the most transparent problems.
• Compare Computability: Reduction again, but complexity has no equivalent of the Halting problem (yet?)
A key distinction:
• The difficulty of finding a solution (i.e., a sequence, a path, etc.)
• The difficulty of checking that a proposed solution is correct
– Hard: given a large c, find a and b (both > 1) such that a*b = c
– Easy: given a, b and c, check whether a*b = c.
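A sketch of the asymmetry (function names are mine): checking a proposed factorisation is one multiplication, while finding one requires a search.

```python
def check_factors(a, b, c):
    """Easy direction: verify a proposed solution with one multiplication."""
    return a * b == c

def find_factors(c):
    """Hard direction (for large c): search for a nontrivial factorisation."""
    a = 2
    while a * a <= c:
        if c % a == 0:
            return a, c // a
        a += 1
    return None   # no nontrivial factors: c is prime
```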
Example: Bounded PCP
• Checking Bounded PCP is easy
• If someone proposes a sequence <pi1, …, piK>, then it’s easy to check whether
First(pi1) + … + First(piK) = Second(pi1) + … + Second(piK)
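The check can be sketched in Python (the pair list is my reading of the puzzle figure; the checker runs in time linear in the length of the proposed sequence):

```python
def check_sequence(pairs, seq):
    """Checking Bounded PCP: concatenate the First and Second strings
    of the proposed sequence of pairs and compare the two rows."""
    top = "".join(pairs[i][0] for i in seq)
    bottom = "".join(pairs[i][1] for i in seq)
    return top == bottom

# The two puzzle pieces, as <top, bottom> (my reading of the figure):
pairs = [("XX", "XXX"), ("OXX", "O")]
```

For these pieces, the sequence piece 2, piece 1, piece 1 gives OXXXXXX on both rows, so it is a solution, and checking it takes only a concatenation and a comparison.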
Some terminology
1. P = the class of problems for which there exists a polynomial algorithm
2. NP = the class of problems for which there exist polynomial-time checking algorithms
• The big open question: is P = NP?
• Some problems Q are so central that all other NP problems can be reduced to them.
• This is the class of NP-complete problems (NPC). Solve any one of these 1000 problems in polynomial time, and then we know that P = NP.
We’ve seen two open questions:
• Infinity: is 2^ℵ0 = ℵ1?
• Complexity: is P = NP?
• Solutions welcome … in which case don’t worry about your exams ;)
Finally: extended discussion of a real problem
• The problem `lives’ in Artificial Intelligence
• More specifically Natural Language Generation (NLG)