Computational Complexity - Pseudocode and Recursions

Complexity analysis Recursions Solving recursion without MT
Nicholas Mainardi¹
Dipartimento di Elettronica e Informazione
Politecnico di Milano
May 21, 2020
¹ Partly based on Alessandro Barenghi's material, largely enriched with some additional exercises
More complexity analysis?
Asymptotic complexity assessment
In the following slides we will assess the asymptotic complexity of some algorithms using the constant cost criterion
This is justified by the fact that all the involved variables are considered to be bounded in size by, at most, a machine word, thus all the atomic operations are effectively O(1)
This lesson also tackles the issue of analysing the complexity of algorithms involving recursive function calls
Pseudocode Time Complexity - Example 1

P(n)
1  s ← 0
2  for i ← 1 to n
3    do j ← 1
4       while j < n
5         do k ← 1
6            while k < n
7              do s++
8                 k ← 3 ∗ k
9            j ← 2 ∗ j
The first cycle runs n times
The first while loop runs until j < n, and j is doubled each time, that is until 2^h < n ⇒ h < log_2(n)
The second while loop runs until k < n, and k is tripled each time, that is until 3^h < n ⇒ h < log_3(n)
Total complexity: n ∗ log_2(n) ∗ log_3(n) = Θ(n log^2(n))
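The loops above can be transcribed directly into Python to count the innermost-body executions; the function name and the explicit counter are illustrative, not part of the slides:

```python
def example1(n):
    """Count how many times the innermost body (s++) runs."""
    s = 0
    for i in range(1, n + 1):   # outer for: n iterations
        j = 1
        while j < n:            # j doubles: ~log_2(n) iterations
            k = 1
            while k < n:        # k triples: ~log_3(n) iterations
                s += 1
                k = 3 * k
            j = 2 * j
    return s
```

For n = 1000 the count is 1000 · 10 · 7 = 70000: exactly 10 doublings of j and 7 triplings of k stay below 1000, matching the n · log_2(n) · log_3(n) estimate.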
Pseudocode Time Complexity - Example 2
P(n)
1  i ← 2
2  j ← 0
3  while i < n
4    do j++
5       i ← i ∗ i
How many times is the loop executed?
i starts from 2 and it is squared at each iteration
The sequence of i values is: 2, 4, 16, 256, 2^16, ...
The loop body is executed until i < n, that is until 2^(2^h) < n ⇒ 2^h < log_2(n) ⇒ h < log_2(log_2(n))
Total time complexity: Θ(log(log(n)))
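A direct Python transcription (naming is illustrative) returns j, the number of loop iterations:

```python
def example2(n):
    """Return j, the number of iterations of the squaring loop."""
    i, j = 2, 0
    while i < n:
        j += 1
        i = i * i   # i takes the values 2, 4, 16, 256, 2^16, ...
    return j
```

For n = 10^6, log_2(log_2(10^6)) ≈ 4.3 and the loop indeed runs only 5 times.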
Pseudocode Time Complexity - Example 3
P(n)
1  a ← 0
2  j ← 1
3  k ← 0
4  while j < n
5    do k++
6       for i ← 1 to k
7         do h ← 2
8            while h < 2^n
9              do a++
10                h ← h ∗ h
11      j ← 2 ∗ j
The number of iterations of the for loop is dependent on the outer loop → it depends on the number of times k is incremented
Pseudocode Time Complexity - Example 3
Complexity Estimation in case of Dependent Nested Cycles
The outer loop is executed log(n) times, thus k can be incremented up to log(n)
At each iteration, the for loop runs k times. Total number of executions of the for loop body? Σ_{k=1}^{log(n)} k = log(n)(log(n)+1)/2 = Θ(log^2(n))
Each of these executions contains another loop
This loop is independent from the previous two. How many times does it run?
The loop runs until h < 2^n, with h being squared at each iteration, thus it runs until 2^(2^m) < 2^n ⇒ 2^m < n ⇒ m < log(n)
Total time complexity is Θ(log^3(n))
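Counting the executions of the innermost body in Python (illustrative naming) confirms the bookkeeping; for n = 16 the for-body runs 1+2+3+4 = 10 times and each of those runs the squaring loop 4 times:

```python
def example3(n):
    """Count executions a of the innermost loop body."""
    a, j, k = 0, 1, 0
    while j < n:                      # log_2(n) iterations; k grows to log_2(n)
        k += 1
        for i in range(1, k + 1):     # k iterations: Theta(log^2 n) in total
            h = 2
            while h < 2 ** n:         # h squared each time: log_2(n) iterations
                a += 1
                h = h * h
        j = 2 * j
    return a
```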
Pseudocode Time Complexity - Example 4
P(n)
1  sum ← 0
2  for i ← 1 to log(n)
3    do j ← 2
4       while j < 2^n
5         do sum++
6            j ← 2 ∗ j
The outer loop is executed log(n) times
The inner loop is executed until j < 2^n, with j being doubled at each iteration, thus it runs until 2^h < 2^n ⇒ h < n
Total time complexity: Θ(n log(n))
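The same count in Python (illustrative naming; ⌊log_2(n)⌋ stands in for the slide's log(n) loop bound): the inner loop doubles j from 2 up to 2^(n−1), so each outer iteration contributes n − 1 increments.

```python
import math

def example4(n):
    """Count increments of sum: floor(log2(n)) * (n - 1)."""
    total = 0
    for i in range(1, int(math.log2(n)) + 1):   # ~log(n) outer iterations
        j = 2
        while j < 2 ** n:                       # j doubles from 2: n - 1 iterations
            total += 1
            j = 2 * j
    return total
```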
Pseudocode Time Complexity - Example 4
Consider this variant:
P(n)
1  sum ← 0
2  for i ← 1 to log(n)
3    do j ← 2
4       while j < 2^n
5         do sum ← sum + f(n)
6            j ← 2 ∗ j
where T (f(n)) ∈ Θ(log(n))
The number of iterations of the loops is the same as in the original version ⇒ Θ(n log(n)) iterations
However, each inner loop body now has a logarithmic cost
Thus, total time complexity is Θ(n log(n) log(n)) = Θ(n log^2(n))
Pseudocode Time Complexity - Example 4
Another variant:
P(n)
1  sum ← 0
2  for i ← 1 to log(n)
3    do j ← 2
4       while j < 2^n
5         do sum ← sum + f(i)
6            j ← 2 ∗ j
where T (f(n)) ∈ Θ(log(n))
This time the complexity of the inner loop body is dependent on the outer cycle
The total complexity is T(n) = Σ_{i=1}^{log(n)} n log(i)
Pseudocode Time Complexity - Example 4
A bit of math
T(n) = Σ_{i=1}^{log(n)} n log(i) = n Σ_{i=1}^{log(n)} log(i) = n log(∏_{i=1}^{log(n)} i), by applying logarithm properties
T(n) = n log(∏_{i=1}^{log(n)} i) = n log(log(n)!)
Recall the Stirling approximation to get rid of the factorial: n! ≈ √(2πn) (n/e)^n
T(n) = n log(log(n)!) ≈ n log(√(2π log(n)) (log(n)/e)^log(n))
Apply logarithm properties:
T(n) = Θ(n log(√(2π log(n))) + n log(log(n)^log(n)))
     = Θ(n log(log(n)^(1/2)) + n log(n) log(log(n)))
     = Θ((n/2) log(log(n)) + n log(n) log(log(n)))
     = Θ(n log(n) log(log(n)))
Computing exponentiations: m^n
Straightforward implementation
We want to compute m^n; the following function does so:

Exp(n,m)
1  res ← 1
2  for i ← 1 to n
3    do res ← res × m
4  return res

Complexity? The loop runs exactly n times, thus O(n)
Is there a faster algorithm to compute exponentiation?
Way faster exponentiations
Another look at the exponent
We can rewrite the exponent n as a t = ⌈log_2(n)⌉-bit binary number n = (b_{t−1} ... b_0)
Recalling common mathematical properties: m^n = m^(2^(t−1) b_{t−1} + 2^(t−2) b_{t−2} + ... + 2^0 b_0) = m^(2^(t−1) b_{t−1}) · m^(2^(t−2) b_{t−2}) · ... · m^(2^1 b_1) · m^(2^0 b_0)
We note that m^n is actually the product of all the m^(2^i) for which the i-th bit is one
All the m^(2^i) values can be obtained by squaring m^(2^(i−1)), at the cost of a single multiplication
Exploiting this observation we can design an algorithm which computes them and multiplies together only the ones where b_i = 1
Right-to-Left Square and Multiply
Algorithm and Complexity
SMExp(n,m)
1  res ← 1
2  tmp ← m
3  for i ← 0 to (t − 1)
4    do if n & (1 << i) ≠ 0    ▷ it is the same as b_i = 1
5         then res ← res × tmp
6       tmp ← tmp × tmp
7  return res

Complexity? The loop body is run t = ⌈log_2(n)⌉ times: O(log(n)), way faster than before!
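A minimal Python sketch of the same right-to-left square and multiply, consuming the bits of n with shifts instead of precomputing t (function name illustrative, argument order as in the slide's SMExp(n, m)):

```python
def sm_exp(n, m):
    """Compute m**n with O(log n) multiplications (right-to-left square & multiply)."""
    res = 1
    tmp = m
    while n > 0:
        if n & 1:           # current bit b_i of the exponent is 1
            res = res * tmp
        tmp = tmp * tmp     # tmp now holds m^(2^(i+1))
        n >>= 1
    return res
```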
Computing Hamming Weight
Straightforward Method
The Hamming Weight of n is defined as the number of ones in its binary representation

HW(n)
1  tmp ← n
2  hw ← 0
3  while tmp > 0
4    do if tmp mod 2 = 1
5         then hw ← hw + 1
6       tmp ← ⌊tmp/2⌋
7  return hw

Complexity? Loop until the result of the repeated integer division by 2 of the input is 0
The loop is run log2(n) times, thus O(log(n))
Computing HW, Kernighan Style
Arithmetic helps
Is the previous method the best we can do? No!
HWKernighan(n)
1  tmp ← n
2  hw ← 0
3  while tmp > 0
4    do hw ← hw + 1
5       tmp ← tmp & (tmp − 1)
6  return hw

Line 5 effectively removes only the least significant bit set to 1 of the number in O(1), exploiting the effect of the borrows
Thus the loop runs exactly as many times as the Hamming weight of the input, thus Θ(HW(n))
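The same trick in Python (illustrative naming): `n & (n - 1)` clears the lowest set bit, so the loop runs HW(n) times instead of ⌈log_2(n)⌉ times.

```python
def hw_kernighan(n):
    """Hamming weight of n, one iteration per set bit."""
    hw = 0
    while n > 0:
        hw += 1
        n = n & (n - 1)   # clears the least significant set bit
    return hw
```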
Is this a perfect power?
Given n, is it a perfect power n = a^x, x ∈ ℕ \ {0, 1}?

Strategy 1:
We try all possible bases k, from k = 2 to √n
How many exponents exp do we need to try? Until k^exp ≤ n ⇒ exp ≤ log_k(n)
Complexity: Σ_{k=2}^{√n} log_k(n) = O(√n log(n))
Strategy 2:
We try all possible exponents exp, up to log_2(n)
How many bases k do we need to try for each exp? Until k^exp ≤ n ⇒ k ≤ n^(1/exp)
At each iteration, we perform an exponentiation: O(log(exp))
Complexity: Σ_{exp=2}^{log_2(n)} n^(1/exp) log(exp) = √n + Σ_{exp=3}^{log_2(n)} n^(1/exp) log(exp). The second term is O(n^(1/3) log(n) log(log(n))), and thus the complexity is √n + O(n^(1/3) log(n) log(log(n))) = O(√n)
Is this a perfect power?
A better solution
is-n-th-pow(n)

1  for exp ← 2 to ⌊log_2(n)⌋
2    do k ← ⌈n/2⌉
3       kmax ← n
4       kmin ← 0
5       while k ≠ kmax
6         do test ← k^exp
7            if test = n
8              then return true
9            if test > n
10             then kmax ← k
11             else kmin ← k
12           k ← ⌈(kmax + kmin)/2⌉
13 return false
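A Python sketch of the pseudocode above (function name illustrative; `(x + 1) // 2` implements the ceilings), valid for n ≥ 2:

```python
import math

def is_perfect_power(n):
    """Binary search on the base for each candidate exponent."""
    for exp in range(2, int(math.log2(n)) + 1):
        kmin, kmax = 0, n
        k = (n + 1) // 2                  # ceil(n / 2)
        while k != kmax:
            test = k ** exp
            if test == n:
                return True
            if test > n:
                kmax = k
            else:
                kmin = k
            k = (kmax + kmin + 1) // 2    # ceil((kmax + kmin) / 2)
    return False
```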
Is this a perfect power?
Complexity
The outer for loop runs log_2(n) − 1 times by construction
The inner while is slightly more complex to analyze: examine the evolution of the kmin and kmax variables
The viable range for the value of k is cut into a half at each loop body execution; kmax retains the upper bound
Since the value of k is always an integer (initial value n): log_2(n) runs are made
The main cost of the loop body is the exponentiation, which takes log_2(exp)
The total complexity is Σ_{exp=2}^{log_2(n)} log_2(n) log(exp) = log_2(n) Σ_{exp=2}^{log_2(n)} log(exp) = O(log^2(n) log(log(n)))
Multiple Precision Multiplications
The Roman Way
Multiplying two n-digit numbers a = (a_{n−1}, ..., a_0)_t, b = (b_{n−1}, ..., b_0)_t, represented in base t (think 10, or 2), was a problem in ancient Rome, as they did it like this:

Multiply(a, b)
1  res ← (0_{2n−1}, 0_{2n−2}, ..., 0_1, 0_0)    ▷ res can be up to 2n digits
2  for i ← 0 to b − 1
3    do res ← AddWithCarry(res, a)
4  return res

The call to AddWithCarry needs to take care of all the possible carries, thus it takes O(n). Total complexity?
The for loop runs as many times as the value of b. Worst case, b is an n-digit number in base t, thus b ≈ t^n: O(n t^n)
Back to first grade school
Multiple Digits Multiplications
We were taught to go faster than the ancient Romans in first grade school, namely we do:

Multiply(a, b)
1  res ← (0_{2n−1}, 0_{2n−2}, ..., 0_1, 0_0)
2  for i ← 0 to n − 1
3    do tmp ← 0
4       for j ← 0 to n − 1
5         do prod ← a_j ∗ b_i
6            tmp ← tmp + (prod × t^(i+j))
7       res ← AddWithCarry(res, tmp)
8  return res
Back to first grade school
Time Complexity for Schoolbook Multiplication
Each iteration of the inner loop executes in constant time:
  prod is at most 2 digits
  Line 6 adds prod to the 2 digits tmp_{i+j}, tmp_{i+j+1}
  No carries are generated, as tmp_{i+j+1} = 0
The inner loop runs n times, thus it costs O(n)
Each iteration of the outer loop requires O(n) for the inner loop and O(n) for AddWithCarry ⇒ O(n) complexity
The outer loop runs n times and its body is O(n), thus O(n × n) = O(n^2). Better!
Similarly for multiple digits divisions:
  Division by repeated subtraction is O(n t^n)
  Schoolbook division is O(n^2)
Improving Multiplication
Karatsuba’s algorithm
Karatsubamult(a, b, n)
1  ▷ a, b are integers in base t with n digits
2  if n = 1
3    then return a ∗ b
4  m ← ⌊n/2⌋
5  ▷ shift operations are performed over digits of the integers
6  high_a ← a >> m
7  low_a ← a − (high_a << m)
8  high_b ← b >> m
9  low_b ← b − (high_b << m)
10 z0 ← Karatsubamult(low_a, low_b, m)
11 z1 ← Karatsubamult(high_a + low_a, high_b + low_b, m + 1)
12 z2 ← Karatsubamult(high_a, high_b, m)
13 return (z2 << 2m) + ((z1 − z2 − z0) << m) + z0
Improving Multiplication
Why Does It Work?
a = high_a · t^m + low_a and b = high_b · t^m + low_b
a · b = high_a · t^m · high_b · t^m + low_a · high_b · t^m + high_a · low_b · t^m + low_a · low_b = z2 · t^(2m) + t^m · (low_a · high_b + high_a · low_b) + z0
Now, we show that high_a · low_b + low_a · high_b = z1 − z2 − z0: z1 = (high_a + low_a) · (high_b + low_b) = high_a · high_b + high_a · low_b + low_a · high_b + low_a · low_b = z2 + high_a · low_b + low_a · high_b + z0
a · b = z2 · t^(2m) + t^m · (low_a · high_b + high_a · low_b) + z0 = z2 · t^(2m) + t^m · (z1 − z2 − z0) + z0
We can thus perform the multiplication with 3 multiplications on half-digit numbers, plus some shifts/additions, until we get to a single digit multiplication, which stops the recursion
Basically, only single digit multiplications are performed, and the results are composed using the equation in line 13
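A sketch of the algorithm in Python, working in base t = 2 so the digit shifts become bit shifts (the splitting point and base case are chosen for this bit-level variant and differ slightly from the slide's digit-count version):

```python
def karatsuba(a, b):
    """Multiply non-negative integers with 3 recursive half-size products."""
    if a < 2 or b < 2:                    # single-digit base case (base 2)
        return a * b
    m = max(a.bit_length(), b.bit_length()) // 2
    high_a, low_a = a >> m, a & ((1 << m) - 1)
    high_b, low_b = b >> m, b & ((1 << m) - 1)
    z0 = karatsuba(low_a, low_b)
    z2 = karatsuba(high_a, high_b)
    z1 = karatsuba(high_a + low_a, high_b + low_b)
    # a*b = z2 * 2^(2m) + (z1 - z2 - z0) * 2^m + z0, as in line 13
    return (z2 << (2 * m)) + ((z1 - z2 - z0) << m) + z0
```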
Improving Multiplication
Complexity
The algorithm is recursive!
Recursion always involves sub-problems, which have the same complexity function as the main problem
Thus, we can formulate the time complexity as a function of the costs of the sub-problems, defining the so-called recurrence equation
How many sub-problems? 3
What is the size of these sub-problems? ⌈n/2⌉
What is the cost of the function without considering the sub-problems? Θ(n), since only shifts and additions are performed
Thus, T(n) = 3T(⌈n/2⌉) + Θ(n)
Analyzing recursions
How to solve recursions?
The previous time complexity is itself stated in a recurrent fashion: we need to solve the recurrence
There are two main ways to do so:
  Master's theorem: provided the recurrence fits certain conditions, easy, closed form solutions are available
  Hypothesize and prove by induction: we obtain an educated guess on the total complexity by partially unrolling the recursion, and subsequently prove that our conjecture is correct by means of mathematical induction
Master’s Theorem
A smart toolbox
The master's theorem provides a ready-made toolbox for the complexity analysis of recursions.
The recurrence must be expressible in this form: T(n) = aT(n/b) + f(n), with a ≥ 1, b > 1
Key idea: compare the complexities of n^(log_b(a)) (the effect of the recursive calls) and f(n) (the cost of running the function)
Hypotheses for the Master's theorem:
  a must be constant, and greater than or equal to one (at least one sub-problem per recursion)
  f(n) must be added, not subtracted or anything else, to aT(n/b)
  The difference between n^(log_b(a)) and f(n) should be polynomial
The solutions provided by the MT are split into three cases
Master’s Theorem
Case 1
Case 1: f(n) = O(n^(log_b(a)−ε)) for some ε > 0
Result: T(n) = Θ(n^(log_b(a)))
Key idea: the recursion dominates the complexity
Example: T(n) = 32T(n/4) + n log(n)
Compare: is n log(n) = O(n^(log_4(32)−ε)) = O(n^(5/2−ε))? Yes, e.g. for ε = 1/2 ✓
The complexity of the example is Θ(n^(log_4(32))) = Θ(n^(5/2))
Master’s Theorem
Case 2
Case 2: f(n) = Θ(n^(log_b(a)))
Result: T(n) = Θ(n^(log_b(a)) log(n))
Key idea: the "weights" of the two parts of the sum are the same up to a polynomial term
Example: T(n) = T(n/3) + Θ(1)
Compare: Θ(1) = Θ(n^(log_3(1)))?
Yes: Θ(1) = Θ(n^0) ✓
The complexity of the example is Θ(n^(log_3(1)) log(n)) = Θ(log(n))
Master’s Theorem
Case 3
Case 3: f(n) = Ω(n^(log_b(a)+ε)), ε > 0
In this case, for the theorem to hold, we also need to check that af(n/b) ≤ cf(n) for some c < 1
Result: T(n) = Θ(f(n))
Key idea: the function cost at each recursion step outweighs the rest
Example: T(n) = 2T(n/4) + n log(n)
Compare: n log(n) = Ω(n^(log_4(2)+ε)) ⇒ n log(n) = Ω(n^(1/2+ε)) ⇒ ε = 1/2 > 0 ✓
Check: is 2f(n/4) = (n/2) log(n/4) ≤ cn log(n) for some c < 1? (n/2) log(n/4) = (n/2) log(n) − (n/2) log(4) < (n/2) log(n) ≤ cn log(n) for c ∈ [1/2, 1) ✓
The complexity of the example is Θ(n log(n))
Improving Multiplication
Karatsuba Algorithm
Let's go back to Karatsuba's recurrence: T(n) = 3T(⌈n/2⌉) + Θ(n)
Can we solve it through master theorem?
Yes! a = 3, b = 2 and f (n) = Θ(n)
Which case?
n^(log_b(a)) = n^(log_2(3)). Is n = O(n^(log_2(3)−ε))? Yes: log_2(3) − ε ≥ 1 ⇒ ε = log_2(3) − 1 > 0, thus it is case 1 of the theorem
Therefore, T(n) = Θ(n^(log_b(a))) = Θ(n^(log_2(3))) ≈ Θ(n^1.585)
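We can sanity-check the closed form by evaluating the recurrence directly (the function below is an illustrative direct translation, taking T(1) = 1 and resolving Θ(n) to exactly n):

```python
def T(n):
    """Evaluate T(n) = 3*T(ceil(n/2)) + n with T(1) = 1."""
    if n == 1:
        return 1
    return 3 * T((n + 1) // 2) + n

# For n = 2^k the recurrence unrolls to T(2^k) = 3^(k+1) - 2^(k+1),
# which grows as 3^k = (2^k)^log2(3), matching the master theorem result.
```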
When being a Master is not enough...
Alternatives to Master’s Theorem
Some cases where the MT is not applicable are:
1. T(n) = 2T(n − 3) + Θ(log(n))
   the recursive call is not of the form T(n/b): the MT needs b > 1
2. T(n) = 3T(log(n)) + 2^n
   the recursion argument is not a fraction of n (so, for sure, no b > 1 exists)
3. T(n) = nT(n − 2) + Θ(1)
   a is not constant (a = n in this case)
We will now solve them via inductive proofs
Solutions
Example 1 - Conjecture
Let's try expanding a step of the recursion T(n) = 2T(n − 3) + Θ(log(n))
T(n) = 2(2T(n − 6) + Θ(log(n − 3))) + Θ(log(n))
Each recursion step adds a term which is 2^i log(n − 3i)
The number of recursive calls is n/3 (remember division by iterated subtraction?)
The expanded equation is O(Σ_{i=0}^{n/3} 2^i log(n − 3i))
We thus conjecture that T(n) is O(2^(n/3) log(n))
Solutions
Example 1 - Proof
We want to prove that T(n) is O(2^(n/3) log(n)), i.e. T(n) ≤ c 2^(n/3) log(n)
By induction, assume T(n − 3) ≤ c 2^((n−3)/3) log(n − 3) (this is a reasonable induction hypothesis, as clearly (n − 3) < n)
Substitute the previous one in the original recurrence equation T(n) = 2T(n − 3) + d log(n), yielding
T(n) = 2T(n − 3) + d log(n) ≤ 2c · 2^((n−3)/3) log(n − 3) + d log(n)
Manipulating: T(n) ≤ 2c · 2^(−1) · 2^(n/3) log(n − 3) + d log(n) = c 2^(n/3) log(n − 3) + d log(n)
As clearly log(n − 3) < log(n), we get to: T(n) ≤ c 2^(n/3) log(n) + d log(n)
Can we drop d log(n)?
Solutions
Example 1 - Proof (Continued)
We can drop d log(n) with a trick:
Let's change our thesis to be proven to: T(n) ≤ c 2^(n/3) log(n) − bn − a ∈ O(2^(n/3) log(n))
The induction hypothesis becomes: T(n − 3) ≤ c 2^((n−3)/3) log(n − 3) − b(n − 3) − a
T(n) = 2T(n − 3) + d log(n) ≤ 2c 2^((n−3)/3) log(n − 3) − 2b(n − 3) − 2a + d log(n) = 2 · 2^(−1) c 2^(n/3) log(n − 3) − 2bn + 6b − 2a + d log(n) ≤ c 2^(n/3) log(n) − 2bn + 6b − 2a + dn
Now, if d − 2b ≤ −b and 6b − 2a ≤ −a, we get: c 2^(n/3) log(n) − 2bn + dn + 6b − 2a ≤ c 2^(n/3) log(n) − bn − a
As a conclusion, T(n) ≤ c 2^(n/3) log(n) − bn − a, if d − 2b ≤ −b ⇒ b ≥ d and 6b − 2a ≤ −a ⇒ a ≥ 6b
Solution
Example 1 - Smaller Upper Bound?
Let's look at the proof: do we really exploit the log(n) term multiplied by 2^(n/3)?
Maybe T(n) ∈ O(2^(n/3)). Let's try to prove it!
Thesis: T(n) ≤ c 2^(n/3) − bn − a ∈ O(2^(n/3))
Induction hypothesis: T(n − 3) ≤ c 2^((n−3)/3) − b(n − 3) − a
T(n) = 2T(n − 3) + d log(n) ≤ 2c 2^((n−3)/3) − 2b(n − 3) − 2a + d log(n) = 2 · 2^(−1) c 2^(n/3) − 2bn + 6b − 2a + d log(n) ≤ c 2^(n/3) − 2bn + 6b − 2a + dn
As in the previous proof, eventually: T(n) ≤ c 2^(n/3) − 2bn + 6b − 2a + dn ≤ c 2^(n/3) − bn − a, if d − 2b ≤ −b ⇒ b ≥ d and 6b − 2a ≤ −a ⇒ a ≥ 6b
T(n) ∈ O(2^(n/3))!
Solutions
Example 2 - Conjecture
Start by expanding a step of T(n) = 3T(log(n)) + 2^n
T(n) = 3(3T(log(log(n))) + 2^(log(n))) + 2^n
At each recursion step, the added contribution is a term of lower order w.r.t. the ones already present
This time we can already stop the analysis and say that T(n) = O(2^n), as all the other added terms are negligible w.r.t. 2^n
More precisely, if the number of steps of the recursion is a function f(n), then T(n) = Θ(Σ_{i=0}^{f(n)} 3^i 2^(log^i(n))), where log^i denotes the logarithm function repeatedly applied i times
f(n) is surely less than log(n), as we have log(n) recursive steps when the size of the problem is simply halved, thus T(n) = O(Σ_{i=0}^{log(n)} 3^i 2^(log^i(n))) = O(2^n)
Solutions
Example 2 - Proof
Thesis: T(n) ≤ c 2^n ∈ O(2^n)
Induction hypothesis: T(log(n)) ≤ c 2^(log(n)) = c n^(log(2)) ≤ cn
T(n) = 3T(log(n)) + 2^n ≤ 3cn + 2^n ≤ 3c 2^n + 2^n
If 3c + 1 ≤ c, we could get to our desired result
But 3c + 1 ≤ c ⇐⇒ c ≤ −1/2, while c > 0
Let's try another transformation: T(n) ≤ 3cn + 2^n ≤ (c/2) 2^n + 2^n
This is legitimate since there exists n0 = 5 such that ∀n ≥ n0: (c/2) 2^n ≥ 3cn
Now, (c/2) + 1 ≤ c ⇐⇒ c ≥ 2, which is ok
Hence, we can conclude: T(n) ≤ (c/2) 2^n + 2^n ≤ c 2^n
Solutions
Example 3 - Conjecture
Again, expand a step of T(n) = nT(n − 2) + Θ(1)
T(n) = n((n − 2)T(n − 4) + Θ(1)) + Θ(1) = (n^2 − 2n)T(n − 4) + Θ(n) + Θ(1) = (n^2 − 2n)((n − 4)T(n − 6) + Θ(1)) + Θ(n) + Θ(1) = (n^3 − 6n^2 + 8n)T(n − 6) + Θ(n^2) + Θ(n) + Θ(1)
At every recursion step, a term polynomially greater than the previous one is added
The recursion is n/2 steps deep
The dominating complexity is the one added at the last step
As each step raises the polynomial complexity by one, the final complexity is O(n^(n/2))
Solutions
Example 3 - Proof
Again, proof by induction: assume T(n − 2) ≤ c(n − 2)^((n−2)/2)
Substitute and obtain T(n) = nT(n − 2) + Θ(1) ≤ n(c(n − 2)^((n−2)/2)) + d
Manipulate a bit and get T(n) ≤ cn(n − 2)^((n−2)/2) + d
Note that cn(n − 2)^((n−2)/2) < cn · n^((n−2)/2) = c n^(n/2)
Plug it in the previous inequality and obtain T(n) ≤ c n^(n/2) + d
Change the thesis to get rid of d: T(n) ≤ c n^(n/2) − b
T(n) ≤ cn(n − 2)^((n−2)/2) − bn + d ≤ c n^(n/2) − bn + d ≤ c n^(n/2) − 2b + d, since −bn ≤ −2b for n ≥ n0 = 2
If −2b + d ≤ −b ⇔ b ≥ d, then T(n) ≤ c n^(n/2) − 2b + d ≤ c n^(n/2) − b
Naive Sorting
Simple idea for a sorting algorithm: compute the minimum of the array, swap it with the first element of the array, then do the same on the sub-array starting from the second element:

Naive Sorting Algorithm

SelectionSort(a, start)
1  min ← a[start]
2  imin ← start
3  for j ← start + 1 to Length(a)
4    do if a[j] < min
5         then min ← a[j]
6              imin ← j
7  a[imin] ← a[start]
8  a[start] ← min
9  if start + 1 < Length(a)
10   then SelectionSort(a, start + 1)
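A Python sketch of the same recursive selection sort, with 0-based indices (naming illustrative); it sorts the list in place and returns it for convenience:

```python
def selection_sort(a, start=0):
    """Recursive selection sort: T(n) = T(n-1) + Theta(n)."""
    if start + 1 >= len(a):
        return a
    imin = start
    for j in range(start + 1, len(a)):   # scan costs Theta(n - start)
        if a[j] < a[imin]:
            imin = j
    a[start], a[imin] = a[imin], a[start]
    return selection_sort(a, start + 1)
```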
Naive Sorting
Complexity: Recurrence Equation
The algorithm is recursive
Each time a sub-array of size n − 1 is considered ⇒ T (n − 1)
1 recursive call
Each recursion step scans the sub-array ⇒ Θ(n)
Thus, the recurrence equation is T(n) = T(n − 1) + Θ(n)

Complexity: Determining T(n)

Can we employ the master theorem? No, since the recursion argument is n − 1 rather than n/b with b > 1
Let's make a guess on T(n)
How many recursive calls? ⇒ n
The i-th of them adds n − i to the time complexity
Thus, T(n) = Σ_{i=0}^{n−1} (n − i) = n(n+1)/2 = O(n^2)
Naive Sorting
Proof
Again, our thesis is T(n) ≤ cn^2
Induction hypothesis: T(n − 1) ≤ c(n − 1)^2
T(n) = T(n − 1) + kn ≤ c(n − 1)^2 + kn = cn^2 + c − 2cn + kn ≤ cn^2 + cn − 2cn + kn = cn^2 − cn + kn
If k − c ≤ 0 ⇒ c ≥ k, then we get: T(n) ≤ cn^2 − cn + kn ≤ cn^2

Is T(n) ∈ O(n log(n))?

Now, our thesis is T(n) ≤ cn log(n)
Induction hypothesis: T(n − 1) ≤ c(n − 1) log(n − 1)
T(n) ≤ c(n − 1) log(n − 1) + kn ≤ cn log(n) + kn
Can we get rid of kn?
Naive Sorting
Let's try the usual trick: T(n) ≤ cn log(n) − bn
Induction hypothesis: T(n − 1) ≤ c(n − 1) log(n − 1) − b(n − 1)
T(n) ≤ c(n − 1) log(n − 1) − bn + b + kn ≤ cn log(n) − bn + b + kn
To get −bn, we need to have k − b < −b, which holds for k < 0, but k > 0 ⇒ impossible
Idea: we may try to introduce a new linear term hn ≥ b: T(n) ≤ cn log(n) − bn + hn + kn + b, for some h > 0
To get −bn, we would need k − b + h < −b ⇒ k + h < 0, which has no solution since k, h > 0
Can we avoid getting rid of n − 1, using the term −c log(n) to get another linear term?
1. −c log(n) ≤ −cn does not hold for c > 0
2. −c log(n) ≤ cn is ok, but we would get c + k < 0, which still has no solution
Naive Sorting
In conclusion, we cannot get rid of the kn term in the recurrence!
This is the reason why we do not erase terms with lower order in these proofs
In this case, the linear terms summed up to make the complexity Ω(n^2)
Proving Lower Bound
Let's prove that T(n) = Ω(n^2), that is T(n) ≥ cn^2, c > 0
Inductive hypothesis: T(n − 1) ≥ c(n − 1)^2 = cn^2 + c − 2cn
Proof: T(n) = T(n − 1) + kn ≥ cn^2 + c − 2cn + kn > cn^2 − 2cn + kn ≥ cn^2, if −2c + k ≥ 0
In conclusion, T(n) ≥ cn^2 if −2c + k ≥ 0 ⇐⇒ c ≤ k/2
Differences on Proofs
Consider the following recurrence equations:
1. T(n) = 2T(n/2) + n
2. T(n) = T(n/2) + n
Are they O(n)?

Equation 1
T(n) ≤ 2c(n/2) + n ≤ cn + n
We cannot get rid of n, since c + 1 > c for all c
Indeed, this is the merge sort recurrence, which is Ω(n log(n))

Equation 2
T(n) ≤ c(n/2) + n = (c/2)n + n
Now, if we impose (c/2) + 1 ≤ c ⇒ c ≥ 2, we can state (c/2)n + n ≤ cn, thus T(n) ≤ cn
Induction Proofs: Summing Up
Recap of all the Tricks in Proofs
Change the thesis to try to get rid of lower order terms ⇒ T(n) ≤ cn becomes T(n) ≤ cn − b
Play with constants ⇒ suppose our thesis is T(n) ≤ cn, and we got to T(n) ≤ (c/2)n. We can get to the thesis since c/2 ≤ c
If a term is added, you can replace it with a higher order term and an arbitrary constant (usually we want to decrease the constant factor) ⇒ T(n) ≤ cn + 2^n ≤ (c/2)2^n + 2^n ≤ c2^n, for c ≥ 2
If a term is subtracted, you can replace it with a lower order term and an arbitrary constant (usually we want to increase the constant factor) ⇒ T(n) ≤ cn^2 − bn + k ≤ cn^2 − 2b + k ≤ cn^2 − b, for b ≥ k
Greatest Common Divisor
Compute the greatest common divisor between 2 integers a, b
Suppose, as a precondition, a ≥ b
Naive idea: try all possible integers starting from b down to 1
The Algorithm
GCD(a, b)
1  for i ← b downto 1
2    do if a mod i = 0 ∧ b mod i = 0
3         then return i

Complexity: the loop runs at most b times, one for each candidate divisor ⇒ O(b)
Greatest Common Divisor
A clever solution: Euclidean algorithm!
Main idea: gcd(a, b) = gcd(b, a mod b). Since a mod b is less than b, we reduce the size of the numbers, eventually getting to multiples.
Euclidean Algorithm
EuclideanGCD(a, b)
1  r ← a mod b
2  n ← b
3  d ← b
4  while r ≠ 0
5    do d ← r
6       r ← n mod d
7       n ← d
8  return d
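In Python the same algorithm can be written more compactly with tuple assignment (behaviour matches the slide's d/n/r bookkeeping, assuming a ≥ b > 0):

```python
def euclidean_gcd(a, b):
    """gcd(a, b) = gcd(b, a mod b), iterated until the remainder is 0."""
    while b != 0:
        a, b = b, a % b
    return a
```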
Greatest Common Divisor
Complexity
The time complexity is surely dominated by the loop
How many times does it run?
Different cases:
  If gcd(a, b) = b, the algorithm does not even get into the loop
  There may be two steps even if gcd(a, b) is 1 (e.g. gcd(32, 15))
  There may be a lot of steps: gcd(21, 13)
Let's try to identify the worst case
The maximum number of iterations is achieved if the remainder is close to d
It turns out that such worst case is when we want to compute gcd(f_{n+1}, f_n), where f_n is the n-th element of the Fibonacci sequence!
Greatest Common Divisor
Worst Case Complexity
Indeed, f_{n+1} < 2f_n, thus r = f_{n+1} − f_n, which is f_{n−1}, a Fibonacci number itself, which is quite big
Most importantly, the next step of the algorithm will compute gcd(f_n, f_{n−1}), which still exhibits the same behavior
Therefore, we have a sequence of r values which are the Fibonacci numbers, going from f_{n−1} = a − b down to f_1 = 1
Thus, we have O(n) iterations, but what is n?
Hint: f_n ≈ φ^n, where φ = (1+√5)/2, the golden ratio!
Indeed, it is easy to prove by induction: f_n = f_{n−1} + f_{n−2} ≈ φ^(n−1) + φ^(n−2) = φ^(n−2)(φ + 1) = φ^n, since φ^2 = φ + 1
Since b = f_n, then n = log_φ(b)
In conclusion, the worst case complexity is O(n) = O(log(b))
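A quick experiment (helper name illustrative) counting modulo steps confirms that consecutive Fibonacci numbers are the slow case, with roughly one step per Fibonacci index:

```python
def gcd_steps(a, b):
    """Number of modulo steps performed by the Euclidean algorithm."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# gcd(89, 55) (consecutive Fibonacci numbers) walks down the whole
# Fibonacci sequence, while gcd(32, 15) finishes in a handful of steps.
```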