
What is an algorithm?

1. Algorithms are the ideas behind computer programs.

2. An algorithm is the thing which stays the same whether the program is in Pascal running on a Cray in New York or is in BASIC running on a PC in Taipei!

3. To be interesting, an algorithm has to solve a general, specified problem. An algorithmic problem is specified by describing the set of instances it must work on and what desired properties the output must have.

What is an algorithm?

• Problem: specified by its input/output behavior.

• Algorithm: a problem-solving procedure which can be implemented on a computer and satisfies the following conditions: it terminates, it is correct, and it is well-defined.

• Program: implementation of an algorithm on a computer.

• A problem will normally have many (usually infinitely many) instances.

• An algorithm must work correctly on every instance of the problem it claims to solve.

Example

Another Example

• Suppose you have a robot arm equipped with a tool, say a soldering iron. To enable the robot arm to do a soldering job, we must construct an ordering of the contact points, so the robot visits (and solders) the first contact point, then the second, the third, and so forth until the job is done.

• Since robots are expensive, we need to find the order which minimizes the time (i.e., travel distance) it takes to assemble the circuit board.

Correctness is Not Obvious!

• You are given the job of programming the robot arm. Give me an algorithm to find the best tour!

Nearest Neighbor Tour

• A very popular solution starts at some point p0, walks to its nearest neighbor p1 first, then repeats from p1, and so on until done.

    Pick and visit an initial point p0
    p = p0
    i = 0
    While there are still unvisited points
        i = i + 1
        Let pi be the closest unvisited point to p(i-1)
        Visit pi
    Return to p0 from pi
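For concreteness, here is a minimal Python sketch of the nearest-neighbor heuristic (the function name and the use of math.dist are my own choices, not from the slides):

    import math

    def nearest_neighbor_tour(points):
        # Greedy sketch: start at points[0], repeatedly walk to the closest
        # unvisited point, and finally return to the start.
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = tour[-1]
            # the unvisited point closest to the current endpoint
            nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
            unvisited.remove(nxt)
            tour.append(nxt)
        return tour  # the robot returns from tour[-1] to tour[0]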

It Does Not Solve The Problem!

• This algorithm is simple to understand and implement, and very efficient. However, it is not correct!

• Always starting from the leftmost point, or from any other point, will not solve the problem.

Closest Pair Tour

• Always walking to the closest point is too restrictive, since that point might trap us into making moves we don't want.

• Another idea would be to repeatedly connect the closest pair of points whose connection will not cause a cycle or a three-way branch to be formed, until we have a single chain with all the points in it.

It is Still Not Correct!

    Let n be the number of points in the set
    d = ∞
    For i = 1 to n - 1 do
        For each pair of endpoints (x, y) of partial paths
            If dist(x, y) < d then
                xm = x, ym = y, d = dist(x, y)
        Connect (xm, ym) by an edge
    Connect the two remaining endpoints by an edge.

• Although it works correctly on the previous example, other data causes trouble:
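A sketch of this closest-pair heuristic in Python (my own rendering of the pseudocode above): every point starts as its own partial path, and only endpoints of different paths may be joined, which rules out cycles and three-way branches.

    import math

    def closest_pair_tour(points):
        paths = [[i] for i in range(len(points))]    # n one-point partial paths
        for _ in range(len(points) - 1):             # n-1 connections
            best = None                              # (dist, a, b, x, y)
            for a in range(len(paths)):
                for b in range(a + 1, len(paths)):
                    for x in (paths[a][0], paths[a][-1]):
                        for y in (paths[b][0], paths[b][-1]):
                            d = math.dist(points[x], points[y])
                            if best is None or d < best[0]:
                                best = (d, a, b, x, y)
            d, a, b, x, y = best
            # orient the paths so x is the tail of one and y the head of the other
            pa = paths[a] if paths[a][-1] == x else paths[a][::-1]
            pb = paths[b] if paths[b][0] == y else paths[b][::-1]
            paths = [p for i, p in enumerate(paths) if i not in (a, b)] + [pa + pb]
        return paths[0]  # close the tour by joining its two endpoints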

A Correct Algorithm

• We could try all possible orderings of the points, then select the ordering which minimizes the total length:

    d = ∞
    For each of the n! permutations Πi of the n points
        If cost(Πi) < d then
            d = cost(Πi) and Pmin = Πi
    Return Pmin
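The same idea in runnable Python (a sketch; itertools.permutations enumerates the n! orderings, and cost here is the length of the closed tour):

    import math
    from itertools import permutations

    def optimal_tour(points):
        def cost(order):
            # total length of the closed tour visiting points in this order
            return sum(math.dist(points[order[k]], points[order[(k + 1) % len(order)]])
                       for k in range(len(order)))
        best = min(permutations(range(len(points))), key=cost)
        return list(best), cost(best)

Since a tour is a cycle, one could fix the starting point and enumerate only (n-1)! permutations; the sketch skips that optimization for clarity.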

It is Not Efficient

• Since all possible orderings are considered, we are guaranteed to end up with the shortest possible tour.

• Because it tries all n! permutations, it is extremely slow, much too slow to use when there are more than 10-20 points.

• No efficient, correct algorithm is known for the traveling salesman problem, as we will see later.

Efficiency

• “Why not just use a supercomputer?”

• Supercomputers are for people too rich and too stupid to design efficient algorithms!

• A faster algorithm running on a slower computer will always win for sufficiently large instances, as we shall see.

• Usually, problems don't have to get that large before the faster algorithm wins.
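To make the claim concrete, here is a small back-of-the-envelope race in Python (the machine speeds and running times are my own illustrative assumptions):

    import math

    # A 10^9 steps/sec machine running an n^2 algorithm vs. a machine 100x
    # slower (10^7 steps/sec) running an n*log2(n) algorithm.
    def fast_machine_seconds(n):
        return n * n / 1e9

    def slow_machine_seconds(n):
        return n * math.log2(n) / 1e7

    for n in (10**3, 10**4, 10**5, 10**6):
        print(n, fast_machine_seconds(n), slow_machine_seconds(n))

Beyond n of roughly 1000, the better algorithm wins despite the slower hardware, and the gap widens rapidly.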

Expressing Algorithms

• We need some way to express the sequence of steps comprising an algorithm.

• In order of increasing precision, we have English, pseudocode, and real programming languages. Unfortunately, ease of expression moves in the reverse order.

• I prefer to describe the ideas of an algorithm in natural language, moving to pseudocode to clarify sufficiently tricky details of the algorithm.

Pseudocode notation

• Similar to any typical imperative programming language, such as Pascal, C, Modula, Java, ...

• Liberal use of English.

• Use of indentation for block structure.

• Employs any clear and concise expressive method.

• Typically not concerned with software engineering issues such as:
    – error handling.
    – data abstraction.
    – modularity.

Algorithm Analysis

• Predicting the amount of resources required, as a function of the size of the input. We must have some quantity to count. Typically:
    – runtime.
    – memory.
    – number of basic operations, such as:
        • arithmetic operations (e.g., for multiplying matrices).
        • bit operations (e.g., for multiplying integers).
        • comparisons (e.g., for sorting and searching).

• Types of Analysis:
    – worst-case.
    – average-case.
    – best-case.

• Types of Analysis:

    – The worst-case complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n.

    – The best-case complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n.

    – The average-case complexity of the algorithm is the function defined by the average number of steps taken over all instances of size n.

    – Each of these complexities defines a numerical function: time vs. size!

Best, Worst, and Average-Case

The RAM Model

• Algorithms are the only important, durable, and original part of computer science because they can be studied in a machine- and language-independent way.

• We will do most of our design and analysis for the RAM model of computation:

1. Each "simple" operation (+, -, =, if, call) takes exactly 1 step.

2. Loops and subroutine calls are not simple operations, but depend upon the size of the data and the contents of the subroutine. We do not want "sort" to be a single-step operation.

3. Each memory access takes exactly 1 step.
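One way to make the model concrete is to instrument a trivial routine and charge one step per simple operation (the exact bookkeeping below is my own illustrative accounting, not from the slides):

    def sum_array_with_step_count(a):
        steps = 0
        total = 0; steps += 1          # one assignment
        i = 0;     steps += 1          # one assignment
        while i < len(a):
            steps += 1                 # one comparison per loop test
            total += a[i]; steps += 2  # one memory access + one addition
            i += 1;        steps += 1  # one increment
        steps += 1                     # the final, failing loop test
        return total, steps

    print(sum_array_with_step_count([5, 1, 4]))  # (10, 15): about 4n + 3 steps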

Insertion Sort

• One way to sort an array of n elements is to start with an empty list, then successively insert new elements in the proper position:

    a1 ≤ a2 ≤ ... ≤ ak | a(k+1) ... an    (sorted prefix | not yet inserted)

• At each stage, an insertion leaves the list sorted, and after n insertions it contains exactly the right elements. Thus the algorithm must be correct.

• But how efficient is it?

• Note that the run time changes with the permutation instance! (even for a fixed problem size)
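Here is the algorithm in runnable Python (a 0-indexed rendering of the pseudocode analyzed below):

    def insertion_sort(a):
        # Grow a sorted prefix a[0..j-1] and insert a[j] into its proper place.
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]   # slide larger elements one slot to the right
                i -= 1
            a[i + 1] = key
        return a

    print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]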

Example

Exact Analysis of Insertion Sort

• In the following table, n = length(A), and tj = the number of times the while test in line 5 is executed in the jth iteration.

    Line  InsertionSort(A)                    #Inst.  #Exec.
    1     for j := 2 to n do                  c1      n (why?)
    2         key := A[j]                     c2      n-1
    3         /* put A[j] into A[1..j-1] */   0       0
    4         i := j-1                        c4      n-1
    5         while i > 0 & A[i] > key do     c5      Σ(j=2..n) tj
    6             A[i+1] := A[i]              c6      Σ(j=2..n) (tj - 1)
    7             i := i-1                    c7      Σ(j=2..n) (tj - 1)
    8         A[i+1] := key                   c8      n-1

• Add up the executed instructions for all pseudocode lines to get the run time of the algorithm (see "The Total Cost" below).

• What are the tj's? They depend on the particular input.
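One way to see the tj's on concrete inputs is to instrument the sort (my own sketch; it is 0-indexed, so the k-th printed value is tj for j = k + 1 in the table's 1-indexed terms):

    def insertion_sort_tj(a):
        t = []
        for j in range(1, len(a)):
            key, i, tests = a[j], j - 1, 0
            while True:
                tests += 1                       # one execution of the while test
                if not (i >= 0 and a[i] > key):
                    break
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
            t.append(tests)
        return t

    print(insertion_sort_tj([1, 2, 3, 4, 5]))  # [1, 1, 1, 1]: sorted input
    print(insertion_sort_tj([5, 4, 3, 2, 1]))  # [2, 3, 4, 5]: tj = j on reversed input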

The Total Cost

    T(n) = c1·n + (c2 + c4 + c8)(n - 1) + c5·Σ(j=2..n) tj + (c6 + c7)·Σ(j=2..n) (tj - 1)

Best Case

• If it's already sorted, all tj‘s are 1.

• Hence, the best case time is

c1·n + (c2 + c4 + c5 + c8)(n - 1) = Cn + D

where C and D are constants.

Worst Case

• If the input is sorted in descending order, we will have to slide all of the already-sorted elements, so tj = j, and step 5 is executed Σ(j=2..n) j = n(n+1)/2 - 1 times.

• The total runtime is therefore quadratic: An² + Bn + C for suitable constants.
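The algebra, reconstructed from the cost table above (the grouping into constants A, B, C is mine), in LaTeX:

    \sum_{j=2}^{n} t_j = \sum_{j=2}^{n} j = \frac{n(n+1)}{2} - 1,
    \qquad
    \sum_{j=2}^{n} (t_j - 1) = \frac{n(n-1)}{2}

    T(n) = \frac{c_5 + c_6 + c_7}{2}\, n^2
         + \Big(c_1 + c_2 + c_4 + \frac{c_5 - c_6 - c_7}{2} + c_8\Big) n
         - (c_2 + c_4 + c_5 + c_8)
         = A n^2 + B n + C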

Average Case

• On a random permutation we expect to scan about half of the sorted prefix on average, so tj ≈ j/2 and the average-case runtime is still quadratic.

Exact Analysis is Hard!

• Exact analysis is difficult to work with because it is typically very complicated!

• Thus it is usually cleaner and easier to talk about upper and lower bounds of the function.

Life Cycle of Algorithm Development

1. Problem Interpretation (appropriate assumptions, abstraction)
2. Formal Description / Specification (problem representation, properties of the problem)
3. Modeling and Analysis of the Problem (ideas, data structures)
4. Algorithm Design (design techniques)
5. Algorithm Analysis (complexity, correctness): if not satisfied, return to Algorithm Design; if satisfied, continue.
6. Algorithm Implementation (faithful coding)
7. Program Verification (partial correctness, termination): if O.K., DONE; if not O.K., return to an earlier step.

Related Courses

• Formal Specification

• Abstract computation models

• Data structure design

• Algorithm design techniques

• Theory of Computation / Complexity

• Software engineering

• Program verification

• Computability

Classification of Algorithms

• by methods (techniques)

• by characteristics

• by running environments (architectures)

Classified by methods (techniques)

• Divide and Conquer
• Dynamic Programming
• Greedy
• Network Flow
• Linear/Integer Programming
• Backtracking
• Branch and Bound
• ...

Classified by characteristics

• Heuristic
• Approximation
• Randomized (Probabilistic)
• On-Line
• Genetic
• ...

Classified by running environments

• Sequential
• Parallel
• Distributed
• Systolic
• ...

Asymptotic Notations

Suppose f and g are functions defined on positive integers:

• Asymptotic upper bound:

    O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n > n0}.

• Asymptotic lower bound:

    Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n > n0}.

• Asymptotic tight bound:

    Θ(g(n)) = {f(n): there exist positive constants c1, c2, and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n > n0}.
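To make the definitions concrete, here is a small worked instance (my own example, not from the slides), written in LaTeX:

    % Claim: 3n^2 + 10n \in \Theta(n^2).
    3n^2 \;\le\; 3n^2 + 10n \;\le\; 3n^2 + 10n^2 \;=\; 13n^2
    \quad \text{for all } n \ge 1,
    % so the definition holds with c_1 = 3, c_2 = 13, and n_0 = 1.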

Useful (abuse of) notation

• Write f(n) = O(g(n)) to mean f(n) ∈ O(g(n)).

• Similarly for Ω and Θ. Very useful, e.g.:

    Big-O:  f(n) = O(g(n))   (upper bound)
    Big-Ω:  f(n) = Ω(g(n))   (lower bound)
    Big-Θ:  f(n) = Θ(g(n))   (tight bound)

Growth Rate of Functions

A Revealing Table

Another Revealing Table

• If an algorithm runs in time T(n), what is the largest problem instance that it can solve in 1 minute?
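A sketch answering this question in Python (the budget of 60 seconds at 10^9 steps/sec is my assumption; exponential search finds an upper bound, then binary search pins down the largest feasible n):

    import math

    def largest_instance(T, budget=60 * 10**9):
        n = 1
        while T(2 * n) <= budget:          # double until T(2n) exceeds the budget
            n *= 2
        lo, hi = n, 2 * n - 1              # the answer lies in [n, 2n - 1]
        while lo < hi:
            mid = (lo + hi + 1) // 2
            lo, hi = (mid, hi) if T(mid) <= budget else (lo, mid - 1)
        return lo

    print(largest_instance(lambda n: n))                 # T(n) = n
    print(largest_instance(lambda n: n * math.log2(n)))  # T(n) = n log n
    print(largest_instance(lambda n: n ** 2))            # T(n) = n^2
    print(largest_instance(lambda n: 2 ** n))            # T(n) = 2^n: only n = 35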

Another definition of Ω

• In using this notation, the left-hand side is more precise than the right. I.e., we write n^2 = O(n^4), 27n^3 = Θ(n^3), Θ(n) = O(n^2); we do not write O(n^2) = n^2.

• Another definition of the big-omega notation:

    f(n) = Ω(g(n)) iff there exist a constant c > 0 and a positive integer n0 such that f(n) ≥ c·g(n) for infinitely many n ≥ n0.

• Why define big-O and big-Ω notation in such an asymmetric way?

Example

• Let f(n) = 0 if n is even, and f(n) = 1.5n if n is odd (compare it against the reference lines g(n) = 2n and g(n) = n).

• What is the asymptotic order of f(n)?

    Clearly, f(n) is O(n) but not Ω(n), and hence not Θ(n).

• What is the lower bound of f(n)?

    According to the original definition, the best lower bound is 0. (Under the alternative definition above, f(n) = Ω(n).)

More Asymptotic Notations

• Upper bound that is not asymptotically tight:

    o(g(n)) = {f(n): for any c > 0, there exists a positive constant n0 such that 0 ≤ f(n) < cg(n) for all n > n0}.

• Lower bound that is not asymptotically tight:

    ω(g(n)) = {f(n): for any c > 0, there exists a positive constant n0 such that 0 ≤ cg(n) < f(n) for all n > n0}.

• f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n)).

• f(n) ∈ o(g(n)) iff g(n) ∈ ω(f(n)).

Comparison of functions

• Many of the relational properties of real numbers apply to asymptotic comparisons as well. For the following, assume that f(n) and g(n) are asymptotically positive.

• Transitivity:

    1. f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)),
    2. f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)),
    3. f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n)),
    4. f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n)),
    5. f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n)).

Relations

• Reflexivity:

    f(n) = Θ(f(n)), f(n) = O(f(n)), f(n) = Ω(f(n)).

• Symmetry:

    f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).

• Transpose symmetry:

    f(n) = O(g(n)) iff g(n) = Ω(f(n)),
    f(n) = o(g(n)) iff g(n) = ω(f(n)).

Asymptotic vs. Real Numbers

• Analogy between the asymptotic comparison of two functions f and g and the comparison of two real numbers a and b:

    f(n) = Θ(g(n))   ≈   a = b
    f(n) = O(g(n))   ≈   a ≤ b
    f(n) = o(g(n))   ≈   a < b
    f(n) = Ω(g(n))   ≈   a ≥ b
    f(n) = ω(g(n))   ≈   a > b

• Any two real numbers can be compared, but not all functions are asymptotically comparable. That is, for two functions f(n) and g(n), it may be the case that neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds. E.g.: f(n) = n^(1+sin n) and g(n) = n^(1-sin n).
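A quick numerical illustration of that last pair (my own sketch): since sin n keeps swinging between -1 and 1, each function overtakes the other infinitely often.

    import math

    for n in (10, 100, 1000, 10000):
        f = n ** (1 + math.sin(n))
        g = n ** (1 - math.sin(n))
        print(n, "f > g" if f > g else "f < g")
    # prints f < g at n = 10, 100, 10000 but f > g at n = 1000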

Properties of Asymptotic Notations

1. For all k > 0, kf is O(f).
2. If f is O(g) and f' is O(g), then (f + f') is O(g). If f is O(g), then (f + g) is O(g).
3. If f is O(g) and g is O(h), then f is O(h).
4. n^r is O(n^s) if 0 ≤ r ≤ s.
5. If p is a polynomial of degree d, then p is Θ(n^d).
6. If f is O(g) and h is O(r), then f·h is O(g·r).
7. n^k is O(b^n), for all b > 1, k ≥ 0.
8. log_b n is O(n^k), for all b > 1, k > 0.
9. log_b n is Θ(log_d n), for all b, d > 1.
10. Σ(k=1..n) k^r = O(n^(r+1)).

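A one-line justification of property 10 (my own, in LaTeX): each of the n terms is at most n^r, so

    \sum_{k=1}^{n} k^r \;\le\; \sum_{k=1}^{n} n^r \;=\; n \cdot n^r \;=\; n^{r+1},
    \qquad \text{hence} \qquad \sum_{k=1}^{n} k^r = O(n^{r+1}).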

Intractability

• Definition: An algorithm has polynomial time complexity iff its time complexity is O(n^d) for some integer d. A problem is intractable iff no algorithm with polynomial time complexity is known for it.

Exercises:

• Is 3^n O(2^n)?

• What is O(n^2 - n + 6) - O(n^2 - 6)?

• Find functions f and g such that (1) f is O(g), (2) f is not Θ(g), and (3) f(n) > g(n) for infinitely many n.

• Order the following functions by their growth rate, from smallest to largest:

    (a) log n!           (b) log n^n           (c) 2^(log n log log n)
    (d) 2^(2^n)          (e) 3^(log_2 n)       (f) 2^2001
    (g) 2^(n^2)          (h) n·2^(log log n)   (i) n^1999
    (j) 0.9^n            (k) n                 (l) n log n
    (m) n^0.0001         (n) n^1.5             (o) (log n)^2000

• Answer:

    Θ(0.9^n) < Θ(2^2001) < Θ((log n)^2000) < Θ(n^0.0001) < Θ(n)
    < Θ(log n!) = Θ(log n^n) = Θ(n·2^(log log n)) = Θ(n log n)
    < Θ(n^1.5) < Θ(3^(log n)) < Θ(n^1999) < Θ(2^(log n log log n))
    < Θ(2^(n^2)) < Θ(2^(2^n))


Proof of log n! = Θ(n log n)

• To show f(n) = Θ(g(n)), we must show both O and Ω. Go back to the definitions.

    – Big O: must show that log n! = O(n log n).

        log n! < log n^n = n log n, therefore log n! = O(n log n).

    – Big Ω: must show that log n! = Ω(n log n). Since the largest n/2 factors of n! are each at least n/2,

        log n! > log (n/2)^(n/2) = (n/2) log(n/2) > n(log n)/4 for n > 4.

        Thus, log n! = Ω(n log n).

• Therefore, log n! = Θ(n log n).
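An empirical sanity check of the theorem (my own sketch; math.lgamma(n + 1) computes ln n! exactly enough for this purpose):

    import math

    # ln(n!) / (n ln n) should approach 1, consistent with log n! = Θ(n log n);
    # convergence is slow, since Stirling gives ln n! ≈ n ln n - n.
    for n in (10, 100, 1000, 10**6):
        print(n, math.lgamma(n + 1) / (n * math.log(n)))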