Dynamic programming


Transcript of Dynamic programming

Page 1: Dynamic programming


Dynamic programming

Dynamic Programming is a general algorithm design technique for solving problems defined by, or formulated as, recurrences with overlapping subinstances. It was invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems.

Main idea:

- set up a recurrence relating a solution of a larger instance to solutions of some smaller instances

- solve each smaller instance once

- record the solutions in a table

- extract a solution to the initial instance from that table

Page 2: Dynamic programming


• Applicable when subproblems are not independent

– Subproblems share subproblems. E.g., combinations:

C(n, k) = C(n-1, k) + C(n-1, k-1)

C(n, 0) = C(n, n) = 1

A divide-and-conquer approach would repeatedly solve the common subproblems.

Dynamic programming solves every subproblem just once and stores the answer in a table.
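The combinations recurrence above can be tabulated bottom-up; a minimal Python sketch (the language and the function name are my choice, not from the slides):

```python
def binomial(n, k):
    # table[i][j] holds C(i, j); each subproblem is solved once,
    # stored in the table, and looked up by larger instances.
    table = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                table[i][j] = 1  # base cases: C(i, 0) = C(i, i) = 1
            else:
                # Pascal's rule: C(i, j) = C(i-1, j) + C(i-1, j-1)
                table[i][j] = table[i - 1][j] + table[i - 1][j - 1]
    return table[n][k]
```

A naive divide-and-conquer recursion would recompute the same C(i, j) values exponentially many times; the table makes the total work O(nk).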

Page 3: Dynamic programming

• Used for optimization problems

– A set of choices must be made to get an optimal solution

– Find a solution with the optimal value (minimum or maximum)

– There may be many solutions that lead to an optimal value

– Our goal: find an optimal solution

Page 4: Dynamic programming

Dynamic Programming Algorithm

1. Characterize the structure of an optimal solution

2. Recursively define the value of an optimal solution

3. Compute the value of an optimal solution in a bottom-up fashion

4. Construct an optimal solution from computed information (not always necessary)
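As a minimal illustration of steps 2 and 3 (Fibonacci is my example here, not one from the slides): the recurrence F(n) = F(n-1) + F(n-2) is defined recursively, then computed bottom-up in a table.

```python
def fib(n):
    # Step 2: recurrence F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1.
    # Step 3: fill the table from the smallest instances upward.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```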

Page 5: Dynamic programming

• An algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions.

• A dynamic programming algorithm stores results, or solutions, for small subproblems and looks them up, rather than recomputing them, when it later needs them to solve larger subproblems.

• Typically applied to optimization problems.

Page 6: Dynamic programming

Principle of Optimality

• An optimal sequence of decisions has the property that, whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

• Essentially, this principle states that an optimal solution for a larger subproblem contains an optimal solution for a smaller subproblem.

Page 7: Dynamic programming

Dynamic Programming vs. Greedy Method

Greedy Method: only one decision sequence is ever generated.

Dynamic Programming: many decision sequences may be generated.

Page 8: Dynamic programming

Dynamic Programming vs. Divide-and-Conquer

Divide-and-Conquer: partitions the problem into independent subproblems, solves the subproblems recursively, and then combines their solutions to solve the original problem.

Dynamic Programming: applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. It solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Page 9: Dynamic programming

Knapsack Problem

• In a knapsack problem or rucksack problem, we are given a set of items, where each item is specified by a size and a value. We are also given a size bound: the size of our knapsack.

Item #  Size  Value
1       1     8
2       3     6
3       5     5

Page 10: Dynamic programming

The Knapsack Problem

• The famous knapsack problem:

– A thief breaks into a museum. Fabulous paintings, sculptures, and jewels are everywhere. The thief has a good eye for the value of these objects, and knows that each will fetch hundreds or thousands of dollars on the clandestine art collector’s market. But the thief has only brought a single knapsack to the scene of the robbery, and can take away only what he can carry. What items should the thief take to maximize the haul?

Page 11: Dynamic programming

Knapsack Problem – in Short

• A thief considers taking W pounds of loot. The loot is in the form of n items, each with weight wi and value vi. Any amount of an item can be put in the knapsack as long as the weight limit W is not exceeded.

Page 12: Dynamic programming


Knapsack Problem – 2 types

1. 0-1 Knapsack Problem (Dynamic Programming Solution)

2. Fractional Knapsack Problem (Greedy Approach Solution)

Page 13: Dynamic programming

0-1 Knapsack problem

• Given a knapsack with maximum capacity W, and a set S consisting of n items

• Each item i has some weight wi and benefit value bi (all wi, bi, and W are integer values)

• Problem: How to pack the knapsack to achieve maximum total value of packed items?

• Solution: Dynamic Programming.
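A standard bottom-up DP table for the 0-1 knapsack, as a Python sketch (the function and variable names are my choice, not from the slides):

```python
def knapsack_01(weights, values, W):
    # V[i][w] = maximum total value using only the first i items
    # with remaining capacity w; each entry is computed once.
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]          # option 1: skip item i
            if weights[i - 1] <= w:        # option 2: take item i, if it fits
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + values[i - 1])
    return V[n][W]
```

With the item list from the earlier Knapsack Problem slide (sizes 1, 3, 5; values 8, 6, 5) and capacity 4, the best choice is items 1 and 2, for a total value of 14.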

Page 14: Dynamic programming


Fractional Knapsack Problem

• Fractional Knapsack Problem: we are given n objects and a knapsack. Object i has a weight wi and the knapsack has a capacity m. If a fraction xi, 0<=xi <=1, of object i is placed into the knapsack, then a profit pi xi is earned. The objective is to obtain a filling of the knapsack that maximizes the total profit earned.

Page 15: Dynamic programming

Fractional Knapsack

maximize   Σ (i = 1 to n) pi xi

subject to Σ (i = 1 to n) wi xi ≤ m

           0 ≤ xi ≤ 1, 1 ≤ i ≤ n

Greedy Algorithm gives the optimal solution for the Fractional Knapsack Problem.

Page 16: Dynamic programming

Fractional Knapsack – Greedy Solution

• Greedy-fractional-knapsack (w, v, W)

1.  for i = 1 to n
2.      do x[i] = 0
3.  weight = 0
4.  while weight < W
5.      do i = best remaining item
6.         if weight + w[i] ≤ W
7.            then x[i] = 1
8.                 weight = weight + w[i]
9.            else
10.                x[i] = (W - weight) / w[i]
11.                weight = W
12. return x
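The pseudocode above can be rendered as runnable Python; a sketch assuming "best remaining item" means the item with the largest value/weight ratio (the criterion the following Analysis slide uses):

```python
def fractional_knapsack(w, v, W):
    # x[i] is the fraction of item i taken (0 to 1).
    n = len(w)
    x = [0.0] * n
    # Consider items in decreasing order of value per unit weight.
    order = sorted(range(n), key=lambda i: v[i] / w[i], reverse=True)
    weight = 0.0
    for i in order:
        if weight + w[i] <= W:
            x[i] = 1.0                    # whole item fits
            weight += w[i]
        else:
            x[i] = (W - weight) / w[i]    # take the fitting fraction
            break                         # knapsack is now full
    return x
```

For example, with weights [10, 20, 30], values [60, 100, 120], and W = 50, the greedy choice takes items 1 and 2 whole and 2/3 of item 3, for a total profit of 240.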

Page 17: Dynamic programming

Analysis

• If the items are already sorted into decreasing order of vi / wi, then the while-loop takes time in O(n).

• Therefore, the total time including the sort is in O(n log n).

• If we keep the items in a heap with the largest vi / wi at the root, then creating the heap takes O(n) time.

• Each iteration of the while-loop now takes O(log n) time (since the heap property must be restored after the removal of the root).

Page 18: Dynamic programming

The edit distance problem

• 3 edit operations: insertion, deletion, replacement
• e.g. string A = ‘vintner’, string B = ‘writers’

  v - i n t n e r -
  w r i - t - e r s
  R I M D M D M M I

M: match, I: insert, D: delete, R: replace

• The edit cost of each I, D, or R is 1.
• The edit distance between A and B: 5.

Page 19: Dynamic programming

The edit distance between two strings

Seminar in Structural Bioinformatics - Pairwise sequence alignment algorithms

The permitted edit operations are: Insertion, Deletion, Replacement.

Definition: A string over the alphabet {I, D, R, M} that describes a transformation of one string to another is called an edit transcript, or transcript for short, of the two strings (M denotes a match).

R I M D M D M M I
v - i n t n e r -
w r i - t - e r s

Page 20: Dynamic programming

20

The edit distance between two strings

Seminar in Structural Bioinformatics - Pairwise sequence alignment algorithms

Definition: The edit distance between two strings is defined as the minimum number of edit operations – insertion, deletion, and substitution – needed to transform the first string into the second.

For emphasis, note that matches are not counted.

Page 21: Dynamic programming

String alignment

Definition: A (global) alignment of two strings S1 and S2, is obtained by first inserting chosen spaces (or dashes), either into or at the ends of S1 and S2, and then placing the two resulting strings one above the other so that every character or space in either string is opposite a unique character or a unique space in the other string.

Page 22: Dynamic programming

The edit distance algorithm

• Let A = a1 a2 … am and B = b1 b2 … bn

• Let Di,j denote the edit distance of a1 a2 … ai and b1 b2 … bj.

Di,0 = i, 0 ≤ i ≤ m

D0,j = j, 0 ≤ j ≤ n

Di,j = min{ Di-1,j + 1, Di,j-1 + 1, Di-1,j-1 + ti,j }, 1 ≤ i ≤ m, 1 ≤ j ≤ n

where ti,j = 0 if ai = bj and ti,j = 1 if ai ≠ bj.
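The recurrence translates directly into a table-filling algorithm; a minimal Python sketch (names are my choice, not from the slides):

```python
def edit_distance(A, B):
    # D[i][j] = edit distance between A[:i] and B[:j].
    m, n = len(A), len(B)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i                         # delete all i characters of A[:i]
    for j in range(n + 1):
        D[0][j] = j                         # insert all j characters of B[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            t = 0 if A[i - 1] == B[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,       # delete a_i
                          D[i][j - 1] + 1,       # insert b_j
                          D[i - 1][j - 1] + t)   # match or replace
    return D[m][n]
```

On the running example, edit_distance('vintner', 'writers') returns 5, matching the transcript R I M D M D M M I shown earlier.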