Uniform Cost Search


Transcript of Uniform Cost Search

Page 1: Uniform Cost Search

[Figure: search graph with nodes A, B, C, D, G; edge costs A-B=1, B-C=1, C-D=1, D-G=2, and a direct edge A-G=9]

Uniform Cost Search

Queue trace: N0:A(0); N1:B(1), N2:G(9); N3:C(2); N4:D(3); N5:G(5)

Completeness? Optimality? If d < d’, then paths with distance d are explored before those with distance d’ (a Branch & Bound argument, as long as all operator costs are positive). Efficiency? (as bad as blind search..)

[Figure: “Bait & Switch” graph with nodes A, B, C, D, G; edge costs A-B=0.1, B-C=0.1, C-D=0.1, D-G=25, and a direct edge A-G=9]

Notation: C(n,n’) is the cost of the edge between n and n’; g(n) is the distance of n from the root; dist(n,n’’) is the shortest distance between n and n’’.

If the graph is undirected?
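The queue trace above can be reproduced with a short Uniform Cost Search implementation. This Python sketch (function and variable names are illustrative, not from the slides) runs on the slide's graph:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand nodes in order of g(n), the cost from the root.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    Returns (cost, path) of an optimal path, or None if no path exists.
    """
    frontier = [(0, start, [start])]          # priority queue ordered by g
    best_g = {}                               # cheapest g seen per expanded node
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                      # goal test on expansion => optimal
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                          # already expanded more cheaply
        best_g[node] = g
        for nbr, cost in graph.get(node, []):
            heapq.heappush(frontier, (g + cost, nbr, path + [nbr]))
    return None

# The slide's graph: A-B-C-D-G chain (costs 1,1,1,2) plus a direct A-G edge of 9.
graph = {'A': [('B', 1), ('G', 9)], 'B': [('C', 1)],
         'C': [('D', 1)], 'D': [('G', 2)]}
print(uniform_cost_search(graph, 'A', 'G'))   # (5, ['A', 'B', 'C', 'D', 'G'])
```

Note that the goal G first enters the queue with g=9 (N2) but is only returned when it is popped with g=5 (N5), matching the trace.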

Page 2: Uniform Cost Search

[Figure: the same graph; edge costs A-B=1, B-C=1, C-D=1, D-G=2, and A-G=9, now treated as undirected]

Uniform Cost Search on Undirected Graph

Queue trace: N0:A(0); N1:B(1), N2:G(9); N31:C(2), N32:A(2); N41:D(3), N42:B(3); N51:G(5), N52:C(3)

Cycles do not affect the completeness and optimality!

The same world state may come into the search queue multiple times (e.g. N0:A; N32:A), but each additional time it comes with a higher g-value!

If edge costs are positive, then no single node can be expanded more than a finite number of times.

We will eventually get out of the cycles and find the optimal solution.

You can reduce the number of repeated expansions by doing cycle checking.

Page 3: Uniform Cost Search

Visualizing Breadth-First & Uniform Cost Search

Breadth-First goes level by level

This is also a proof of optimality…

Page 4: Uniform Cost Search

Proof of Optimality of Uniform Cost Search

Proof of optimality: Let N be the goal node we output. Suppose there is another goal node N’. We want to prove that g(N’) >= g(N). Suppose this is not true, i.e. g(N’) < g(N) --Assumption A1.

When N was picked up for expansion, either N’ itself, or some ancestor of N’, say N’’, must have been on the search queue.

If we picked N instead of N’’ for expansion, it was because

g(N) <= g(N’’) ---Fact f1

But g(N’) = g(N’’) + dist(N’’,N’), so g(N’) >= g(N’’). So from f1, we have g(N) <= g(N’). But this contradicts our assumption A1.

[Figure: search tree rooted at N0, with goal nodes N and N’ and with N’’, an ancestor of N’, still on the queue]

This holds only because dist(N’’,N’) >= 0, which will hold if every operator has positive cost.

The partial path to N’ through N’’ is already longer than the path to N.

Page 5: Uniform Cost Search

“Informing” Uniform Cost Search…

[Figure: the “Bait & Switch” graph; edge costs A-B=0.1, B-C=0.1, C-D=0.1, D-G=25, and a direct edge A-G=9]

Queue trace: N0:A(0); N1:B(.1), N2:G(9); N3:C(.2); N4:D(.3); N5:G(25.3)

Would be nice if we could tell that N2 is better than N1. --Need to take into account not just the distance until now, but also the distance to the goal. --Computing the true distance to the goal is as hard as the full search. --So, try “bounds” h(n): prioritize nodes in terms of f(n) = g(n) + h(n). Two bounds: h1(n) <= h*(n) <= h2(n). Which guarantees optimality? --If h1(n) <= h2(n) <= h*(n), which is the better function?

Admissibility

Informedness

Page 6: Uniform Cost Search

A*
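A minimal A* sketch in Python follows. The graph is the slides' Bait & Switch example; the heuristic values are illustrative (chosen admissible, with h(C)=0 loosely following the slides' later annotation), and the function names are my own:

```python
import heapq

def astar(graph, h, start, goal):
    """A*: expand nodes in order of f(n) = g(n) + h(n).

    graph: dict node -> list of (neighbor, edge_cost); h: dict node -> estimate.
    Returns (cost, path) or None. Optimal when h is admissible (h <= h*).
    """
    frontier = [(h[start], 0, start, [start])]   # ordered by f, then g
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

# Bait & Switch graph with an admissible (though inconsistent) heuristic:
graph = {'A': [('B', 0.1), ('G', 9)], 'B': [('C', 0.1)],
         'C': [('D', 0.1)], 'D': [('G', 25)]}
h = {'A': 7, 'B': 8.8, 'C': 0, 'D': 25, 'G': 0}
print(astar(graph, h, 'A', 'G'))   # (9, ['A', 'G'])
```

Unlike uniform cost search, A* with this heuristic never expands the cheap-looking D branch far enough to pay the 25-cost edge: G is popped with f = 9 first.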

Page 7: Uniform Cost Search

A* (if there are multiple goal nodes, we consider the distance to the nearest goal node)

Several proofs: 1. Based on branch and bound --g(N) is better than f(N’’), and f(N’’) <= cost of the best path through N’’. 2. Based on contours --f() contours are more goal-directed than g() contours. 3. Based on contradiction.

[Figure: search tree rooted at N0, with goal node N and node N’’ on the queue]

Page 8: Uniform Cost Search

Where do heuristics (bounds) come from? From relaxed problems (the more relaxed the problem, the easier the heuristic is to compute, but the less accurate it is).

For path planning on the plane (with obstacles)?

For 8-puzzle problem?

For Traveling sales person?

Assume away obstacles. The distance will then be the straight-line distance (see next slide for other abstractions).

Assume the ability to move a tile directly to its place: distance = # misplaced tiles. Assume the ability to move only one position at a time: distance = sum of Manhattan distances.

Relax the “circuit” requirement. Minimum spanning tree
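The two 8-puzzle relaxations above can be sketched directly in Python (states are 9-tuples in row-major order, 0 is the blank; names are my own):

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles out of place (the blank, tile 0, is ignored)."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

def manhattan(state, goal, width=3):
    """h2: sum of Manhattan distances of each tile from its goal square."""
    pos = {tile: i for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = pos[tile]
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # blank slid two squares left
print(misplaced_tiles(state, goal), manhattan(state, goal))  # 2 2
```

Both are admissible, and manhattan always dominates misplaced_tiles (h1 <= h2 <= h*), illustrating the "less relaxed, more informed" point.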

Page 9: Uniform Cost Search

Different levels of abstraction for shortest path problems on the plane

[Figure: the same shortest-path problem from I to G under different abstractions of the obstacles: “disappearing-act abstraction” (hD), “circular abstraction” (hC), “polygonal abstraction” (hP), and the actual problem (h*)]

The obstacles in the shortest path problem can be abstracted in a variety of ways. --The more the abstraction, the cheaper it is to solve the problem in the abstract space. --The less the abstraction, the more “informed” the heuristic cost (i.e., the closer the abstract path length is to the actual path length).

Why are we inscribing the obstacles rather than circumscribing them?

Page 10: Uniform Cost Search

[Figure: plot over heuristics h0 … hD, hC, hP … h* of the cost of computing the heuristic, the cost of searching with the heuristic, and the total cost incurred in search]

Not always clear where the total minimum occurs
• Old wisdom was that the global min was closer to cheaper heuristics
• Current insights are that it may well be far from the cheaper heuristics for many problems
• E.g. pattern databases for 8-puzzle; polygonal abstractions for SP; plan graph heuristics for planning

How informed should the heuristic be?


Reduced level of abstraction (i.e. more and more concrete)

Page 11: Uniform Cost Search
Page 12: Uniform Cost Search

Admissibility/Informedness

[Figure: heuristic value vs. search nodes, plotting h1, h2, h3, h4, h5 against h*; max(h2,h3) is also plotted]
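The max(h2,h3) curve in the figure reflects a general trick: the max of admissible heuristics is itself admissible (each component still underestimates h*) and dominates each component. A tiny Python sketch, with made-up stand-in heuristics:

```python
def combine(*heuristics):
    """max of admissible heuristics: still admissible, dominates each one."""
    return lambda n: max(h(n) for h in heuristics)

# Made-up stand-ins for two admissible estimates at integer "nodes":
h2 = lambda n: n % 4
h3 = lambda n: n % 3
h23 = combine(h2, h3)
print(h23(7))   # max(7 % 4, 7 % 3) = max(3, 1) = 3
```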

Page 13: Uniform Cost Search

Performance on 15 Puzzle

• Random 15 puzzle instances were first solved optimally using IDA* with Manhattan distance heuristic (Korf, 1985).

• Optimal solution lengths average 53 moves.
• 400 million nodes generated on average.
• Average solution time is about 50 seconds on current machines.

Page 14: Uniform Cost Search

Limitation of Manhattan Distance

• To solve a 24-Puzzle instance, IDA* with Manhattan distance would take about 65,000 years on average.

• Assumes that each tile moves independently.
• In fact, tiles interfere with each other.
• Accounting for these interactions is the key to more accurate heuristic functions.

Page 15: Uniform Cost Search

More Complex Tile Interactions

[Figure: three partially scrambled 15-puzzle configurations compared with the goal state]

• Manhattan distance (M.d.) is 19 moves, but 31 moves are needed.
• M.d. is 20 moves, but 28 moves are needed.
• M.d. is 17 moves, but 27 moves are needed.

Page 16: Uniform Cost Search

Pattern Database Heuristics

• Culberson and Schaeffer, 1996.
• A pattern database is a complete set of such positions, with associated number of moves.
• e.g. a 7-tile pattern database for the Fifteen Puzzle contains 519 million entries.

Page 17: Uniform Cost Search

Heuristics from Pattern Databases

[Figure: the Fifteen Puzzle goal state (tiles 1-15 in order) next to a scrambled state]

31 moves is a lower bound on the total number of moves needed to solve this particular state.

Page 18: Uniform Cost Search

Precomputing Pattern Databases

• Entire database is computed with one backward breadth-first search from goal.

• All non-pattern tiles are indistinguishable, but all tile moves are counted.

• The first time each state is encountered, the total number of moves made so far is stored.

• Once computed, the same table is used for all problems with the same goal state.
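The recipe above (backward breadth-first search from the goal, non-pattern tiles made indistinguishable, all tile moves counted) can be sketched at small scale on the 3x3 puzzle. The function names and the 3-tile pattern are illustrative, not from the slides:

```python
from collections import deque

def build_pattern_db(goal, pattern_tiles, width=3):
    """Backward BFS from the goal over abstracted (projected) states.

    Non-pattern tiles are replaced by -1 (indistinguishable), but every
    tile move is counted. The first time a projected state is reached,
    the move count so far is stored as its heuristic value.
    """
    def project(state):
        return tuple(t if t in pattern_tiles or t == 0 else -1 for t in state)

    def neighbors(state):
        i = state.index(0)                      # blank position
        r, c = divmod(i, width)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < width and 0 <= c2 < width:
                j = r2 * width + c2
                s = list(state)
                s[i], s[j] = s[j], s[i]         # slide a tile into the blank
                yield tuple(s)

    start = project(goal)
    db = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in neighbors(state):
            if nxt not in db:                   # first visit = shortest distance
                db[nxt] = db[state] + 1
                queue.append(nxt)
    return db

db = build_pattern_db((1, 2, 3, 4, 5, 6, 7, 8, 0), pattern_tiles={1, 2, 3})
```

A lookup heuristic would then be h(s) = db[project(s)]. Because the abstraction only merges states, the stored counts underestimate true distances, so the heuristic is admissible.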

Page 19: Uniform Cost Search

Proof of Optimality of A* Search

Proof of optimality: Let N be the goal node we output. Suppose there is another goal node N’. We want to prove that g(N’) >= g(N). Suppose this is not true, i.e. g(N’) < g(N) --Assumption A1.

When N was picked up for expansion, either N’ itself, or some ancestor of N’, say N’’, must have been on the search queue.

If we picked N instead of N’’ for expansion, it was because

f(N) <= f(N’’) ---Fact f1
i.e. g(N) + h(N) <= g(N’’) + h(N’’). Since N is a goal node, h(N) = 0. So, g(N) <= g(N’’) + h(N’’).

But g(N’) = g(N’’) + dist(N’’,N’). Given h(N’’) <= h*(N’’) <= dist(N’’,N’) (lower bound), g(N’) = g(N’’) + dist(N’’,N’) >= g(N’’) + h(N’’) ---Fact f2. So from f1 and f2 we have g(N) <= g(N’). But this contradicts our assumption A1.

[Figure: search tree rooted at N0, with goal nodes N and N’ and with N’’, an ancestor of N’, still on the queue]

Holds only because h(N’’) is a lower bound on dist(N’’,N’)

The lower-bound (optimistic) estimate on the length of the path to N’ through N’’ is already longer than the path to N.

f(n) is the estimate of the length of the shortest path to goal passing through n

Page 20: Uniform Cost Search

Visualizing A* Search

[Figure: uniform cost search contours (uniform in g around the start) vs. A* contours (stretched toward the goal)]

A* will not expand nodes with f > f* (f* is the f-value of the optimal goal, which is the same as g* since the h-value is zero for goals).

But, does the f-value really increase along a path?

Page 21: Uniform Cost Search

A* Search—Admissibility, Monotonicity, Pathmax Correction

[Figure: the “Bait & Switch” graph (edge costs A-B=.1, B-C=.1, C-D=.1, D-G=25, and a direct edge A-G=9), annotated with two candidate heuristic functions, shown in magenta and green on the original slide]

With the magenta h: N0:A(0); N1:B(.1+8.8), N2:G(9+0); N3:C(max(.2+0, 8.9)); N4:D(.3+25)

With the green h: N0:A(0); N1:B(.1+25.2), N2:G(9+0)

f(B) = .1+8.8 = 8.9; f(C) = .2+0 = 0.2. This doesn’t make sense, since we are reducing the estimate of the actual cost of the path A—B—C—D—G. To make f(.) monotonic along a path, we say f(n) = max( f(parent), g(n)+h(n) ).

PathMax Adjustment

This is just enforcing the triangle law of inequality: the sum of two sides must be greater than the third.

[Figure: triangle on B, C, G with f(B) = 8.9, f(C) = 0.2, and C(B,C) = 0.1]

Is magenta-h admissible?

Is green-h admissible? Does f(c) make sense?

Why wasn’t this needed for uniform cost search?
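The PathMax adjustment itself is one line; a sketch with the slide's numbers (the function name is my own):

```python
def pathmax_f(parent_f, g_child, h_child):
    """PathMax: never let f decrease along a path.

    With an inconsistent (but admissible) h, a child's raw g+h can dip
    below its parent's f; taking the max restores monotonicity.
    """
    return max(parent_f, g_child + h_child)

# Slide's numbers: f(B) = 0.1 + 8.8 = 8.9, raw f(C) = 0.2 + 0 = 0.2
print(pathmax_f(8.9, 0.2, 0))   # 8.9
```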

Page 22: Uniform Cost Search
Page 23: Uniform Cost Search

A* Search—Admissibility, Monotonicity, Pathmax Correction

PathMax Adjustment

Consistency of a heuristic will be enforced if it obeys the triangle inequality.

[Figure: triangle on B, C, G with h(B) = 8.8, h(C) = 0.1, and C(B,C) = 0.1]

Is magenta-h admissible?

Is green-h admissible? Does f(c) make sense?

Why wasn’t this needed for uniform cost search?

If a heuristic is inconsistently optimistic, then it can be more optimistic at a node than its parent node. When this happens, f values of the expanded nodes are not guaranteed to increase monotonically.

Page 24: Uniform Cost Search

IDA*: do iterative-deepening depth-first search, but set the threshold in terms of f (not depth).

Page 25: Uniform Cost Search

IDA* to handle the A* memory problem

• Basically IDDFS, except instead of the iterations being defined in terms of depth, we define them in terms of f-value
– Start with the f cutoff equal to the f-value of the root node
– Loop
• Generate and search all nodes whose f-values are less than or equal to the current cutoff.
– Use depth-first search to search the trees in the individual iterations
– Keep track of the node N’ which has the smallest f-value that is still larger than the current cutoff. Let this f-value be next-largest-f-value.
– If the search finds a goal node, terminate. If not, set cutoff = next-largest-f-value and go back to Loop

Properties: Linear memory. #Iterations in the worst case? = b^d !! (Happens when all nodes have distinct f-values. There is such a thing as too much discrimination…)
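The loop above can be sketched recursively as follows. Names are illustrative; in the demo h is identically 0, which reduces IDA* to iterative deepening on g over the first slide's graph:

```python
import math

def ida_star(start, h, successors, is_goal):
    """IDA*: iterative deepening DFS with an f = g + h cutoff.

    Each iteration runs depth-first search, pruning nodes with f > cutoff;
    the next cutoff is the smallest f-value that exceeded the current one.
    """
    def dfs(node, g, cutoff, path):
        f = g + h(node)
        if f > cutoff:
            return None, f                  # prune; report the overshoot f
        if is_goal(node):
            return path, f
        next_cutoff = math.inf
        for child, cost in successors(node):
            if child in path:               # cycle check along the current path
                continue
            found, t = dfs(child, g + cost, cutoff, path + [child])
            if found is not None:
                return found, t
            next_cutoff = min(next_cutoff, t)
        return None, next_cutoff

    cutoff = h(start)
    while cutoff < math.inf:                # inf => no goal reachable
        found, cutoff = dfs(start, 0, cutoff, [start])
        if found is not None:
            return found
    return None

graph = {'A': [('B', 1), ('G', 9)], 'B': [('C', 1)],
         'C': [('D', 1)], 'D': [('G', 2)]}
path = ida_star('A', lambda n: 0, lambda n: graph.get(n, []),
                lambda n: n == 'G')
print(path)   # ['A', 'B', 'C', 'D', 'G']
```

Only the current path is stored, so memory is linear in solution depth, at the cost of regenerating the shallow part of the tree in every iteration.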

Page 26: Uniform Cost Search

Using memory more effectively: SMA*

• A* can take exponential space in the worst case
• IDA* takes linear space (in solution depth) always
• If A* is consuming too much space, one can argue that IDA* is consuming too little
• A better idea is to use all the memory that is available, and start cleaning up as memory starts filling up
– Idea: When the memory is about to fill up, remove the leaf node with the worst f-value from the search tree
• But remember its f-value at its parent (which is still in the search tree)
– Since the parent is now a leaf node, it too can get removed to make space
• If ever the rest of the tree starts looking less promising than the parent of the removed node, the parent will be picked up and expanded again.
– Works quite well—but can thrash when memory is too low
• Not unlike your computer with too little RAM..

Page 27: Uniform Cost Search
Page 28: Uniform Cost Search

Nothing beyond this was discussed in Spring 2012

Page 29: Uniform Cost Search

Used while discussing A* alg

Page 30: Uniform Cost Search

On “predicting” the effectiveness of Heuristics

• Unfortunately, it is not the case that a heuristic h1 that is more informed than h2 will always do fewer node expansions than h2. We can only guarantee that h1 will expand fewer nodes with f-value less than f* than h2 will.

• Consider the plot on the right… do you think h1 or h2 is likely to do better in actual search?
– The “differentiation” ability of the heuristic, i.e., the ability to tell good nodes from the bad ones, is also important. But it is harder to measure.

• Some new work that does a histogram characterization of the distribution of heuristic values [Korf, 2000]

• Nevertheless, informedness of heuristics is a reasonable qualitative measure.

[Figure: heuristic value vs. nodes, plotting h1 and h2 against h*]

Let us divide the number of nodes expanded, nE, into two parts: nI, the number of expanded nodes whose f-values were strictly less than f* (i.e. the cost of the optimal goal), and nG, the number of expanded nodes with f-value greater than f*. So nE = nI + nG.

A more informed heuristic is only guaranteed to have a smaller nI; all bets are off as far as the nG value is concerned. In many cases nG may be relatively large compared to nI, making nE wind up being higher for the more informed heuristic!

Is h1 better or h2?

Page 31: Uniform Cost Search

Not enough to show the correct configuration of the 8-puzzle problem or Rubik’s Cube.. (although by including the list of actions as part of the state, you can support hill-climbing)

Page 32: Uniform Cost Search
Page 33: Uniform Cost Search
Page 34: Uniform Cost Search
Page 35: Uniform Cost Search

What is needed:
--A neighborhood function. The larger the neighborhood you consider, the less myopic the search (but the more costly each iteration).
--A “goodness” function. It needs to give a value to non-solution configurations too; for 8-queens: (negative of) the number of pair-wise conflicts.
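For 8-queens, the neighborhood and goodness functions might look like this sketch (the board representation, one queen per column, and the names are my own):

```python
def conflicts(board):
    """Goodness = -(number of pairwise attacking queen pairs).

    board[i] = row of the queen in column i (one queen per column).
    """
    n = len(board)
    attacking = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if same_row or same_diag:
                attacking += 1
    return -attacking

def neighbors(board):
    """Neighborhood: move any single queen within its own column."""
    for col in range(len(board)):
        for row in range(len(board)):
            if row != board[col]:
                nb = list(board)
                nb[col] = row
                yield nb

board = [0, 1, 2, 3, 4, 5, 6, 7]   # all queens on the main diagonal
print(conflicts(board))             # -28: every pair attacks diagonally
```

Hill-climbing then repeatedly moves to the neighbor with the highest goodness; a solution has goodness 0.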

Page 36: Uniform Cost Search

Problematic scenarios for hill-climbing

When the state-space landscape has local minima, any search that moves only in the greedy direction cannot be (asymptotically) complete

Random walk, on the other hand, is asymptotically complete

Idea: Put random walk into greedy hill-climbing

Ridges

Solution(s):
--Random restart hill-climbing
--Do the non-greedy thing with some probability p > 0
--Use simulated annealing

Page 37: Uniform Cost Search

Making Hill-Climbing Asymptotically Complete

• Random restart hill-climbing
– Keep some bound B. When you have made more than B moves, reset the search with a new random initial seed. Start again.
• Getting a random new seed in an implicit search space is non-trivial!
– In 8-puzzle, if you generate a random state by making random moves from the current state, you are still not truly random (as you will continue to be in one of the two components)
• “Biased random walk”: avoid being greedy when choosing the seed for the next iteration
– With probability p, choose the best child; but with probability (1-p) choose one of the children randomly
• Use simulated annealing
– Similar to the previous idea—the probability p itself is increased asymptotically to one (so you are more likely to tolerate a non-greedy move in the beginning than towards the end)

With random restart or the biased random walk strategies, we can solve very large problems; million-queens problems in minutes!
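A sketch combining random restarts with the biased random walk for n-queens (all parameters, the bound, the greediness probability, and the seed, are illustrative):

```python
import random

def hill_climb(n=8, max_moves=200, restarts=50, p_greedy=0.9, seed=1):
    """Random-restart hill climbing with a biased random walk for n-queens.

    With probability p_greedy take the best neighbor (fewest attacks);
    otherwise take a random one. Restart from a fresh random board after
    max_moves moves.
    """
    rng = random.Random(seed)

    def attacks(b):
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if b[i] == b[j] or abs(b[i] - b[j]) == j - i)

    for _ in range(restarts):
        board = [rng.randrange(n) for _ in range(n)]   # random initial seed
        for _ in range(max_moves):
            if attacks(board) == 0:
                return board
            moves = [(c, r) for c in range(n) for r in range(n)
                     if r != board[c]]
            if rng.random() < p_greedy:                # greedy step
                col, row = min(moves, key=lambda m: attacks(
                    board[:m[0]] + [m[1]] + board[m[0] + 1:]))
            else:                                      # random-walk step
                col, row = rng.choice(moves)
            board[col] = row
        if attacks(board) == 0:
            return board
    return None

solution = hill_climb()
```

The random-walk steps let the search escape local minima that pure greedy hill-climbing would get stuck in; the restarts make it asymptotically complete.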

Page 38: Uniform Cost Search