CHAPTER 4

Informed search algorithms


Outline

Best-first search
Greedy best-first search
A* search
Heuristics
Local search algorithms
Hill-climbing search
Simulated annealing search
Local beam search
Genetic algorithms


Review: Tree search

A search strategy is defined by picking the order of node expansion


Best-first search

Idea: use an evaluation function f(n) for each node as an estimate of "desirability". Expand the most desirable unexpanded node.

Implementation: order the nodes in the fringe in decreasing order of desirability.

Special cases: greedy best-first search, A* search.
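For concreteness, here is a minimal Python sketch of a generic best-first search (my illustration, not code from the slides); the `successors` and `f` interfaces are assumptions chosen for the example.

```python
import heapq

def best_first_search(start, is_goal, successors, f):
    """Generic best-first search: always expand the unexpanded node with the best (lowest) f.

    `successors(state)` yields (next_state, step_cost) pairs;
    `f(state, path_cost)` is the evaluation function ("desirability").
    Returns the path found as a list of states, or None.
    """
    frontier = [(f(start, 0.0), start, 0.0, [start])]   # priority queue ordered by f
    explored = set()
    while frontier:
        _, state, g, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                g2 = g + cost
                heapq.heappush(frontier, (f(nxt, g2), nxt, g2, path + [nxt]))
    return None
```

The two special cases below differ only in the choice of f.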


Romania with step costs in km


Greedy best-first search

Evaluation function f(n) = h(n) (heuristic) = estimate of the cost from n to the goal. E.g., hSLD(n) = straight-line distance from n to Bucharest.

Greedy best-first search expands the node that appears to be closest to goal
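Greedy best-first search is the generic routine above with f(n) = h(n). The tiny road map and straight-line estimates below are made-up stand-ins for the Romania data, purely for illustration.

```python
# Hypothetical mini road map: city -> [(neighbour, road distance)]
roads = {
    "A": [("B", 75), ("C", 140)],
    "B": [("A", 75), ("D", 120)],
    "C": [("A", 140), ("D", 80)],
    "D": [("B", 120), ("C", 80), ("Goal", 99)],
    "Goal": [],
}
h_sld = {"A": 200, "B": 180, "C": 110, "D": 90, "Goal": 0}   # straight-line estimates

path = best_first_search(
    start="A",
    is_goal=lambda s: s == "Goal",
    successors=lambda s: roads[s],
    f=lambda s, g: h_sld[s],        # greedy: ignore g(n), use only h(n)
)
print(path)                          # ['A', 'C', 'D', 'Goal'] on this toy map
```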


Greedy best-first search example (a sequence of figure slides showing the expansion step by step; figures not reproduced)

Properties of greedy best-first search

Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …

Time? O(b^m), but a good heuristic can give dramatic improvement

Space? O(b^m) – keeps all nodes in memory

Optimal? No


A* search

Idea: avoid expanding paths that are already expensive

Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal
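A self-contained A* sketch along these lines (mine, not the book's pseudocode; the interfaces match the toy example above):

```python
import heapq

def a_star(start, is_goal, successors, h):
    """A* search: expand the frontier node with the lowest f(n) = g(n) + h(n).

    `successors(state)` yields (next_state, step_cost) pairs;
    `h(state)` is the heuristic estimate of the cost from state to a goal.
    Returns (path, cost) or (None, inf).
    """
    frontier = [(h(start), 0.0, start, [start])]          # entries are (f, g, state, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                                      # stale queue entry, skip it
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):        # found a cheaper path to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

With h(n) = 0 this behaves like uniform-cost search, which is the link to Dijkstra's algorithm discussed later.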


A* search example (a sequence of figure slides showing the expansion step by step; figures not reproduced)

Admissible heuristics

A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.

An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic

Example: hSLD(n) (never overestimates the actual road distance)

Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
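On a small finite graph, admissibility can be sanity-checked directly by computing the true cost-to-go h*(n) with a Dijkstra pass over reversed edges and comparing it to h(n). The helper below is an illustrative sketch; `states` is assumed to list every state of the graph.

```python
import heapq

def true_costs_to_goal(successors, states, goal):
    """h*(n): exact cheapest cost from each state to the goal (Dijkstra on reversed edges)."""
    reverse = {s: [] for s in states}                 # `states` must list every state
    for s in states:
        for nxt, cost in successors(s):
            reverse[nxt].append((s, cost))
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, s = heapq.heappop(pq)
        if d > dist.get(s, float("inf")):
            continue
        for prev, cost in reverse[s]:
            if d + cost < dist.get(prev, float("inf")):
                dist[prev] = d + cost
                heapq.heappush(pq, (d + cost, prev))
    return dist

def is_admissible(h, successors, states, goal):
    """True iff h(n) <= h*(n) for every state n."""
    h_star = true_costs_to_goal(successors, states, goal)
    return all(h(s) <= h_star.get(s, float("inf")) for s in states)
```

For instance, on the toy map above, `is_admissible(h_sld.get, lambda s: roads[s], roads, "Goal")` returns True.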


Optimality of A* (proof)

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

f(G2) = g(G2)    since h(G2) = 0
g(G2) > g(G)     since G2 is suboptimal
f(G)  = g(G)     since h(G) = 0
f(G2) > f(G)     from above



f(G2) > f(G)                          from above
h(n) ≤ h*(n)                          since h is admissible
g(n) + h(n) ≤ g(n) + h*(n) = g(G)     since n is on an optimal path to G
f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion.


Consistent heuristics

A heuristic is consistent if, for every node n and every successor n' of n generated by any action a,

h(n) ≤ c(n,a,n') + h(n')

If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n,a,n') + h(n')
      ≥ g(n) + h(n)
      = f(n)

i.e., f(n) is non-decreasing along any path.

Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
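Consistency is a local condition on each edge, so for a finite graph it can be checked directly; another illustrative helper in the same style:

```python
def is_consistent(h, successors, states):
    """Check the triangle inequality h(n) <= c(n, a, n') + h(n') on every edge."""
    for n in states:
        for n_prime, step_cost in successors(n):
            if h(n) > step_cost + h(n_prime):
                return False
    return True
```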


Optimality of A*

A* expands nodes in order of increasing f value.
Gradually adds "f-contours" of nodes.
Contour i contains all nodes with f = f_i, where f_i < f_(i+1).


Properties of A*

Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))

Time? Exponential

Space? Keeps all nodes in memory

Optimal? Yes


Heuristic Functions


Heuristics and A* Search

The heuristic function h(n) tells A* an estimate of the minimum cost from any vertex n to the goal.

The heuristic can be used to control A*'s behavior.

If h(n) is 0, then only g(n) plays a role, and A* turns into Dijkstra's algorithm.

If h(n) is always lower than (or equal to) the cost of moving from n to the goal, then A* is guaranteed to find a shortest path.

If h(n) is exactly equal to the cost of moving from n to the goal, then A* will only follow the best path and never expand anything else, making it very fast.


Heuristics and A* Search

The heuristic can be used to control A*'s behavior.

If h(n) is sometimes greater than the cost of moving from n to the goal, then A* is not guaranteed to find a shortest path, but it can run faster.

At the other extreme, if h(n) is very high relative to g(n), then only h(n) plays a role, and A* turns into Greedy Best-First Search.
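These regimes are easy to explore with the a_star sketch above by scaling the heuristic with a weight w: w = 0 behaves like Dijkstra/uniform-cost search, w = 1 is ordinary A*, and a large w pushes it toward greedy best-first search. The weighting is my illustration, reusing the toy map defined earlier.

```python
# Hypothetical experiment on the toy map: same problem, different heuristic weights.
for w in (0.0, 1.0, 5.0):
    path, cost = a_star(
        start="A",
        is_goal=lambda s: s == "Goal",
        successors=lambda s: roads[s],
        h=lambda s, w=w: w * h_sld[s],   # w = 0: Dijkstra-like; w = 1: A*; w >> 1: near-greedy
    )
    print(f"w={w}: path={path}, cost={cost}")
```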


Heuristics and A* Search

Suppose your game has two types of terrain, Flat and Mountain

Movement costs are 1 for flat land and 3 for mountains

A* is going to search three times as far along flat land as it does along mountainous land

You can speed up A*’s search by using 1.5 as the heuristic distance between two map spaces.

Alternatively, you can speed up A*'s search by decreasing the amount it searches for paths around mountains: just tell A* that the movement cost on mountains is 2 instead of 3.
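In code, both tweaks are one-line changes; the grid setting below is a made-up sketch of the idea, not part of the slides.

```python
# Toy grid setting: Flat squares cost 1 to enter, Mountain squares cost 3.
def manhattan(a, b):
    """Grid distance between two (row, col) squares."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def h_flat(n, goal):
    return 1.0 * manhattan(n, goal)     # never overestimates: shortest paths guaranteed

def h_compromise(n, goal):
    return 1.5 * manhattan(n, goal)     # the slide's 1.5-per-square trade-off:
                                        # fewer nodes expanded, optimality no longer guaranteed

def movement_cost(terrain):
    """Alternative tweak: undercharge mountains (2 instead of 3) so A*
    spends less effort searching for detours around them."""
    return {"Flat": 1, "Mountain": 2}[terrain]
```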


Heuristics

Heuristic (/hjʉˈrɪstɨk/; Greek: "Εὑρίσκω", "find" or "discover") refers to experience-based techniques for problem solving, learning, and discovery that find a solution, which is not guaranteed to be optimal, but good enough for a given set of goals. (Wikipedia)

Trial and Error


Polya – How to Solve It

If you are having difficulty understanding a problem, try drawing a picture.

If you can't find a solution, try assuming that you have a solution and seeing what you can derive from that ("working backward").

If the problem is abstract, try examining a concrete example.

Try solving a more general problem first (the "inventor's paradox": the more ambitious plan may have more chances of success).


Admissible heuristics

E.g., for the 8-puzzle:

h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)

h1(S) = ?
h2(S) = ?


8-puzzle

http://mypuzzle.org/sliding


Admissible heuristics

E.g., for the 8-puzzle:

h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)

h1(S) = 8
h2(S) = 3+1+2+2+2+3+3+2 = 18
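The two heuristics are easy to compute. The start state S below is assumed to be the standard textbook 8-puzzle example (the slide's figure is not reproduced); 0 denotes the blank.

```python
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)          # blank (0) in the top-left corner

S = (7, 2, 4,
     5, 0, 6,
     8, 3, 1)             # assumed start state from the textbook figure

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of every tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

print(h1(S), h2(S))       # -> 8 18
```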


Dominance

If h2(n) ≥ h1(n) for all n (both admissible) then h2 dominates h1

h2 is better for search

Typical search costs (average number of nodes expanded):

d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes


Relaxed problems

A problem with fewer restrictions on the actions is called a relaxed problem

The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem

If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution

If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution


Local search algorithms

In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution

State space = set of "complete" configurations

Find configuration satisfying constraints, e.g., n-queens

In such cases, we can use local search algorithms: keep a single "current" state and try to improve it.


Example: n-queens

Put n queens on an n × n board with no two queens on the same row, column, or diagonal


Hill-climbing search

"Like climbing Everest in thick fog with amnesia"


Hill-climbing search

Problem: depending on initial state, can get stuck in local maxima
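A minimal steepest-ascent hill-climbing loop (my sketch of the standard scheme, not the book's exact pseudocode); `neighbours` and `value` are whatever the problem supplies.

```python
def hill_climbing(initial, neighbours, value):
    """Steepest-ascent hill climbing: move to the best neighbour until none improves.

    The returned state may only be a local maximum (or the edge of a plateau)."""
    current = initial
    while True:
        best = max(neighbours(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current            # no neighbour is strictly better: stop here
        current = best
```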


Hill-climbing search: 8-queens problem

h = number of pairs of queens that are attacking each other, either directly or indirectly

h = 17 for the above state
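With the usual complete-state formulation (one queen per column, state[c] = row of the queen in column c), the heuristic and a move set for hill climbing can be sketched as follows; to minimise h with the maximiser sketched above, use value = lambda s: -conflicts(s).

```python
import random
from itertools import combinations

def conflicts(state):
    """h: number of pairs of queens attacking each other (same row or same diagonal)."""
    return sum(
        1
        for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
    )

def queen_neighbours(state):
    """All states reachable by moving one queen to a different row in its own column."""
    for col in range(len(state)):
        for row in range(len(state)):
            if row != state[col]:
                yield state[:col] + (row,) + state[col + 1:]

start = tuple(random.randrange(8) for _ in range(8))
best = hill_climbing(start, queen_neighbours, value=lambda s: -conflicts(s))
print(best, conflicts(best))    # conflicts == 0 only if no local minimum was hit
```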


Hill-climbing search: 8-queens problem

• A local minimum with h = 1


Simulated annealing search

Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency
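A sketch of the usual simulated-annealing loop over the same neighbours/value interface; the exponential cooling schedule and the parameter values are arbitrary illustrative choices.

```python
import math
import random

def simulated_annealing(initial, neighbours, value,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Accept every uphill move; accept a downhill move with probability exp(delta / T)."""
    current, t = initial, t0
    while t > t_min:
        candidate = random.choice(list(neighbours(current)))
        delta = value(candidate) - value(current)      # > 0 means an improvement
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                        # occasionally accept a "bad" move
        t *= cooling                                   # temperature decreases each step
    return current
```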


Properties of simulated annealing search

One can prove: If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1

Widely used in VLSI layout, airline scheduling, etc.


Local beam search

Keep track of k states rather than just one

Start with k randomly generated states

At each iteration, all the successors of all k states are generated

If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
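A sketch of local beam search in the same style (k is taken from the number of initial states; the iteration cap is an arbitrary choice).

```python
def local_beam_search(initials, neighbours, value, is_goal, max_iters=1000):
    """Keep the k best states (k = len(initials)) among all successors each iteration."""
    states = list(initials)
    k = len(states)
    for _ in range(max_iters):
        if any(is_goal(s) for s in states):
            return next(s for s in states if is_goal(s))
        pool = [n for s in states for n in neighbours(s)]     # all successors of all k states
        if not pool:
            break
        states = sorted(pool, key=value, reverse=True)[:k]    # select the k best
    return max(states, key=value)
```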


Genetic algorithms

A successor state is generated by combining two parent states

Start with k randomly generated states (population)

A state is represented as a string over a finite alphabet (often a string of 0s and 1s)

Evaluation function (fitness function). Higher values for better states.

Produce the next generation of states by selection, crossover, and mutation
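A compact sketch of this loop for tuple-encoded states; the population size, mutation rate, and fitness-proportional (roulette-wheel) selection are illustrative choices rather than anything prescribed by the slides.

```python
import random

def genetic_algorithm(population, fitness, gene_values, generations=200, p_mutate=0.05):
    """Selection (fitness-proportional) + single-point crossover + random mutation."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(len(population)):
            mom, dad = random.choices(population, weights=weights, k=2)    # selection
            cut = random.randrange(1, len(mom))                            # crossover point
            child = mom[:cut] + dad[cut:]
            child = tuple(
                random.choice(gene_values) if random.random() < p_mutate else g
                for g in child
            )                                                              # mutation
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

# Hypothetical use for 8-queens: fitness = number of non-attacking pairs (max 28).
# best = genetic_algorithm(
#     population=[tuple(random.randrange(8) for _ in range(8)) for _ in range(20)],
#     fitness=lambda s: 28 - conflicts(s),      # conflicts() from the 8-queens sketch above
#     gene_values=range(8),
# )
```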


Genetic algorithms

Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)

24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.



Online Searching

An online search agent operates by interleaving computation and action: first it takes an action, then it observes the environment and computes the next action.

However, there is a penalty for sitting around and computing too long. This is why we need efficient code.

Usually used in exploration situations.


Offline Search vs. Online Search

Offline search:
Knows the “map” of the situation
Basically finds the shortest path knowing the whole layout of the situation
Works like a GPS navigation system

Online search:
Doesn’t know the “map” of the situation
Has to explore and find out where to go, then determine the shortest path


Search Patterns

Often, online search agents search using a depth-first search pattern

The search pattern must take into account whether or not the state space is safely explorable.

The random-walk method works, but not very well: it can take exponentially many steps to find the goal.

Hill-climbing search is by its nature an online search method, but it keeps only one state in memory and cannot go back. Depth-first search is more efficient.
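A simplified sketch in the spirit of an online depth-first exploration agent: it acts, observes the resulting state, and backtracks when a state has no untried actions left. The interfaces are my own simplification and assume the space is safely explorable (every action reversible).

```python
def online_dfs_explore(start, actions, do_action, is_goal):
    """Online depth-first exploration.

    The agent only learns the map by acting: `do_action(state, action)` executes the
    action in the environment and returns the observed next state.
    Returns the sequence of states visited, ending at a goal, or None.
    """
    untried = {}                 # state -> actions not yet taken from it
    came_from = {}               # state -> (previous state, action that led here)
    state = start
    path_taken = [state]
    while not is_goal(state):
        if state not in untried:
            untried[state] = list(actions(state))
        if untried[state]:
            action = untried[state].pop()
            nxt = do_action(state, action)    # act first, then observe the outcome
            if nxt not in came_from and nxt != start:
                came_from[nxt] = (state, action)
        elif state in came_from:
            nxt, _ = came_from[state]         # dead end: move back one step
                                              # (a real agent would execute the reverse action)
        else:
            return None                       # back at the start with nothing left to try
        state = nxt
        path_taken.append(state)
    return path_taken
```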