Search (continued) CPSC 386 Artificial Intelligence Ellen Walker Hiram College.
Search (continued)

CPSC 386 Artificial Intelligence
Ellen Walker
Hiram College
Evaluating Search Strategies
• Completeness
  – Will a solution always be found if one exists?
• Optimality
  – Will the optimal (least-cost) solution be found?
• Time Complexity
  – How long does it take to find the solution?
  – Often measured by the number of nodes expanded
• Space Complexity
  – How much memory is needed to perform the search?
  – Measured by the maximum number of nodes stored at once
Comparison of Strategies

| Criterion | Breadth-First | Uniform-Cost | Depth-First | Depth-Limited | Iterative Deepening | Bidirectional |
|-----------|---------------|--------------|-------------|---------------|---------------------|---------------|
| Complete? | Yes (if b finite) | Yes | No | No | Yes | Yes |
| Time | O(b^(d+1)) | O(b^(C*/ε)) | O(b^m) | O(b^l) | O(b^d) | O(b^(d/2)) |
| Space | O(b^(d+1)) | O(b^(C*/ε)) | O(bm) | O(bl) | O(bd) | O(b^(d/2)) |
| Optimal? | Yes (if all step costs equal) | Yes | No | No | Yes (if all step costs equal) | Yes (if all step costs equal, using BFS) |

(b = branching factor, d = depth of shallowest solution, m = maximum depth, l = depth limit, C* = optimal solution cost, ε = minimum step cost)

(Adapted from Figure 3.17, p. 81)
Avoiding Repeated States
• All visited nodes must be saved to avoid looping
  – Closed list: all expanded nodes
  – Open list: the fringe of unexpanded nodes
• If the current node matches a node on the closed list, it is discarded
  – In some algorithms, the new node might be better, and the old one is discarded instead
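The open/closed-list idea can be sketched as a small graph search (my own Python illustration, not from the slides; the cyclic toy graph is invented to show why the closed list matters):

```python
from collections import deque

def graph_search(start, goal, successors):
    """Breadth-first graph search: a closed set of expanded states
    prevents looping; the open list holds the unexpanded fringe."""
    open_list = deque([start])
    closed = set()                      # all expanded states
    parent = {start: None}              # back links for path recovery
    while open_list:
        state = open_list.popleft()
        if state == goal:               # reconstruct path via parent links
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        closed.add(state)
        for nxt in successors(state):
            if nxt in closed or nxt in parent:   # discard repeated states
                continue
            parent[nxt] = state
            open_list.append(nxt)
    return None

# Toy cycle of states 0..4: without the closed list the search would loop forever
succ = lambda s: [(s + 1) % 5, (s - 1) % 5]
print(graph_search(0, 3, succ))   # [0, 4, 3]
```

Without the `closed` check, the cyclic successor function would keep regenerating already-expanded states.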
Partial Information
• Sensorless problems
  – Multiple possible initial states, multiple possible next states
  – Consider all of them when forming a solution, e.g. table tipping
• Contingency problems
  – New information arrives after each action
  – E.g. adversarial search (multiplayer games)
Informed Search Strategies
• Also called heuristic search
• All are variations of best-first search
  – The next node to expand is the one “most likely” to lead to a solution
  – Uses a priority queue, like uniform-cost search, but the priority is based on additional knowledge of the problem
  – The priority function for the priority queue is usually called f(n)
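The priority-queue scheme can be sketched generically (a simplified outline of my own, not from the slides; for brevity, f here depends only on the state, not on the path taken to reach it):

```python
import heapq

def best_first(start, goal, successors, f):
    """Generic best-first search: always pop the open node with the
    lowest f(n).  The choice of f is the strategy: f = g gives
    uniform-cost search, f = h greedy best-first, f = g + h A*."""
    counter = 0                                # tie-breaker for the heap
    frontier = [(f(start), counter, start)]
    seen = {start}                             # avoid re-queueing states
    order = []                                 # record of expansion order
    while frontier:
        _, _, state = heapq.heappop(frontier)
        order.append(state)
        if state == goal:
            return order
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                counter += 1
                heapq.heappush(frontier, (f(nxt), counter, nxt))
    return order

# Greedy expansion on a line of states 0..9 heading for 6: f = distance to goal
succ = lambda s: [n for n in (s - 1, s + 1) if 0 <= n <= 9]
print(best_first(2, 6, succ, f=lambda s: abs(6 - s)))   # [2, 3, 4, 5, 6]
```

The `counter` tie-breaker keeps the heap from ever comparing two states directly, which matters when states are not orderable.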
Heuristic Function
• Heuristic, from the Greek heuriskein, “to find” or “to discover”
• Heuristic function h(n) = estimated cost from the current state to the goal
• Therefore, our best estimate of total path cost is g(n) + h(n)
  – Recall that g(n) is the cost from the initial state to the current state
  – If we treat uniform-cost search as a heuristic search, f(n) = g(n) and h(n) = 0
Algorithms
• Greedy best-first search
  – Always expand the node that appears closest to the goal
  – f(n) = h(n)
• A* search
  – Expand the node with the best estimated total cost
  – f(n) = g(n) + h(n)
  – The fun part is dealing with loops, when a node being expanded matches a node on the open or closed list
Greedy Best-First Search
• Like depth-first search, it tends to stay on a path once one is chosen
• Dislikes solutions that require moving away from the goal to reach it
  – Example: the 8-puzzle
• Non-optimal
• Worst-case exponential time, but a good heuristic function helps!
• Also called “hill climbing” (though traditional hill climbing doesn’t backtrack)
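For the 8-puzzle, one common textbook heuristic (my choice for illustration; the slide does not name a specific h) is Manhattan distance, and greedy best-first simply expands the state with the lowest such h:

```python
def manhattan(state, goal):
    """Sum over tiles of |row difference| + |column difference|.
    States are 9-tuples read row by row; 0 is the blank (not counted)."""
    dist = 0
    for tile in range(1, 9):
        r1, c1 = divmod(state.index(tile), 3)   # tile's current row, column
        r2, c2 = divmod(goal.index(tile), 3)    # tile's home row, column
        dist += abs(r1 - r2) + abs(c1 - c2)
    return dist

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # tile 8 is one slide from home
print(manhattan(start, goal))          # 1
```

Manhattan distance never overestimates, since each move slides one tile one square, which is why it also works well for A* later.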
A* Search Algorithm
– Put the initial state (only) on OPEN
– Until a goal node is found:
  • If OPEN is empty, fail
  • BESTNODE := best node on OPEN (lowest g + h value)
  • If BESTNODE is a goal node, succeed
  • Move BESTNODE from OPEN to CLOSED
  • Generate the successors of BESTNODE
A* (cont)
– For each SUCCESSOR in the successor list:
    Set SUCCESSOR's parent to BESTNODE (back link for recovering the path later)
    Compute g(SUCCESSOR) = g(BESTNODE) + cost of the link from BESTNODE to SUCCESSOR
    If SUCCESSOR is the same as a node on OPEN:
        OLD := the node on OPEN that matches SUCCESSOR
        Add OLD to BESTNODE's list of successors
        If g(SUCCESSOR) < g(OLD)  (the newly found path is cheaper):
            Reset OLD's parent to BESTNODE
            g(OLD) := g(SUCCESSOR)
A* (cont)
    Else if SUCCESSOR is the same as a node on CLOSED:
        OLD := the node on CLOSED that matches SUCCESSOR
        Add OLD to BESTNODE's list of successors
        If g(SUCCESSOR) < g(OLD)  (the newly found path is cheaper):
            Reset OLD's parent to BESTNODE
            g(OLD) := g(SUCCESSOR)
            Update g() of all successors of OLD
    Else add SUCCESSOR to BESTNODE's list of successors
Updating successors of a closed node
update_successors(NODE):
    If NODE's successor list is empty, return
    For each successor of NODE:
        If the successor's parent is NODE:
            g(successor) := g(NODE) + path cost
            update_successors(successor)
        Else if g(NODE) + path cost < g(successor):
            g(successor) := g(NODE) + path cost
            update_successors(successor)
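The OPEN/CLOSED bookkeeping above can be condensed into a short runnable sketch (my own Python rendering, under the assumption that successors come with explicit step costs; instead of re-threading parent pointers through OLD nodes, it re-opens any node for which a cheaper path is found):

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search.  successors(n) yields (neighbor, step_cost) pairs.
    A node is re-pushed onto the heap whenever a lower g is found;
    stale heap entries are skipped on pop ('lazy deletion')."""
    g = {start: 0}
    parent = {start: None}
    open_heap = [(h(start), start)]          # entries ordered by f = g + h
    closed = set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node in closed:
            continue                          # stale entry, already expanded
        if node == goal:                      # reconstruct path via back links
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g[goal]
        closed.add(node)
        for nxt, cost in successors(node):
            new_g = g[node] + cost
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g                # cheaper path: update and re-open
                parent[nxt] = node
                closed.discard(nxt)           # allow re-expansion of OLD nodes
                heapq.heappush(open_heap, (new_g + h(nxt), nxt))
    return None, float("inf")

# Toy graph (invented for illustration); h equals the true cost to D, so admissible
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(a_star('A', 'D', graph.__getitem__, h.__getitem__))
# (['A', 'B', 'C', 'D'], 3)
```

Re-pushing improved nodes has the same effect as the pseudocode's parent re-threading, at the cost of a few duplicate heap entries.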
A* Example (h = true cost)

[Figure: search graph with nodes A–I, labeled edge costs, and heuristic values equal to the true cost to the goal]
A* Search Example (h underestimates true cost)

[Figure: the same search graph with nodes A–I, but with heuristic values that underestimate the true cost to the goal]
Better h means better search
• When h = the true cost to the goal:
  – Only nodes on the correct path are expanded
  – The optimal solution is found
• When h < the true cost to the goal:
  – Additional nodes are expanded
  – The optimal solution is still found
• When h > the true cost to the goal:
  – The optimal solution can be overlooked
Comments on A* search
• A* is optimal if h(n) is admissible, i.e. it never overestimates the distance to the goal
• We don’t need all the bookkeeping if h(n) is consistent: the estimate never drops by more than the step cost along any edge, i.e. h(n) ≤ cost(n, n′) + h(n′) for every successor n′ of n
• If g(n) = 0, A* = greedy best-first search
• If h(n) = 0, A* = uniform-cost search; if in addition every step has the same cost k, f(n) orders nodes by depth, so A* = breadth-first search
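Consistency is exactly the condition h(n) ≤ cost(n, n′) + h(n′) on every edge, so it can be checked mechanically (a minimal check on an invented graph, for illustration only):

```python
def is_consistent(edges, h):
    """Consistent heuristic: h never drops by more than the step cost
    along any edge, i.e. h(n) <= cost(n, n') + h(n')."""
    return all(h[n] <= cost + h[m] for n, m, cost in edges)

edges = [('A', 'B', 1), ('B', 'C', 1), ('A', 'C', 4)]
h_good = {'A': 2, 'B': 1, 'C': 0}
h_bad = {'A': 4, 'B': 1, 'C': 0}     # drops by 3 across an edge of cost 1
print(is_consistent(edges, h_good))   # True
print(is_consistent(edges, h_bad))    # False
```

Every consistent heuristic is admissible (summing the edge inequalities along any path to the goal bounds h by the true cost), which is why consistency lets A* skip the re-opening bookkeeping.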