Post on 19-Dec-2015
8/29
Administrative
• Bouncing mails – Qle01; jmussem; rbalakr2
• Send me a working email address for the class list
• Blog posting issues
• Recitation session for Homework 1? – Mail sent by Will Cushing (respond to him) – Show of hands…
How the course topics stack up…
• Representation Mechanisms: logic (propositional; first-order), probabilistic logic
• Learning the models
• Search: blind, informed
• Planning
• Inference: logical resolution, Bayesian inference
Problem Solving Agents (Search-based Agents)
The important difference from the graph-search scenario you learned in CSE 310 is that you want to keep the graph implicit rather than explicit (i.e., generate only the part of the graph that is absolutely needed to get the optimal path). This is VERY important since for most problems the graphs are humongous.
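As a concrete sketch of what "implicit" means: the graph is given only by a successor function, and search materializes nodes on demand. The toy problem below (reach a target integer using +1 and ×2 moves) is made up for illustration; the point is that the full graph over all integers is never built.

```python
from collections import deque

def successors(n):
    """Successor function: defines the graph implicitly.
    The full graph is never materialized."""
    return [n + 1, 2 * n]

def bfs_implicit(start, goal):
    """Breadth-first search that generates only the nodes it needs."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(bfs_implicit(1, 11))  # [1, 2, 4, 5, 10, 11]
```

Only a few dozen integers are ever generated, even though the "graph" is infinite.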
What happens when the domain is inaccessible?
Given a state space of size n:
• the single-state problem searches for a path in a graph of size n
• the multiple-state problem searches for a path in a graph of size 2^n
• the contingency problem searches for a sub-graph in a graph of size 2^n
Utility of eyes (sensors) is reflected in the size of the effective search space!
In general this is a subgraph rather than a tree (loops may be needed; consider closing a faulty door).
2^n is the EVIL that every CS student's nightmares are made of.
Search in the Multi-state (inaccessible) version
The set of states is called a “Belief State,” so we are searching in the space of belief states. You search in this space even if your initial state is known but actions are non-deterministic. Sensing reduces state uncertainty.
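A minimal sketch of the two belief-state operations, on a made-up two-square toy domain (all names here are hypothetical): acting maps a belief state to the union of possible outcomes; sensing intersects it with the states consistent with the observation.

```python
# A belief state is a set of possible physical states.

def act(belief, outcomes):
    """Apply a (possibly non-deterministic) action.
    outcomes maps each state to its set of possible result states."""
    result = set()
    for s in belief:
        result |= outcomes[s]
    return frozenset(result)

def sense(belief, consistent):
    """Sensing shrinks the belief state to the observation-consistent part."""
    return frozenset(belief & consistent)

# Toy domain: we don't know which of two squares we are in.
belief = frozenset({"L-dirty", "R-dirty"})
suck = {"L-dirty": {"L-clean"}, "R-dirty": {"R-clean"}}
belief = act(belief, suck)            # now {L-clean, R-clean}: still uncertain
belief = sense(belief, {"L-clean"})   # observation says we are in L
assert belief == frozenset({"L-clean"})
```

With n physical states there are 2^n such sets, which is exactly why the effective search space blows up without sensing.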
General Search
All search algorithms must do the goal-test only when the node is picked up for expansion (not when it is first generated).
Search algorithms differ based on the specific queuing function they use
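The "differ only in the queuing function" point can be made concrete with a single generic search routine whose insertion policy is a parameter (the graph and names below are illustrative, not from the slides):

```python
from collections import deque

def generic_search(start, goal_test, successors, insert):
    """Generic tree search: 'insert' is the queuing policy that
    distinguishes BFS, DFS, etc."""
    frontier = deque([(start, [start])])
    while frontier:
        state, path = frontier.popleft()   # goal-test when picked for expansion
        if goal_test(state):
            return path
        for nxt in successors(state):
            insert(frontier, (nxt, path + [nxt]))
    return None

succ = {"A": ["B", "G"], "B": ["C"], "C": ["D"], "D": ["G"], "G": []}.get
bfs = lambda q, item: q.append(item)      # FIFO queue -> breadth-first
dfs = lambda q, item: q.appendleft(item)  # LIFO stack -> depth-first

print(generic_search("A", lambda s: s == "G", succ, bfs))  # ['A', 'G']
```

Swapping in a priority queue ordered by g(n) or f(n) gives uniform cost and A* from the same skeleton.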
Breadth-first search on a uniform tree with b = 10. Assume 1000 nodes expanded/sec and 100 bytes/node.
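Under those slide assumptions (b = 10, 1000 nodes/sec, 100 bytes/node), the time and memory for BFS to depth d can be tabulated directly; the function below just applies the geometric-series node count for a full tree of depth d:

```python
def bfs_cost(b=10, d=4, rate=1000, node_bytes=100):
    """Rough time/memory for BFS on a uniform tree: the tree down to
    depth d has (b^(d+1) - 1) / (b - 1) nodes."""
    nodes = (b ** (d + 1) - 1) // (b - 1)
    seconds = nodes / rate
    megabytes = nodes * node_bytes / 1e6
    return nodes, seconds, megabytes

for d in range(2, 7):
    n, t, mb = bfs_cost(d=d)
    print(f"depth {d}: {n} nodes, {t:.1f} s, {mb:.1f} MB")
```

Each extra level multiplies both time and memory by roughly b = 10, which is why memory, not time, is usually what kills BFS first.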
Qn: Is there a way of getting linear memory search that is complete and optimal?
The search is “complete” now (since there is a finite space to be explored), but still suboptimal.
8/31
We typically analyze properties of search algorithms on uniform trees, with uniform branching factor b and goal depth d (the tree itself may go to depth d_t).
Num iterations: (d+1)
Asymptotic ratio of # nodes expanded by IDDFS vs. DFS: (b+1)/(b-1) (approaches 1 when b is large)
IDDFS: Review

[Example search tree over nodes A, B, C, D, G]

DFS: A, B, G
BFS: A, B, C, D, G
IDDFS: (A), (A, B, G)

Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.
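A short sketch of iterative deepening, run on a toy graph shaped like the review example (the dictionary of successors is an assumption, not the slide's exact figure): repeated depth-limited DFS with a growing limit.

```python
def dls(state, goal, successors, limit, path):
    """Depth-limited DFS: give up below depth 'limit'."""
    if state == goal:
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        found = dls(nxt, goal, successors, limit - 1, path + [nxt])
        if found:
            return found
    return None

def iddfs(start, goal, successors, max_depth=50):
    """Iterative deepening: rerun depth-limited search with limit 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        found = dls(start, goal, successors, limit, [start])
        if found:
            return found
    return None

succ = {"A": ["B", "G"], "B": ["C"], "C": ["D"], "D": ["G"], "G": []}.get
print(iddfs("A", "G", succ))  # ['A', 'G']
```

Like BFS, it finds the shallowest goal; like DFS, it only ever stores the current path, which is the linear-memory property the slides ask about.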
[The same nodes A, B, C, D, G, now with a cycle in the graph]

DFS: A, B, A, B, A, B, A, B, A, B, … (can keep re-expanding the loop)
BFS: A, B, G
IDDFS: (A), (A, B, G)

Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.
Search on undirected graphs or directed graphs with cycles… Cycles galore…
Graph (instead of tree) Search: Handling repeated nodes
Main points:
• Repeated expansions are a bigger issue for DFS than for BFS or IDDFS.
• Trying to remember all previously expanded nodes and comparing the new nodes with them is infeasible:
  – space becomes exponential
  – duplicate checking can also be exponential
• Partial reduction in repeated expansion can be done by:
  – checking whether any child of a node n has the same state as the parent of n
  – checking whether any child of a node n has the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n)
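The ancestor-checking trick from the last bullet can be sketched in a few lines; the undirected toy graph below is an assumption chosen so that plain DFS could loop A, B, A, B, … forever:

```python
def dfs_no_ancestor_repeat(state, goal, successors, path):
    """DFS that refuses to revisit any state on the current path
    (at most d comparisons per node, where d is the node's depth)."""
    if state == goal:
        return path
    for nxt in successors(state):
        if nxt in path:          # nxt equals some ancestor: skip it
            continue
        found = dfs_no_ancestor_repeat(nxt, goal, successors, path + [nxt])
        if found:
            return found
    return None

# Undirected version of the A-B-C-D-G example: every edge goes both ways.
succ = {"A": ["B", "G"], "B": ["A", "C"], "C": ["B", "D"],
        "D": ["C", "G"], "G": ["A", "D"]}.get
print(dfs_no_ancestor_repeat("A", "G", succ, ["A"]))  # ['A', 'B', 'C', 'D', 'G']
```

This only prevents cycles through the current path; it does not prevent re-expanding a state reached along two different branches, which is why it is only a partial reduction.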
Uniform Cost Search

[Example graph: A→B cost 1, A→G cost 9, B→C cost 1, C→D cost 1, D→G cost 2]

N0: A (0)
N1: B (1)   N2: G (9)
N3: C (2)
N4: D (3)
N5: G (5)
Completeness? Optimality? If d < d′, then paths of distance d are explored before those of distance d′; this is a branch & bound argument (and holds as long as all operator costs are positive). Efficiency? (As bad as blind search…)
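A sketch of uniform cost search on that example graph (the edge dictionary encodes the costs inferred from the trace above; everything else is standard priority-queue search):

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Uniform cost search: always expand the frontier node with the
    smallest g(n); goal-test only when the node is picked for expansion."""
    frontier = [(0, start, [start])]          # (g, state, path)
    best_g = {}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                           # stale duplicate entry
        best_g[state] = g
        for nxt, cost in edges.get(state, []):
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

# The example graph: the direct edge A->G costs 9, the long way costs 5.
edges = {"A": [("B", 1), ("G", 9)], "B": [("C", 1)],
         "C": [("D", 1)], "D": [("G", 2)]}
print(uniform_cost_search("A", "G", edges))  # (5, ['A', 'B', 'C', 'D', 'G'])
```

Note that the node G(9) sits on the queue but is never expanded: the cheaper G(5) is popped first, which is exactly why goal-testing at expansion time (not generation time) preserves optimality.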
[“Bait & Switch” graph: A→B cost 0.1, A→G cost 9, B→C cost 0.1, C→D cost 0.1, D→G cost 25]
Notation:
• C(n, n′): cost of the edge between n and n′
• g(n): distance of n from the root
• dist(n, n′′): shortest distance between n and n′′
Visualizing Breadth-First & Uniform Cost Search
Breadth-First goes level by level
This is also a proof of optimality…
Proof of Optimality of Uniform Cost Search

Let N be the goal node we output, and suppose there is another goal node N′. We want to prove that g(N′) >= g(N). Suppose this is not true, i.e., g(N′) < g(N) (Assumption A1).

When N was picked up for expansion, either N′ itself or some ancestor of N′, say N′′, must have been on the search queue. If we picked N instead of N′′ for expansion, it was because

g(N) <= g(N′′) (Fact f1)

But g(N′) = g(N′′) + dist(N′′, N′), so g(N′) >= g(N′′). (This step holds only because dist(N′′, N′) >= 0, which is guaranteed if every operator has positive cost.) Combining with f1, we have g(N) <= g(N′). But this contradicts Assumption A1.

[Diagram: search tree rooted at N0, with goal nodes N and N′, and N′′ an ancestor of N′ still on the queue]
“Informing” Uniform search…
[The “Bait & Switch” graph again: A→B cost 0.1, A→G cost 9, B→C cost 0.1, C→D cost 0.1, D→G cost 25]

N0: A (0)
N1: B (0.1)   N2: G (9)
N3: C (0.2)
N4: D (0.3)
N5: G (25.3)
Would be nice if we could tell that N2 is better than N1.
-- Need to take into account not just the distance so far, but also the distance to the goal.
-- Computing the true distance to the goal is as hard as the full search.
-- So, try “bounds” h(n): prioritize nodes in terms of f(n) = g(n) + h(n).
Two bounds: h1(n) <= h*(n) <= h2(n). Which guarantees optimality?
If h1(n) <= h2(n) <= h*(n), which is the better function?
Admissibility
Informedness
A*