Midterm Review
cmsc421 Fall 2005
Outline
• Review the material covered by the midterm
• Questions?
Subjects covered so far…
• Search: Blind & Heuristic
• Constraint Satisfaction
• Adversarial Search
• Logic: Propositional and FOL
… and subjects to be covered
• planning
• uncertainty
• learning
• and a few more…
Search
Stating a Problem as a Search Problem
State space S
Successor function: x ∈ S → SUCCESSORS(x) ∈ 2^S
Arc cost
Initial state s0
Goal test: x ∈ S → GOAL?(x) = T or F
A solution is a path joining the initial node to a goal node
Basic Search Concepts
Search tree
Search node
Node expansion
Fringe of search tree
Search strategy: at each stage it determines which node to expand
Search Algorithm
1. If GOAL?(initial-state) then return initial-state
2. INSERT(initial-node, FRINGE)
3. Repeat:
   a. If empty(FRINGE) then return failure
   b. n ← REMOVE(FRINGE)
   c. s ← STATE(n)
   d. For every state s’ in SUCCESSORS(s):
      i. Create a new node n’ as a child of n
      ii. If GOAL?(s’) then return path or goal state
      iii. INSERT(n’, FRINGE)
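The steps above can be sketched in Python. This is a minimal sketch, not the course's reference implementation: the `successors` and `goal_test` callables and the example graph are hypothetical stand-ins for a concrete problem, and the FIFO fringe makes this particular instance breadth-first.

```python
from collections import deque

def tree_search(initial_state, successors, goal_test):
    """Generic tree search following the slide's pseudocode.
    A FIFO fringe makes this breadth-first; a LIFO stack
    would make it depth-first instead."""
    if goal_test(initial_state):
        return [initial_state]
    fringe = deque([[initial_state]])    # each node stores its path
    while fringe:
        path = fringe.popleft()          # REMOVE(FRINGE)
        s = path[-1]                     # STATE(n)
        for s2 in successors(s):
            new_path = path + [s2]       # child node n'
            if goal_test(s2):            # goal test at generation time
                return new_path
            fringe.append(new_path)      # INSERT(n', FRINGE)
    return None                          # failure
```

Note that this version tests the goal when a node is generated; the uniform-cost variant later in the review must instead test at expansion time.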
Performance Measures
Completeness: a search algorithm is complete if it finds a solution whenever one exists [What about the case when no solution exists?]
Optimality: a search algorithm is optimal if it returns an optimal solution whenever a solution exists
Complexity: measures the time and amount of memory required by the algorithm
Blind vs. Heuristic Strategies
Blind (or uninformed) strategies do not exploit state descriptions to select which node to expand next
Heuristic (or informed) strategies exploit state descriptions to select the “most promising” node to expand
Blind Strategies
Breadth-first
• Bidirectional
Depth-first
• Depth-limited
• Iterative deepening
Uniform-cost (variant of breadth-first)
Arc cost = c(action) ≥ 0
Comparison of Strategies
Breadth-first is complete and optimal, but has high space complexity
Depth-first is space efficient, but is neither complete, nor optimal
Iterative deepening is complete and optimal, with the same space complexity as depth-first and almost the same time complexity as breadth-first
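Iterative deepening can be sketched as repeated depth-limited search. The `successors`/`goal_test` interface and the bound `max_depth` are hypothetical; a concrete problem would supply them.

```python
def depth_limited(state, successors, goal_test, limit):
    """Depth-first search that stops descending at the depth limit."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for s2 in successors(state):
        result = depth_limited(s2, successors, goal_test, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(initial, successors, goal_test, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ...:
    depth-first memory use, but finds a shallowest goal first."""
    for limit in range(max_depth + 1):
        result = depth_limited(initial, successors, goal_test, limit)
        if result is not None:
            return result
    return None
```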
Avoiding Revisited States
Requires comparing state descriptions
Breadth-first search:
• Store all states associated with generated nodes in CLOSED
• If the state of a new node is in CLOSED, then discard the node
Avoiding Revisited States
Depth-first search:
Solution 1:
– Store all states associated with nodes in the current path in CLOSED
– If the state of a new node is in CLOSED, then discard the node
Only avoids loops
Solution 2:
– Store all generated states in CLOSED
– If the state of a new node is in CLOSED, then discard the node
Same space complexity as breadth-first!
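Solution 2 above can be sketched as a search that keeps every generated state in a CLOSED set. The FIFO fringe makes this the breadth-first case; the graph and callables are illustrative assumptions.

```python
from collections import deque

def bfs_with_closed(initial, successors, goal_test):
    """Breadth-first search that discards any node whose state
    is already in CLOSED (Solution 2 on the slide), so it is
    safe on graphs with cycles."""
    fringe = deque([[initial]])
    closed = {initial}                 # CLOSED: all generated states
    while fringe:
        path = fringe.popleft()
        s = path[-1]
        if goal_test(s):
            return path
        for s2 in successors(s):
            if s2 in closed:           # revisited state: discard
                continue
            closed.add(s2)
            fringe.append(path + [s2])
    return None
```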
Uniform-Cost Search (Optimal)
Each arc has some cost c > 0
The cost of the path to each fringe node N is g(N) = sum of the arc costs along the path
The goal is to generate a solution path of minimal cost
The queue FRINGE is sorted in increasing cost
Need to modify the search algorithm
[Figure: example graph with initial state S, nodes A, B, C, and goal G; arc costs S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5, C→G = 5, so the path through A reaches G with cost 11 and the cheapest path, through B, with cost 10]
Modified Search Algorithm
1. INSERT(initial-node, FRINGE)
2. Repeat:
   a. If empty(FRINGE) then return failure
   b. n ← REMOVE(FRINGE)
   c. s ← STATE(n)
   d. If GOAL?(s) then return path or goal state
   e. For every state s’ in SUCCESSORS(s):
      i. Create a node n’ as a successor of n
      ii. INSERT(n’, FRINGE)
Avoiding Revisited States in Uniform-Cost Search
When a node N is expanded, the path to N is also the best path from the initial state to STATE(N), if this is the first time STATE(N) is encountered.
So:
• When a node is expanded, store its state into CLOSED
• When a new node N is generated:
– If STATE(N) is in CLOSED, discard N
– If there exists a node N’ in the fringe such that STATE(N’) = STATE(N), discard the node – N or N’ – with the highest-cost path
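A sketch of uniform-cost search with these revisited-state rules, using a heap-sorted fringe. `successors(s)` is assumed to yield (state, arc_cost) pairs; instead of searching the fringe for a duplicate, this version lazily skips any popped node whose state is already in CLOSED, which has the same effect.

```python
import heapq

def uniform_cost_search(initial, successors, goal_test):
    """Uniform-cost search: the FRINGE is kept sorted by path
    cost g, the goal test happens at expansion time, and states
    already expanded (in CLOSED) are never expanded again."""
    fringe = [(0, [initial])]            # (g, path), ordered by g
    closed = set()
    while fringe:
        g, path = heapq.heappop(fringe)
        s = path[-1]
        if s in closed:                  # a cheaper copy was expanded
            continue
        if goal_test(s):                 # goal test at expansion time
            return g, path
        closed.add(s)
        for s2, cost in successors(s):
            if s2 not in closed:
                heapq.heappush(fringe, (g + cost, path + [s2]))
    return None
```

On the example graph from the uniform-cost slide, this returns the cost-10 path through B rather than the cost-11 path through A.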
Best-First Search
It exploits state descriptions to estimate how promising each search node is
An evaluation function f maps each search node N to a positive real number f(N)
Traditionally, the smaller f(N), the more promising N
Best-first search sorts the fringe in increasing f
Heuristic Function
The heuristic function h(N) estimates the distance of STATE(N) to a goal state
Its value is independent of the current search tree; it depends only on STATE(N) and the goal test
Example:
h1(N) = number of misplaced tiles = 6
[Figure: two 8-puzzle boards showing STATE(N) and the goal state]
Classical Evaluation Functions
h(N): heuristic function [independent of search tree]
g(N): cost of the best path found so far between the initial node and N [dependent on search tree]
f(N) = h(N) → greedy best-first search
f(N) = g(N) + h(N) → A* search
Can we Prove Anything?
If the state space is finite and we discard nodes that revisit states, the search is complete, but in general is not optimal
If the state space is finite and we do not discard nodes that revisit states, in general the search is not complete
If the state space is infinite, in general the search is not complete
Admissible Heuristic
Let h*(N) be the cost of the optimal path from N to a goal node
The heuristic function h(N) is admissible if: 0 ≤ h(N) ≤ h*(N)
An admissible heuristic function is always optimistic!
• Note: if G is a goal node, then h(G) = 0
A* Search (most popular algorithm in AI)
f(N) = g(N) + h(N), where:
• g(N) = cost of best path found so far to N
• h(N) = admissible heuristic function
for all arcs: 0 < c(N,N’)
The “modified” search algorithm is used
Best-first search is then called A* search
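A* is then best-first search sorted on f = g + h. A sketch, with the same assumed (state, cost) successor interface as the uniform-cost example; discarding revisited states only at expansion time, as done here, preserves optimality when h is consistent, a stronger assumption than admissibility alone.

```python
import heapq

def astar(initial, successors, h, goal_test):
    """A* search: expand nodes in increasing f(N) = g(N) + h(N)."""
    fringe = [(h(initial), 0, [initial])]   # (f, g, path)
    closed = set()
    while fringe:
        f, g, path = heapq.heappop(fringe)
        s = path[-1]
        if s in closed:                     # cheaper copy already expanded
            continue
        if goal_test(s):                    # goal test at expansion time
            return g, path
        closed.add(s)
        for s2, cost in successors(s):
            if s2 not in closed:
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(s2), g2, path + [s2]))
    return None
```

With h ≡ 0 this reduces exactly to uniform-cost search.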
Result #1
A* is complete and optimal
Experimental Results
8-puzzle with:
h1 = number of misplaced tiles
h2 = sum of distances of tiles to their goal positions
Random generation of many problem instances
Average effective branching factors (number of expanded nodes):

 d   IDS               A1*            A2*
 2   2.45              1.79           1.79
 6   2.73              1.34           1.30
12   2.78 (3,644,035)  1.42 (227)     1.24 (73)
16   --                1.45           1.25
20   --                1.47           1.27
24   --                1.48 (39,135)  1.26 (1,641)
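The two heuristics compared above can be sketched directly. The encoding is an assumption for illustration: a board is a 9-tuple read row by row, with 0 marking the blank (which is not counted by either heuristic).

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of tiles to their goal squares."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(state)}
    total = 0
    for i, tile in enumerate(goal):
        if tile != 0:
            r, c = divmod(i, 3)        # goal square of this tile
            tr, tc = pos[tile]         # current square of this tile
            total += abs(r - tr) + abs(c - tc)
    return total
```

Since each misplaced tile is at Manhattan distance at least 1 from its goal square, h1 ≤ h2 ≤ h* always holds, which is why A2* expands fewer nodes.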
Iterative Deepening A* (IDA*)
Idea: reduce the memory requirement of A* by applying a cutoff on values of f
Requires a consistent heuristic h
Algorithm IDA*:
1. Initialize cutoff to f(initial-node)
2. Repeat:
   a. Perform depth-first search by expanding all nodes N such that f(N) ≤ cutoff
   b. Reset cutoff to the smallest value of f among the non-expanded (leaf) nodes
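The two-step loop above can be sketched as follows; the interface is the same hypothetical one used in the earlier examples, and cycle avoidance along the current path is an added practical detail.

```python
def ida_star(initial, successors, h, goal_test):
    """IDA*: depth-first search bounded by a cutoff on f = g + h,
    restarting with the smallest f value that exceeded the cutoff."""
    def dfs(path, g, cutoff):
        s = path[-1]
        f = g + h(s)
        if f > cutoff:
            return f, None            # report the overshoot value of f
        if goal_test(s):
            return f, path
        minimum = float('inf')
        for s2, cost in successors(s):
            if s2 not in path:        # avoid cycles along current path
                t, found = dfs(path + [s2], g + cost, cutoff)
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None          # smallest f beyond the cutoff

    cutoff = h(initial)               # step 1: f(initial-node), g = 0
    while cutoff < float('inf'):
        cutoff, found = dfs([initial], 0, cutoff)
        if found is not None:
            return found
    return None
```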
Local Search
Light-memory search method
No search tree; only the current state is represented!
Only applicable to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state
Many similarities with optimization techniques
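One common local search method is steepest-ascent hill climbing; the slide only mentions local search generally, so this variant and the n-queens formulation (board[i] = row of the queen in column i) are illustrative choices.

```python
def conflicts(board):
    """Number of attacking queen pairs; board[i] = row of the
    queen in column i (one queen per column)."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def hill_climb(board):
    """Steepest-ascent hill climbing: repeatedly move one queen
    within its column to the placement that most reduces conflicts.
    Only the current state is kept; stops at a local minimum."""
    board = list(board)
    n = len(board)
    while True:
        current = conflicts(board)
        if current == 0:
            return board
        best, best_move = current, None
        for col in range(n):
            original = board[col]
            for row in range(n):
                if row == original:
                    continue
                board[col] = row
                if conflicts(board) < best:
                    best, best_move = conflicts(board), (col, row)
            board[col] = original
        if best_move is None:          # local minimum: no improving move
            return board
        board[best_move[0]] = best_move[1]
```

Like the optimization techniques the slide alludes to, this can get stuck in local minima; random restarts are the usual remedy.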
Search problems
Blind search
Heuristic search: best-first and A*
Construction of heuristics
Local search
Variants of A*
When to Use Search Techniques?
1) The search space is small, and
• No other technique is available
• Developing a more efficient technique is not worth the effort
2) The search space is large, and
• No other technique is available, and
• There exist “good” heuristics
Constraint Satisfaction
Constraint Satisfaction Problem
• Set of variables {X1, X2, …, Xn}
• Each variable Xi has a domain Di of possible values
• Usually Di is discrete and finite
• Set of constraints {C1, C2, …, Cp}
• Each constraint Ck involves a subset of the variables and specifies the allowable combinations of values of these variables
• Goal: assign a value to every variable such that all constraints are satisfied
CSP as a Search Problem
• Initial state: empty assignment
• Successor function: assign a value to any unassigned variable, provided the value does not conflict with the currently assigned variables
• Goal test: the assignment is complete
• Path cost: irrelevant
Questions
1. Which variable X should be assigned a value next?
– Minimum Remaining Values / most-constrained variable
2. In which order should its domain D be sorted?
– Least-constraining value
3. How should constraints be propagated?
– Forward checking
– Arc consistency
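The first and third answers above can be sketched together as backtracking search with MRV variable ordering and forward checking, here on binary inequality constraints (map coloring). Least-constraining-value ordering is omitted for brevity; the Australia map in the test is a standard illustrative instance, not from the slides.

```python
def backtrack(assignment, domains, neighbors):
    """Backtracking CSP search with MRV variable selection and
    forward checking. Constraints: adjacent variables differ."""
    if len(assignment) == len(domains):
        return assignment
    # MRV: unassigned variable with the fewest remaining values
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        assignment[var] = value
        # forward checking: prune value from unassigned neighbors
        pruned = {n: [value] for n in neighbors[var]
                  if n not in assignment and value in domains[n]}
        for n in pruned:
            domains[n] = [x for x in domains[n] if x != value]
        if all(domains[n] for n in pruned):       # no neighbor wiped out
            result = backtrack(assignment, domains, neighbors)
            if result is not None:
                return result
        for n, vals in pruned.items():            # undo the pruning
            domains[n] = domains[n] + vals
        del assignment[var]
    return None
```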
Adversarial Search
Specific Setting
Two-player, turn-taking, deterministic, fully observable, zero-sum, time-constrained game
State space
Initial state
Successor function: it tells which actions can be executed in each state and gives the successor state for each action
MAX’s and MIN’s actions alternate, with MAX playing first in the initial state
Terminal test: it tells if a state is terminal and, if yes, if it’s a win or a loss for MAX, or a draw
All states are fully observable
Choosing an Action: Basic Idea
1) Using the current state as the initial state, build the game tree uniformly to the maximal depth h (called the horizon) feasible within the time limit
2) Evaluate the states of the leaf nodes
3) Back up the results from the leaves to the root and pick the best action assuming the worst from MIN
Minimax algorithm
Minimax Algorithm
1. Expand the game tree uniformly from the current state (where it is MAX’s turn to play) to depth h
2. Compute the evaluation function at every leaf of the tree
3. Back up the values from the leaves to the root of the tree as follows:
a. A MAX node gets the maximum of the evaluations of its successors
b. A MIN node gets the minimum of the evaluations of its successors
4. Select the move toward a MIN node that has the largest backed-up value
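The backing-up rule can be sketched recursively. The game interface (`successors` returning child states, `evaluate` as the leaf evaluation function) and the small test tree are illustrative assumptions.

```python
def minimax(state, depth, maximizing, successors, evaluate):
    """Depth-limited minimax: a MAX node backs up the maximum of
    its successors' values, a MIN node the minimum; leaves (or the
    horizon) are scored by the evaluation function."""
    children = successors(state)
    if depth == 0 or not children:     # horizon reached or terminal
        return evaluate(state)
    values = [minimax(c, depth - 1, not maximizing, successors, evaluate)
              for c in children]
    return max(values) if maximizing else min(values)
```

Step 4 of the slide then amounts to picking the root's child with the largest backed-up value.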
Alpha-Beta Pruning
Explore the game tree to depth h in a depth-first manner
Back up alpha and beta values whenever possible
Prune branches that can’t lead to changing the final decision
Example
The beta value of a MIN node is an upper bound on the final backed-up value. It can never increase.
[Figure: game tree illustrating β = 1 at a MIN node]
Example
The alpha value of a MAX node is a lower bound on the final backed-up value. It can never decrease.
[Figure: game tree illustrating α = 1 at a MAX node]
Alpha-Beta Algorithm
Update the alpha/beta value of the parent of a node N when the search below N has been completed or discontinued
Discontinue the search below a MAX node N if its alpha value is ≥ the beta value of a MIN ancestor of N
Discontinue the search below a MIN node N if its beta value is ≤ the alpha value of a MAX ancestor of N
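These rules can be sketched by passing alpha and beta down the depth-first search and cutting off as soon as they cross; the interface and test tree match the minimax sketch's assumptions.

```python
def alphabeta(state, depth, alpha, beta, maximizing, successors, evaluate):
    """Minimax with alpha-beta pruning: search below a node is
    discontinued as soon as alpha >= beta."""
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for c in children:
            value = max(value, alphabeta(c, depth - 1, alpha, beta,
                                         False, successors, evaluate))
            alpha = max(alpha, value)      # lower bound can only increase
            if alpha >= beta:              # a MIN ancestor rules this out
                break
        return value
    value = float('inf')
    for c in children:
        value = min(value, alphabeta(c, depth - 1, alpha, beta,
                                     True, successors, evaluate))
        beta = min(beta, value)            # upper bound can only decrease
        if alpha >= beta:                  # a MAX ancestor rules this out
            break
    return value
```

Pruning never changes the value backed up to the root; it only avoids exploring branches that cannot affect the final decision.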
Logical Representations and Theorem Proving
Logical Representations
• Propositional logic
• First-order logic
• syntax and semantics
• models, entailment, etc.
The Game
Rules:
1. Red goes first
2. On their turn, a player must move their piece
3. They must move to a neighboring square; or, if their opponent is adjacent to them with a blank on the far side, they can hop over them
4. The player that makes it to the far side first wins
Logical Inference
• Propositional: truth tables or resolution
• FOL: resolution + unification
• strategies:
– shortest clause first
– set of support
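Of the two propositional methods listed, the truth-table approach is the easier to sketch: KB ⊨ α iff α is true in every model of KB. Encoding sentences as Python functions of a model is an illustrative assumption (resolution is omitted for brevity).

```python
from itertools import product

def tt_entails(kb, query, symbols):
    """Truth-table entailment check: enumerate every assignment of
    truth values to the symbols; KB entails the query iff the query
    holds in every model where all KB sentences hold."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not query(model):
            return False               # counterexample model found
    return True
```

For example, from KB = {P, P ⇒ Q} this derives Q, the modus ponens pattern.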
Questions?