8/3/2019 Ai Sesi03 Searching Heuristic Search 4slide
IF-UTAMA 1
Department of Informatics Engineering, Universitas Widyatama
Heuristic Search
Session 3
Lecturers: Sriyani Violina, Danang Junaedi
Description
This session covers how to solve a problem with searching techniques. The searching methods discussed here are heuristic search, or informed search, consisting of:
Generate-and-Test
Hill Climbing
Best-First Search
Greedy Best-First Search
A*
Iterative Deepening A*
Limitations of uninformed search
8-puzzle
Avg. solution cost is about 22 steps
branching factor ~ 3
Exhaustive search to depth 22: 3.1 × 10^10 states
E.g., d=12, IDS expands 3.6 million states on average
[The 24-puzzle has about 10^24 states (much worse)]
Recall tree search
This strategy is what differentiates search algorithms.
Heuristic
Merriam-Webster's Online Dictionary
Heuristic (pron. \hyu-ris-tik\): adj. [from Greek heuriskein, to discover.] involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods
The Free On-line Dictionary of Computing (15 Feb 98)
heuristic 1. A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. approximation algorithm.
From WordNet (r) 1.6
heuristic adj 1: (computer science) relating to or using a heuristic rule 2: of or relating to a general formulation that serves to guide investigation [ant: algorithmic] n: a commonsense rule (or set of rules) intended to increase the probability of solving some problem [syn: heuristic rule, heuristic program]
Informed methods add domain-specific information
Add domain-specific information to select the best
path along which to continue searching
Define a heuristic function h(n) that estimates the
goodness of a node n. Specifically, h(n) = estimated cost (or distance) of minimal
cost path from n to a goal state.
The heuristic function is an estimate of how close we
are to a goal, based on domain-specific information
that is computable from the current state description.
Heuristic function
Heuristic:
Definition: using rules of thumb to find answers
Heuristic function h(n)
Estimate of (optimal) cost from n to goal
h(n) = 0 if n is a goal node
Example: straight line distance from n to Bucharest
Note that this is not the true state-space distance
It is an estimate; the actual state-space distance can be higher
Provides problem-specific knowledge to the search algorithm
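As an illustration, such a heuristic can be a small function over the state description. The coordinates below are hypothetical stand-ins, invented for this sketch (the actual slides use published straight-line distances to Bucharest):

```python
import math

# Hypothetical (x, y) coordinates for a few Romanian cities;
# illustration only, not the real map data.
coords = {
    "Arad": (91, 492),
    "Sibiu": (207, 457),
    "Bucharest": (400, 327),
}

def h(city, goal="Bucharest"):
    """Straight-line distance from city to the goal: an estimate that
    can be computed from the state description alone."""
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x1 - x2, y1 - y2)

print(h("Bucharest"))  # 0.0: h(n) = 0 at the goal
```

Note that h never looks at the road network; it is domain knowledge (map coordinates) compressed into a cheap estimate.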
Heuristic functions for 8-puzzle
8-puzzle
Avg. solution cost is about 22 steps
branching factor ~ 3
Exhaustive search to depth 22: 3.1 × 10^10 states.
A good heuristic function can reduce the search process.
Two commonly used heuristics
h1 = the number of misplaced tiles; h1(s) = 8
h2 = the sum of the distances of the tiles from their goal positions
(Manhattan distance).
h2(s)=3+1+2+2+2+3+3+2=18
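Both heuristics can be sketched in a few lines of Python. The concrete start state below is the standard textbook example, assumed here to match the slide's figure since it yields exactly h1(s) = 8 and h2(s) = 18 (0 marks the blank):

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of each tile from its goal square."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)        # current row, column
            gr, gc = goal_pos[tile]    # goal row, column
            total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # assumed slide state
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal), h2(start, goal))  # 8 18
```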
Heuristics
All domain knowledge used in the search is encoded in the heuristic function h().
Heuristic search is an example of a weak method because of
the limited way that domain-specific information is used to
solve the problem.
Examples:
Missionaries and Cannibals: Number of people on starting river bank
8-puzzle: Number of tiles out of place
8-puzzle: Sum of distances each tile is from its goal position
In general:
h(n) ≥ 0 for all nodes n
h(n) = 0 implies that n is a goal node
h(n) = ∞ implies that n is a dead end that can never lead to a goal
Weak vs. strong methods
We use the term weak methods to refer to methods that are extremely general and not tailored to a specific situation.
Examples of weak methods include
Means-ends analysis is a strategy in which we try to represent the current situation and where we want to end up, and then look for ways to shrink the differences between the two.
Space splitting is a strategy in which we try to list the possible solutions to
a problem and then try to rule out classes of these possibilities.
Subgoaling means to split a large problem into several smaller ones that
can be solved one at a time.
Called weak methods because they do not take advantage of
more powerful domain-specific heuristics
When to use heuristics?
The input data are limited or inexact.
An optimization model cannot be used for complex reality.
A reliable and exact algorithm is not available.
It is possible to improve the efficiency of the optimization process.
Symbolic rather than numerical processing is involved.
A quick decision must be made.
Computerization is not feasible.
Advantages
Simple to understand.
Easier to implement and explain.
Saves formulation time.
Saves computer programming and storage requirements.
Saves computational time, and thus real time in decision making.
Often produces multiple acceptable solutions.
Usually it is possible to state a theoretical or empirical measure of the solution quality.
Incorporates intelligence to guide the search.
It is possible to apply efficient heuristics to models that could be solved with mathematical programming.
Disadvantages
An optimal solution cannot be guaranteed.
There may be too many exceptions to the rules.
Sequential decision choices may fail to anticipate the future consequences of each choice.
The interdependencies of one part of a system can sometimes have a profound influence on the whole system.
Generate-and-Test
It is a depth-first search procedure, since complete solutions must be generated before they can be tested.
In its most systematic form, it is simply an exhaustive search of the problem space.
It can also operate by generating solutions randomly; this is called the British Museum algorithm:
if a sufficient number of monkeys were placed in front of a set of typewriters and left alone long enough, they would eventually produce all the works of Shakespeare.
Dendral, which infers the structure of organic compounds from NMR spectrograms, uses plan-generate-test.
Generate-and-Test Algorithm
1. Generate a possible solution. For some problems, this means generating a particular point in the problem space; for others it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise return to step 1.
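The three steps above can be sketched generically; the generator and test are parameters, and the toy problem is made up for illustration:

```python
import itertools

def generate_and_test(generate, is_solution):
    """Return the first generated candidate that passes the test,
    or None if the generator is exhausted first."""
    for candidate in generate():
        if is_solution(candidate):
            return candidate
    return None

# Toy problem: find a permutation of 0..3 that is sorted in
# descending order, by blind enumeration.
sol = generate_and_test(
    lambda: itertools.permutations(range(4)),
    lambda p: list(p) == sorted(p, reverse=True),
)
print(sol)  # (3, 2, 1, 0)
```

With a random rather than systematic generator this becomes the British Museum algorithm described above.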
Hill Climbing
A variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space.
The test function is augmented with a heuristic function that provides an estimate of how close a given state is to the goal state.
Computation of the heuristic function can be done with a negligible amount of computation.
Hill climbing is often used when a good heuristic function is available for evaluating states but no other useful knowledge is available.
Hill climbing
Perform depth-first search, BUT: instead of left-to-right selection, FIRST select the child with the best heuristic value.
Example, using the straight-line distance:
[Figure: search tree over states S, A, B, D, E, F, G annotated with straight-line-distance values 10.4, 8.9, 6.9, 6.7, 3.0]
Hill climbing algorithm
1. QUEUE ← path containing the root
2. WHILE QUEUE is not empty AND the goal has not been reached:
remove the first path from QUEUE;
create new paths by extending it to all children of its last node;
sort the new paths by heuristic value (best first);
add the sorted new paths to the FRONT of QUEUE
3. IF goal reached THEN success; ELSE failure
Simple Hill Climbing
The key difference between simple hill climbing and generate-and-test is the use of an evaluation function as a way to inject task-specific knowledge into the control process.
Is one state better than another? For this algorithm to work, a precise definition of "better" must be provided.
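A minimal sketch of simple hill climbing under one definition of "better" (a higher value), on a toy one-dimensional landscape chosen for illustration:

```python
def simple_hill_climbing(start, successors, value):
    """Move to the FIRST successor that is better than the current
    state; stop when no successor improves on it."""
    current = start
    improved = True
    while improved:
        improved = False
        for s in successors(current):
            if value(s) > value(current):
                current = s
                improved = True
                break                 # take the first improvement
    return current

# Toy landscape: integer x with value -(x - 3)^2, peak at x = 3.
print(simple_hill_climbing(0, lambda x: [x - 1, x + 1],
                           lambda x: -(x - 3) ** 2))  # 3
```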
Steepest-Ascent Hill Climbing
This is a variation of simple hill climbing which
considers all the moves from the current state and
selects the best one as the next state.
Also known as Gradient search
Algorithm: Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
a. Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
b. For each operator that applies to the current state do:
i. Apply the operator and generate a new state.
ii. Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state; if it is not better, leave SUCC alone.
c. If SUCC is better than the current state, then set the current state to SUCC.
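The loop above can be sketched in Python; states, operators, and the evaluation are parameters, and the toy landscape is an assumption for demonstration:

```python
def steepest_ascent(start, successors, value):
    """Examine ALL successors of the current state, move to the best
    one, and stop at a local maximum (no successor is better)."""
    current = start
    while True:
        succ = max(successors(current), key=value)   # steps 2.a-2.b
        if value(succ) <= value(current):            # step 2.c
            return current
        current = succ

# Toy landscape: integer x with value -(x - 3)^2, peak at x = 3.
peak = steepest_ascent(0, lambda x: [x - 1, x + 1],
                       lambda x: -(x - 3) ** 2)
print(peak)  # 3
```

Unlike simple hill climbing, all neighbours are compared before a move is made.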
Hill climbing example
[Figure: 8-puzzle trace from the start state (h = -4) through successor states evaluated at h = -5, -4, -3, -2, -1, reaching the goal state (h = 0)]
h(n) = -(number of tiles out of place)
Best First Search
Combines the advantages of both DFS and BFS into a single method.
DFS is good because it allows a solution to be found without all competing branches having to be expanded.
BFS is good because it does not get trapped on dead-end paths.
One way of combining the two is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one does.
Best-First Search
At each step of the best-first search process, we select the most promising of the nodes we have generated so far.
This is done by applying an appropriate heuristic function to each of them.
We then expand the chosen node by using the rules to generate its successors.
Similar to steepest-ascent hill climbing, with two exceptions:
In hill climbing, one move is selected and all the others are rejected, never to be reconsidered. This produces the straight-line behaviour that is characteristic of hill climbing.
In best-first search, one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising. Further, the best available state is selected, even if that state has a value lower than the value of the state that was just explored. This contrasts with hill climbing, which will stop if there are no successor states with better values than the current state.
Algorithm: Best-First Search
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN do:
a. Pick the best node on OPEN.
b. Generate its successors.
c. For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
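A compact sketch of this algorithm, with OPEN as a priority queue keyed by the heuristic value; step 2.c.ii's re-parenting is omitted for brevity, and the graph and h-values below are invented for illustration:

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Expand the best (lowest-h) node on OPEN until the goal is
    picked; parents are recorded so the path can be recovered."""
    open_heap = [(h(start), start)]
    parent = {start: None}
    while open_heap:
        _, node = heapq.heappop(open_heap)          # step 2.a
        if node == goal:
            path = []
            while node is not None:                 # walk parent links
                path.append(node)
                node = parent[node]
            return path[::-1]
        for child in successors(node):              # step 2.b
            if child not in parent:                 # not generated before
                parent[child] = node
                heapq.heappush(open_heap, (h(child), child))
    return None

graph = {"A": ["B", "C", "D"], "B": [], "C": ["E", "F"],
         "D": [], "E": [], "F": ["G"], "G": []}
hvals = {"A": 5, "B": 3, "C": 2, "D": 4, "E": 4, "F": 1, "G": 0}
print(best_first_search("A", "G", lambda n: graph[n], hvals.get))
# ['A', 'C', 'F', 'G']
```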
Best-first search: simple explanation
It proceeds in steps, expanding one node at each step, until it generates a node that corresponds to a goal state.
At each step, it picks the most promising of the nodes that have so far been generated but not expanded.
It generates the successors of the chosen node, applies the heuristic function to them, and adds them to the list of open nodes, after checking to see if any of them have been generated before.
By doing this check, we can guarantee that each node only appears once in the graph, although many nodes may point to it as a successor.
Best-first search illustration
[Figure: five steps of best-first search on a tree rooted at A with children B (3), C (5), and D (1); at each step the open node with the lowest value is expanded next]
Greedy best-first search
Special case of best-first search
Uses the heuristic function h(n) as its evaluation function
Expand the node that appears closest to goal
Romania with step costs in km
Greedy best-first search example
[Figure: successive expansion steps on the Romania map, always expanding the node with the smallest straight-line distance to Bucharest]
Optimal Path
Properties of greedy best-first search
Complete?
No, unless it keeps track of all states visited; otherwise it can get stuck in loops (just like DFS).
Optimal?
No; we just saw a counter-example.
Time?
O(b^m): it can generate all nodes at depth m before finding a solution (m = maximum depth of the search space).
Space?
O(b^m) again: in the worst case, it can generate all nodes at depth m before finding a solution.
A* Search
Expand the node based on an estimate of the total path cost through that node.
Evaluation function: f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to goal
f(n) = estimated total cost of the path through n to the goal
Efficiency of the search will depend on the quality of the heuristic h(n).
A* Algorithm
Best-first search is a simplification of the A* algorithm, which was presented by Hart et al.
The algorithm uses:
f: a heuristic function that estimates the merit of each node we generate. It is the sum of two components, g and h, and f represents an estimate of the cost of getting from the initial state to a goal state along the path that generated the current node.
g: a measure of the cost of getting from the initial state to the current node.
h: an estimate of the additional cost of getting from the current node to a goal state.
OPEN: nodes that have been generated and had the heuristic function applied to them, but which have not yet been examined.
CLOSED: nodes that have already been examined.
A* Algorithm
1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h value to whatever it is, and its f value to h + 0, or h. Set CLOSED to the empty list.
2. Until a goal node is found, repeat the following procedure: If there are no nodes on OPEN, report failure. Otherwise pick the node on OPEN with the lowest f value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED. See if BESTNODE is a goal state. If so, exit and report a solution. Otherwise, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet.
A* Algorithm (cont'd)
For each SUCCESSOR, do the following:
a. Set SUCCESSOR to point back to BESTNODE. These backward links will make it possible to recover the path once a solution is found.
b. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.
c. See if SUCCESSOR is the same as any node on OPEN. If so, call that node OLD.
d. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors.
e. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors. Compute f(SUCCESSOR) = g(SUCCESSOR) + h(SUCCESSOR).
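A sketch of the algorithm with OPEN as a priority queue; instead of the explicit OLD bookkeeping of steps c and d, stale queue entries are simply skipped when popped. The weighted graph and h-values are assumptions for illustration:

```python
import heapq

def a_star(start, goal, successors, h):
    """Expand the OPEN node with the lowest f = g + h; re-open a node
    whenever a cheaper path to it is found."""
    open_heap = [(h(start), 0, start)]   # (f, g, node)
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        f, gn, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:      # follow backward links
                path.append(node)
                node = parent[node]
            return path[::-1], gn
        if gn > g[node]:                 # stale entry: a better path exists
            continue
        for child, cost in successors(node):
            new_g = gn + cost            # step b
            if new_g < g.get(child, float("inf")):
                g[child] = new_g
                parent[child] = node     # step a: backward link
                heapq.heappush(open_heap, (new_g + h(child), new_g, child))
    return None

edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
hvals = {"S": 4, "A": 3, "B": 1, "G": 0}
print(a_star("S", "G", lambda n: edges[n], hvals.get))
# (['S', 'A', 'B', 'G'], 4)
```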
A* search example
[Figure: successive A* expansion steps on the Romania map, expanding the node with the smallest f = g + h]
Properties of A*
Complete?
Yes (unless there are infinitely many nodes with f ≤ f(G)).
Optimal?
Yes.
Also optimally efficient: no other optimal algorithm will expand fewer nodes, for a given heuristic.
Time?
Exponential in the worst case.
Space?
Exponential in the worst case.
Improving A*: memory-bounded heuristic search
Iterative-deepening A* (IDA*)
Uses the f-cost (g + h) rather than the depth.
The cutoff value is the smallest f-cost of any node that exceeded the cutoff on the previous iteration; keep these nodes only.
Space complexity O(bd).
Recursive best-first search (RBFS)
Best-first search using only linear space (Fig 4.5).
It replaces the f-value of each node along the path with the best f-value of its children (Fig 4.6).
Space complexity O(bd), with excessive node regeneration.
Simplified memory-bounded A* (SMA*)
IDA* and RBFS use too little memory, causing excessive node regeneration.
Expands the best leaf until memory is full.
Drops the worst leaf node (the one with the highest f-value) by backing its value up to its parent.
Iterative Deepening A* (IDA*)
Idea: reduce the memory requirement of A* by applying a cutoff on the values of f.
Requires a consistent heuristic function h.
Algorithm IDA*:
1. Initialize the cutoff to f(initial-node).
2. Repeat:
a. Perform depth-first search, expanding all nodes N such that f(N) ≤ cutoff.
b. Reset the cutoff to the smallest value of f among the non-expanded (leaf) nodes.
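The two steps above can be sketched as a bounded depth-first search inside an outer loop; the toy weighted graph and h-values are assumptions for illustration:

```python
def ida_star(start, goal, successors, h):
    """Repeat depth-first searches bounded by an f-cost cutoff; each
    new cutoff is the smallest f that exceeded the previous one."""
    def dfs(node, g, cutoff, path):
        f = g + h(node)
        if f > cutoff:
            return f                     # not expanded; report its f
        if node == goal:
            return path
        smallest = float("inf")
        for child, cost in successors(node):
            if child not in path:        # avoid cycles along the path
                result = dfs(child, g + cost, cutoff, path + [child])
                if isinstance(result, list):
                    return result        # solution found
                smallest = min(smallest, result)
        return smallest

    cutoff = h(start)                    # step 1: f(initial-node)
    while True:                          # step 2
        result = dfs(start, 0, cutoff, [start])
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None                  # nothing left to expand
        cutoff = result                  # step 2.b

edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
hvals = {"S": 4, "A": 3, "B": 1, "G": 0}
print(ida_star("S", "G", lambda n: edges[n], hvals.get))
# ['S', 'A', 'B', 'G']
```

Only the current path is stored, which is where the O(bd) space bound comes from.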
8-Puzzle (IDA* trace)
[Figures: a sequence of slides tracing IDA* on the 8-puzzle with f(N) = g(N) + h(N), where h(N) = number of misplaced tiles. The first iteration uses Cutoff = 4 and expands only nodes with f ≤ 4, recording the f-values (5, 6, ...) of the children that exceed the cutoff; the second iteration restarts with Cutoff = 5, the smallest f-value that exceeded the previous cutoff, and expands further until the goal is found]
Advantages/Drawbacks of IDA*
Advantages:
Still complete and optimal.
Requires less memory than A*.
Avoids the overhead of sorting the fringe.
Drawbacks:
Cannot avoid revisiting states not on the current path; it is essentially a DFS.
Available memory is poorly used (see memory-bounded search, R&N pp. 101-104).