
Informed Search
Russell and Norvig: Ch. 4.1 - 4.3

    Rich and Knight


    Outline

    Informed = use problem-specific knowledge

    Which search strategies?

Best-first search and its variants

Heuristic functions: how to invent them

Local search and optimization: hill climbing, simulated annealing, local beam search


    Tree Search Algorithm

function TREE-SEARCH(problem, fringe) return a solution or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)


    Tree Search Algorithm (2)

function EXPAND(node, problem) return a set of nodes
  successors ← the empty set
  for each action, result in SUCCESSOR-FN[problem](STATE[node]) do
    s ← a new NODE
    STATE[s] ← result
    PARENT-NODE[s] ← node
    ACTION[s] ← action
    PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
    DEPTH[s] ← DEPTH[node] + 1
    add s to successors
  return successors


    Previously: Graph search algorithm

function GRAPH-SEARCH(problem, fringe) return a solution or failure
  closed ← an empty set
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
    if STATE[node] is not in closed then
      add STATE[node] to closed
      fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

A strategy is defined by picking the order of node expansion.

Closed list: stores every expanded node.
Open list: the fringe of unexpanded nodes.


    Search Algorithms

Blind search (BFS, DFS, uniform cost): no notion of the right direction; it can only recognize the goal once it is reached.

Heuristic search: we have a rough idea of how good various states are, and use this knowledge to guide our search.


    Best-first search

General approach of informed search is best-first search: a node is selected for expansion based on an evaluation function f(n).

Idea: the evaluation function measures distance to the goal; choose the node that appears best.

Implementation: the fringe is a queue sorted in decreasing order of desirability.

Special cases: greedy search, A* search.
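A minimal Python sketch of this idea (my own illustration, not code from the slides): the fringe is a priority queue ordered by the evaluation function, and the problem object is assumed to expose initial_state, is_goal(state), and successors(state) yielding (next_state, step_cost) pairs over hashable states. Greedy search and A* then differ only in the f that is passed in.

import heapq
import itertools

def best_first_search(problem, f):
    """Generic best-first search: always expand the node with the lowest f-value.

    `problem` is an assumed interface (initial_state, is_goal, successors);
    `f` maps (state, path_cost_so_far) to a number.  Returns the list of
    states on the path found, or None on failure.
    """
    counter = itertools.count()        # tie-breaker so heapq never compares states
    start = problem.initial_state
    fringe = [(f(start, 0), next(counter), start, 0, [start])]
    closed = set()                     # graph-search variant: remember expanded states
    while fringe:
        _, _, state, g, path = heapq.heappop(fringe)
        if problem.is_goal(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for nxt, cost in problem.successors(state):
            if nxt not in closed:
                g2 = g + cost
                heapq.heappush(fringe, (f(nxt, g2), next(counter), nxt, g2, path + [nxt]))
    return None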


    Informed Search

Add domain-specific information to select the best path along which to continue searching.

Define a heuristic function, h(n), that estimates the goodness of a node n. Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from n to a goal state. If n is a goal node then h(n) = 0.

The heuristic function is an estimate, based on domain-specific information computable from the current state description, of how close we are to a goal.


Greedy Best-First Search

f(N) = h(N): greedy best-first

Example: route-finding problem in Romania.

Heuristic: straight-line distance (h_SLD). Greedy best-first search expands the node that appears to be closest to the goal.
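Using the hypothetical best_first_search sketch above, with an assumed romania problem object and the h_SLD table from a later slide stored in a dict h_sld, greedy best-first search is just the special case f(n) = h(n):

# Greedy best-first: ignore the path cost g and rank nodes by the heuristic alone.
path = best_first_search(romania, f=lambda state, g: h_sld[state])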


Algorithm: Greedy Best-First Search

1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN, do:
   (a) Pick the best node on OPEN.
   (b) Generate its successors.
   (c) For each successor do:
       (i) If it has not been generated before, evaluate it and add it to OPEN.
       (ii) If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.


Example: Greedy BFS on the map of Romania


Values of h_SLD (straight-line distance to Bucharest):

City        h_SLD   City             h_SLD
Arad        366     Mehadia          241
Bucharest   0       Neamt            234
Craiova     160     Oradea           380
Dobreta     242     Pitesti          100
Eforie      161     Rimnicu Vilcea   193
Fagaras     176     Sibiu            253
Giurgiu     77      Timisoara        329
Hirsova     151     Urziceni         80
Iasi        226     Vaslui           199
Lugoj       244     Zerind           374


    Example


Attention: the path [Arad, Sibiu, Fagaras] is not optimal!


Properties of greedy best-first search

Complete? No: it can get stuck in loops. Yes, if we avoid repeated states.

Time? O(b^m), but a good heuristic can give dramatic improvement.

Space? O(b^m): keeps all nodes in memory.

Optimal? No.


    A* search

A* search combines uniform-cost and greedy best-first search.

Evaluation function: f(N) = g(N) + h(N), where
  g(N) is the cost of the best path found so far to N, and
  h(N) is an admissible heuristic.

f(N) is the estimated cost of the cheapest solution through N.

Require 0 < c(N, N') (no negative-cost steps). Order the nodes in the fringe in increasing values of f(N).
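With the same hypothetical best_first_search helper, A* only changes the evaluation function to g + h. Because that sketch keeps a closed list (graph search), optimality additionally requires h to be consistent, which the following slides discuss.

# A* ranks nodes by f(n) = g(n) + h(n); h_sld is the admissible straight-line heuristic.
path = best_first_search(romania, f=lambda state, g: g + h_sld[state])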

A* search example

(Figures: step-by-step A* expansion on the Romania route-finding problem, one expansion per slide.)


Admissible heuristic

Let h*(N) be the true cost of the optimal path from N to a goal node.

A heuristic h(N) is admissible if: 0 ≤ h(N) ≤ h*(N)

An admissible heuristic is always optimistic.

If h(n) is admissible, A* using TREE-SEARCH is optimal.


Optimality of A* (standard proof)

Suppose a suboptimal goal G2 is in the queue. Let n be an unexpanded node on a shortest path to the optimal goal G, and let C* be the cost of the optimal solution.

f(G2) = g(G2)   since h(G2) = 0
      > C*      since G2 is suboptimal

For a node n on the optimal path we know

f(n) = g(n) + h(n) ≤ C*   since h is admissible

Hence f(n) ≤ C* < f(G2), so A* expands n before G2 and never selects the suboptimal goal.


BUT ... graph search

Graph search discards new paths to a repeated state, so the previous proof breaks down.

Solutions:
  Add extra bookkeeping, i.e. remove the more expensive of the two paths.
  Ensure that the optimal path to any repeated state is always followed first. This adds an extra requirement on h(n): consistency (monotonicity).


    Consistency

A heuristic h is consistent if h(n) ≤ c(n, a, n') + h(n') for every successor n' of n. If h(n) is consistent, then the values of f(n) along any path are nondecreasing.

Proof: if n' is a successor of n, then g(n') = g(n) + c(n, a, n') for some action a, so

f(n') = g(n') + h(n')
      = g(n) + c(n, a, n') + h(n')
      ≥ g(n) + h(n)
      = f(n)
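A small illustrative checker (assumed helper names, my own sketch rather than a proof) that spot-checks the consistency inequality on sampled edges of a problem:

def check_consistency(h, successors, sample_states):
    """Spot-check h(n) <= c(n, a, n') + h(n') on a sample of states.

    `successors(s)` is assumed to yield (next_state, step_cost) pairs;
    passing every reachable state would make this an exhaustive check.
    """
    for s in sample_states:
        for s2, cost in successors(s):
            assert h(s) <= cost + h(s2), f"consistency violated on edge {s} -> {s2}"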


Optimality of A* (more useful view)

A* expands nodes in order of increasing f-value.

Contours can be drawn in the state space (uniform-cost search adds circles).

f-contours are gradually added:
1) first the nodes with f(n) < C*,
2) then some nodes on the goal contour with f(n) = C*.


    A* search, evaluation

Completeness: YES, since bands of increasing f are added, unless there are infinitely many nodes with f ≤ f(G).


    A* search, evaluation

    Completeness: YES

    Time complexity:

The number of nodes expanded is still exponential in the length of the solution.


    A* search, evaluation

    Completeness: YES

Time complexity: exponential in the path length.

Space complexity: it keeps all generated nodes in memory, hence space is the major problem, not time.


    A* search, evaluation

    Completeness: YES

Time complexity: exponential in the path length.

Space complexity: all nodes are stored.

Optimality: YES. A* cannot expand the f_{i+1} contour until the f_i contour is finished:
  A* expands all nodes with f(n) < C*
  A* expands some nodes with f(n) = C*
  A* expands no nodes with f(n) > C*


Memory-bounded heuristic search

Some solutions to A*'s space problems (maintaining completeness and optimality):

Iterative-deepening A* (IDA*): the cutoff information is the f-cost (g + h) instead of the depth.

Recursive best-first search (RBFS): a recursive algorithm that attempts to mimic standard best-first search with linear space.

(Simplified) Memory-bounded A* ((S)MA*): drop the worst leaf node when memory is full.


    Recursive best-first search

function RECURSIVE-BEST-FIRST-SEARCH(problem) return a solution or failure
  return RBFS(problem, MAKE-NODE(INITIAL-STATE[problem]), ∞)

function RBFS(problem, node, f_limit) return a solution or failure and a new f-cost limit
  if GOAL-TEST[problem](STATE[node]) then return node
  successors ← EXPAND(node, problem)
  if successors is empty then return failure, ∞
  for each s in successors do
    f[s] ← max(g(s) + h(s), f[node])
  repeat
    best ← the lowest f-value node in successors
    if f[best] > f_limit then return failure, f[best]
    alternative ← the second-lowest f-value among successors
    result, f[best] ← RBFS(problem, best, min(f_limit, alternative))
    if result ≠ failure then return result


    Recursive best-first search

Keeps track of the f-value of the best alternative path available. If the current f-value exceeds this alternative f-value, then backtrack to the alternative path.

Upon backtracking, change the node's f-value to the best f-value of its children.

Re-expansion of this subtree is thus still possible.


    Recursive best-first search, ex.

The path to Rimnicu Vilcea has already been expanded.

Above each node: the f-limit for every recursive call.
Below each node: f(n).

The path is followed until Pitesti, which has an f-value worse than the f-limit.


    Recursive best-first search, ex.

Unwind the recursion and store the best f-value for the current best leaf, Pitesti:

  result, f[best] ← RBFS(problem, best, min(f_limit, alternative))

best is now Fagaras. Call RBFS for the new best; its best value is now 450.


    Recursive best-first search, ex.

Unwind the recursion and store the best f-value for the current best leaf, Fagaras:

  result, f[best] ← RBFS(problem, best, min(f_limit, alternative))

best is now Rimnicu Vilcea (again). Call RBFS for the new best.

The subtree is expanded again, and the best alternative subtree is now through Timisoara.

The solution is found because 447 > 417.


    RBFS evaluation

RBFS is somewhat more efficient than IDA*, but still suffers from excessive node generation (mind changes).

Like A*, it is optimal if h(n) is admissible.

Space complexity is O(bd); IDA* retains only a single number (the current f-cost limit).

Time complexity is difficult to characterize: it depends on the accuracy of h(n) and on how often the best path changes.

IDA* and RBFS suffer from using too little memory.


    (simplified) memory-bounded A*

Use all available memory, i.e. expand the best leaves until the available memory is full.

When full, SMA* drops the worst leaf node (highest f-value).

Like RBFS, it backs up the value of the forgotten node to its parent.

What if all leaves have the same f-value? The same node could be selected for expansion and deletion. SMA* solves this by expanding the newest best leaf and deleting the oldest worst leaf.

SMA* is complete if any solution is reachable, and optimal if the optimal solution is reachable.


    Heuristic functions

E.g. for the 8-puzzle: the average solution cost is about 22 steps (branching factor about 3).

Exhaustive search to depth 22: about 3.1 x 10^10 states.

The corresponding number of states for the 15-puzzle is about 10^13.

A good heuristic function can greatly reduce the search.


Heuristic Function

A function h(N) that estimates the cost of the cheapest path from node N to the goal node.

Example: 8-puzzle

(Figure: a state N and the goal state.)

h1(N) = number of misplaced tiles = 6


Heuristic Function

A function h(N) that estimates the cost of the cheapest path from node N to the goal node.

Example: 8-puzzle

(Figure: the same state N and goal state.)

h2(N) = sum of the distances of every tile to its goal position
      = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13

This is the city-block or Manhattan distance.
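Both heuristics are easy to state in Python. A minimal sketch, assuming a state is a tuple of 9 entries in row-major order with 0 for the blank; the GOAL layout below is an assumed example rather than the board drawn on the slide, so the values for the slide's state N would differ.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, blank in the last cell

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan (city-block) distances of each tile to its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

Both are admissible: every misplaced tile needs at least one move, and every move changes a single tile's Manhattan distance by exactly one.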


8-Puzzle

(Figure: a search tree in which each state is labeled with f(N) = h1(N) = number of misplaced tiles.)


8-Puzzle

(Figure: the same search tree with each state labeled with f(N) = g(N) + h1(N), where h1(N) = number of misplaced tiles; the root is labeled 0+4.)


8-Puzzle

(Figure: a search tree in which each state is labeled with f(N) = h2(N) = sum of the distances of the tiles to their goal positions.)


8-Puzzle

EXERCISE: label the search tree with f(N) = g(N) + h2(N), where h2(N) = sum of the distances of the tiles to their goal positions. (Figure: the root node, labeled 0+4.)


    Heuristic quality

Effective branching factor b*: the branching factor that a uniform tree of depth d would need in order to contain N + 1 nodes:

  N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d

The measure is fairly constant for sufficiently hard problems, and can thus provide a good guide to a heuristic's overall usefulness. A good value of b* is 1.
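As a rough illustration (my own helper, not part of the slides), b* can be recovered numerically from N and d by solving the equation above, here with simple bisection and assuming N >= d so that b* >= 1:

def effective_branching_factor(n_generated, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)**2 + ... + (b*)**d for b* by bisection."""
    target = n_generated + 1

    def total(b):
        # geometric series 1 + b + b**2 + ... + b**d
        return depth + 1 if abs(b - 1.0) < 1e-12 else (b ** (depth + 1) - 1) / (b - 1)

    lo, hi = 1.0, float(max(2, n_generated))   # the total is increasing in b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

For example, effective_branching_factor(52, 5) is roughly 1.92: a search that generates 52 nodes for a depth-5 solution has an effective branching factor of about 1.92.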


    Heuristic quality and dominance

1200 random problems with solution lengths from 2 to 24 were used to compare the heuristics.

If h2(n) >= h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.


    Inventing admissible heuristics

Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem:

  Relaxed 8-puzzle for h1: a tile can move anywhere. As a result, h1(n) gives the cost of the shortest solution of this relaxed problem.

  Relaxed 8-puzzle for h2: a tile can move to any adjacent square. As a result, h2(n) gives the cost of the shortest solution of this relaxed problem.

The optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.


Another approach: Local Search

Previously: systematic exploration of the search space; the path to the goal is the solution to the problem.

YET, for some problems the path is irrelevant, e.g. 8-queens.

Different algorithms can be used for local search: hill climbing (gradient descent), simulated annealing, genetic algorithms, and others.

Local search is also applicable to optimization problems where systematic search does not work; however, we can start with a suboptimal solution and improve it.

Local search algorithms operate using a single current state and generally move only to neighbors of that state.


    Local search and optimization

Local search: use a single current state and move to neighboring states.

Advantages:
  Uses very little memory.
  Often finds reasonable solutions in large or infinite state spaces.

Local search is also useful for pure optimization problems, where the goal is to find the best state according to some objective function.


    Local search and optimization

    State Space Landscape


    Hill-climbing search

Hill climbing is a loop that continually moves in the direction of increasing value. It terminates when a peak is reached.

Hill climbing does not look ahead beyond the immediate neighbors of the current state.

Hill climbing chooses randomly among the set of best successors if there is more than one.

Hill climbing is also known as greedy local search.

Some problem spaces are great for hill climbing and others are terrible.


    Hill-climbing search

function HILL-CLIMBING(problem) return a state that is a local maximum
  input: problem, a problem
  local variables: current, a node
                   neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor
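A minimal Python sketch of this loop for the 8-queens problem; the encoding (one queen per column, each tuple entry giving that queen's row) and the helper names are my own, and the climbed value is the negated number of attacking pairs.

import random

def conflicts(state):
    """Number of pairs of queens that attack each other (same row or diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n=8):
    """Steepest-ascent hill climbing; returns a local (possibly global) optimum."""
    current = tuple(random.randrange(n) for _ in range(n))
    while True:
        # all states reachable by moving one queen within its column
        neighbors = [current[:c] + (r,) + current[c + 1:]
                     for c in range(n) for r in range(n) if r != current[c]]
        best = min(neighbors, key=conflicts)
        if conflicts(best) >= conflicts(current):   # no uphill move is left
            return current
        current = best

Random-restart hill climbing, mentioned a few slides later, simply calls hill_climb repeatedly until conflicts(result) == 0.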



8-queens state with heuristic cost estimate h = 17

(Figure: an 8-queens board; each square shows the value of h obtained by moving the queen in that column to that square. The best successors have h = 12.)


With h = 1

(Figure: an 8-queens board with cost estimate h = 1.)


Hill climbing example

f(n) = -(number of tiles out of place)

(Figure: an 8-puzzle start state and goal state, with the intermediate states along the hill-climbing path each labeled with its f-value, rising from the start value to 0 at the goal.)


Example of a local maximum

(Figure: an 8-puzzle state with f = -3 whose successors all have f = -4, so hill climbing is stuck at a local maximum even though the goal state, with f = 0, is nearby.)


    An 8-Puzzle Problem Solved by the Hill Climbing Method


Drawbacks of hill climbing

Problems:

  Local maxima: a peak that is higher than each of its neighboring states but lower than the global maximum.

  Plateaus: the space has a broad flat region that gives the search algorithm no direction. It can be a flat local maximum or a shoulder (leading to a random walk).

  Ridges: flat like a plateau, but with drop-offs to the sides; steps to the north, east, south and west may go down, but a combination of two steps (e.g. N, W) may go up.

Remedy: introduce randomness.


    Hill-climbing variations

Stochastic hill climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move.

First-choice hill climbing: stochastic hill climbing that generates successors randomly until a better one is found.

Random-restart hill climbing: tries to avoid getting stuck in local maxima. "If at first you don't succeed, try, try again."


    Simulated annealing

Idea: escape local maxima by allowing bad moves, but gradually decrease their size and frequency.

Origin: metallurgical annealing.

Bouncing-ball analogy: shaking hard (= high temperature), then shaking less (= lowering the temperature).

If T decreases slowly enough, the best state is reached.

Applied to VLSI layout, airline scheduling, etc.


    Simulated annealing

function SIMULATED-ANNEALING(problem, schedule) return a solution state
  input: problem, a problem
         schedule, a mapping from time to temperature
  local variables: current, a node
                   next, a node
                   T, a "temperature" controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] - VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
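The same loop in plain Python for a generic maximization problem; value, random_successor, and the geometric cooling schedule below are illustrative assumptions, since the pseudocode leaves the schedule abstract.

import math
import random

def simulated_annealing(start, value, random_successor,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    """Maximize `value`, sometimes accepting downhill moves.

    A worse successor is accepted with probability exp(delta / T), which
    shrinks as the temperature T decreases.
    """
    current, t = start, t0
    while t > t_min:
        nxt = random_successor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                    # assumed geometric cooling schedule
    return current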


Simulated Annealing applet

Successful applications: circuit routing, traveling salesperson (TSP).

http://appsrv.cse.cuhk.edu.hk/~csc6200/y99/applet/SA/annealing.html

    Local beam search

Keep track of k states instead of one.
  Initially: k random states.
  Next: determine all successors of the k states.
  If any of the successors is a goal, finish.
  Else select the k best from the successors and repeat.

Major difference with random-restart search: information is shared among the k search threads.

Can suffer from lack of diversity.

Stochastic variant: choose k successors at random, with probability proportional to state success.
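A sketch of the deterministic variant in Python; random_state, successors, value, and is_goal are assumed problem-specific callables supplied by the caller.

def local_beam_search(k, random_state, successors, value, is_goal, max_iters=1000):
    """Keep the k best states; at each step pool all successors and keep the k best."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = [s2 for s in beam for s2 in successors(s)]
        for s in pool:
            if is_goal(s):
                return s
        if not pool:
            break
        beam = sorted(pool, key=value, reverse=True)[:k]
    return max(beam, key=value)   # best state found if no goal was reached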


Genetic algorithm

Successor states are generated by combining two parent states. GAs begin with a set of k randomly generated states, the population.

Each state is rated by the evaluation function, or fitness function.

Various techniques may be used to produce the next generation of states:
  Selection
  Crossover
  Mutation
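A compact Python sketch of this generation loop, with hypothetical operator names; the crossover and mutation operators at the bottom assume the 8-queens tuple encoding from the hill-climbing sketch earlier, and a suitable fitness there would be the number of non-attacking pairs, e.g. lambda ind: 28 - conflicts(ind).

import random

def genetic_algorithm(population, fitness, crossover, mutate,
                      generations=1000, mutation_rate=0.1):
    """Evolve a population with fitness-proportional selection, crossover and mutation.

    Fitness values are assumed non-negative and not all zero, so they can be
    used directly as selection weights.
    """
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(len(population)):
            mom, dad = random.choices(population, weights=weights, k=2)
            child = crossover(mom, dad)
            if random.random() < mutation_rate:
                child = mutate(child)
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

# Example operators for 8-queens states encoded as tuples of 8 row indices:
def crossover(a, b):
    cut = random.randrange(1, len(a))          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    i = random.randrange(len(ind))             # move one queen to a random row
    return ind[:i] + (random.randrange(len(ind)),) + ind[i + 1:]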


When to Use Search Techniques?

The search space is small, and
  there is no other available technique, or
  it is not worth the effort to develop a more efficient technique.

The search space is large, and
  there is no other available technique, and
  there exist good heuristics.


Summary: Informed Search

Heuristics
Best-first search algorithms
  Greedy search
  A*
  Admissible heuristics
Constructing heuristic functions
Local search algorithms