Parametrized search

Recently I wrote about some simple AI stuff that involved different kinds of search routines: breadth first, depth first, best first and a* search. All of them share enough common features that I could write just a single routine and then tune it according to the specific search strategy I wanted to use. This blog post shows how that was done. While it is a pretty basic concept, I found it interesting how much the different kinds of search strategies actually share with each other.

The interface I wanted to have is really simple:

(fn [state] 
   "solve state"
   ...)

The solver is just a function that accepts a single parameter describing the state that needs to be solved. All configuration has been done prior to this point, in a factory function that created the solver in the first place.

Breadth first and depth first searches are almost identical. They differ only in how the next node to be explored is chosen. Breadth first takes its time and proceeds through the search tree one layer at a time. It should find the optimal solution, although it might take a while to do so. Depth first search on the other hand picks one branch and goes as deep as possible before backtracking. It will usually find a solution faster than breadth first, but that solution often isn't optimal.

Both of these search strategies need the same parameters: goal?, operators and identical?. goal? recognizes when the problem has been solved, operators gives the list of possible operations for a given state and identical? is used to prune loops from the search tree.
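For illustration, here is a rough sketch of what those three parameters could look like for a toy problem where states are plain integers and the goal is to reach 10. The names and the exact contract (for example, whether operators returns the reachable states directly) are assumptions made for this post, not stilpo's documented interface:

;; toy problem: states are integers, the goal is to reach 10
(defn goal? [state]
  (= state 10))

;; assume operators returns the states reachable from the given state
(defn operators [state]
  [(+ state 1) (- state 1)])

;; two states are the same if they are equal integers
(defn identical? [state-1 state-2]
  (= state-1 state-2))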

The best first solver is just a little bit more complicated. It requires all the same parameters as the previous ones and one more: distance. This parameter is a function that estimates how far a given state is from the goal. Thus, if it's not possible to estimate the distance, this search strategy can't be used. The search will always continue along the branch that so far is closest to the goal. It will not re-evaluate that choice, unless the followed branch terminates before the goal state has been reached. In a sense, it always picks the node that is closest to the goal, without any regard to the distance it had to travel to reach that node.
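Continuing the toy problem above, a distance estimate could be as simple as the following sketch (again just an illustration, not stilpo's interface):

;; estimate of how far a state is from the goal value 10
(defn distance [state]
  (abs (- state 10)))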

a* is the last and most complex of the searches stilpo supports. It is similar to the best first solver in that it estimates the distance to the goal. But in addition to that, a* also keeps track of how long each branch is so far. Therefore it is able to follow the branch that looks like it has the shortest total path to the goal.
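In other words, a* ranks candidates by the cost of the path travelled so far plus the estimated distance that remains. A rough sketch of that ranking, assuming a path is stored as a list of states and reusing the hypothetical distance function from above:

;; estimated total cost of a path: length so far + estimated distance left
(defn a-star-rank [path]
  (+ (len path)
     (distance (get path -1))))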

The search routine through the tree boils down to a few very simple steps:

  1. check if we still have states left to check in queue/stack/big ball of mud
  2. pick one based on some semi-intelligent choice
  3. check if goal has been reached
  4. figure out what states are reachable from this state
  5. prune loops that might have formed
  6. store new states into queue/stack/big ball of mud
  7. repeat

If we're storing states in a list, the difference between breadth first and depth first search is only about which end of the list to take the next state from. If we're doing best first or a* search, we should select the state based on a ranking that is stored alongside it. While it's possible to use a simple list in this case too, it's considerably faster to use, for example, a priority queue. Luckily I didn't have to code one from scratch, but could just use the one that comes with Python.
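For example, the queue operations could be swapped for ones backed by Python's heapq module. This is just a sketch, assuming each queue entry is a (ranking, path) pair; the real code may store things differently:

(import heapq)

;; pop the entry with the smallest ranking and return its path
(defn next-path [queue]
  (get (heapq.heappop queue) 1))

;; push new (ranking, path) pairs onto the heap
(defn add-queue [queue coll]
  (for [item coll]
    (heapq.heappush queue item)))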

So I ended up with a generic search routine factory:

(defn solver [next-path goal? operators identical? add-queue
              &optional distance distance-between]
  "create a search solver"
  (fn [state]
    ...))

It creates a new search routine based on the parameters given to it. The full code is available on GitHub.

Now constructing a breadth first solver can be done with a specific factory function:

(defn breadth-first-solver [goal? operators identical?]
  "create classical breadth first solver"
  (solver (fn [queue] (.pop queue 0))
          goal? operators identical?
          (fn [queue coll] (.extend queue coll))))
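
Putting the toy problem from earlier together with this factory could look something like the following; the initial state and what exactly the solver returns are assumptions for the sake of illustration:

(setv solve (breadth-first-solver goal? operators identical?))
(setv solution (solve 0))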

Depth first is done in an almost identical way:

(defn depth-first-solver [goal? operators identical?]
  "create classical depth first solver"
  (solver (fn [queue] (.pop queue))
          goal? operators identical?
          (fn [queue coll] (.extend queue coll))))

Notice how they're almost identical. The only difference is that breadth-first-solver creates a solver that pops the first element of the list when selecting the next state to examine, while depth-first-solver pops the last element. This tiny change alone is enough to change the search strategy.

The niftiest thing in my opinion is that if somebody comes up with a new search strategy that still fits this general pattern, it can be created without changing the factory function. Just supply the correct parameters and start solving with your new quantum-solver.
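
For example, a (not particularly useful) solver that explores states in random order could be built from the same factory, assuming the same queue conventions as the breadth and depth first solvers above:

(import random)

(defn random-solver [goal? operators identical?]
  "create a solver that picks the next state at random"
  (solver (fn [queue] (.pop queue (random.randint 0 (- (len queue) 1))))
          goal? operators identical?
          (fn [queue coll] (.extend queue coll))))

Whether that is ever a good idea is another question, but it shows how little is needed to plug in a new strategy.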
