I am reading the book Artificial Intelligence: A Modern Approach. I came across this sentence describing the time complexity of uniform cost search:
Uniform-cost search is guided by path costs rather than depths, so its
complexity is not easily characterized in terms of b and d. Instead,
let C be the cost of the optimal solution, and assume that every
action costs at least ε. Then the algorithm’s worst-case time and
space complexity is O(b^(1+C/ε)), which can be much greater than b^d.
As I understand it, C is the cost of the optimal solution, and every action costs at least ε, so C / ε would be the number of steps taken to reach the destination. But I don't see how the complexity is derived.
2 Answers
If the branching factor is b, every time you expand a node, you generate b more nodes. Therefore, there are b^k nodes at depth k.

So let's suppose that the search stops after you reach level k. When this happens, the total number of nodes you'll have visited will be

1 + b + b^2 + … + b^k = (b^(k+1) − 1) / (b − 1)

That equality follows from the sum of a geometric series. It happens to be the case that b^(k+1) / (b − 1) = O(b^k), so if your goal node is in layer k, then you have to expand O(b^k) total nodes to find the one you want.
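As a quick sanity check (my own sketch, not part of the original answer), here is a small Python snippet comparing the explicit sum 1 + b + … + b^k with the closed-form geometric-series expression:

```python
# Count the nodes in a complete b-ary tree up to depth k (explicit sum)
# and compare against the closed-form geometric-series expression.
def nodes_up_to_depth(b, k):
    return sum(b**i for i in range(k + 1))

def closed_form(b, k):
    return (b**(k + 1) - 1) // (b - 1)

for b, k in [(2, 3), (3, 4), (10, 2)]:
    assert nodes_up_to_depth(b, k) == closed_form(b, k)

print(nodes_up_to_depth(2, 3))  # -> 15  (= 1 + 2 + 4 + 8)
```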
If C is your destination cost and each step gets you ε closer to the goal, the number of steps you need to take is given by C / ε + 1. The reason for the +1 is that you start at distance 0 and end at C / ε, so you take steps at distances

0, 1, 2, …, C / ε

And there are 1 + C / ε values here. Therefore, there are 1 + C / ε layers, and so the total number of states you need to expand is O(b^(1 + C / ε)).
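To get a feel for how much larger the UCS bound b^(1 + C/ε) can be than the depth-based bound b^d, here is a tiny numeric sketch; the values of b, d, C, and ε are made up for illustration:

```python
# Compare the UCS bound b^(1 + C/eps) against the depth-based bound b^d
# for illustrative (made-up) values. With many cheap actions (eps small
# relative to C), the UCS bound can be far larger than b^d.
b, d = 10, 3          # branching factor and solution depth (assumed)
C, eps = 3.0, 0.5     # optimal solution cost and minimum action cost (assumed)

ucs_bound = b ** (1 + C / eps)   # 10^(1 + 6) = 10^7
bfs_bound = b ** d               # 10^3

print(ucs_bound / bfs_bound)     # -> 10000.0
```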
Hope this helps!
templatetypedef's answer is somewhat incorrect. The +1 has nothing to do with the fact that the starting depth is 0. If every step cost is at least ε > 0 and the cost of the optimal solution is C, then the optimal solution lies at depth at most floor(C / ε). The worst-case time/space complexity, however, is O(b^(1 + floor(C / ε))). The +1 arises because in UCS we only check whether a node is a goal when we select it for expansion, not when we generate it (this is what ensures optimality). So in the worst case, we could generate the entire level of nodes that comes after the level containing the goal node; this explains the +1. In comparison, BFS applies the goal test when nodes are generated, so there is no corresponding +1 factor. This is a very important point that the first answer missed.
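To make the goal-test-on-expansion point concrete, here is a minimal UCS sketch (my own illustration, on a hypothetical graph encoded as a dict mapping each node to a list of (neighbor, cost) pairs). Note where the goal test sits: on the node popped from the frontier, not on the nodes being generated.

```python
import heapq

def ucs(graph, start, goal):
    """Minimal uniform-cost search sketch. Returns the optimal path cost,
    or None if the goal is unreachable."""
    frontier = [(0, start)]          # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:             # goal test on EXPANSION, not generation
            return cost
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step, neighbor))
    return None

# Usage: a tiny graph where the cheap path to G goes through B, so UCS
# must keep G in the frontier instead of returning the first generated path.
g = {'A': [('B', 1), ('C', 5)], 'B': [('G', 1)], 'C': [('G', 1)]}
print(ucs(g, 'A', 'G'))  # -> 2
```

If the goal test were applied when G is first generated, the search could return a suboptimal path; deferring the test until G is selected for expansion is exactly what can force UCS to generate one extra layer of nodes.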