Wednesday, 17 February 2016

graph theory - Why do we use priority queues when implementing Dijkstra's Algorithm?

Consider the running time for adding a new element to a sorted list while keeping it sorted. If the list is an array, you can find the insertion point by binary search in $O(\log n)$ steps, where $n$ is the current size of the list, but then you have to make room for the new element by shifting every element after it one step back, which takes $n/2$ steps on average. Alternatively, you could use a linked list, but then binary search is not available: finding the insertion point takes $n/2$ steps on average (though the actual insertion is then $O(1)$). Either way, insertion costs $O(n)$. For a properly implemented priority queue, insertion is $O(\log n)$, and so is fixing up the queue after removing the smallest member.
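To make the point concrete, here is a minimal sketch of Dijkstra's algorithm using Python's binary-heap module `heapq`, so that each insertion and extraction of the minimum costs $O(\log n)$. The graph representation (a dict of adjacency lists) and the stale-entry trick are implementation choices of this sketch, not the only way to do it:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source, using a binary heap.

    graph: dict mapping each node to a list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]          # heap entries are (distance, node)
    while heap:
        d, u = heapq.heappop(heap)            # O(log n) extract-min
        if d > dist.get(u, float('inf')):
            continue  # stale entry: a shorter path to u was found already
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # O(log n) insertion
    return dist
```

Since `heapq` has no decrease-key operation, this version simply pushes a fresh entry when a shorter path is found and skips stale entries on extraction; the heap may then hold up to $O(m)$ entries, but each operation remains logarithmic.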
