Dijkstra's Algorithm

In Chapter 20, we discussed Prim's algorithm for finding the minimum spanning tree (MST) of a weighted undirected graph: We build it one edge at a time, always taking next the shortest edge that connects a vertex on the MST to a vertex not yet on the MST. We can use a nearly identical scheme to compute an SPT. We begin by putting the source on the SPT; then, we build the SPT one edge at a time, always taking next the edge that gives a shortest path from the source to a vertex not on the SPT. In other words, we add vertices to the SPT in order of their distance (through the SPT) from the start vertex. This method is known as Dijkstra's algorithm. As usual, we need to make a distinction between the algorithm at the level of abstraction in this informal description and various concrete implementations (such as Program 21.1) that differ primarily in graph representation and priority-queue implementations, even though such a distinction is not always made in the literature. We shall consider other implementations and discuss their relationships with Program 21.1 after establishing that Dijkstra's algorithm correctly performs the single-source shortest-paths computation.

Property 21.2 Dijkstra's algorithm solves the single-source shortest-paths problem in networks that have nonnegative weights.

Proof: Given a source vertex s, we have to establish that the tree path from the root s to each vertex x in the tree computed by Dijkstra's algorithm corresponds to a shortest path in the graph from s to x. This fact follows by induction. Assuming that the subtree so far computed has the property, we need only to prove that adding a new vertex x adds a shortest path to that vertex. But all other paths to x must begin with a tree path followed by an edge to a vertex not on the tree. By construction, all such paths are longer than the one from s to x that is under consideration.
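The scheme just described can be sketched as follows. This is a minimal illustration using java.util.PriorityQueue, not Program 21.1 itself; the Edge class, method names, and adjacency-list representation are assumptions made for the example.

```java
import java.util.*;

// A sketch of the scheme described above: grow the SPT by repeatedly
// moving to the tree the fringe vertex closest (through the tree) to
// the source. Class and method names are illustrative assumptions.
public class DijkstraSketch {
    static class Edge {
        int to; double wt;
        Edge(int to, double wt) { this.to = to; this.wt = wt; }
    }

    static double[] shortestPaths(List<List<Edge>> adj, int s) {
        int V = adj.size();
        double[] dist = new double[V];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        boolean[] done = new boolean[V];                   // on the SPT?
        dist[s] = 0.0;
        // queue entries are {distance-from-source, vertex}
        PriorityQueue<double[]> pq =
            new PriorityQueue<>(Comparator.comparingDouble(a -> a[0]));
        pq.add(new double[] { 0.0, s });
        while (!pq.isEmpty()) {
            int v = (int) pq.poll()[1];
            if (done[v]) continue;                         // stale entry; skip
            done[v] = true;                                // v joins the SPT
            for (Edge e : adj.get(v))
                if (!done[e.to] && dist[v] + e.wt < dist[e.to]) {
                    dist[e.to] = dist[v] + e.wt;           // shorter path found
                    pq.add(new double[] { dist[e.to], e.to });
                }
        }
        return dist;
    }
}
```

Vertices come off the queue, and hence join the tree, in order of their distance from the source; entries left over from earlier, longer paths are simply skipped when they surface.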
The same argument shows that Dijkstra's algorithm solves the source-sink shortest-paths problem, if we start at the source and stop when the sink comes off the priority queue.

The proof breaks down if the edge weights could be negative, because it assumes that a path's length does not decrease when we add more edges to the path. In a network with negative edge weights, this assumption is not valid because any edge that we encounter might lead to some tree vertex and might have a sufficiently large negative weight to give a path to that vertex shorter than the tree path. We consider this defect later (see the figures).

The first figure below shows the evolution of an SPT for a sample graph when computed with Dijkstra's algorithm; the second shows an oriented drawing of a larger SPT. Although Dijkstra's algorithm differs from Prim's MST algorithm in only the choice of priority, SPTs are different in character from MSTs. They are rooted at the start vertex and all edges are directed away from the root, whereas MSTs are unrooted and undirected. We sometimes represent MSTs as directed, rooted trees when we use Prim's algorithm, but such structures are still different in character from SPTs (compare the oriented drawings in the figures). Indeed, the nature of the SPT depends somewhat on the choice of start vertex as well, as depicted in the third figure.

Figure: Dijkstra's algorithm. This sequence depicts the construction of a shortest-paths spanning tree rooted at vertex 0 by Dijkstra's algorithm for a sample network. Thick black edges in the network diagrams are tree edges, and thick gray edges are fringe edges. Oriented drawings of the tree as it grows are shown in the center, and a list of fringe edges is given on the right. The first step is to add 0 to the tree and the edges leaving it, 0-1 and 0-5, to the fringe (top).
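A tiny made-up network illustrates the defect. In the three-vertex digraph below (an assumption constructed for illustration, with edges 0->1 of weight 2, 0->2 of weight 5, and 2->1 of weight -4), vertex 1 joins the tree at distance 2 before the edge 2->1 is ever examined, so the algorithm misses the shorter path 0-2-1 of length 5 + (-4) = 1:

```java
// Demonstration that Dijkstra's algorithm can fail with a negative edge:
// once a vertex is on the tree, its distance is never revised.
public class NegativeWeightDemo {
    // wt[v][w] is the weight of edge v->w, or +infinity if absent
    static double[] dijkstra(double[][] wt, int s) {
        int V = wt.length;
        double[] dist = new double[V];
        boolean[] done = new boolean[V];
        java.util.Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[s] = 0.0;
        for (int k = 0; k < V; k++) {
            int v = -1;                        // closest nontree vertex
            for (int u = 0; u < V; u++)
                if (!done[u] && (v == -1 || dist[u] < dist[v])) v = u;
            if (dist[v] == Double.POSITIVE_INFINITY) break;
            done[v] = true;                    // v's distance is now final
            for (int w = 0; w < V; w++)
                if (!done[w] && dist[v] + wt[v][w] < dist[w])
                    dist[w] = dist[v] + wt[v][w];
        }
        return dist;
    }

    public static void main(String[] args) {
        double INF = Double.POSITIVE_INFINITY;
        double[][] wt = {
            { INF, 2.0, 5.0 },   // 0->1 = 2, 0->2 = 5
            { INF, INF, INF },
            { INF, -4.0, INF },  // 2->1 = -4: path 0-2-1 has length 1
        };
        // reports 2.0 for vertex 1, although the true shortest distance is 1.0
        System.out.println(dijkstra(wt, 0)[1]);
    }
}
```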
Second, we move the shortest of those edges, 0-5, from the fringe to the tree and check the edges leaving it: The edge 5-4 is added to the fringe, and the edge 5-1 is discarded because it is not part of a shorter path from 0 to 1 than the known path 0-1 (second from top). The priority of 5-4 on the fringe is the length of the path from 0 that it represents, 0-5-4. Third, we move 0-1 from the fringe to the tree, add 1-2 to the fringe, and discard 1-4 (third from top). Fourth, we move 5-4 from the fringe to the tree, add 4-3 to the fringe, and replace 1-2 with 4-2 because 0-5-4-2 is a shorter path than 0-1-2 (fourth from top). We keep at most one edge to any vertex on the fringe, choosing the one on the shortest path from 0. We complete the computation by moving 4-2 and then 4-3 from the fringe to the tree (bottom).

Figure: Shortest-paths spanning tree. This figure illustrates the progress of Dijkstra's algorithm in solving the single-source shortest-paths problem in a random Euclidean near-neighbor digraph (with directed edges in both directions corresponding to each line drawn), in the same style as Figures 18.13, 18.24, and 20.9. The search tree is similar in character to BFS because vertices tend to be connected to one another by short paths, but it is slightly deeper and less broad because distances lead to slightly longer paths than path lengths.

Figure: SPT examples. These three examples show growing SPTs for three different source locations: left edge (top), upper-left corner (center), and center (bottom).

Dijkstra's original implementation, which is suitable for dense graphs, is precisely like Prim's MST algorithm. Specifically, we simply change the assignment of the priority P in Program 20.6 from P = e.wt() (the edge weight) to P = wt[v] + e.wt() (the distance from the source to the edge's destination).
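The one-line nature of this change can be seen in a dense-graph sketch that computes either tree from the same scan; this is an illustration in the spirit of Program 20.6, not the program itself, and the names here are assumptions:

```java
// Dense-graph priority-first search in which the only difference between
// Prim's algorithm and Dijkstra's algorithm is the priority P assigned
// to each nontree vertex. Names are illustrative assumptions.
public class DensePFS {
    // wt[v][w] is the weight of edge v->w, or +infinity if absent
    static double[] pfs(double[][] wt, int s, boolean dijkstra) {
        int V = wt.length;
        double[] prio = new double[V];       // current priority of each vertex
        boolean[] done = new boolean[V];
        java.util.Arrays.fill(prio, Double.POSITIVE_INFINITY);
        prio[s] = 0.0;
        for (int k = 0; k < V; k++) {
            int v = -1;                      // scan for best nontree vertex
            for (int u = 0; u < V; u++)
                if (!done[u] && (v == -1 || prio[u] < prio[v])) v = u;
            if (prio[v] == Double.POSITIVE_INFINITY) break;
            done[v] = true;
            for (int w = 0; w < V; w++) {
                // Prim:     P = wt[v][w]             (the edge weight)
                // Dijkstra: P = prio[v] + wt[v][w]   (distance from the source)
                double P = dijkstra ? prio[v] + wt[v][w] : wt[v][w];
                if (!done[w] && P < prio[w]) prio[w] = P;
            }
        }
        return prio;   // distances (Dijkstra) or tree-edge weights (Prim)
    }
}
```

The doubly nested loops scan all V vertices on each of V passes, which is the source of the quadratic running time discussed next.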
This change gives the classical implementation of Dijkstra's algorithm: We grow an SPT one edge at a time, each time updating the distance to the tree of all vertices adjacent to its destination, while at the same time checking all the nontree vertices to find an edge to move to the tree whose destination vertex is a nontree vertex of minimal distance from the source.

Property 21.3 With Dijkstra's algorithm, we can find any SPT in a dense network in linear time.

Proof: As for Prim's MST algorithm, it is immediately clear, from inspection of the code of Program 20.6, that the nested loops mean that the running time is proportional to V^2, which is linear for dense graphs.

For sparse graphs, we can do better by using a linked-list representation and a priority queue. Implementing this approach simply amounts to viewing Dijkstra's algorithm as a generalized graph-searching method that differs from depth-first search (DFS), from breadth-first search (BFS), and from Prim's MST algorithm in only the rule used to add edges to the tree. As in Chapter 20, we keep edges that connect tree vertices to nontree vertices on a generalized queue called the fringe, use a priority queue to implement the generalized queue, and provide for updating priorities so as to encompass DFS, BFS, and Prim's algorithm in a single implementation. This priority-first search (PFS) scheme also encompasses Dijkstra's algorithm. That is, changing the assignment of P in Program 20.7 to P = wt[v] + e.wt() (the distance from the source to the edge's destination) gives an implementation of Dijkstra's algorithm that is suitable for sparse graphs.

Program 21.1 is an alternative PFS implementation for sparse graphs that is slightly simpler than Program 20.7 and that directly matches the informal description of Dijkstra's algorithm given at the beginning of this section.
It differs from Program 20.7 in that it initializes the priority queue with all the vertices in the network and maintains the queue with the aid of a sentinel value for those vertices that are neither on the tree nor on the fringe (unseen vertices with sentinel values); in contrast, Program 20.7 keeps on the priority queue only those vertices that are reachable by a single edge from the tree. Keeping all the vertices on the queue simplifies the code but can incur a small performance penalty for some graphs (see Exercise 21.31).

The general results that we considered in Chapter 20 concerning the performance of priority-first search (PFS) give us specific information about the performance of these implementations of Dijkstra's algorithm for sparse graphs (Program 21.1 and Program 20.7, suitably modified). For reference, we restate those results in the present context. Since the proofs do not depend on the priority function, they apply without modification. They are worst-case results that apply to both programs, although Program 20.7 may be more efficient for many classes of graphs because it maintains a smaller fringe.

Property 21.4 For all networks and all priority functions, we can compute a spanning tree with PFS in time proportional to the time required for V insert, V delete-the-minimum, and E decrease-key operations in a priority queue of size at most V.

Proof: This fact is immediate from the priority-queue-based implementations in Program 20.7 and Program 21.1. It represents a conservative upper bound because the size of the priority queue is often much smaller than V, particularly for Program 20.7.

Property 21.5 With a PFS implementation of Dijkstra's algorithm that uses a heap for the priority-queue implementation, we can compute any SPT in time proportional to E lg V.
Proof: This result is a direct consequence of Property 21.4.

Property 21.6 Given a graph with V vertices and E edges, let d denote the density E/V. If d < 2, then the running time of Dijkstra's algorithm is proportional to V lg V. Otherwise, we can improve the worst-case running time by a factor of lg(E/V), to O(E lg_d V) (which is linear if E is at least V^(1+ε)), by using a ⌈E/V⌉-ary heap for the priority queue.

Proof: This result directly mirrors Property 20.12 and the multiway-heap priority-queue implementation discussed directly thereafter.

Table 21.1. Priority-first search algorithms
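As a concrete illustration of Property 21.6 (the numbers here are chosen arbitrarily for the example), for a graph with V = 2^20 vertices and density d = E/V = 32, a d-ary heap cuts the lg V = 20 factor to lg_d V = lg V / lg d = 20/5 = 4:

```java
// Worked illustration of Property 21.6: a d-ary heap replaces the
// per-edge lg V factor with lg_d V = lg V / lg d, an improvement
// by a factor of lg d = lg(E/V).
public class DaryHeapCost {
    static double log2(double x) { return Math.log(x) / Math.log(2.0); }

    // per-edge cost factor with a d-ary priority queue on V vertices
    static double costFactor(int V, int d) { return log2(V) / log2(d); }

    public static void main(String[] args) {
        int V = 1 << 20;                        // 2^20 vertices
        int d = 32;                             // density E/V
        System.out.println(costFactor(V, 2));   // binary heap: lg V = 20
        System.out.println(costFactor(V, d));   // 32-ary heap: 20/5 = 4
    }
}
```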
Of these, the first is primarily of pedagogical value, the second is the most refined of the three, and the third is perhaps the simplest. This framework already describes 16 different implementations of classical graph-search algorithms; when we factor in different priority-queue implementations, the possibilities multiply further. This proliferation of networks, algorithms, and implementations underscores the utility of the general statements about performance in Properties 21.4 through 21.6, which are also summarized in Table 21.2.

Table 21.2. Cost of implementations of Dijkstra's algorithm
Exercises
