All-Pairs Shortest Paths

In this section, we consider two classes that solve the all-pairs shortest-paths problem. The algorithms that we implement directly generalize two basic algorithms that we considered earlier for the transitive-closure problem. The first method is to run Dijkstra's algorithm from each vertex to get the shortest paths from that vertex to each of the others. If we implement the priority queue with a heap, the worst-case running time for this approach is proportional to VE lg V, and we can improve this bound to VE for many types of networks by using a d-ary heap. The second method, which allows us to solve the problem directly in time proportional to V^{3}, is an extension of Warshall's algorithm that is known as Floyd's algorithm.

Both of these classes implement an abstract shortest-paths ADT interface for finding shortest distances and paths. This interface, which is shown in Program 21.2, is a generalization to weighted digraphs of the abstract transitive-closure interface for connectivity queries in digraphs that we studied earlier. In both class implementations, the constructor solves the all-pairs shortest-paths problem and saves the result in private data fields to support query methods that return the shortest-path length from one given vertex to another and either the first or last edge on the path. Implementing such an ADT is a primary reason to use all-pairs shortest-paths algorithms in practice.

Program 21.3 is a sample client program that uses the all-pairs shortest-paths ADT interface to find the weighted diameter of a network. It checks all pairs of vertices to find the one for which the shortest-path length is longest; then, it traverses the path, edge by edge. The screenshot shows the path computed by this program for our Euclidean network example.
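The diameter computation just described can be sketched in miniature as follows. This is a stand-alone illustration, not the book's Program 21.3: instead of going through the ADT interface, it scans a precomputed all-shortest-paths distances matrix (here a plain double[][], with infinity marking unreachable pairs) for its largest finite entry. The class and method names are illustrative assumptions.

```java
public class Diameter {
    // Weighted diameter: the largest finite entry in an
    // all-shortest-paths distances matrix, i.e., the length
    // of the longest of the shortest paths in the network.
    static double diameter(double[][] dist) {
        double max = 0.0;
        for (int s = 0; s < dist.length; s++)
            for (int t = 0; t < dist.length; t++)
                if (dist[s][t] < Double.POSITIVE_INFINITY && dist[s][t] > max)
                    max = dist[s][t];
        return max;
    }

    public static void main(String[] args) {
        double INF = Double.POSITIVE_INFINITY;
        // A tiny 3-vertex example: row s, column t holds the
        // shortest-path length from s to t (INF if no path).
        double[][] dist = {
            { 0.0, 0.5, 0.9 },
            { INF, 0.0, 0.4 },
            { INF, INF, 0.0 }
        };
        System.out.println(diameter(dist));  // prints 0.9
    }
}
```

A full client would also recover the pair of vertices achieving the maximum and then trace the path edge by edge via the path query method, as Program 21.3 does.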
Screenshot: Diameter of a network. The largest entry in a network's all-shortest-paths matrix is the diameter of the network: the length of the longest of the shortest paths, depicted here for our sample Euclidean network.

The goal of the algorithms in this section is to support constant-time implementations of the query methods. Typically, we expect to have a huge number of such requests, so we are willing to invest substantial resources in private data fields and preprocessing in the constructor to be able to answer the queries quickly. Both of the algorithms that we consider use space proportional to V^{2} for the private data fields. The primary disadvantage of this general approach is that, for a huge network, we may not have that much space available (or we might not be able to afford the requisite preprocessing time). In principle, our interface gives us the latitude to trade off preprocessing time and space for query time. If we expect only a few queries, we can do no preprocessing and simply run a single-source algorithm for each query, but intermediate situations require more advanced algorithms (see Exercises 21.48 through 21.50). This problem generalizes one that challenged us earlier: the problem of supporting fast reachability queries in limited space.

The first all-pairs shortest-paths ADT method implementation that we consider solves the problem by using Dijkstra's algorithm to solve the single-source problem for each vertex. In Java, we can express the method directly, as shown in Program 21.4: We build an array of GraphSPT objects, one to solve the single-source problem for each vertex. This method generalizes the BFS-based method for unweighted undirected graphs that we considered earlier. It is also similar to our use of a DFS that starts at each vertex to compute the transitive closure of unweighted digraphs, in Program 19.4.
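The run-Dijkstra-from-each-vertex approach can be sketched as follows. This is a minimal self-contained version in the spirit of Program 21.4, not the book's code: it uses a simple edge-list input and java.util.PriorityQueue rather than the book's Network and GraphSPT classes, and the class name AllPairsDijkstra is an illustrative assumption.

```java
import java.util.*;

public class AllPairsDijkstra {
    final double[][] wt;  // wt[s][t]: shortest-path length from s to t

    // edges: rows of {from, to, weight}; weights must be nonnegative.
    AllPairsDijkstra(int V, double[][] edges) {
        List<double[]>[] adj = new List[V];
        for (int v = 0; v < V; v++) adj[v] = new ArrayList<>();
        for (double[] e : edges)
            adj[(int) e[0]].add(new double[]{ e[1], e[2] });  // {to, weight}
        wt = new double[V][];
        for (int s = 0; s < V; s++)       // one single-source solve per vertex
            wt[s] = dijkstra(V, adj, s);
    }

    // Standard heap-based Dijkstra from source s.
    static double[] dijkstra(int V, List<double[]>[] adj, int s) {
        double[] dist = new double[V];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[s] = 0.0;
        PriorityQueue<double[]> pq =
            new PriorityQueue<>(Comparator.comparingDouble(a -> a[1]));
        pq.add(new double[]{ s, 0.0 });
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            int v = (int) top[0];
            if (top[1] > dist[v]) continue;  // stale queue entry; skip it
            for (double[] e : adj[v]) {      // relax each edge leaving v
                int w = (int) e[0];
                if (dist[v] + e[1] < dist[w]) {
                    dist[w] = dist[v] + e[1];
                    pq.add(new double[]{ w, dist[w] });
                }
            }
        }
        return dist;
    }

    double dist(int s, int t) { return wt[s][t]; }

    public static void main(String[] args) {
        // Edge 0->2 of weight 1.0 is beaten by the path 0->1->2 (0.9).
        double[][] edges = { {0, 1, 0.5}, {1, 2, 0.4}, {0, 2, 1.0} };
        AllPairsDijkstra sp = new AllPairsDijkstra(3, edges);
        System.out.println(sp.dist(0, 2));  // prints 0.9
    }
}
```

Note that the wt rows together form exactly the distances matrix discussed below; a full implementation would also keep each shortest-paths tree to answer path queries.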
Property 21.7 With Dijkstra's algorithm, we can find all shortest paths in a network that has nonnegative weights in time proportional to VE log_{d} V, where d = 2 if E < 2V, and d = E/V otherwise.

Proof: Immediate from Property 21.6.

As with our bounds for the single-source shortest-paths and MST problems, this bound is conservative; a running time of VE is likely for typical graphs.

To compare this implementation with others, it is useful to study the matrices implicit in the array-of-arrays structure of the private data fields. The wt arrays form precisely the distances matrix that we considered earlier: The entry in row s and column t is the length of the shortest path from s to t. As illustrated in Figures 21.8 and 21.9, the spt arrays form the transpose of the paths matrix: The entry in row s and column t is the last edge on the shortest path from s to t. For dense graphs, we could use an adjacency-matrix representation and avoid computing the reverse graph by implicitly transposing the matrix (interchanging the row and column indices), as in Program 19.7. Developing an implementation along these lines is an interesting coding exercise and leads to a compact implementation (see Exercise 21.43); however, a different approach, which we consider next, admits an even more compact implementation.
The method of choice for solving the all-pairs shortest-paths problem in dense graphs, which was developed by R. Floyd, is precisely the same as Warshall's method, except that instead of using the logical or operation to keep track of the existence of paths, it checks distances for each edge to determine whether that edge is part of a new shorter path. Indeed, as we have noted, Floyd's and Warshall's algorithms are identical in the proper abstract setting (see Sections 19.3 and 21.1).

Program 21.5 is an all-pairs shortest-paths ADT implementation that uses Floyd's algorithm. It explicitly uses the matrices as private data fields: a V-by-V array of arrays d for the distances matrix, and another V-by-V array of arrays p for the paths matrix. For every pair of vertices s and t, the constructor sets d[s][t] to the shortest-path length from s to t (to be returned by the dist method) and p[s][t] to the index of the next vertex on the shortest path from s to t (to be returned by the path method). The implementation is based upon the path relaxation operation that we considered earlier.
Property 21.8 With Floyd's algorithm, we can find all shortest paths in a network in time proportional to V^{3}.

Proof: The running time is immediate from inspection of the code. We prove that the algorithm is correct by induction in precisely the same way as we did for Warshall's algorithm. The ith iteration of the loop computes a shortest path from s to t in the network that does not include any vertices with indices greater than i (except possibly the endpoints s and t). Assuming this fact to be true for the ith iteration of the loop, we prove it to be true for the (i+1)st iteration of the loop. A shortest path from s to t that does not include any vertices with indices greater than i+1 is either (i) a path from s to t that does not include any vertices with indices greater than i, of length d[s][t], that was found on a previous iteration of the loop, by the inductive hypothesis; or (ii) a path comprising a path from s to i+1 and a path from i+1 to t, neither of which includes any vertices with indices greater than i, in which case the inner loop sets d[s][t] to the sum of their lengths.

The screenshot is a detailed trace of Floyd's algorithm on our sample network. If we convert each blank entry to 0 (to indicate the absence of an edge) and convert each nonblank entry to 1 (to indicate the presence of an edge), then these matrices describe the operation of Warshall's algorithm in precisely the same manner as in the earlier screenshot. For Floyd's algorithm, the nonblank entries indicate more than the existence of a path; they give information about the shortest known path. An entry in the distances matrix is the length of the shortest known path connecting the vertices corresponding to the given row and column; the corresponding entry in the paths matrix gives the next vertex on that path.
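The triple loop just analyzed can be sketched compactly as follows. This is an illustrative stand-alone version in the spirit of Program 21.5, not the book's code: it takes an adjacency matrix of edge weights (with infinity marking absent edges) instead of the book's Network class, and the class name Floyd is an assumption. The entry p[s][t] is the next vertex on the shortest path from s to t, matching the paths matrix described above.

```java
public class Floyd {
    final double[][] d;  // d[s][t]: length of shortest known path s to t
    final int[][] p;     // p[s][t]: next vertex on that path

    // adj[s][t]: weight of edge s->t, or POSITIVE_INFINITY if absent.
    Floyd(double[][] adj) {
        int V = adj.length;
        d = new double[V][V];
        p = new int[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++) {
                d[s][t] = (s == t) ? 0.0 : adj[s][t];
                p[s][t] = t;  // direct edge (or a placeholder if none)
            }
        // Iteration i admits vertex i as an intermediate vertex.
        for (int i = 0; i < V; i++)
            for (int s = 0; s < V; s++)
                for (int t = 0; t < V; t++)
                    if (d[s][i] + d[i][t] < d[s][t]) {  // path relaxation
                        d[s][t] = d[s][i] + d[i][t];
                        p[s][t] = p[s][i];  // first step now heads toward i
                    }
    }

    public static void main(String[] args) {
        double INF = Double.POSITIVE_INFINITY;
        // Direct edge 0->2 (1.0) is beaten by the path 0-1-2 (0.9).
        double[][] adj = {
            { 0.0, 0.5, 1.0 },
            { INF, 0.0, 0.4 },
            { INF, INF, 0.0 }
        };
        Floyd f = new Floyd(adj);
        System.out.println(f.d[0][2]);  // prints 0.9
        System.out.println(f.p[0][2]);  // prints 1 (next vertex after 0)
    }
}
```

Because infinity propagates correctly through double addition, no special case is needed for missing edges; a production version would also guard against negative cycles, under which these matrices are not well defined.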
As the matrices become filled with nonblank entries, running Warshall's algorithm amounts to just double-checking that new paths connect pairs of vertices already known to be connected by a path; in contrast, Floyd's algorithm must compare (and update if necessary) each new path to see whether the new path leads to shorter paths.

Screenshot: Floyd's algorithm. This sequence shows the construction of the all-pairs shortest-paths matrices with Floyd's algorithm. For i from 0 to 5 (top to bottom), we consider, for all s and t, all of the paths from s to t having no intermediate vertices greater than i (the shaded vertices). Initially, the only such paths are the network's edges, so the distances matrix (center) is the graph's adjacency matrix and the paths matrix (right) is set with p[s][t] = t for each edge s-t. For vertex 0 (top), the algorithm finds that 3-0-1 is shorter than the sentinel value that is present because there is no edge 3-1 and updates the matrices accordingly. It does not do so for paths such as 3-0-5, which is not shorter than the known path 3-5. Next the algorithm considers paths through 0 and 1 (second from top) and finds the new shorter paths 0-1-2, 0-1-4, 3-0-1-2, 3-0-1-4, and 5-1-2. The third row from the top shows the updates corresponding to shorter paths through 0, 1, and 2, and so forth. Black numbers overstriking gray ones in the matrices indicate situations where the algorithm finds a shorter path than one it found earlier. For example, .91 overstrikes 1.37 in row 3 and column 2 in the bottom diagram because the algorithm discovered that 3-5-4-2 is shorter than 3-0-1-2.

Comparing the worst-case bounds on the running times of Dijkstra's and Floyd's algorithms, we can draw the same conclusion for these all-pairs shortest-paths algorithms as we did for the corresponding transitive-closure algorithms earlier. Running Dijkstra's algorithm from each vertex is clearly the method of choice for sparse networks, because the running time is close to VE.
As density increases, Floyd's algorithm, which always takes time proportional to V^{3}, becomes competitive (see Exercise 21.67); it is widely used because it is so simple to implement. A more fundamental distinction between the algorithms, which we examine in detail later, is that Floyd's algorithm is effective even in networks that have negative weights (provided that there are no negative cycles). As we noted earlier, Dijkstra's method does not necessarily find shortest paths in such graphs.

The classical solutions to the all-pairs shortest-paths problem that we have described presume that we have space available to hold the distances and paths matrices. Huge sparse graphs, where we cannot afford to have any V-by-V matrices, present another set of challenging and interesting problems. As we have seen, it is an open problem to reduce this space cost to be proportional to V while still supporting constant-time shortest-path-length queries. We found the analogous problem to be difficult even for the simpler reachability problem (where we are satisfied with learning in constant time whether there is any path connecting a given pair of vertices), so we cannot expect a simple solution for the all-pairs shortest-paths problem. Indeed, the number of different shortest-path lengths is, in general, proportional to V^{2}, even for sparse graphs. That value, in some sense, measures the amount of information that we need to process and perhaps indicates that when we do have restrictions on space, we must expect to spend more time on each query (see Exercises 21.48 through 21.50).

Exercises
