Reachability and Transitive Closure

To develop efficient solutions to reachability problems in digraphs, we begin with the following fundamental definition.

Definition 19.5 The transitive closure of a digraph is a digraph with the same vertices but with an edge from s to t in the transitive closure if and only if there is a directed path from s to t in the given digraph.

In other words, a digraph's transitive closure has an edge from each vertex to all the vertices reachable from that vertex in the digraph. Clearly, the transitive closure embodies all the requisite information for solving reachability problems. The figure below illustrates a small example.

Figure: Transitive closure. This digraph (top) has just eight directed edges, but its transitive closure (bottom) shows that there are directed paths connecting 19 of the 30 pairs of vertices. Structural properties of the digraph are reflected in the transitive closure. For example, rows 0, 1, and 2 in the adjacency matrix for the transitive closure are identical (as are columns 0, 1, and 2) because those vertices are on a directed cycle in the digraph.

One appealing way to understand the transitive closure is based on adjacency-matrix digraph representations and on the following basic computational problem:

Boolean matrix multiplication A Boolean matrix is a matrix whose entries are all binary values, either false or true. Given two Boolean matrices A and B, compute a Boolean product matrix C, using the logical and and or operations instead of the arithmetic operations * and +, respectively.

The textbook algorithm for computing the product of two V-by-V matrices computes, for each s and t, the dot product of row s in the first matrix and column t in the second matrix, as follows:

  for (s = 0; s < V; s++)
    for (t = 0; t < V; t++)
      for (i = 0, C[s][t] = 0; i < V; i++)
        C[s][t] += A[s][i]*B[i][t];

In matrix notation, we write this operation simply as C = A * B.
This operation is defined for matrices comprising any type of entry for which 0, +, and * are defined. In particular, if the matrix entries are either true or false and we interpret a+b to be the logical or operation and a*b to be the logical and operation, then we have Boolean matrix multiplication. In Java, we can use the following version:

  for (s = 0; s < V; s++)
    for (t = 0; t < V; t++)
      for (i = 0, C[s][t] = false; i < V; i++)
        if (A[s][i] && B[i][t]) C[s][t] = true;

To compute C[s][t] in the product, we initialize it to false, then set it to true if we find some value i for which A[s][i] and B[i][t] are both true. Running this computation is equivalent to setting C[s][t] to true if and only if the bitwise logical and of row s in A with column t in B has a nonzero entry.

Now suppose that A is the adjacency matrix of a digraph and that we use the preceding code to compute C = A * A = A^2 (simply by changing the reference to B in the code into a reference to A). Reading the code in terms of the interpretation of the adjacency-matrix entries immediately tells us what it computes: For each pair of vertices s and t, we put an edge from s to t in C if and only if there is some vertex i for which there is both an edge from s to i and an edge from i to t in A. In other words, directed edges in A^2 correspond precisely to directed paths of length 2 in A. If we include self-loops at every vertex in A, then A^2 also has the edges of A; otherwise, it does not. This relationship between Boolean matrix multiplication and paths in digraphs is illustrated in the figure below. It leads immediately to an elegant method for computing the transitive closure of any digraph.

Figure: Squaring an adjacency matrix. If we put 0s on the diagonal of a digraph's adjacency matrix, the square of the matrix represents a graph with an edge corresponding to each path of length 2 (top).
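As a concrete illustration of this correspondence, the following self-contained Java sketch (class name and three-vertex sample digraph are ours, not from the text) squares the adjacency matrix of a digraph with edges 0-1 and 1-2 and no self-loops; the square has an edge exactly for the one path of length 2:

```java
public class BooleanSquare {
    // Boolean matrix product C = A * B: || plays the role of + and && of *.
    static boolean[][] multiply(boolean[][] a, boolean[][] b) {
        int V = a.length;
        boolean[][] c = new boolean[V][V];          // entries start out false
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                for (int i = 0; i < V; i++)
                    if (a[s][i] && b[i][t]) c[s][t] = true;
        return c;
    }

    public static void main(String[] args) {
        boolean[][] a = new boolean[3][3];
        a[0][1] = true;                             // edge 0->1
        a[1][2] = true;                             // edge 1->2
        boolean[][] a2 = multiply(a, a);
        // A^2 has an edge exactly for each path of length 2 in A.
        System.out.println(a2[0][2]);               // path 0->1->2: prints true
        System.out.println(a2[0][1]);               // no length-2 path: prints false
    }
}
```

Since the digraph has no self-loops, the original edges 0-1 and 1-2 do not appear in the square, exactly as the text describes.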
If we put 1s on the diagonal, the square of the matrix represents a graph with an edge corresponding to each path of length 1 or 2 (bottom).

Property 19.6 We can compute the transitive closure of a digraph by constructing the latter's adjacency matrix A, adding self-loops for every vertex, and computing A^V.

Proof: Continuing the argument in the previous paragraph, A^3 has an edge for every path of length less than or equal to 3 in the digraph, A^4 has an edge for every path of length less than or equal to 4 in the digraph, and so forth. We do not need to consider paths of length greater than V because of the pigeonhole principle: Any such path must revisit some vertex (since there are only V of them) and therefore adds no information to the transitive closure, because the same two vertices are connected by a directed path of length less than V (which we could obtain by removing the cycle to the revisited vertex).

The figure below shows the adjacency-matrix powers for a sample digraph converging to the transitive closure. This method requires V matrix multiplications, each of which takes time proportional to V^3, for a grand total of V^4. We can actually compute the transitive closure for any digraph with just lg V Boolean matrix-multiplication operations: We compute A^2, A^4, A^8, ... until we reach an exponent greater than or equal to V. As shown in the proof of Property 19.6, A^t = A^V for any t > V; so the result of this computation, which requires time proportional to V^3 lg V, is A^V, the transitive closure.

Figure: Adjacency-matrix powers and directed paths. This sequence shows the first, second, third, and fourth powers (right, top to bottom) of the adjacency matrix at the top right, which give graphs with edges for each of the paths of length at most 1, 2, 3, and 4, respectively (left, top to bottom), in the graph that the matrix represents.
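The repeated-squaring method just described can be sketched as follows in Java (class name and three-vertex example are ours): add self-loops, then square lg V times. For a directed 3-cycle, every vertex reaches every other, so the closure is all true:

```java
public class RepeatedSquaring {
    // One Boolean matrix squaring: c = a * a.
    static boolean[][] square(boolean[][] a) {
        int V = a.length;
        boolean[][] c = new boolean[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                for (int i = 0; i < V; i++)
                    if (a[s][i] && a[i][t]) c[s][t] = true;
        return c;
    }

    // Transitive closure by computing A^2, A^4, A^8, ... up to at least A^V.
    static boolean[][] closure(boolean[][] adj) {
        int V = adj.length;
        boolean[][] c = new boolean[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                c[s][t] = adj[s][t] || s == t;      // self-loops preserve shorter paths
        for (int len = 1; len < V; len *= 2)        // lg V squarings suffice
            c = square(c);
        return c;
    }

    public static void main(String[] args) {
        boolean[][] adj = new boolean[3][3];
        adj[0][1] = true; adj[1][2] = true; adj[2][0] = true;   // directed 3-cycle
        boolean[][] tc = closure(adj);
        System.out.println(tc[0][2] && tc[2][1] && tc[1][0]);   // prints true
    }
}
```

After k squarings the matrix has an edge for every path of length at most 2^k, so the loop stops as soon as 2^k reaches V.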
The bottom graph is the transitive closure for this example, since there are no paths of length greater than 4 that connect vertices not connected by shorter paths.

Although the approach just described is appealing in its simplicity, an even simpler method is available. We can compute the transitive closure with just one operation of this kind, building up the transitive closure from the adjacency matrix in place, as follows:

  for (i = 0; i < V; i++)
    for (s = 0; s < V; s++)
      for (t = 0; t < V; t++)
        if (A[s][i] && A[i][t]) A[s][t] = true;

This classical method, invented by S. Warshall in 1962, is the method of choice for computing the transitive closure of dense digraphs. The code is similar to the code that we might try to use to square a Boolean matrix in place: The difference (which is significant!) lies in the order of the for loops.

Property 19.7 With Warshall's algorithm, we can compute the transitive closure of a digraph in time proportional to V^3.

Proof: The running time is immediately evident from the structure of the code. We prove that it computes the transitive closure by induction on i. After the first iteration of the loop, the matrix has true in row s and column t if and only if the digraph has either the edge s-t or the path s-0-t. The second iteration checks all the paths between s and t that include 1 and perhaps 0, such as s-1-t, s-1-0-t, and s-0-1-t. We are led to the following inductive hypothesis: The ith iteration of the loop sets the bit in row s and column t in the matrix to true if and only if there is a directed path from s to t in the digraph that does not include any vertices with indices greater than i (except possibly the endpoints s and t). As just argued, the condition is true when i is 0, after the first iteration of the loop.
Assuming that it is true for the ith iteration of the loop, there is a path from s to t that does not include any vertices with indices greater than i+1 if and only if (i) there is a path from s to t that does not include any vertices with indices greater than i, in which case A[s][t] was set on a previous iteration of the loop (by the inductive hypothesis); or (ii) there is a path from s to i+1 and a path from i+1 to t, neither of which includes any vertices with indices greater than i (except the endpoints), in which case A[s][i+1] and A[i+1][t] were previously set to true (by the inductive hypothesis), so the inner loop sets A[s][t].

We can improve the performance of Warshall's algorithm with a simple transformation of the code: We move the test of A[s][i] out of the inner loop because its value does not change as t varies. This move allows us to avoid executing the t loop entirely when A[s][i] is false. The savings that we achieve from this improvement depend on the digraph and are substantial for many digraphs (see Exercises 19.53 and 19.54). Program 19.3 implements this improvement and packages Warshall's method such that clients can preprocess a digraph (compute the transitive closure), then compute the answer to any reachability query in constant time.

We are interested in pursuing more efficient solutions, particularly for sparse digraphs. We would like to reduce both the preprocessing time and the space, because both make the use of Warshall's method prohibitively costly for huge sparse digraphs. In modern applications, abstract data types provide us with the ability to separate the idea of an operation from any particular implementation, so that we can focus on efficient implementations. For the transitive closure, this point of view leads to the recognition that we do not necessarily need to compute the entire matrix to provide clients with the transitive-closure abstraction.
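Returning to the code: a minimal runnable sketch of Warshall's algorithm with the hoisted test of A[s][i] might look like this (class name and four-vertex example are ours; this is not Program 19.3 itself):

```java
public class WarshallTC {
    // In-place Warshall's algorithm, with the test of a[s][i] moved out of
    // the innermost loop so the t loop is skipped entirely when it is false.
    static void closure(boolean[][] a) {
        int V = a.length;
        for (int i = 0; i < V; i++)
            for (int s = 0; s < V; s++)
                if (a[s][i])
                    for (int t = 0; t < V; t++)
                        if (a[i][t]) a[s][t] = true;
    }

    public static void main(String[] args) {
        boolean[][] a = new boolean[4][4];
        a[0][1] = true; a[1][2] = true; a[3][0] = true;  // 0->1->2 and 3->0
        closure(a);
        System.out.println(a[3][2]);   // 3 reaches 2 via 0 and 1: prints true
        System.out.println(a[2][0]);   // 2 has no outgoing edges: prints false
    }
}
```

After preprocessing, a reachability query is a single array access, which is the constant-time guarantee claimed above.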
One possibility might be that the transitive closure is a huge sparse matrix, so an adjacency-lists representation is called for because we cannot store the matrix representation. Even when the transitive closure is dense, client programs might test only a tiny fraction of the possible pairs of vertices, so computing the whole matrix is wasteful.

We use the term abstract transitive closure to refer to an ADT that provides clients with the ability to test reachability after preprocessing a digraph, like Program 19.3. In this context, we need to measure an algorithm not just by its cost to compute the transitive closure (preprocessing cost) but also by the space required and the query time achieved. That is, we rephrase Property 19.7 as follows:

Property 19.8 We can support constant-time reachability testing (abstract transitive closure) for a digraph, using space proportional to V^2 and time proportional to V^3 for preprocessing.

This property follows immediately from the basic performance characteristics of Warshall's algorithm.

For most applications, our goal is not just to compute the transitive closure of a digraph quickly but also to support constant query time for the abstract transitive closure using far less space and far less preprocessing time than specified in Property 19.8. Can we find an implementation that will allow us to build clients that can afford to handle such digraphs? We return to this question in .

There is an intimate relationship between the problem of computing the transitive closure of a digraph and a number of other fundamental computational problems, and that relationship can help us to understand this problem's difficulty. We conclude this section by considering two examples of such problems.

First, we consider the relationship between the transitive closure and the all-pairs shortest-paths problem (see ). For digraphs, the problem is to find, for each pair of vertices, a directed path with a minimal number of edges.
Given a digraph, we initialize a V-by-V integer matrix A by setting A[s][t] to 1 if there is an edge from s to t and to the sentinel value V if there is no such edge. Our goal is to set A[s][t] equal to the length of (the number of edges on) a shortest directed path from s to t, using the sentinel value V to indicate that there is no such path. The following code accomplishes this objective:

  for (i = 0; i < V; i++)
    for (s = 0; s < V; s++)
      for (t = 0; t < V; t++)
        if (A[s][i] + A[i][t] < A[s][t]) A[s][t] = A[s][i] + A[i][t];

This code differs from the version of Warshall's algorithm that we saw just before Property 19.7 only in the if statement in the inner loop. Indeed, in the proper abstract setting, the computations are precisely the same (see Exercises 19.55 and 19.56). Converting the proof of Property 19.7 into a direct proof that this method accomplishes the desired objective is straightforward. This method is a special case of Floyd's algorithm for finding shortest paths in weighted graphs (see ). The BFS-based solution for undirected graphs that we considered in also finds shortest paths in digraphs (appropriately modified). Shortest paths are the subject of , so we defer detailed performance comparisons until then.

Second, as we have seen, the transitive-closure problem is closely related to the Boolean matrix-multiplication problem. The basic algorithms that we have seen for both problems require time proportional to V^3, using similar computational schemas. Boolean matrix multiplication is known to be a difficult computational problem: Algorithms that are asymptotically faster than the straightforward method are known, but it is debatable whether the savings are sufficiently large to justify the effort of implementing any of them.
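A runnable Java sketch of this path-length computation (class name and four-vertex example are ours) uses the sentinel V exactly as described; note that sums involving the sentinel are at least V+1, so they never improve an entry incorrectly:

```java
public class AllPairsLengths {
    // Shortest directed-path lengths; the sentinel V means "no path".
    static int[][] lengths(boolean[][] adj) {
        int V = adj.length;
        int[][] a = new int[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                a[s][t] = adj[s][t] ? 1 : V;        // edge length 1, or sentinel
        for (int i = 0; i < V; i++)
            for (int s = 0; s < V; s++)
                for (int t = 0; t < V; t++)
                    if (a[s][i] + a[i][t] < a[s][t])
                        a[s][t] = a[s][i] + a[i][t];
        return a;
    }

    public static void main(String[] args) {
        boolean[][] adj = new boolean[4][4];
        adj[0][1] = true; adj[1][2] = true; adj[2][3] = true; adj[0][2] = true;
        int[][] a = lengths(adj);
        System.out.println(a[0][3]);   // shortest path 0->2->3: prints 2
        System.out.println(a[3][0]);   // no path: prints the sentinel 4
    }
}
```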
This fact is significant in the present context because we could use a fast algorithm for Boolean matrix multiplication to develop a fast transitive-closure algorithm (slower by just a factor of lg V) using the repeated-squaring method illustrated earlier. Conversely, we have a lower bound on the difficulty of computing the transitive closure:

Property 19.9 We can use any transitive-closure algorithm to compute the product of two Boolean matrices with at most a constant-factor difference in running time.

Proof: Given two V-by-V Boolean matrices A and B, we construct the following 3V-by-3V matrix:

  | I  A  0 |
  | 0  I  B |
  | 0  0  I |

Here, 0 denotes the V-by-V matrix with all entries equal to 0, and I denotes the V-by-V identity matrix with all entries equal to 0 except those on the diagonal, which are equal to 1. Now, we consider this matrix to be the adjacency matrix for a digraph and compute its transitive closure by repeated squaring. But we only need one step:

  | I  A  0 |2     | I  A  A*B |
  | 0  I  B |   =  | 0  I   B  |
  | 0  0  I |      | 0  0   I  |

The matrix on the right-hand side of this equation is the transitive closure, because further multiplications give back the same matrix. But this matrix has the V-by-V product A * B in its upper-right corner. Whatever algorithm we use to solve the transitive-closure problem, we can use it to solve the Boolean matrix-multiplication problem at the same cost (to within a constant factor).

The significance of this property depends on the conviction of experts that Boolean matrix multiplication is difficult: Mathematicians have been working for decades to try to learn precisely how difficult it is, and the question is unresolved; the best known results say that the running time should be proportional to about V^2.5 (see reference section). Now, if we could find a linear-time (proportional to V^2) solution to the transitive-closure problem, then we would have a linear-time solution to the Boolean matrix-multiplication problem as well.
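To make the construction concrete, here is a small Java sketch (ours) that builds the 3V-by-3V block matrix, squares it once with Boolean multiplication, and reads the product A * B out of the upper-right V-by-V block, checking the result against a direct multiplication:

```java
public class ReductionDemo {
    // Straightforward Boolean matrix product.
    static boolean[][] multiply(boolean[][] a, boolean[][] b) {
        int n = a.length;
        boolean[][] c = new boolean[n][n];
        for (int s = 0; s < n; s++)
            for (int t = 0; t < n; t++)
                for (int i = 0; i < n; i++)
                    if (a[s][i] && b[i][t]) c[s][t] = true;
        return c;
    }

    // Compute A * B via the transitive closure (here, one squaring) of the
    // 3V-by-3V block matrix [[I, A, 0], [0, I, B], [0, 0, I]].
    static boolean[][] productViaClosure(boolean[][] a, boolean[][] b) {
        int V = a.length;
        boolean[][] m = new boolean[3 * V][3 * V];
        for (int i = 0; i < 3 * V; i++) m[i][i] = true;     // the three I blocks
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++) {
                m[s][V + t] = a[s][t];                      // A: upper middle block
                m[V + s][2 * V + t] = b[s][t];              // B: middle right block
            }
        boolean[][] closure = multiply(m, m);               // one squaring suffices
        boolean[][] c = new boolean[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                c[s][t] = closure[s][2 * V + t];            // upper-right corner
        return c;
    }

    public static void main(String[] args) {
        boolean[][] a = {{true, false}, {false, true}};
        boolean[][] b = {{false, true}, {true, false}};
        boolean[][] direct = multiply(a, b);
        boolean[][] reduced = productViaClosure(a, b);
        boolean same = true;
        for (int s = 0; s < 2; s++)
            for (int t = 0; t < 2; t++)
                if (direct[s][t] != reduced[s][t]) same = false;
        System.out.println(same);   // prints true
    }
}
```

The only paths of length 2 from the first block of vertices to the third go through the middle block, which is exactly why the upper-right corner of the square is A * B.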
This relationship between problems is known as reduction: We say that the Boolean matrix-multiplication problem reduces to the transitive-closure problem (see and Part 8). Indeed, the proof actually shows that Boolean matrix multiplication reduces to finding the paths of length 2 in a digraph. Despite a great deal of research by many people, no one has been able to find a linear-time Boolean matrix-multiplication algorithm, so we cannot present a simple linear-time transitive-closure algorithm. On the other hand, no one has proved that no such algorithm exists, so we hold open that possibility for the future. In short, we take Property 19.9 to mean that, barring a research breakthrough, we cannot expect the worst-case running time of any transitive-closure algorithm that we can concoct to be proportional to V^2.

Despite this conclusion, we can develop fast algorithms for certain classes of digraphs. For example, we have already touched on a simple method for computing the transitive closure that is much faster than Warshall's algorithm for sparse digraphs.

Property 19.10 With DFS, we can support constant query time for the abstract transitive closure of a digraph, with space proportional to V^2 and time proportional to V(E + V) for preprocessing (computing the transitive closure).

Proof: As we observed in the previous section, DFS gives us all the vertices reachable from the start vertex in time proportional to E, if we use the adjacency-lists representation (see Property 19.5). Therefore, if we run DFS V times, once with each vertex as the start vertex, then we can compute the set of vertices reachable from each vertex (the transitive closure) in time proportional to V(E + V). The same argument holds for any linear-time generalized search (see and Exercise 19.66).
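A compact sketch of this search-based method (ours, not Program 19.4): one DFS per vertex over adjacency lists fills in one row of the closure, and each vertex is considered reachable from itself:

```java
import java.util.ArrayList;
import java.util.List;

public class DFSClosure {
    // tc[s][t] is true iff t is reachable from s; one DFS fills row s.
    static boolean[][] closure(List<List<Integer>> adj) {
        int V = adj.size();
        boolean[][] tc = new boolean[V][V];
        for (int s = 0; s < V; s++) dfs(adj, tc[s], s);
        return tc;
    }

    static void dfs(List<List<Integer>> adj, boolean[] row, int v) {
        row[v] = true;                      // mark v reached, then recur
        for (int w : adj.get(v))
            if (!row[w]) dfs(adj, row, w);
    }

    public static void main(String[] args) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int v = 0; v < 4; v++) adj.add(new ArrayList<>());
        adj.get(0).add(1);                  // 0->1
        adj.get(1).add(2);                  // 1->2
        adj.get(3).add(1);                  // 3->1
        boolean[][] tc = closure(adj);
        System.out.println(tc[0][2]);       // 0 reaches 2: prints true
        System.out.println(tc[2][1]);       // 2 has no outgoing edges: prints false
    }
}
```

Each DFS touches each edge at most once, so filling one row costs time proportional to E + V and the whole closure costs V(E + V), as Property 19.10 states.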
Program 19.4 is an implementation of this search-based transitive-closure algorithm. This class implements the same interface as does Program 19.3. The result of running this program on the sample digraph is illustrated by the first tree in each forest in the accompanying figure.

For sparse digraphs, this search-based approach is the method of choice. For example, if E is proportional to V, then Program 19.4 computes the transitive closure in time proportional to V^2. How can it do so, given the reduction to Boolean matrix multiplication that we just considered? The answer is that this transitive-closure algorithm does indeed give an optimal way to multiply certain types of Boolean matrices (those with O(V) nonzero entries). The lower bound tells us that we should not expect to find a transitive-closure algorithm that runs in time proportional to V^2 for all digraphs, but it does not preclude the possibility that we might find algorithms, like this one, that are faster for certain classes of digraphs. If such graphs are the ones that we need to process, the relationship between transitive closure and Boolean matrix multiplication may not be relevant to us.

It is easy to extend the methods that we have described in this section to provide clients with the ability to find a specific path connecting two vertices, by keeping track of the search tree as described in . We consider specific ADT implementations of this sort in the context of the more general shortest-paths problems in .

Table 19.1 shows empirical results comparing the elementary transitive-closure algorithms described in this section. The adjacency-lists implementation of the search-based solution is by far the fastest method for sparse digraphs. The implementations all compute an adjacency matrix (of size V^2), so none of them is suitable for huge sparse digraphs.
For sparse digraphs whose transitive closure is also sparse, we might use an adjacency-lists implementation for the closure so that the size of the output is proportional to the number of edges in the transitive closure. This number certainly is a lower bound on the cost of computing the transitive closure, which we can achieve for certain types of digraphs using various algorithmic techniques (see Exercises 19.64 and 19.65). Despite this possibility, we generally expect the transitive closure to be dense, so that we can use an adjacency-matrix representation and can easily answer reachability queries; from this point of view, we regard transitive-closure algorithms that compute the matrix in time proportional to V^2 as optimal, since they take time proportional to the size of their output.

Table 19.1. Empirical study of transitive-closure algorithms
Exercises
