Reachability and Transitive Closure

To develop efficient solutions to reachability problems in digraphs, we begin with the following fundamental definition.

Definition 19.5 The transitive closure of a digraph is a digraph with the same vertices but with an edge from s to t in the transitive closure if and only if there is a directed path from s to t in the given digraph.

In other words, a digraph's transitive closure has an edge from each vertex to every vertex reachable from that vertex in the digraph. Clearly, the transitive closure embodies all the information required to solve reachability problems. The figure below illustrates a small example.

Figure: Transitive closure

This digraph (top) has just eight directed edges, but its transitive closure (bottom) shows that there are directed paths connecting 19 of the 30 pairs of vertices. Structural properties of the digraph are reflected in the transitive closure. For example, rows 0, 1, and 2 in the adjacency matrix for the transitive closure are identical (as are columns 0, 1, and 2) because those vertices are on a directed cycle in the digraph.



One appealing way to understand the transitive closure is based on adjacency-matrix digraph representations and on the following basic computational problem.

Boolean matrix multiplication A Boolean matrix is a matrix whose entries are all binary values, either false or true. Given two Boolean matrices A and B, compute a Boolean product matrix C, using the logical and and or operations in place of the arithmetic operations * and +, respectively.

The textbook algorithm for computing the product of two V-by-V matrices computes, for each s and t, the dot product of row s in the first matrix and column t in the second matrix, as follows:

for (s = 0; s < V; s++)
    for (t = 0; t < V; t++)
        for (i = 0, C[s][t] = 0; i < V; i++)
            C[s][t] += A[s][i]*B[i][t];


In matrix notation, we write this operation simply as C = A * B. This operation is defined for matrices comprising any type of entry for which 0, +, and * are defined. In particular, if the matrix entries are either true or false and we interpret a+b to be the logical or operation and a*b to be the logical and operation, then we have Boolean matrix multiplication. In Java, we can use the following version:

for (s = 0; s < V; s++)
    for (t = 0; t < V; t++)
        for (i = 0, C[s][t] = false; i < V; i++)
            if (A[s][i] && B[i][t]) C[s][t] = true;


To compute C[s][t] in the product, we initialize it to false, then set it to true if we find some value i for which A[s][i] and B[i][t] are both true. Running this computation is equivalent to setting C[s][t] to true if and only if the logical and of row s in A with column t in B has a nonzero entry.

Now suppose that A is the adjacency matrix of a digraph A and that we use the preceding code to compute C = A * A = A^2 (simply by changing the reference to B in the code into a reference to A). Reading the code in terms of the interpretation of the adjacency-matrix entries immediately tells us what it computes: For each pair of vertices s and t, we put an edge from s to t in C if and only if there is some vertex i for which there is both an edge from s to i and an edge from i to t in A. In other words, directed edges in A^2 correspond precisely to directed paths of length 2 in A. If we include self-loops at every vertex in A, then A^2 also has the edges of A; otherwise, it does not. This relationship between Boolean matrix multiplication and paths in digraphs is illustrated in the figure below. It leads immediately to an elegant method for computing the transitive closure of any digraph.

Figure: Squaring an adjacency matrix

If we put 0s on the diagonal of a digraph's adjacency matrix, the square of the matrix represents a graph with an edge corresponding to each path of length 2 (top). If we put 1s on the diagonal, the square of the matrix represents a graph with an edge corresponding to each path of length 1 or 2 (bottom).

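The correspondence between Boolean matrix squaring and length-2 paths can be checked directly. The following sketch (class and method names are ours, not the book's) squares a small adjacency matrix without self-loops:

```java
// Sketch: squaring a Boolean adjacency matrix. An edge s->t exists in
// C = A * A if and only if some i gives edges s->i and i->t, that is,
// a directed path of length exactly 2 in the digraph that A represents.
public class BooleanSquare {
    static boolean[][] square(boolean[][] A) {
        int V = A.length;
        boolean[][] C = new boolean[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                for (int i = 0; i < V; i++)
                    if (A[s][i] && A[i][t]) C[s][t] = true;
        return C;
    }
    public static void main(String[] args) {
        // 3-vertex digraph with edges 0->1 and 1->2 (no self-loops)
        boolean[][] A = new boolean[3][3];
        A[0][1] = true; A[1][2] = true;
        boolean[][] C = square(A);
        assert C[0][2];                  // path 0->1->2 of length 2
        assert !C[0][1] && !C[1][2];     // length-1 edges are lost without self-loops
        System.out.println("ok");
    }
}
```

Note that, as the text observes, without self-loops the square keeps only the length-2 paths and drops the original edges.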


Property 19.6 We can compute the transitive closure of a digraph by constructing its adjacency matrix A, adding self-loops for every vertex, and computing A^V.

Proof: Continuing the argument in the previous paragraph, A^3 has an edge for every path of length less than or equal to 3 in the digraph, A^4 has an edge for every path of length less than or equal to 4, and so forth. We do not need to consider paths of length greater than V, by the pigeonhole principle: Any such path must revisit some vertex (since there are only V of them) and therefore adds no information to the transitive closure, because the same two vertices are connected by a directed path of length less than V (which we could obtain by removing the cycle back to the revisited vertex).


The figure below shows the adjacency-matrix powers for a sample digraph converging to the transitive closure. This method takes V matrix multiplications, each of which takes time proportional to V^3, for a grand total of V^4. We can actually compute the transitive closure for any digraph with just ⌈lg V⌉ Boolean matrix-multiplication operations: We compute A^2, A^4, A^8, ... until we reach an exponent greater than or equal to V. As shown in the proof of Property 19.6, A^t = A^V for any t > V; so the result of this computation, which requires time proportional to V^3 lg V, is A^V, the transitive closure.

Figure: Adjacency matrix powers and directed paths

This sequence shows the first, second, third, and fourth powers (right, top to bottom) of the adjacency matrix at the top right, which give graphs with edges for each of the paths of length at most 1, 2, 3, and 4, respectively (left, top to bottom), in the graph that the matrix represents. The bottom graph is the transitive closure for this example, since there are no paths of length greater than 4 that connect vertices not connected by shorter paths.

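The repeated-squaring method can be sketched as follows (class and helper names are ours, not the book's): after self-loops are added, about ⌈lg V⌉ Boolean squarings suffice.

```java
// Sketch: transitive closure by repeated squaring. After k squarings,
// B = A^(2^k); once 2^k >= V (and self-loops are present so that shorter
// paths survive each squaring), B is the transitive closure A^V.
public class RepeatedSquaring {
    static boolean[][] closure(boolean[][] A) {
        int V = A.length;
        boolean[][] B = new boolean[V][V];
        for (int s = 0; s < V; s++) {
            B[s] = A[s].clone();
            B[s][s] = true;                 // add self-loops
        }
        for (int p = 1; p < V; p *= 2)      // about lg V Boolean multiplications
            B = multiply(B, B);
        return B;
    }
    static boolean[][] multiply(boolean[][] A, boolean[][] B) {
        int V = A.length;
        boolean[][] C = new boolean[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                for (int i = 0; i < V; i++)
                    if (A[s][i] && B[i][t]) { C[s][t] = true; break; }
        return C;
    }
    public static void main(String[] args) {
        boolean[][] A = new boolean[4][4];  // directed path 0->1->2->3
        A[0][1] = true; A[1][2] = true; A[2][3] = true;
        boolean[][] T = closure(A);
        assert T[0][3] && T[1][3] && !T[3][0];
        System.out.println("ok");
    }
}
```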


Although the approach just described is appealing in its simplicity, an even simpler method is available. We can compute the transitive closure with just one operation of this kind, building up the transitive closure from the adjacency matrix in place, as follows:

for (i = 0; i < V; i++)
    for (s = 0; s < V; s++)
        for (t = 0; t < V; t++)
            if (A[s][i] && A[i][t]) A[s][t] = true;


This classical method, invented by S. Warshall in 1962, is the method of choice for computing the transitive closure of dense digraphs. The code is similar to the code that we might try to use to square a Boolean matrix in place: The difference (which is significant!) lies in the order of the for loops.

Property 19.7 With Warshall's algorithm, we can compute the transitive closure of a digraph in time proportional to V^3.

Proof: The running time is immediately evident from the structure of the code. We prove that it computes the transitive closure by induction on i. After the first iteration of the loop, the matrix has true in row s and column t if and only if the digraph has either the edge s-t or the path s-0-t. The second iteration checks all the paths between s and t that include 1 and perhaps 0, such as s-1-t, s-1-0-t, and s-0-1-t. We are led to the following inductive hypothesis: The ith iteration of the loop sets the bit in row s and column t in the matrix to true if and only if there is a directed path from s to t in the digraph that does not include any vertices with indices greater than i (except possibly the endpoints s and t). As just argued, this condition holds after the first iteration of the loop, when i is 0. Assuming that it holds after the ith iteration of the loop, there is a path from s to t that does not include any vertices with indices greater than i+1 if and only if (i) there is a path from s to t that does not include any vertices with indices greater than i, in which case A[s][t] was set on a previous iteration of the loop (by the inductive hypothesis); or (ii) there is a path from s to i+1 and a path from i+1 to t, neither of which includes any vertices with indices greater than i (except the endpoints), in which case A[s][i+1] and A[i+1][t] were previously set to true (by hypothesis), so the inner loop sets A[s][t].


We can improve the performance of Warshall's algorithm with a simple transformation of the code: We move the test of A[s][i] out of the inner loop, because its value does not change as t varies. This move allows us to skip the t loop entirely when A[s][i] is false. The savings that we achieve from this improvement depend on the digraph and are substantial for many digraphs (see Exercises 19.53 and 19.54). Program 19.3 implements this improvement and packages Warshall's method such that clients can preprocess a digraph (compute the transitive closure), then compute the answer to any reachability query in constant time. We are interested in pursuing more efficient solutions, particularly for sparse digraphs. We would like to reduce both the preprocessing time and the space, because both costs make Warshall's method prohibitively expensive for huge sparse digraphs.
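Before the full ADT implementation, here is a minimal matrix-level sketch of Warshall's algorithm (class and method names are ours, not the book's), including the hoisted A[s][i] test just described:

```java
// Sketch: Warshall's algorithm on a Boolean matrix, computed in place.
// Self-loops are added first so that every vertex is reachable from itself.
public class Warshall {
    static void closure(boolean[][] A) {
        int V = A.length;
        for (int s = 0; s < V; s++) A[s][s] = true;  // self-loops
        for (int i = 0; i < V; i++)
            for (int s = 0; s < V; s++)
                if (A[s][i])                          // hoisted test: skip the t loop
                    for (int t = 0; t < V; t++)
                        if (A[i][t]) A[s][t] = true;
    }
    public static void main(String[] args) {
        boolean[][] A = new boolean[4][4];            // cycle 0->1->2->0, edge 2->3
        A[0][1] = true; A[1][2] = true; A[2][0] = true; A[2][3] = true;
        closure(A);
        assert A[0][3] && A[1][0] && !A[3][0];
        System.out.println("ok");
    }
}
```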

Program 19.3 Warshall's algorithm

The constructor for class GraphTC computes the transitive closure of G in the private data field T so that clients can use GraphTC objects to test whether any given vertex in a digraph is reachable from any other given vertex. The constructor initializes T with a copy of G, adds self-loops, then uses Warshall's algorithm to complete the computation. We use a DenseGraph object for the transitive closure T because the algorithm needs an efficient implementation of the edge-existence test.

class GraphTC
{
    private DenseGraph T;
    GraphTC(Graph G)
    {
        T = GraphUtilities.densecopy(G);
        for (int s = 0; s < T.V(); s++)
            T.insert(new Edge(s, s));
        for (int i = 0; i < T.V(); i++)
            for (int s = 0; s < T.V(); s++)
                if (T.edge(s, i))
                    for (int t = 0; t < T.V(); t++)
                        if (T.edge(i, t))
                            T.insert(new Edge(s, t));
    }
    boolean reachable(int s, int t)
    { return T.edge(s, t); }
}

In modern applications, abstract data types provide us with the ability to separate the idea of an operation from any particular implementation, so that we can focus on efficient implementations. For the transitive closure, this point of view leads to the recognition that we do not necessarily need to compute the entire matrix to provide clients with the transitive-closure abstraction. One possibility is that the transitive closure is a huge sparse matrix, so an adjacency-lists representation is called for because we cannot store the matrix representation. Even when the transitive closure is dense, client programs might test only a tiny fraction of the possible pairs of vertices, so computing the whole matrix is wasteful.

We use the term abstract transitive closure to refer to an ADT that provides clients with the ability to test reachability after preprocessing a digraph, like Program 19.3. In this context, we need to measure an algorithm not just by its cost to compute the transitive closure (preprocessing cost) but also by the space required and the query time achieved. That is, we rephrase Property 19.7 as follows:

Property 19.8 We can support constant-time reachability testing (abstract transitive closure) for a digraph, using space proportional to V^2 and time proportional to V^3 for preprocessing.

This property follows immediately from the basic performance characteristics of Warshall's algorithm.


For most applications, our goal is not just to compute the transitive closure of a digraph quickly but also to support constant query time for the abstract transitive closure using far less space and far less preprocessing time than specified in Property 19.8. Can we find an implementation that will allow us to build clients that can afford to handle such digraphs? We return to this question later.

There is an intimate relationship between the problem of computing the transitive closure of a digraph and a number of other fundamental computational problems, and that relationship can help us to understand this problem's difficulty. We conclude this section by considering two examples of such problems.

First, we consider the relationship between the transitive closure and the all-pairs shortest-paths problem. For digraphs, the problem is to find, for each pair of vertices, a directed path with a minimal number of edges. Given a digraph, we initialize a V-by-V integer matrix A by setting A[s][t] to 1 if there is an edge from s to t and to the sentinel value V if there is no such edge. Our goal is to set A[s][t] equal to the length of (the number of edges on) a shortest directed path from s to t, using the sentinel value V to indicate that there is no such path. The following code accomplishes this objective:

for (i = 0; i < V; i++)
    for (s = 0; s < V; s++)
        for (t = 0; t < V; t++)
            if (A[s][i] + A[i][t] < A[s][t])
                A[s][t] = A[s][i] + A[i][t];
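The fragment above can be packaged as a runnable sketch (class and method names are assumptions of ours, not the book's):

```java
// Sketch: all-pairs shortest path lengths with unit edge weights, using
// the Warshall-style triple loop above. The sentinel value V means
// "no path": since any real path has at most V-1 edges, sums involving
// the sentinel (at least V+1) never beat it.
public class AllPairsPaths {
    static int[][] shortestLengths(boolean[][] adj) {
        int V = adj.length;
        int[][] A = new int[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                A[s][t] = adj[s][t] ? 1 : V;   // 1 for an edge, sentinel V otherwise
        for (int i = 0; i < V; i++)
            for (int s = 0; s < V; s++)
                for (int t = 0; t < V; t++)
                    if (A[s][i] + A[i][t] < A[s][t])
                        A[s][t] = A[s][i] + A[i][t];
        return A;
    }
    public static void main(String[] args) {
        boolean[][] adj = new boolean[4][4];   // edges 0->1, 0->2, 1->2, 2->3
        adj[0][1] = true; adj[0][2] = true; adj[1][2] = true; adj[2][3] = true;
        int[][] A = shortestLengths(adj);
        assert A[0][3] == 2;   // 0->2->3 beats 0->1->2->3
        assert A[3][0] == 4;   // sentinel V = 4: no path from 3 back to 0
        System.out.println("ok");
    }
}
```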


This code differs from the version of Warshall's algorithm that we saw just before Property 19.7 in only the if statement in the inner loop. Indeed, in the proper abstract setting, the computations are precisely the same (see Exercises 19.55 and 19.56). Converting the proof of Property 19.7 into a direct proof that this method accomplishes the desired objective is straightforward. This method is a special case of Floyd's algorithm for finding shortest paths in weighted graphs. The BFS-based solution for undirected graphs that we considered earlier also finds shortest paths in digraphs (appropriately modified). Shortest paths are the subject of a later chapter, so we defer detailed performance comparisons until then.

Second, as we have seen, the transitive-closure problem is also closely related to the Boolean matrix-multiplication problem. The basic algorithms that we have seen for both problems require time proportional to V^3, using similar computational schema. Boolean matrix multiplication is known to be a difficult computational problem: Algorithms that are asymptotically faster than the straightforward method are known, but it is debatable whether the savings are sufficiently large to justify the effort of implementing any of them. This fact is significant in the present context because we could use a fast algorithm for Boolean matrix multiplication to develop a fast transitive-closure algorithm (slower by just a factor of lg V) using the repeated-squaring method illustrated earlier. Conversely, we have a lower bound on the difficulty of computing the transitive closure:

Property 19.9 We can use any transitive-closure algorithm to compute the product of two Boolean matrices with at most a constant-factor difference in running time.

Proof: Given two V-by-V Boolean matrices A and B, we construct the following 3V-by-3V matrix:

    | I  A  0 |
    | 0  I  B |
    | 0  0  I |


Here, 0 denotes the V-by-V matrix with all entries equal to 0, and I denotes the V-by-V identity matrix, with all entries equal to 0 except those on the diagonal, which are equal to 1. Now, we consider this matrix to be the adjacency matrix for a digraph and compute its transitive closure by repeated squaring. But we only need one step:

    | I  A  0 |^2    | I  A  A*B |
    | 0  I  B |   =  | 0  I   B  |
    | 0  0  I |      | 0  0   I  |


The matrix on the right-hand side of this equation is the transitive closure, because further multiplications give back the same matrix. But this matrix has the V-by-V product A * B in its upper-right corner. Whatever algorithm we use to solve the transitive-closure problem, we can use it to solve the Boolean matrix-multiplication problem at the same cost (to within a constant factor).
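The construction in this proof is easy to check in code. The following sketch (names are ours) embeds A and B in the 3V-by-3V matrix, performs the single squaring, and reads the product out of the upper-right block:

```java
// Sketch of the reduction in Property 19.9: build M = [[I,A,0],[0,I,B],[0,0,I]],
// square it once (one step of a transitive-closure computation), and the
// upper-right V-by-V block of the result is the Boolean product A * B.
public class Reduction {
    static boolean[][] multiply(boolean[][] A, boolean[][] B) {
        int n = A.length;
        boolean[][] C = new boolean[n][n];
        for (int s = 0; s < n; s++)
            for (int t = 0; t < n; t++)
                for (int i = 0; i < n; i++)
                    if (A[s][i] && B[i][t]) { C[s][t] = true; break; }
        return C;
    }
    static boolean[][] productViaClosure(boolean[][] A, boolean[][] B) {
        int V = A.length, n = 3 * V;
        boolean[][] M = new boolean[n][n];
        for (int i = 0; i < n; i++) M[i][i] = true;     // the three I blocks
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++) {
                M[s][V + t] = A[s][t];                  // A block (upper middle)
                M[V + s][2 * V + t] = B[s][t];          // B block (middle right)
            }
        boolean[][] M2 = multiply(M, M);                // one squaring suffices
        boolean[][] C = new boolean[V][V];
        for (int s = 0; s < V; s++)
            for (int t = 0; t < V; t++)
                C[s][t] = M2[s][2 * V + t];             // read the upper-right block
        return C;
    }
    public static void main(String[] args) {
        boolean[][] A = {{true, false}, {false, true}};  // A = I
        boolean[][] B = {{false, true}, {true, false}};  // B swaps the two rows
        boolean[][] C = productViaClosure(A, B);
        assert C[0][1] && C[1][0] && !C[0][0] && !C[1][1];  // C = A * B = B
        System.out.println("ok");
    }
}
```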


The significance of this property depends on the conviction of experts that Boolean matrix multiplication is difficult: Mathematicians have been working for decades to learn precisely how difficult it is, and the question is unresolved; the best known results say that the running time should be proportional to about V^2.5 (see reference section). Now, if we could find a linear-time (proportional to V^2) solution to the transitive-closure problem, then we would have a linear-time solution to the Boolean matrix-multiplication problem as well. This relationship between problems is known as reduction: We say that the Boolean matrix-multiplication problem reduces to the transitive-closure problem (see Part 8). Indeed, the proof actually shows that Boolean matrix multiplication reduces to finding the paths of length 2 in a digraph.

Despite a great deal of research by many people, no one has been able to find a linear-time Boolean matrix-multiplication algorithm, so we cannot present a simple linear-time transitive-closure algorithm. On the other hand, no one has proved that no such algorithm exists, so we hold open that possibility for the future. In short, we take Property 19.9 to mean that, barring a research breakthrough, we cannot expect the worst-case running time of any transitive-closure algorithm that we can concoct to be proportional to V^2. Despite this conclusion, we can develop fast algorithms for certain classes of digraphs. For example, we have already touched on a simple method for computing the transitive closure that is much faster than Warshall's algorithm for sparse digraphs.

Property 19.10 With DFS, we can support constant query time for the abstract transitive closure of a digraph, with space proportional to V^2 and time proportional to V(E + V) for preprocessing (computing the transitive closure).
Proof: As we observed in the previous section, DFS gives us all the vertices reachable from the start vertex in time proportional to E, if we use the adjacency-lists representation (see Property 19.5). Therefore, if we run DFS V times, once with each vertex as the start vertex, we can compute the set of vertices reachable from each vertex (the transitive closure) in time proportional to V(E + V). The same argument holds for any linear-time generalized search (see Exercise 19.66).


Program 19.4 DFS-based transitive closure

This DFS class implements the same interface as does Program 19.3. It computes the transitive closure T by doing a separate DFS starting at each vertex of G to compute its set of reachable vertices. Each call on the recursive method adds an edge from the start vertex and makes recursive calls to fill the corresponding row in the transitive-closure matrix. Again, we use a DenseGraph object for the transitive closure T because the algorithm uses the edge-existence test to check for visited vertices during the DFS.

class GraphTC
{
    private Graph G;
    private DenseGraph T;
    void tcR(int v, int w)
    {
        T.insert(new Edge(v, w));
        AdjList A = G.getAdjList(w);
        for (int t = A.beg(); !A.end(); t = A.nxt())
            if (!T.edge(v, t)) tcR(v, t);
    }
    GraphTC(Graph G)
    {
        this.G = G;
        T = new DenseGraph(G.V(), true);
        for (int v = 0; v < T.V(); v++) tcR(v, v);
    }
    boolean reachable(int v, int w)
    { return T.edge(v, w); }
}

Program 19.4 is an implementation of this search-based transitive-closure algorithm. This class implements the same interface as does Program 19.3. The result of running this program on the sample digraph corresponds to the first tree in each of the DFS forests shown in the previous section. For sparse digraphs, this search-based approach is the method of choice. For example, if E is proportional to V, then Program 19.4 computes the transitive closure in time proportional to V^2. How can it do so, given the reduction to Boolean matrix multiplication that we just considered? The answer is that this transitive-closure algorithm does indeed give an optimal way to multiply certain types of Boolean matrices (those with O(V) nonzero entries). The lower bound tells us that we should not expect to find a transitive-closure algorithm that runs in time proportional to V^2 for all digraphs, but it does not preclude the possibility that we might find algorithms, like this one, that are faster for certain classes of digraphs. If such graphs are the ones that we need to process, the relationship between transitive closure and Boolean matrix multiplication may not be relevant to us.

It is easy to extend the methods that we have described in this section to provide clients with the ability to find a specific path connecting two vertices, by keeping track of the search tree as described earlier. We consider specific ADT implementations of this sort in the context of the more general shortest-paths problem later.

Table 19.1 shows empirical results comparing the elementary transitive-closure algorithms described in this section. The adjacency-lists implementation of the search-based solution is by far the fastest method for sparse digraphs. The implementations all compute an adjacency matrix (of size V^2), so none of them is suitable for huge sparse digraphs.
For sparse digraphs whose transitive closure is also sparse, we might use an adjacency-lists implementation for the closure, so that the size of the output is proportional to the number of edges in the transitive closure. That number is certainly a lower bound on the cost of computing the transitive closure, and we can achieve it for certain types of digraphs using various algorithmic techniques (see Exercises 19.64 and 19.65). Despite this possibility, we generally expect the transitive closure to be dense, so we can use an adjacency-matrix representation and easily answer reachability queries; and we regard transitive-closure algorithms that compute the matrix in time proportional to V^2 as optimal, since they take time proportional to the size of their output.
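One way such an adjacency-lists closure might look is sketched below (this is our illustration, not the book's code): each vertex's reachable set is kept in a hash set, so the space used is proportional to the number T of transitive-closure edges, in the spirit of Exercise 19.65.

```java
import java.util.*;

// Sketch: a sparse abstract transitive closure. A DFS from each vertex
// fills that vertex's set of reachable vertices; reachability queries
// are then expected-constant-time hash-set lookups.
public class SparseTC {
    private final List<Set<Integer>> reach = new ArrayList<>();
    public SparseTC(List<List<Integer>> adj) {
        int V = adj.size();
        for (int v = 0; v < V; v++) reach.add(new HashSet<>());
        for (int v = 0; v < V; v++) dfs(v, v, adj);
    }
    private void dfs(int v, int w, List<List<Integer>> adj) {
        reach.get(v).add(w);            // w is reachable from v (v from itself)
        for (int t : adj.get(w))
            if (!reach.get(v).contains(t)) dfs(v, t, adj);
    }
    public boolean reachable(int s, int t) { return reach.get(s).contains(t); }
    public static void main(String[] args) {
        // digraph: 0->1, 1->2, 3->0
        List<List<Integer>> adj =
            List.of(List.of(1), List.of(2), List.of(), List.of(0));
        SparseTC tc = new SparseTC(adj);
        assert tc.reachable(0, 2) && tc.reachable(3, 2) && !tc.reachable(2, 0);
        System.out.println("ok");
    }
}
```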

Table 19.1. Empirical study of transitive-closure algorithms

This table shows running times that exhibit dramatic performance differences for various algorithms for computing the transitive closure of random digraphs, both dense and sparse. For all but the adjacency-lists DFS, the running time goes up by a factor of 8 when we double V, which supports the conclusion that it is essentially proportional to V3. The adjacency-lists DFS takes time proportional to VE, which explains the running time roughly increasing by a factor of 4 when we double both V and E (sparse graphs) and by a factor of about 2 when we double E (dense graphs), except that list-traversal overhead degrades performance for high-density graphs.

            sparse (10V edges)          |        dense (250 vertices)
     V      W     W*      A      L      |       E      W     W*      A      L
    25      0      0      1      0      |    5000    289    203    177     23
    50      3      1      2      1      |   10000    300    214    184     38
   125     35     24     23      4      |   25000    309    226    200     97
   250    275    181    178     13      |   50000    315    232    218    337
   500   2222   1438   1481     54      |  100000    326    246    235    784

Key:
  W   Warshall's algorithm
  W*  Improved Warshall's algorithm (Program 19.3)
  A   DFS, adjacency-matrix representation (Programs 19.4 and 17.7)
  L   DFS, adjacency-lists representation (Programs 19.4 and 17.9)

If the adjacency matrix is symmetric, it is equivalent to an undirected graph, and finding the transitive closure is the same as finding the connected components: The transitive closure is the union of complete graphs on the vertices in the connected components (see Exercise 19.48). The connectivity algorithms that we considered earlier amount to an abstract-transitive-closure computation for symmetric digraphs (undirected graphs) that uses space proportional to V and still supports constant-time reachability queries. Can we achieve this performance for general digraphs? Can we reduce the preprocessing time still further? For what types of graphs can we compute the transitive closure in linear time? To answer these questions, we need to study the structure of digraphs in more detail, including, specifically, that of DAGs.

Exercises

19.46 What is the transitive closure of a digraph that consists solely of a directed cycle with V vertices?

19.47 How many edges are there in the transitive closure of a digraph that consists solely of a simple directed path with V vertices?

19.48 Give the transitive closure of the undirected graph Java graphics 181equ01.gif


• 19.49 Show how to construct a digraph with V vertices and E edges with the property that the number of edges in the transitive closure is proportional to t, for any t between E and V2. As usual, assume that E > V.

19.50 Give a formula for the number of edges in the transitive closure of a digraph that is a directed forest, as a function of structural properties of the forest.

19.51 Show, in the style of the adjacency-matrix-powers figure, the process of computing the transitive closure of the digraph Java graphics 181equ02.gif


through repeated squaring.

19.52 Show, in the style of the figure below, the process of computing the transitive closure of the digraph

Figure: Warshall's algorithm

This sequence shows the development of the transitive closure (bottom) of an example digraph (top) as computed with Warshall's algorithm. The first iteration of the loop (left column, top) adds the edges 1-2 and 1-5 because of the paths 1-0-2 and 1-0-5, which include vertex 0 (but no vertex with a higher number); the second iteration of the loop (left column, second from top) adds the edges 2-0 and 2-5 because of the paths 2-1-0 and 2-1-0-5, which include vertex 1 (but no vertex with a higher number); and the third iteration of the loop (left column, bottom) adds the edges 0-1, 3-0, 3-1, and 3-5 because of the paths 0-2-1, 3-2-1-0, 3-2-1, and 3-2-1-0-5, which include vertex 2 (but no vertex with a higher number). The right column shows the edges added when paths through 3, 4, and 5 are considered. The last iteration of the loop (right column, bottom) adds the edges from 0, 1, and 2 to 4, because the only directed paths from those vertices to 4 include 5, the highest-numbered vertex.



Java graphics 181equ02.gif


with Warshall's algorithm.

19.53 Give a family of sparse digraphs for which the improved version of Warshall's algorithm for computing the transitive closure (Program 19.3) runs in time proportional to VE.

19.54 Find a sparse digraph for which the improved version of Warshall's algorithm for computing the transitive closure (Program 19.3) runs in time proportional to V3.

19.55 Develop an abstract class that you can extend to implement both Warshall's algorithm and Floyd's algorithm. (This exercise is a version of Exercise 19.56 for people who are more familiar with abstract data types than with abstract algebra.)

• 19.56 Use abstract algebra to develop a generic algorithm that encompasses both Warshall's algorithm and Floyd's algorithm. (This exercise is a version of Exercise 19.55 for people who are more familiar with abstract algebra than with abstract data types.)

19.57 Show, in the style of the Warshall's algorithm figure, the development of the all-shortest-paths matrix for the example graph in the figure as computed with Floyd's algorithm.

19.58 Is the Boolean product of two symmetric matrices symmetric? Explain your answer.

19.59 Add a method to Programs 19.3 and 19.4 to allow clients to use GraphTC objects to find the number of edges in the transitive closure.

19.60 Design a way to maintain the count of the number of edges in the transitive closure by modifying it when edges are added and removed. Give the cost of adding and removing edges with your scheme.

19.61 Add a method for use with Programs 19.3 and 19.4 that returns a vertex-indexed array that indicates which vertices are reachable from a given vertex.

19.62 Run empirical studies to determine the number of edges in the transitive closure, for various types of digraphs (see Exercises 19.11 through 19.18).

• 19.63 Consider the bit-matrix graph representation that is described in Exercise 17.23. Which method can you speed up by a factor of B (where B is the number of bits per word on your computer): Warshall's algorithm or the DFS-based algorithm? Justify your answer by developing an implementation that does so.

19.64 Give a program that computes the transitive closure of a digraph that is a directed forest in time proportional to the number of edges in the transitive closure.

19.65 Implement an abstract-transitive-closure algorithm for sparse graphs that uses space proportional to T and can answer reachability requests in constant time after preprocessing time proportional to VE + T, where T is the number of edges in the transitive closure. Hint: Use dynamic hashing.

19.66 Provide a version of Program 19.4 that is based on generalized graph search, and run empirical studies to see whether the choice of graph-search algorithm has any effect on performance.



   