Augmenting-Path Maxflow Algorithms

An effective approach to solving maxflow problems was developed by L. R. Ford and D. R. Fulkerson in 1962. It is a generic method for increasing flows incrementally along paths from source to sink that serves as the basis for a family of algorithms. It is known as the Ford–Fulkerson method in the classical literature; the more descriptive term augmenting-path method is also widely used. Consider any directed path (not necessarily a simple one) from source to sink through an st-network. Let x be the minimum of the unused capacities of the edges on the path. We can increase the network's flow value by at least x by increasing the flow in all edges on the path by that amount. Iterating this action, we get a first attempt at computing flow in a network: Find another path, increase the flow along that path, and continue until all paths from source to sink have at least one full edge (so that we can no longer increase flow in this way). This algorithm will compute the maxflow in some cases, but will fall short in other cases. The accompanying figure illustrates a case where it fails. To improve the algorithm so that it always finds a maxflow, we consider a more general way to increase the flow, along any path from source to sink through the network's underlying undirected graph. The edges on any such path are either forward edges, which go with the flow (when we traverse the path from source to sink, we traverse the edge from its source vertex to its destination vertex), or backward edges, which go against the flow (when we traverse the path from source to sink, we traverse the edge from its destination vertex to its source vertex). Now, for any path with no full forward edges and no empty backward edges, we can increase the amount of flow in the network by increasing flow in forward edges and decreasing flow in backward edges.
The amount by which the flow can be increased is limited by the minimum of the unused capacities in the forward edges and the flows in the backward edges. The accompanying figure depicts an example. In the new flow, at least one of the forward edges along the path becomes full or at least one of the backward edges along the path becomes empty.

Figure: Augmenting flow along a path. This sequence shows the process of increasing flow in a network along a path of forward and backward edges. Starting with the flow depicted at the left and reading from left to right, we increase the flow in 0-2 and then 2-3 (additional flow is shown in black). Then we decrease the flow in 1-3 (shown in white) and divert it to 1-4 and then 4-5, resulting in the flow at the right.

The process just sketched is the basis for the classical Ford–Fulkerson maxflow algorithm (augmenting-path method). We summarize it as follows: Start with zero flow everywhere, then increase the flow along any path from source to sink with no full forward edges or empty backward edges, continuing until there are no such paths in the network.
Remarkably, this method always finds a maxflow, no matter how we choose the paths. Like the generic MST method and the Bellman–Ford shortest-paths method discussed earlier, it is a generic algorithm that is useful because it establishes the correctness of a whole family of more specific algorithms. We are free to use any method whatever to choose the path. The accompanying figure illustrates several different sequences of augmenting paths that all lead to a maxflow for a sample network. Later in this section, we examine several algorithms that compute sequences of augmenting paths, all of which lead to a maxflow. The algorithms differ in the number of augmenting paths they compute, the lengths of the paths, and the costs of finding each path, but they all implement the Ford–Fulkerson algorithm and find a maxflow.

Figure: Augmenting-path sequences. In these three examples, we augment a flow along different sequences of augmenting paths until no augmenting path can be found. The flow that results in each case is a maximum flow.

The key classical theorem in the theory of network flows states that we get a maximum flow in any network, no matter what sequence of paths we use (see Property 22.5). To show that any flow computed by any implementation of the Ford–Fulkerson algorithm indeed has maximal value, we show that this fact is equivalent to a key fact known as the maxflow–mincut theorem. Understanding this theorem is a crucial step in understanding network-flow algorithms. As suggested by its name, the theorem is based on a direct relationship between flows and cuts in networks, so we begin by defining terms that relate to cuts. Recall that a cut in a graph is a partition of the vertices into two disjoint sets, and a crossing edge is an edge that connects a vertex in one set to a vertex in the other set. For flow networks, we refine these definitions as follows (see the accompanying figure).

Figure: st-cut terminology. An st-network has one source s and one sink t.
An st-cut is a partition of the vertices into a set containing s (white) and another set containing t (black). The edges that connect a vertex in one set with a vertex in the other (highlighted in gray) are known as a cut set. A forward edge goes from a vertex in the set containing s to a vertex in the set containing t; a backward edge goes the other way. There are four forward edges and two backward edges in the cut set shown here.

Definition 22.3 An st-cut is a cut that places vertex s in one of its sets and vertex t in the other. Each crossing edge corresponding to an st-cut is either an st-edge that goes from a vertex in the set containing s to a vertex in the set containing t, or a ts-edge that goes in the other direction. We sometimes refer to the set of crossing edges as a cut set. The capacity of an st-cut in a flow network is the sum of the capacities of that cut's st-edges, and the flow across an st-cut is the difference between the sum of the flows in that cut's st-edges and the sum of the flows in that cut's ts-edges.

Removing a cut set divides a connected graph into two connected components, leaving no path connecting any vertex in one to any vertex in the other. Removing all the edges in an st-cut of a network leaves no path connecting s to t in the underlying undirected graph, but adding any one of them back could create such a path. Cuts are the appropriate abstraction for the application mentioned at the beginning of the chapter, where a flow network describes the movement of supplies from a depot to the troops of an army. To cut off supplies completely and in the most economical manner, an enemy might solve the following problem:

Minimum cut Given an st-network, find an st-cut such that the capacity of no other cut is smaller.

For brevity, we refer to such a cut as a mincut and to the problem of finding one in a network as the mincut problem. The mincut problem is a generalization of the connectivity problems that we discussed briefly earlier.
We analyze specific relationships between the problems in detail later. The statement of the mincut problem includes no mention of flows, and these definitions might seem to digress from our discussion of the augmenting-path algorithm. On the surface, computing a mincut (a set of edges) seems easier than computing a maxflow (an assignment of weights to all the edges). On the contrary, the key fact of this chapter is that the maxflow and mincut problems are intimately related. The augmenting-path method itself, in conjunction with two facts about flows and cuts, provides a proof.

Property 22.3 For any st-flow, the flow across each st-cut is equal to the value of the flow.

Proof: This property is an immediate consequence of the generalization of Property 22.1 that we discussed in the associated proof (see the accompanying figure). Add an edge t-s with flow equal to the value of the flow, so that inflow is equal to outflow for any set of vertices. Then, for any st-cut where C_{s} is the vertex set containing s and C_{t} is the vertex set containing t, the inflow to C_{s} is the inflow to s (the value of the flow) plus the sum of the flows in the backward edges across the cut; and the outflow from C_{s} is the sum of the flows in the forward edges across the cut. Setting these two quantities equal establishes the desired result.

Property 22.4 No st-flow's value can exceed the capacity of any st-cut.

Proof: The flow across a cut certainly cannot exceed that cut's capacity, so this result is immediate from Property 22.3.

In other words, cuts represent bottlenecks in networks. In our military application, an enemy that is not able to cut off army troops completely from their supplies could still be sure that supply flow is restricted to at most the capacity of any given cut. We certainly might imagine that the cost of making a cut is proportional to its capacity in this application, thus motivating the invading army to find a solution to the mincut problem.
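Property 22.3 can be checked directly on a small example. The following sketch is not from the book: the parallel-array representation and the class name STCut are illustrative choices. It computes the capacity of an st-cut and the flow across it, per Definition 22.3.

```java
// Illustrative sketch: capacity of, and flow across, an st-cut.
// Edges are given as parallel arrays from[], to[], cap[], flow[];
// inS[v] is true when vertex v is on the source side of the cut.
public class STCut {
    // capacity: sum of the capacities of the cut's st-edges
    public static int capacity(int[] from, int[] to, int[] cap, boolean[] inS) {
        int c = 0;
        for (int e = 0; e < from.length; e++)
            if (inS[from[e]] && !inS[to[e]]) c += cap[e];
        return c;
    }
    // flow across the cut: flow in st-edges minus flow in ts-edges
    public static int flowAcross(int[] from, int[] to, int[] flow, boolean[] inS) {
        int f = 0;
        for (int e = 0; e < from.length; e++) {
            if (inS[from[e]] && !inS[to[e]]) f += flow[e];
            else if (!inS[from[e]] && inS[to[e]]) f -= flow[e];
        }
        return f;
    }
}
```

On any small network, flowAcross returns the same value (the flow's value) for every st-cut, as Property 22.3 asserts, while capacity generally differs from cut to cut.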
More important, these facts also imply, in particular, that no flow can have value higher than the capacity of any minimum cut.

Property 22.5 (Maxflow–mincut theorem) The maximum value among all st-flows in a network is equal to the minimum capacity among all st-cuts.

Proof: It suffices to exhibit a flow and a cut such that the value of the flow is equal to the capacity of the cut. The flow has to be a maxflow because no other flow value can exceed the capacity of the cut, and the cut has to be a minimum cut because no other cut capacity can be lower than the value of the flow (by Property 22.4). The Ford–Fulkerson algorithm gives precisely such a flow and cut: When the algorithm terminates, identify the first full forward or empty backward edge on every path from s to t in the graph. Let C_{s} be the set of all vertices that can be reached from s with an undirected path that does not contain a full forward or empty backward edge, and let C_{t} be the remaining vertices. Then, t must be in C_{t}, so (C_{s}, C_{t}) is an st-cut, whose cut set consists entirely of full forward or empty backward edges. The flow across this cut is equal to the cut's capacity (since the forward edges are full and the backward edges are empty) and also to the value of the network flow (by Property 22.3).

This proof also establishes explicitly that the Ford–Fulkerson algorithm finds a maxflow. No matter what method we choose to find an augmenting path and no matter what paths we find, we always end up with a cut whose flow is equal to its capacity and therefore is also equal to the value of the network's flow, which therefore must be a maxflow. Another implication of the correctness of the Ford–Fulkerson algorithm is that, for any flow network with integer capacities, there exists a maxflow solution where the flows are all integers.
Each augmenting path increases the flow by a positive integer (the minimum of the unused capacities in the forward edges and the flows in the backward edges, all of which are always positive integers). This fact justifies our decision to restrict our attention to integer capacities and flows. It is possible to design a maxflow with noninteger flows, even when capacities are all integers (see Exercise 22.23), but we do not need to consider such flows. This restriction is important: Generalizing to allow capacities and flows that are real numbers can lead to unpleasant anomalous situations. For example, the Ford–Fulkerson algorithm might lead to an infinite sequence of augmenting paths that does not even converge to the maxflow value (see reference section). The generic Ford–Fulkerson algorithm does not specify any particular method for finding an augmenting path. Perhaps the most natural way to proceed is to use a generalized graph-search strategy. To this end, we begin with the following definition:

Definition 22.4 Given a flow network and a flow, the residual network for the flow has the same vertices as the original and one or two edges in the residual network for each edge in the original, defined as follows: For each edge v-w in the original, let f be the flow and c the capacity. If f is positive, include an edge w-v in the residual with capacity f; and if f is less than c, include an edge v-w in the residual with capacity c − f.

If v-w is empty (f is equal to 0), there is a single corresponding edge v-w with capacity c in the residual; if v-w is full (f is equal to c), there is a single corresponding edge w-v with capacity f in the residual; and if v-w is neither empty nor full, both v-w and w-v are in the residual with their respective capacities (see the accompanying figure).
Figure: All st-cuts. This list gives, for all the st-cuts of the network at left, the vertices in the set containing s, the vertices in the set containing t, forward edges, backward edges, and capacity (sum of the capacities of the forward edges). For any flow, the flow across all the cuts (flow in forward edges minus flow in backward edges) is the same. For example, for the flow in the network at left, the flow across the cut separating 0 1 3 and 2 4 5 is 2 + 1 + 2 (the flow in 0-2, 1-4, and 3-5, respectively) minus 1 (the flow in 2-3), or 4. This calculation also results in the value 4 for every other cut in the network, and the flow is a maximum flow because its value is equal to the capacity of the minimum cut (see Property 22.5). There are two minimum cuts in this network.

Figure: Residual networks (augmenting paths). Finding augmenting paths in a flow network is equivalent to finding directed paths in the residual network that is defined by the flow. For each edge in the flow network, we create an edge in each direction in the residual network: one in the direction of the flow with weight equal to the unused capacity, and one in the opposite direction with weight equal to the flow. We do not include edges of weight 0 in either case. Initially (top), the residual network is the same as the flow network with weights equal to capacities. When we augment along the path 0-1-3-5 (second from top), we fill edges 0-1 and 3-5 to capacity so that they switch direction in the residual network, we reduce the weight of 1-3 to correspond to the remaining flow, and we add the edge 3-1 of weight 2. Similarly, when we augment along the path 0-2-4-5, we fill 2-4 to capacity so that it switches direction, and we have edges in either direction between 0 and 2 and between 4 and 5 to represent flow and unused capacity. After we augment along 0-2-3-1-4-5 (bottom), no directed paths from source to sink remain in the residual network, so there are no augmenting paths.
Program 22.2 defines the Edge class that we use to implement the residual network abstraction. With such an implementation, we continue to work exclusively with references to client edges. Our algorithms work with the residual network, but they are actually examining capacities and changing flow (through edge references) in the client's edges. The from and other methods allow us to process edges in either orientation: e.other(v) returns the endpoint of e that is not v. The methods capRto and addflowRto implement the residual network: If e is a reference to an edge v-w with capacity c and flow f, then e.capRto(w) is c − f and e.capRto(v) is f; e.addflowRto(w, d) adds d to the flow; and e.addflowRto(v, d) subtracts d from the flow. Residual networks allow us to use any generalized graph search to find an augmenting path, since any path from source to sink in the residual network corresponds directly to an augmenting path in the original network. Increasing the flow along the path implies making changes in the residual network: For example, at least one edge on the path becomes full or empty, so at least one edge in the residual network changes direction or disappears (but our use of an abstract residual network means that we just check for positive capacity and do not need to actually insert and delete edges). The accompanying figure shows a sequence of augmenting paths and the corresponding residual networks for an example.
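A minimal sketch of an edge class with these semantics follows. It is not the book's Program 22.2 (the class name FlowEdge and the accessors are illustrative), but its capRto and addflowRto behave exactly as just described.

```java
// Sketch of a flow-network edge supporting the residual-network
// abstraction: capRto and addflowRto let clients treat each edge v-w
// as the pair of residual edges of Definition 22.4.
public class FlowEdge {
    private final int v, w, cap; // edge v->w with capacity cap
    private int flow;            // current flow, 0 <= flow <= cap

    public FlowEdge(int v, int w, int cap) { this.v = v; this.w = w; this.cap = cap; }

    public int v()    { return v; }
    public int w()    { return w; }
    public int flow() { return flow; }
    public int other(int u) { return u == v ? w : v; }

    // residual capacity toward u: the unused capacity c - f in the
    // forward direction, the flow f in the backward direction
    public int capRto(int u) { return u == w ? cap - flow : flow; }

    // push d units of residual flow toward u: increase the flow in the
    // forward direction, decrease (divert) it in the backward direction
    public void addflowRto(int u, int d) { flow += (u == w) ? d : -d; }
}
```

Pushing flow "backward" with addflowRto is what lets a client divert flow off an edge, exactly the operation a backward edge on an augmenting path requires.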
Program 22.3 is a priority-queue-based implementation that encompasses all these possibilities, using the slightly modified version of our PFS graph-search implementation from Program 21.1 that is shown in Program 22.4. This implementation allows us to choose among several different classical implementations of the Ford–Fulkerson algorithm, simply by setting priorities so as to implement various data structures for the fringe. As discussed earlier, using a priority queue to implement a stack, queue, or randomized queue for the fringe data structure incurs an extra factor of lg V in the cost of fringe operations. Since we could avoid this cost by using a generalized-queue ADT in an implementation like Program 18.10 with direct implementations, we assume when analyzing the algorithms that the costs of fringe operations are constant in these cases. By using the single implementation in Program 22.3, we emphasize the direct relationships among various Ford–Fulkerson implementations. Although it is general, Program 22.3 does not encompass all implementations of the Ford–Fulkerson algorithm (see, for example, Exercises 22.36 and 22.38). Researchers continue to develop new ways to implement the algorithm. But the family of algorithms encompassed by Program 22.3 is widely used, gives us a basis for understanding computation of maxflows, and introduces us to straightforward implementations that perform well on practical networks. As we soon see, these basic algorithmic tools get us simple (and useful, for many applications) solutions to the network-flow problem. A complete analysis establishing which specific method is best is a complex task, however, because their running times depend on the number of augmenting paths needed and on the cost of finding each path.
These quantities can vary widely, depending on the network being processed and on the graph-search strategy (fringe data structure). Perhaps the simplest Ford–Fulkerson implementation uses the shortest augmenting path (as measured by the number of edges on the path, not flow or capacity). This method was suggested by Edmonds and Karp in 1972. To implement it, we use a queue for the fringe, either by using the value of an increasing counter for P or by using a queue ADT instead of a priority-queue ADT in Program 22.3. In this case, the search for an augmenting path amounts to breadth-first search (BFS) in the residual network, precisely as described in Sections 18.8 and 21.2. The accompanying figure shows this implementation of the Ford–Fulkerson method in operation on a sample network. For brevity, we refer to this method as the shortest-augmenting-path maxflow algorithm. As is evident from the figure, the lengths of the augmenting paths form a nondecreasing sequence. Our analysis of this method, in Property 22.7, proves that this property is characteristic.

Figure: Shortest augmenting paths. This sequence illustrates how the shortest-augmenting-path implementation of the Ford–Fulkerson method finds a maximum flow in a sample network. Path lengths increase as the algorithm progresses: The first four paths in the top row are of length 3; the last path in the top row and all of the paths in the second row are of length 4; the first two paths in the bottom row are of length 5; and the process finishes with two paths of length 7 that each have a backward edge.

Another Ford–Fulkerson implementation suggested by Edmonds and Karp is the following: Augment along the path that increases the flow by the largest amount. The priority value P that is used in Program 22.3 implements this method. This priority makes the algorithm choose edges from the fringe to give the maximum amount of flow that can be pushed through a forward edge or diverted from a backward edge.
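The structure of the shortest-augmenting-path method can be sketched compactly. This is not Program 22.3: instead of working through client edge references, it stores residual capacities in a matrix, but the logic is the same: BFS finds a shortest augmenting path in the residual network, and we augment until none remains.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// Sketch of the shortest-augmenting-path (Edmonds-Karp) method.
// cap[v][w] is the capacity of edge v->w (0 if there is no such edge).
public class ShortestAugPath {
    public static int maxflow(int[][] cap, int s, int t) {
        int V = cap.length, total = 0;
        int[][] res = new int[V][V];           // residual capacities
        for (int v = 0; v < V; v++) res[v] = cap[v].clone();
        while (true) {
            int[] pred = bfs(res, s, t);       // shortest augmenting path
            if (pred[t] == -1) return total;   // none remains: flow is maximal
            int x = Integer.MAX_VALUE;         // bottleneck residual capacity
            for (int v = t; v != s; v = pred[v]) x = Math.min(x, res[pred[v]][v]);
            for (int v = t; v != s; v = pred[v]) {
                res[pred[v]][v] -= x;          // use up forward residual capacity
                res[v][pred[v]] += x;          // add backward residual capacity
            }
            total += x;
        }
    }
    private static int[] bfs(int[][] res, int s, int t) {
        int[] pred = new int[res.length];
        Arrays.fill(pred, -1);
        pred[s] = s;
        Queue<Integer> q = new ArrayDeque<>();
        q.add(s);
        while (!q.isEmpty() && pred[t] == -1) {
            int v = q.remove();
            for (int w = 0; w < res.length; w++)
                if (res[v][w] > 0 && pred[w] == -1) { pred[w] = v; q.add(w); }
        }
        return pred;
    }
}
```

Because the residual matrix holds both directions of every edge, augmenting along a path that uses a backward edge needs no special case: pushing residual flow from v to pred[v] simply diverts flow off the client edge.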
For brevity, we refer to this method as the maximum-capacity-augmenting-path maxflow algorithm. The accompanying figure illustrates the algorithm on the same flow network as the previous one.

Figure: Maximum-capacity augmenting paths. This sequence illustrates how the maximum-capacity-augmenting-path implementation of the Ford–Fulkerson method finds a maxflow in a sample network. Path capacities decrease as the algorithm progresses, but their lengths may increase or decrease. The method needs only nine augmenting paths to compute the same maxflow as the one depicted in the previous figure.

These are but two examples (ones we can analyze!) of Ford–Fulkerson implementations. At the end of this section, we consider others. Before doing so, we consider the task of analyzing augmenting-path methods in order to learn their properties and, ultimately, to decide which one will have the best performance. In trying to choose among the family of algorithms represented by Program 22.3, we are in a familiar situation. Should we focus on worst-case performance guarantees, or do those represent a mathematical fiction that may not relate to networks that we encounter in practice? This question is particularly relevant in this context, because the classical worst-case performance bounds that we can establish are much higher than the actual performance results that we see for typical graphs. Many factors further complicate the situation. For example, the worst-case running time for several versions depends not just on V and E, but also on the values of the edge capacities in the network. Developing a maxflow algorithm with fast guaranteed performance has been a tantalizing problem for several decades, and numerous methods have been proposed.
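The maximum-capacity idea can be sketched under the same assumptions as the previous sketch (a matrix of residual capacities; a simple array scan standing in for the priority queue). Each iteration runs a Dijkstra-like priority-first search that maximizes the minimum residual capacity along the path, then augments by that bottleneck.

```java
// Sketch of the maximum-capacity-augmenting-path method: repeatedly
// augment along the path whose bottleneck residual capacity is largest.
public class MaxCapAugPath {
    public static int maxflow(int[][] cap, int s, int t) {
        int V = cap.length, total = 0;
        int[][] res = new int[V][V];
        for (int v = 0; v < V; v++) res[v] = cap[v].clone();
        while (true) {
            int[] pred = new int[V];
            int[] best = new int[V];             // largest bottleneck found so far
            boolean[] done = new boolean[V];
            java.util.Arrays.fill(pred, -1);
            pred[s] = s;
            best[s] = Integer.MAX_VALUE;
            while (true) {
                int v = -1;                      // reached, not-yet-done vertex of largest bottleneck
                for (int u = 0; u < V; u++)
                    if (!done[u] && pred[u] != -1 && (v == -1 || best[u] > best[v])) v = u;
                if (v == -1 || v == t) break;    // stop when we reach the sink
                done[v] = true;
                for (int w = 0; w < V; w++)
                    if (res[v][w] > 0 && !done[w] && Math.min(best[v], res[v][w]) > best[w]) {
                        best[w] = Math.min(best[v], res[v][w]);
                        pred[w] = v;
                    }
            }
            if (pred[t] == -1) return total;     // no augmenting path remains
            for (int v = t; v != s; v = pred[v]) {
                res[pred[v]][v] -= best[t];      // augment by the path's bottleneck
                res[v][pred[v]] += best[t];
            }
            total += best[t];
        }
    }
}
```

The array scan makes each search take time proportional to V^2, matching the dense-graph case mentioned below; a heap-based fringe would give the sparse-graph bound.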
Evaluating all these methods for all the types of networks that are likely to be encountered in practice, with sufficient precision to allow us to choose among them, is not as clear-cut as is the same task for other situations that we have studied, such as typical practical applications of sorting or searching algorithms. Keeping these difficulties in mind, we now consider the classical results about the worst-case performance of the Ford–Fulkerson method: one general bound and two specific bounds, one for each of the two augmenting-path algorithms that we have examined. These results serve more to give us insight into characteristics of the algorithms than to allow us to predict performance to a sufficient degree of accuracy for meaningful comparison. We discuss empirical comparisons of the methods at the end of the section.

Property 22.6 Let M be the maximum edge capacity in the network. The number of augmenting paths needed by any implementation of the Ford–Fulkerson algorithm is at most equal to VM.

Proof: Any cut has at most V edges, each of capacity at most M, for a total capacity of at most VM. Every augmenting path increases the flow through every cut by at least 1, so the algorithm must terminate after VM passes, since all cuts must be filled to capacity after that many augmentations.

As discussed below, such a bound is of little use in typical situations because M can be a very large number. Worse, it is easy to describe situations where the number of iterations is proportional to the maximum edge capacity. For example, suppose that we use a longest-augmenting-path algorithm (perhaps based on the intuition that the longer the path, the more flow we put on the network's edges). Since we are counting iterations, we ignore, for the moment, the cost of computing such a path. The (classical) example shown in the accompanying figure shows a network for which the number of iterations of a longest-augmenting-path algorithm is equal to the maximum edge capacity.
This example tells us that we must undertake a more detailed scrutiny to know whether other specific implementations use substantially fewer iterations than are indicated by Property 22.6.

Figure: Two scenarios for the Ford–Fulkerson algorithm. This network illustrates that the number of iterations used by the Ford–Fulkerson algorithm depends on the capacities of the edges in the network and the sequence of paths chosen by the implementation. It consists of four edges of capacity X and one of capacity 1. The scenario depicted at the top shows that an implementation that alternates between using 0-1-2-3 and 0-2-1-3 as augmenting paths (for example, one that prefers long paths) would require X pairs of iterations like the two pairs shown, each pair incrementing the total flow by 2. The scenario depicted at the bottom shows that an implementation that chooses 0-1-3 and then 0-2-3 as augmenting paths (for example, one that prefers short paths) finds the maximum flow in just two iterations. If edge capacities are, say, 32-bit integers, the scenario depicted at the top would be billions of times slower than the scenario depicted at the bottom.

For sparse networks and networks with small integer capacities, Property 22.6 does give a useful upper bound on the running time of any Ford–Fulkerson implementation.

Corollary The time required to find a maxflow is O(VEM), which is O(V^{2}M) for sparse networks.

Proof: Immediate from the basic result that generalized graph search is linear in the size of the graph representation (Property 18.12). As mentioned, we need an extra lg V factor if we are using a priority-queue fringe implementation.

The proof actually establishes that the factor of M can be replaced by the ratio between the largest and smallest nonzero capacities in the network (see Exercise 22.25).
When this ratio is low, the bound tells us that any Ford–Fulkerson implementation will find a maxflow in time proportional to the time required to (for example) solve the all-shortest-paths problem, in the worst case. There are many situations where the capacities are indeed low and the factor of M is of no concern. We will see an example later. When M is large, the VEM worst-case bound is high; but it is pessimistic, as we obtained it by multiplying together worst-case bounds that derive from contrived examples. Actual costs on practical networks are typically much lower. From a theoretical standpoint, our first goal is to discover, using rough subjective categorizations, whether or not the maximum-flow problem for networks with large integer weights is tractable (solvable by a polynomial-time algorithm). The bounds just derived do not resolve this question, because the maximum weight M = 2^{m} could grow exponentially with V and E. From a practical standpoint, we seek better performance guarantees. To pick a typical practical example, suppose that we use 32-bit integers (m = 32) to represent edge weights. In a graph with hundreds of vertices and thousands of edges, the corollary to Property 22.6 says that we might have to perform hundreds of trillions of operations in an augmenting-path algorithm. If we are dealing with millions of vertices, this point is moot, not only because we will not have weights as large as 2^{1000000}, but also because V^{3} and VE are so large as to make the bound meaningless. We are interested both in finding a polynomial bound to resolve the tractability question and in finding better bounds that are relevant for situations that we might encounter in practice. Property 22.6 is general: It applies to any Ford–Fulkerson implementation at all. The generic nature of the algorithm leaves us with a substantial amount of flexibility to consider a number of simple implementations in seeking to improve performance.
We expect that specific implementations might be subject to better worst-case bounds. Indeed, that is one of our primary reasons for considering them in the first place! Now, as we have seen, implementing and using a large class of these implementations is trivial: We just substitute different generalized-queue implementations or priority definitions in Program 22.3. Analyzing differences in worst-case behavior is more challenging, as indicated by the classical results that we consider next for the two basic augmenting-path implementations that we have considered. First, we analyze the shortest-augmenting-path algorithm. This method is not subject to the problem illustrated in the previous figure. Indeed, we can use it to replace the factor of M in the worst-case running time with VE/2, thus establishing that the network-flow problem is tractable. We might even classify it as being easy (solvable in polynomial time on practical cases by a simple, if clever, implementation).

Property 22.7 The number of augmenting paths needed in the shortest-augmenting-path implementation of the Ford–Fulkerson algorithm is at most VE/2.

Proof: First, as is apparent from the example in the figure, no augmenting path is shorter than a previous one. To establish this fact, we show by contradiction that a stronger property holds: No augmenting path can decrease the length of the shortest path from the source s to any vertex in the residual network. Suppose that some augmenting path does so, and that v is the first such vertex on the path. There are two cases to consider: Either no vertex on the new shorter path from s to v appears anywhere on the augmenting path, or some vertex w on the new shorter path from s to v appears somewhere between v and t on the augmenting path. Both situations contradict the minimality of the augmenting path.
Now, by construction, every augmenting path has at least one critical edge: an edge that is deleted from the residual network because it corresponds either to a forward edge that becomes filled to capacity or to a backward edge that is emptied. Suppose that an edge u-v is a critical edge for an augmenting path P of length d. The next augmenting path for which it is a critical edge has to be of length at least d + 2, because that path has to go from s to v, then along v-u, then from u to t. The first segment is of length at least 1 greater than the distance from s to u in P, and the final segment is of length at least 1 greater than the distance from v to t in P, so the path is of length at least 2 greater than P. Since augmenting paths are of length at most V, these facts imply that each edge can be the critical edge on at most V/2 augmenting paths, so the total number of augmenting paths is at most EV/2.

Corollary The time required to find a maxflow in a sparse network is O(V^{3}).

Proof: The time required to find an augmenting path is O(E), so the total time is O(VE^{2}). The stated bound follows immediately for sparse networks, where E is proportional to V.

The quantity V^{3} is sufficiently high that it does not provide a guarantee of good performance on huge networks. But that fact should not preclude us from using the algorithm on a huge network, because it is a worst-case performance result that may not be useful for predicting performance in a practical application. For example, as just mentioned, the maximum capacity M (or the maximum ratio between capacities) might be much less than V, so the corollary to Property 22.6 would provide a better bound. Indeed, in the best case, the number of augmenting paths needed by the Ford–Fulkerson method is the smaller of the outdegree of s and the indegree of t, which again might be far smaller than V. Given this range between best-case and worst-case performance, comparing augmenting-path algorithms solely on the basis of worst-case bounds is not wise.
Still, other implementations that are nearly as simple as the shortest-augmenting-path method might admit better bounds or be preferred in practice (or both). For example, the maximum-capacity-augmenting-path algorithm used far fewer paths to find a maxflow than did the shortest-augmenting-path algorithm in the example illustrated in Figures 22.17 and 22.18. We now turn to the worst-case analysis of that algorithm. First, just as for Prim's algorithm and for Dijkstra's algorithm (see Sections 20.6 and 21.2), we can implement the priority queue such that the algorithm takes time proportional to V^{2} (for dense graphs) or (E + V) log V (for sparse graphs) per iteration in the worst case, although these estimates are pessimistic because the algorithm stops when it reaches the sink. We also have seen that we can do slightly better with advanced data structures. The more important and more challenging question is how many augmenting paths are needed.

Property 22.8 The number of augmenting paths needed in the maximum-capacity-augmenting-path implementation of the Ford–Fulkerson algorithm is at most 2E lg M.

Proof: Given a network, let F be its maxflow value. Let v be the value of the flow at some point during the algorithm as we begin to look for an augmenting path. Applying Property 22.2 to the residual network, we can decompose the flow into at most E directed paths that sum to F − v, so the flow in at least one of the paths is at least (F − v)/E. Now, either we find the maxflow sometime before doing another 2E augmenting paths, or the value of the augmenting path after that sequence of 2E paths is less than (F − v)/2E, which is less than one-half of the value of the maximum before that sequence of 2E paths. That is, in the worst case, we need a sequence of 2E paths to decrease the path value by a factor of 2. The first path value is at most M, which we need to decrease by a factor of 2 at most lg M times, so we have a total of at most lg M sequences of 2E paths.
Corollary: The time required to find a maxflow in a sparse network is O(V^{2} lg M lg V). Proof: Immediate from the use of a heap-based priority-queue implementation, as for Properties 20.7 and 21.5. For values of M and V that are typically encountered in practice, this bound is significantly lower than the O(V^{3}) bound of the corollary to Property 22.7. In many practical situations, the maximum-augmenting-path algorithm uses significantly fewer iterations than does the shortest-augmenting-path algorithm, at the cost of a slightly higher bound on the work to find each path. There are many other variations to consider, as reflected by the extensive literature on maxflow algorithms. Algorithms with better worst-case bounds continue to be discovered, and no nontrivial lower bound has been proved—that is, the possibility of a simple linear-time algorithm remains. Although they are important from a theoretical standpoint, many of the algorithms are primarily designed to lower the worst-case bounds for dense graphs, so they do not offer substantially better performance than the maximum-augmenting-path algorithm for the kinds of sparse networks that we encounter in practice. Still, there remain many options to explore in pursuit of better practical maxflow algorithms. We briefly consider two more augmenting-path algorithms next; in the next section, we consider another family of algorithms. One easy augmenting-path algorithm is to use the value of a decreasing counter for P or a stack implementation for the generalized queue in Program 22.3, making the search for augmenting paths like depth-first search. The figure below shows the flow computed for our small example by this algorithm. The intuition is that the method is fast, is easy to implement, and appears to put flow throughout the network. As we will see, its performance varies remarkably, from extremely poor on some networks to reasonable on others.
Figure: Stack-based augmenting-path search. This sequence illustrates the result of using a stack for the generalized queue in our implementation of the Ford–Fulkerson method, so that the path search operates like DFS. In this case, the method does about as well as BFS, but its somewhat erratic behavior is rather sensitive to the network representation and has not been analyzed. Another alternative is to use a randomized-queue implementation for the generalized queue, so that the search for augmenting paths is a randomized search. The figure below shows the flow computed for our small example by this algorithm. This method is also fast and easy to implement; in addition, as we noted earlier, it may embody good features of both breadth-first and depth-first search. Randomization is a powerful tool in algorithm design, and this problem represents a reasonable situation in which to consider using it. Figure: Randomized augmenting-path search. This sequence illustrates the result of using a randomized queue for the fringe data structure in the augmenting-path search in the Ford–Fulkerson method. In this example, we happen upon the short high-capacity path and therefore need relatively few augmenting paths. While predicting the performance characteristics of this method is a challenging task, it performs well in many situations. We conclude this section by looking more closely at the methods that we have examined, in order to see the difficulty of comparing them or of attempting to predict performance for practical applications. As a start in understanding the quantitative differences that we might expect, we use two flow-network models that are based on the Euclidean graph model that we have been using to compare other graph algorithms. Both models use a graph drawn from V points in the plane with random coordinates between 0 and 1, with edges connecting any two points within a fixed distance of each other. They differ in the assignment of capacities to the edges.
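The generalized-queue idea can be sketched by parameterizing the path search on its fringe data structure: removing the oldest entry gives BFS (shortest augmenting paths), removing the newest gives a DFS-like search, and removing a random entry gives the randomized search. A minimal Python sketch (our own naming and representation, not Program 22.3 itself):

```python
import random
from collections import deque

def find_augmenting_path(res, s, t, fringe='queue'):
    """Augmenting-path search with a pluggable fringe, in the spirit
    of the generalized queue: 'queue' gives BFS, 'stack' gives a
    DFS-like search, 'random' removes a random fringe entry.
    res: dict-of-dicts of residual capacities.
    Returns an s-t path as a vertex list, or None if none exists."""
    parent = {s: None}
    fr = deque([s])
    while fr:
        if fringe == 'queue':
            u = fr.popleft()          # oldest entry: BFS
        elif fringe == 'stack':
            u = fr.pop()              # newest entry: DFS-like
        else:
            u = fr[random.randrange(len(fr))]
            fr.remove(u)              # random entry: randomized search
        if u == t:
            break
        for v, c in res[u].items():
            if c > 0 and v not in parent:
                parent[v] = u
                fr.append(v)
    if t not in parent:
        return None
    path, v = [t], t
    while parent[v] is not None:
        v = parent[v]
        path.append(v)
    return path[::-1]
```

Swapping the fringe changes only which augmenting paths are found, not the correctness of the overall Ford–Fulkerson method; as the experiments below suggest, it can change the number and character of the paths dramatically.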
The first model simply assigns the same constant value to each capacity. As discussed earlier, this type of network-flow problem is known to be easier than the general problem. For our Euclidean graphs, flows are limited by the outdegree of the source and the indegree of the sink, so the algorithms each need only a few augmenting paths. But the paths differ substantially for the various algorithms, as we shall soon see. The second model assigns random weights from some fixed range of values to the capacities. This model generates the type of networks that people typically envision when thinking about the problem, and the performance of the various algorithms on such networks is certainly instructive. Table 22.1. Empirical study of augmenting-path algorithms
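Both capacity models are easy to generate for experiments of this kind. The sketch below builds such a random Euclidean network in Python (the function name, parameters, and dict-of-dicts representation are our own assumptions for illustration):

```python
import random

def euclidean_flow_network(V, d, model='unit', max_cap=100, seed=1):
    """Random Euclidean flow network: V points in the unit square,
    with an edge in each direction between points within distance d.
    Capacities are all 1 ('unit' model) or uniform random integers
    in [1, max_cap] ('random' model)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(V)]
    cap = {v: {} for v in range(V)}
    for u in range(V):
        for v in range(u + 1, V):
            dx = pts[u][0] - pts[v][0]
            dy = pts[u][1] - pts[v][1]
            if dx * dx + dy * dy <= d * d:
                # Each direction gets its own capacity.
                cap[u][v] = 1 if model == 'unit' else rng.randint(1, max_cap)
                cap[v][u] = 1 if model == 'unit' else rng.randint(1, max_cap)
    return cap
```

Running the various augmenting-path methods on networks drawn from these two models is a simple way to reproduce experiments in the spirit of Table 22.1.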
Figure: Random flow networks. This figure depicts maxflow computations on our random Euclidean graph, with two different capacity models. On the left, all edges are assigned unit capacities; on the right, edges are assigned random capacities. The source is near the middle at the top, and the sink is near the middle at the bottom. Illustrated top to bottom are the flows computed by the shortest-path, maximum-capacity, stack-based, and randomized algorithms, respectively. Since the vertices are not of high degree and the capacities are small integers, there are many different flows that achieve the maximum for these examples. The indegree of the sink is 6, so all the algorithms find the flow in the unit-capacity model on the left with six augmenting paths. The methods find augmenting paths that differ dramatically in character for the random-weight model on the right. In particular, the stack-based method finds long paths of low weight and even produces a flow with a disconnected cycle. Table 22.1 gives more detailed quantitative results for the use of the four methods to compute the flows in this figure. An augmenting-path algorithm's performance depends not just on the number of augmenting paths but also on the lengths of those paths and on the cost of finding them. In particular, the running time is proportional to the number of edges examined in the inner loop of Program 22.3. As usual, this number might vary widely, even for a given graph, depending on properties of the representation; but we can still characterize different algorithms. For example, Figures 22.23 and 22.24 show the search trees for the maximum-capacity and shortest-path algorithms, respectively. These examples help support the general conclusion that the shortest-path method expends more effort to find augmenting paths with less flow than the maximum-capacity algorithm does, thus helping to explain why the latter is preferred.
Figure: Maximum-capacity augmenting paths (larger example). This figure depicts the augmenting paths computed by the maximum-capacity algorithm for the Euclidean network with random weights, along with the edges in the graph-search spanning tree (in gray). The resulting flow is shown at the bottom right. Figure: Shortest augmenting paths (larger example). This figure depicts the augmenting paths computed by the shortest-paths algorithm for the same Euclidean network with random weights, along with the edges in the graph-search spanning tree (in gray). In this case, this algorithm is much slower than the maximum-capacity algorithm, both because it requires a large number of augmenting paths (the paths shown are just the first 12 out of a total of 37) and because the spanning trees are larger (usually containing nearly all of the vertices). Perhaps the most important lesson that we can learn from studying particular networks in detail in this way is that the gap between the upper bounds of Properties 22.6 through 22.8 and the actual number of augmenting paths that the algorithms need for a given application might be enormous. For example, the flow network illustrated here has 177 vertices and 2000 edges of capacity less than 100, so the value of the quantity 2E lg M in Property 22.8 is more than 25,000; but the maximum-capacity algorithm finds the maxflow with only seven augmenting paths. Similarly, the value of the quantity VE/2 in Property 22.7 for this network is 177,000, but the shortest-path algorithm needs only 37 paths. As we have already indicated, the relatively low vertex degree and the locality of the connections partially explain these differences between theoretical and actual performance in this case. We can prove more accurate performance bounds that account for such details; but such disparities are the rule, not the exception, in flow-network models and in practical networks.
On the one hand, we might take these results to indicate that these networks are not sufficiently general to represent the networks that we encounter in practice; on the other hand, perhaps the worst-case analysis is further removed from practice than these kinds of networks are. Large gaps like this certainly provide strong motivation for researchers seeking to lower the worst-case bounds. There are many other possible implementations of augmenting-path algorithms to consider that might lead to better worst-case performance or better practical performance than the methods that we have considered (see Exercises 22.56 through 22.60). Numerous methods that are more sophisticated and have been shown to have improved worst-case performance can be found in the research literature (see reference section). Another important complication arises when we consider the numerous other problems that reduce to the maxflow problem. When such reductions apply, the flow networks that result may have some special structure that a particular algorithm may be able to exploit for improved performance. For example, later in this chapter we will examine a reduction that gives flow networks with unit capacities on all edges. Even when we restrict attention just to augmenting-path algorithms, we see that the study of maxflow algorithms is both an art and a science. The art lies in picking the strategy that is most effective for a given practical situation; the science lies in understanding the essential nature of the problem. Are there new data structures and algorithms that can solve the maxflow problem in linear time, or can we prove that none exist? In the next section, we see that no augmenting-path algorithm can have linear worst-case performance, and we examine a different generic family of algorithms that might. Exercises
