Network Simplex Algorithm

The running time of the cycle-canceling algorithm depends not just on the number of negative-cost cycles that the algorithm uses to reduce the flow cost but also on the time that it takes to find each of those cycles. In this section, we consider a basic approach that both dramatically decreases the cost of identifying negative cycles and admits methods for reducing the number of iterations. This implementation of the cycle-canceling algorithm is known as the network simplex algorithm. It is based on maintaining a tree data structure and reweighting costs such that negative cycles can be identified quickly. To describe the network simplex algorithm, we begin by noting that, with respect to any flow, each network edge u-v is in one of three states (see the accompanying figure).

Figure: Edge classification. With respect to any flow, every edge is either empty, full, or partial (neither empty nor full). In this flow, edge 1-4 is empty; edges 0-2, 2-3, 2-4, 3-5, and 4-5 are full; and edges 0-1 and 1-3 are partial. Our conventions in figures give two ways to identify an edge's state: In the flow column, 0 entries are empty edges, starred entries are full edges, and entries that are neither 0 nor starred are partial edges. In the residual network (bottom), empty edges appear only in the left column, full edges appear only in the right column, and partial edges appear in both columns.
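The three edge states can be captured in a few lines of code. The following is a minimal sketch, assuming a simple (flow, capacity) representation for edges; the data and names are illustrative, not the book's network representation:

```python
def edge_state(flow, capacity):
    """Classify an edge by its flow relative to its capacity."""
    if flow == 0:
        return "empty"     # only the forward edge appears in the residual network
    if flow == capacity:
        return "full"      # only the backward edge appears in the residual network
    return "partial"       # both directions appear in the residual network

# Flows and capacities loosely modeled on the figure's example network.
edges = {(1, 4): (0, 3), (0, 2): (2, 2), (0, 1): (2, 3)}
states = {e: edge_state(f, c) for e, (f, c) in edges.items()}
```

In the residual network, an empty edge contributes only its forward edge, a full edge only its backward edge, and a partial edge both.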
This classification is familiar from our use of residual networks throughout this chapter. If u-v is an empty edge, then u-v is in the residual network, but v-u is not; if u-v is a full edge, then v-u is in the residual network, but u-v is not; if u-v is a partial edge, then both u-v and v-u are in the residual network.

Definition 22.9 Given a maxflow with no cycle of partial edges, a feasible spanning tree for the maxflow is any spanning tree of the network that contains all the flow's partial edges.

In this context, we ignore edge directions in the spanning tree. That is, any set of V - 1 directed edges that connects the network's V vertices (ignoring edge directions) constitutes a spanning tree, and a spanning tree is feasible if all nontree edges are either full or empty. The first step in the network simplex algorithm is to build a spanning tree. One way to build it is to compute a maxflow, to break cycles of partial edges by augmenting along each cycle to fill or empty one of its edges, then to add full or empty edges to the remaining partial edges to make a spanning tree. An example of this process is illustrated in the first figure below. Another option is to start with the maxflow in a dummy edge from source to sink. Then, this edge is the only possible partial edge, and we can build a spanning tree for the flow with any graph search. An example of such a spanning tree is illustrated in the second figure below.

Figure: Maxflow spanning tree. Given any maxflow (top), we can construct a maxflow that has a spanning tree such that no nontree edges are partial by the two-step process illustrated in this example. First, we break cycles of partial edges; in this case, we break the cycle 0-2-4-1-0 by pushing 1 unit of flow along it. We can always fill or empty at least one edge in this way; in this case, we empty 1-4 and fill both 0-2 and 2-4 (center). Second, we add empty or full edges to the set of partial edges to make a spanning tree; in this case, we add 0-2, 1-4, and 3-5 (bottom).
Figure: Spanning tree for dummy maxflow. If we start with flow on a dummy edge from source to sink, then that edge is the only possible partial edge, so we can use any spanning tree of the remaining nodes to build a spanning tree for the flow. In the example, the edges 0-5, 0-1, 0-2, 1-3, and 1-4 comprise a spanning tree for the initial maxflow. All of the nontree edges are empty.

Now, adding any nontree edge to a spanning tree creates a cycle. The basic mechanism behind the network simplex algorithm is a set of vertex weights that allows immediate identification of the edges that, when added to the tree, create negative-cost cycles in the residual network. We refer to these vertex weights as potentials and use the notation f(v) to refer to the potential associated with v. Depending on context, we refer to potentials as a function defined on the vertices, or as a set of integer weights with the implicit assumption that one is assigned to each vertex, or as a vertex-indexed array (since we always store them that way in implementations).

Definition 22.10 Given a flow in a flow network with edge costs, let c(u, v) denote the cost of u-v in the flow's residual network. For any potential function f, the reduced cost of an edge u-v in the residual network with respect to f, which we denote by c*(u, v), is defined to be the value c(u, v) - (f(u) - f(v)).

In other words, the reduced cost of every edge is the difference between that edge's actual cost and the difference of the potentials of the edge's vertices. In the merchandise distribution application, we can see the intuition behind node potentials: If we interpret the potential f(u) as the cost of buying a unit of material at node u, the full cost c*(u, v) + f(u) - f(v) is the cost of buying at u, shipping to v, and selling at v. We maintain a vertex-indexed array phi for the vertex potentials and compute the reduced cost of an edge v-w by subtracting from the edge's cost the value (phi[v] - phi[w]).
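The reduced-cost computation just described can be sketched in a few lines. The function name and the sample potentials below are illustrative, not the book's code:

```python
def reduced_cost(cost, phi, v, w):
    """Reduced cost of the residual edge v->w: c(v, w) - (phi[v] - phi[w])."""
    return cost - (phi[v] - phi[w])

phi = [0, 3, 1]                   # illustrative potentials for vertices 0, 1, 2
r01 = reduced_cost(4, phi, 0, 1)  # 4 - (0 - 3) = 7
r12 = reduced_cost(2, phi, 1, 2)  # 2 - (3 - 1) = 0
```

Under valid potentials, a tree edge always comes out to zero, as in the second call.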
There is no need to store the reduced edge costs anywhere, because it is so easy to compute them. In the network simplex algorithm, we use feasible spanning trees to define vertex potentials such that reduced edge costs with respect to those potentials give direct information about negative-cost cycles. Specifically, we maintain a feasible spanning tree throughout the execution of the algorithm, and we set the values of the vertex potentials such that all tree edges have reduced cost zero.

Property 22.25 We say that a set of vertex potentials is valid with respect to a spanning tree if all tree edges have zero reduced cost. All valid vertex potentials for any given spanning tree imply the same reduced costs for each network edge.

Proof: Given two different potential functions f and f' that are both valid with respect to a given spanning tree, we show that they differ by an additive constant: that is, f(u) = f'(u) + D for all u and some constant D. Then, f(u) - f(v) = f'(u) - f'(v) for all u and v, implying that all reduced costs are the same for the two potential functions. For any two vertices u and v that are connected by a tree edge, we must have f(v) = f(u) - c(u, v), by the following argument: If u-v is a tree edge, then f(v) must be equal to f(u) - c(u, v) to make the reduced cost c(u, v) - f(u) + f(v) equal to zero; if v-u is a tree edge, then f(v) must be equal to f(u) + c(v, u) = f(u) - c(u, v) to make the reduced cost c(v, u) - f(v) + f(u) equal to zero. The same argument holds for f', so we must also have f'(v) = f'(u) - c(u, v). Subtracting, we find that f(v) - f'(v) = f(u) - f'(u) for any u and v connected by a tree edge. Denoting this difference by D for any vertex and applying the equality along the edges of any search tree of the spanning tree immediately gives the desired result that f(u) = f'(u) + D for all u.
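Property 22.25 is easy to check numerically. The sketch below (a hypothetical tree and costs, not the book's data structures) computes potentials for the same spanning tree from two different root values; the results differ by a constant, so every potential difference, and hence every reduced cost, agrees:

```python
parent = {1: 0, 2: 0, 3: 1}                 # tree edges 0-1, 0-2, 1-3 (hypothetical)
cost = {(0, 1): 4, (0, 2): 2, (1, 3): 3}    # cost of u->v for each tree edge u-v

def potentials(root_value):
    """Assign potentials so every tree edge u->v has phi[v] = phi[u] - c(u, v)."""
    phi = {0: root_value}
    for v in (1, 2, 3):                     # parents precede children in this order
        u = parent[v]
        phi[v] = phi[u] - cost[(u, v)]
    return phi

phi_a = potentials(0)
phi_b = potentials(10)
# phi_b[v] - phi_a[v] is the same constant (10) for every vertex v.
```

Because only differences of potentials enter the reduced costs, the choice of starting value is immaterial, exactly as the proof shows.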
Another way to imagine the process of defining a set of valid vertex potentials is that we start by fixing one value, then compute the values for all vertices connected to that vertex by tree edges, then compute them for all vertices connected to those vertices, and so forth. No matter where we start the process, the potential difference between any two vertices is the same, determined by the structure of the tree. The accompanying figure depicts an example. We consider the details of the task of computing potentials after we examine the relationship between reduced costs on nontree edges and negative-cost cycles.

Figure: Vertex potentials. Vertex potentials are determined by the structure of the spanning tree and by an initial assignment of a potential value to any vertex. At left is a set of edges that comprise a spanning tree of the ten vertices 0 through 9. In the center is a representation of that tree with 5 at the root, vertices connected to 5 one level lower, and so forth. When we assign the root the potential value 0, there is a unique assignment of potentials to the other nodes that makes the difference between the potentials of each edge's vertices equal to its cost. At right is a different representation of the same tree with 0 at the root. The potentials that we get by assigning 0 the value 0 differ from those in the center by a constant offset. All our computations use the difference between two potentials: This difference is the same for any pair of potentials no matter what vertex we start with (and no matter what value we assign it), so our choice of starting vertex and value is immaterial.

Property 22.26 We say that a nontree edge is eligible if the cycle that it creates with tree edges is a negative-cost cycle in the residual network. An edge is eligible if and only if it is a full edge with positive reduced cost or an empty edge with negative reduced cost.
Proof: Suppose that the edge u-v creates the cycle t1-t2-t3-...-td-t1 with tree edges t1-t2, t2-t3, ..., where v is t1 and u is td. The reduced-cost definition applied to each edge in the cycle implies the following:

c(u, v) = c*(u, v) + f(u) - f(v)
c(t1, t2) = f(t1) - f(t2)
c(t2, t3) = f(t2) - f(t3)
...
c(t(d-1), td) = f(t(d-1)) - f(td)

(the tree edges have zero reduced cost, and f(v) = f(t1) and f(u) = f(td)). The left-hand side of the sum of these equations gives the total cost of the cycle, and the right-hand side collapses to c*(u, v), since the potential terms telescope. In other words, the edge's reduced cost gives the cycle cost. If u-v is an empty edge, the cycle traverses u-v and so has cost c*(u, v), which is negative precisely when the reduced cost is negative; if u-v is a full edge, the cycle traverses the reverse edge v-u and so has cost -c*(u, v), which is negative precisely when the reduced cost is positive. Thus, only the edges described can give a negative-cost cycle.

Property 22.27 If we have a flow and a feasible spanning tree with no eligible edges, the flow is a mincost flow.

Proof: If there are no eligible edges, then there are no negative-cost cycles in the residual network, so the optimality condition of Property 22.23 implies that the flow is a mincost flow. An equivalent statement is that if we have a flow and a set of vertex potentials such that reduced costs of tree edges are all zero, reduced costs of full nontree edges are all nonpositive, and reduced costs of empty nontree edges are all nonnegative, then the flow is a mincost flow.

If we have eligible edges, we can choose one and augment along the cycle that it creates with tree edges to get a lower-cost flow. As we did with the cycle-canceling implementation, we go through the cycle to find the maximum amount of flow that we can push, then go through the cycle again to push that amount, which fills or empties at least one edge. If that is the eligible edge that we used to create the cycle, it becomes ineligible (its reduced cost stays the same, but it switches from full to empty or from empty to full). Otherwise, it becomes partial. By adding it to the tree and removing a full edge or an empty edge from the cycle, we maintain the invariant that no nontree edges are partial and that the tree is a feasible spanning tree. Again, we consider the mechanics of this computation later in this section. In summary, feasible spanning trees give us vertex potentials, which give us reduced costs, which give us eligible edges, which give us negative-cost cycles.
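The eligibility test of Property 22.26 reduces to two comparisons per nontree edge. A sketch, with hypothetical names:

```python
def eligible(flow, capacity, red_cost):
    """A nontree edge is eligible if it is full with positive reduced cost
    or empty with negative reduced cost (Property 22.26)."""
    full = (flow == capacity)
    empty = (flow == 0)
    return (full and red_cost > 0) or (empty and red_cost < 0)
```

A partial edge is never tested: in a feasible spanning tree, every partial edge is a tree edge, and tree edges have zero reduced cost by construction.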
Augmenting along a negative-cost cycle reduces the flow cost and also implies changes in the tree structure. Changes in the tree structure imply changes in the vertex potentials; changes in the vertex potentials imply changes in reduced edge costs; and changes in reduced costs imply changes in the set of eligible edges. After making all these changes, we can pick another eligible edge and start the process again. This generic implementation of the cycle-canceling algorithm for solving the mincost-flow problem is called the network simplex algorithm: Build a feasible spanning tree and maintain vertex potentials such that all tree edges have zero reduced cost; then, as long as there is an eligible edge, augment along the cycle that it makes with tree edges, and update the tree and the potentials.
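The generic algorithm can be outlined as follows. This is a hypothetical skeleton: the three operations are passed in as functions precisely because the generic algorithm leaves them unspecified.

```python
def network_simplex(find_eligible_edge, augment_cycle, update_tree):
    """Generic network simplex loop: cancel cycles until none are eligible."""
    while True:
        edge = find_eligible_edge()          # uses reduced costs (Property 22.26)
        if edge is None:
            return "mincost flow"            # no eligible edges: done (Property 22.27)
        removed = augment_cycle(edge)        # push flow; fills or empties an edge
        update_tree(edge, removed)           # substitute edge, revalidate potentials

# A toy run: one "eligible edge" queued up, trivial stand-ins for the other steps.
pending = [("u", "v")]
result = network_simplex(
    lambda: pending.pop() if pending else None,
    lambda e: e,                             # pretend the chosen edge itself fills
    lambda e, r: None,
)
```

Any concrete implementation fills in the three operations; the rest of the section develops parent-link-based versions of each.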
This implementation is a generic one because the initial choice of spanning tree, the method of maintaining vertex potentials, and the method of choosing eligible edges are not specified. The strategy for choosing eligible edges determines the number of iterations, which trades off against the differing costs of implementing various strategies and recalculating the vertex potentials.

Property 22.28 If the generic network simplex algorithm terminates, it computes a mincost flow.

Proof: If the algorithm terminates, it does so because there are no negative-cost cycles in the residual network, so by Property 22.23 the maxflow is of mincost.

The possibility that the algorithm might not terminate derives from the fact that augmenting along a cycle might fill or empty multiple edges, thus leaving edges in the tree through which no flow can be pushed. If we cannot push flow, we cannot reduce the cost, and we could get caught in an infinite loop adding and removing edges to make a fixed sequence of spanning trees. Several ways to avoid this problem have been devised; we discuss them later in this section after we look in more detail at implementations. The first choice that we face in developing an implementation of the network simplex algorithm is what representation to use for the spanning tree. We have three primary computational tasks that involve the tree: computing the vertex potentials implied by the tree structure; augmenting along the cycle formed when an eligible edge is added to the tree; and inserting the new edge and removing an edge on the cycle that is formed.
Each of these tasks is an interesting exercise in data structure and algorithm design. There are several data structures and numerous algorithms that we might consider, with varied performance characteristics. We begin by considering perhaps the simplest available data structure, which we first encountered in Chapter 1: the parent-link tree representation. After we examine algorithms and implementations that are based on the parent-link representation for the tasks just listed and describe their use in the context of the network simplex algorithm, we discuss alternative data structures and algorithms. As we did in several other implementations in this chapter, beginning with the augmenting-path maxflow implementation, we keep links into the network representation, rather than simple indices, in the tree representation, to allow us to have access to the flow values that need to be changed without losing access to the vertex name. Program 22.10 is an implementation that assigns vertex potentials in time proportional to V. It is based on the following idea, also illustrated in the accompanying figure. We start with any vertex and recursively compute the potentials of its ancestors, following parent links up to the root, to which, by convention, we assign potential 0. Then, we pick another vertex and use parent links to compute recursively the potentials of its ancestors. The recursion terminates when we reach an ancestor whose potential is known; then, on the way out of the recursion, we travel back down the path, computing each node's potential from its parent. We continue this process until we have computed all potential values. Once we have traveled along a path, we do not revisit any of its edges, so this process runs in time proportional to V.
Figure: Computing potentials through parent links. We start at 0, follow the path to the root, set pt[5] to 0, then work down the path, first setting pt[6] to make pt[6] - pt[5] equal to the cost of 6-5, then setting pt[3] to make pt[3] - pt[6] equal to the cost of 3-6, and so forth (left). Then we start at 1 and follow parent links until hitting a vertex whose potential is known (6 in this case) and work down the path to compute potentials on 9 and 1 (center). When we start at 2, we can compute its potential from its parent (right); when we start at 3, we see that its potential is already known, and so forth. In this example, when we try each vertex after 1, we either find that its potential has already been computed or we can compute the value from its parent. We never retrace an edge, no matter what the tree structure, so the total running time is linear.
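The idea behind Program 22.10 might be sketched as follows; this is a Python rendering with dictionaries standing in for the book's vertex-indexed arrays, and the names are illustrative. We climb parent links until a vertex with a known potential is reached, then fill in potentials on the way back down:

```python
def all_potentials(parent, cost, root):
    """parent[v] is v's tree parent; cost[(u, v)] is the cost of tree edge u-v.
    Computes every potential, visiting each edge at most a constant number of times."""
    phi = {root: 0}                  # by convention, the root gets potential 0
    for start in parent:
        path = []
        v = start
        while v not in phi:          # climb until we hit a known potential
            path.append(v)
            v = parent[v]
        while path:                  # unwind, computing each child from its parent
            w = path.pop()
            phi[w] = phi[parent[w]] - cost[(parent[w], w)]
    return phi

pt = all_potentials({1: 0, 2: 0, 3: 1}, {(0, 1): 4, (0, 2): 2, (1, 3): 3}, 0)
```

Once a vertex's potential is known, later climbs stop at it immediately, which is what keeps the total work proportional to V.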
Given two nodes in a tree, their least common ancestor (LCA) is the root of the smallest subtree that contains them both. The cycle that we form by adding an edge connecting two nodes consists of that edge plus the edges on the two paths from the two nodes to their LCA. The augmenting cycle formed by adding v-w goes through v-w to w, up the tree to the LCA of v and w (say r), then down the tree to v, so we have to consider edges in opposite orientation in the two paths. As before, we augment along the cycle by traversing the paths once to find the maximum amount of flow that we can push through their edges, then traversing the two paths again, pushing flow through them. We do not need to consider edges in order around the cycle; it suffices to consider them all (in either direction). Accordingly, we can simply follow paths from each of the two nodes to their LCA. To augment along the cycle formed by the addition of v-w, we push flow from v to w; from v along the path to the LCA r; and from w along the path to r, but in the reverse direction for each edge. Program 22.11 is an implementation of this idea, in the form of a method that augments a cycle and also returns an edge that is emptied or filled by the augmentation. The implementation in Program 22.11 uses a simple technique to avoid paying the cost of initializing all the marks each time that we call it. We maintain the marks as global variables, initialized to zero. Each time that we seek an LCA, we increment a global counter and mark vertices by setting their corresponding entry in a vertex-indexed array to that counter. After initialization, this technique allows us to perform the computation in time proportional to the length of the cycle. In typical problems, we might augment along a large number of small cycles, so the time saved can be substantial. As we learn, the same technique is useful in saving time in other parts of the implementation.
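The marking technique can be sketched like this (hypothetical names; the marks and the counter live outside the function precisely so that no per-call initialization is needed):

```python
mark = {}      # vertex -> value of counter when last marked
counter = 0    # bumped on every LCA query, so old marks are simply stale

def lca(parent, root, v, w):
    """Least common ancestor of v and w in a parent-link tree."""
    global counter
    counter += 1
    u = v
    while True:                      # mark the path from v up to the root
        mark[u] = counter
        if u == root:
            break
        u = parent[u]
    u = w
    while mark.get(u) != counter:    # climb from w until hitting a marked vertex
        u = parent[u]
    return u
```

Because a mark counts only if it equals the current counter value, there is no need to clear the mark table between calls; each query costs time proportional to the two paths climbed.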
Our third tree-manipulation task is to substitute an edge u-v for another edge in the cycle that it creates with tree edges. Program 22.12 is an implementation of a method that accomplishes this task for the parent-link representation. Again, the LCA of u and v is important, because the edge to be removed is either on the path from u to the LCA or on the path from v to the LCA. Removing an edge detaches all its descendants from the tree, but we can repair the damage by reversing the links between u-v and the removed edge, as illustrated in the accompanying figure.

Figure: Spanning tree substitution. This example illustrates the basic tree-manipulation operation in the network simplex algorithm for the parent-link representation. At left is a sample tree with links all pointing upwards, as indicated by the parent-link structure ST. (In our code, the function ST computes the parent of the given vertex from the edge pointer in the vertex-indexed vector st.) Adding the edge 1-2 creates a cycle with the paths from 1 and 2 to their LCA, 11. If we then delete one of those edges, say 0-3, the structure remains a tree. To update the parent-link array to reflect the change, we switch the directions of all the links from 2 up to 3 (center). The tree at right is the same tree with node positions changed so that links all point up, as indicated by the parent-link array that represents the tree (bottom right).

These three implementations support the basic operations underlying the network simplex algorithm: We can choose an eligible edge by examining reduced costs and flows; we can use the parent-link representation of the spanning tree to augment along the negative cycle formed with tree edges and the chosen eligible edge; and we can update the tree and recalculate potentials. These operations are illustrated for an example flow network in Figures 22.49 and 22.50.
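The link-reversal step might be sketched as follows, assuming (hypothetically) that the tree is a plain parent-link dictionary and that the edge to be removed lies on the path from u up toward the LCA; this is an illustration of the idea, not Program 22.12:

```python
def substitute(parent, u, v, x):
    """Add edge u-v (v becomes u's parent) and remove the tree edge between x
    and parent[x], where x lies on the tree path from u up to the LCA of u and v.
    Links on the path from u up to x are reversed to repair the tree."""
    node, new_parent = u, v
    while True:
        old_parent = parent[node]    # remember the old link before overwriting it
        parent[node] = new_parent    # reverse: node now hangs below new_parent
        if node == x:
            break                    # the edge x-old_parent is the one removed
        node, new_parent = old_parent, node

parent = {1: 0, 2: 1, 3: 2}          # a chain: 0 is the root, 3 the deepest node
substitute(parent, 3, 0, 1)          # add 3-0, remove tree edge 1-0
```

After the call, 3 hangs below 0 and the old chain is reversed beneath it, so the structure is again a tree on the same vertices.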
Figure: Network simplex initialization. To initialize the data structures for the network simplex algorithm, we start with zero flow on all edges (left), then add a dummy edge 0-5 from source to sink with flow no less than the maxflow value (for clarity, we use a value equal to the maxflow value here). The cost value 9 for the dummy edge is greater than the cost of any cycle in the network; in the implementation, we use the value CV. The dummy edge is not shown in the flow network, but it is included in the residual network (center). We initialize the spanning tree with the sink at the root, the source as its only child, and a search tree of the graph induced by the remaining nodes in the residual network. The implementation uses the parent-edge representation of the tree in the array st and the related parent-vertex function ST; our figures depict this representation and two others: the rooted representation shown on the right and the set of shaded edges in the residual network. The vertex potentials are in the pt array and are computed from the tree structure so as to make the difference of a tree edge's vertex potentials equal to its cost. The column labeled costR in the center gives the reduced costs for nontree edges, which are computed for each edge by subtracting the difference of its vertex potentials from its cost. Reduced costs for tree edges are zero and left blank. Empty edges with negative reduced cost and full edges with positive reduced cost (eligible edges) are marked with asterisks.

Figure: Residual networks and spanning trees (network simplex). Each row in this figure corresponds to an iteration of the network simplex algorithm following the initialization depicted in the previous figure: On each iteration, the algorithm chooses an eligible edge, augments along a cycle, and updates the data structures as follows: First, the flow is augmented, including implied changes in the residual network.
Second, the tree structure ST is changed by adding an eligible edge and deleting an edge on the cycle that it makes with tree edges. Third, the table of potentials phi is updated to reflect the changes in the tree structure. Fourth, the reduced costs of the nontree edges (column marked costR in the center) are updated to reflect the potential changes, and these values are used to identify empty edges with negative reduced cost and full edges with positive reduced cost as eligible edges (marked with asterisks on reduced costs). Implementations need not actually make all these computations (they only need to compute potential changes and reduced costs sufficient to identify an eligible edge), but we include all the numbers here to provide a full illustration of the algorithm. The final augmentation for this example is degenerate: It does not increase the flow, but it leaves no eligible edges, which guarantees that the flow is a mincost maxflow.
The first of these figures illustrates initialization of the data structures using a dummy edge with the maxflow on it, as described earlier. Shown there are an initial feasible spanning tree with its parent-link representation, the corresponding vertex potentials, the reduced costs for the nontree edges, and the initial set of eligible edges. Also, rather than computing the maxflow value in the implementation, we use the outflow from the source, which is guaranteed to be no less than the maxflow value; we use the maxflow value here to make the operation of the algorithm easier to trace. The second figure illustrates the changes in the data structures for each of a sequence of eligible edges and augmentations around negative-cost cycles. The sequence does not reflect any particular method for choosing eligible edges; it represents the choices that make the augmenting paths the same as those depicted earlier. These figures show all vertex potentials and all reduced costs after each cycle augmentation, even though many of these numbers are implicitly defined and are not necessarily computed explicitly by typical implementations. The purpose of these two figures is to illustrate the overall progress of the algorithm and the state of the data structures as the algorithm moves from one feasible spanning tree to another simply by adding an eligible edge and removing a tree edge on the cycle that is formed.
One critical fact that is illustrated in the example is that the algorithm might not even terminate, because full or empty edges on the spanning tree can stop us from pushing flow along the negative cycle that we identify. That is, we can identify an eligible edge and the negative cycle that it makes with spanning tree edges, but the maximum amount of flow that we can push along the cycle may be 0. In this case, we still substitute the eligible edge for an edge on the cycle, but we make no progress in reducing the cost of the flow. To ensure that the algorithm terminates, we need to prove that we cannot end up in an endless sequence of zero-flow augmentations. If there is more than one full or empty edge on the augmenting cycle, the substitution algorithm in Program 22.12 always deletes from the tree the one closest to the LCA of the eligible edge's two vertices. Fortunately, it has been shown that this particular strategy for choosing the edge to remove from the cycle suffices to ensure that the algorithm terminates (see reference section). The final important choice that we face in developing an implementation of the network simplex algorithm is a strategy for identifying eligible edges and choosing one to add to the tree. Should we maintain a data structure containing eligible edges? If so, how sophisticated a data structure is appropriate? The answer to these questions depends somewhat on the application and the dynamics of solving particular instances of the problem. If the total number of eligible edges is small, then it is worthwhile to maintain a separate data structure; if most edges are eligible most of the time, it is not. Maintaining a separate data structure could spare us the expense of searching for eligible edges but could also require costly update computations. What criteria should we use to pick from among the eligible edges? Again, there are many that we might adopt. We consider examples in our implementations, then discuss alternatives.
For example, Program 22.13 illustrates a method that finds an eligible edge of minimal reduced cost: No other edge gives a cycle for which augmenting along the cycle would lead to a greater reduction in the total cost. Program 22.14 is a full implementation of the network simplex algorithm that uses the strategy of choosing the eligible edge giving a negative cycle whose cost is highest in absolute value. The implementation uses the tree-manipulation methods and the eligible-edge search of Programs 22.10 through 22.13, but the comments that we made regarding our first cycle-canceling implementation (Program 22.9) apply:
It is remarkable that such a simple piece of code is sufficiently powerful to provide useful solutions in the context of a general problem-solving model with the reach of the mincost-flow problem. The worst-case performance bound for Program 22.14 is at least a factor of V lower than that for the cycle-canceling implementation in Program 22.9, because the time per iteration is just E (to find the eligible edge) rather than VE (to find a negative cycle). Although we might suspect that using the maximum augmentation will result in fewer augmentations than just taking the first negative cycle provided by the Bellman–Ford algorithm, that suspicion has not been proved valid. Specific bounds on the number of augmenting cycles used are difficult to develop, and, as usual, these bounds are far higher than the numbers that we see in practice. As mentioned earlier, there are theoretical results demonstrating that certain strategies can guarantee that the number of augmenting cycles is bounded by a polynomial in the number of edges, but practical implementations typically admit an exponential worst case.
In light of these considerations, there are many options to consider in pursuit of improved performance. For example, Program 22.15 is another implementation of the network simplex algorithm. The straightforward implementation in Program 22.14 always takes time proportional to V to revalidate the tree potentials and always takes time proportional to E to find the eligible edge with the largest reduced cost. The implementation in Program 22.15 is designed to eliminate both of these costs in typical networks. First, even if choosing the maximum edge leads to the fewest iterations, expending the effort of examining every edge to find the maximum may not be worthwhile. We could do numerous augmentations along short cycles in the time that it takes to scan all the edges. Accordingly, it is worthwhile to consider the strategy of using any eligible edge, rather than taking the time to find a particular one. In the worst case, we might have to examine all the edges or a substantial fraction of them to find an eligible edge, but we typically expect to need to examine relatively few edges to find an eligible one. One approach is to start from the beginning each time; another is to pick a random starting point (see Exercise 22.126). This use of randomness also makes an artificially long sequence of augmenting paths unlikely. Second, we adopt a lazy approach to computing potentials. Rather than computing all the potentials in the vertex-indexed array phi, then referring to them when we need them, we call the method phiR to get each potential value; it travels up the tree to find a valid potential, then computes the necessary potentials on that path. To implement this approach, we simply change the method that defines the cost to use the method invocation phiR(u), instead of the array access phi[u].
In the worst case, we calculate all the potentials in the same way as before; but if we examine only a few eligible edges, then we calculate only those potentials that we need to identify them. Such changes do not affect the worst-case performance of the algorithm, but they certainly speed it up in practical applications. Several other ideas for improving the performance of the network simplex algorithm are explored in the exercises (see Exercises 22.126 through 22.130); those represent only a small sample of the ones that have been proposed.
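The lazy scheme behind phiR might be sketched as follows; this is a hypothetical Python rendering, not the book's version, which works with its tree representation directly. A per-vertex validity stamp, compared against an epoch counter that is bumped whenever the tree changes, makes stale potentials recompute themselves on demand:

```python
phi, valid = {}, {}
epoch = 1      # bumped whenever the tree changes, invalidating cached potentials

def phi_r(parent, cost, root, v):
    """Return v's potential, computing (and caching) it only if it is stale."""
    if v == root:
        return 0
    if valid.get(v) != epoch:                # stale or never computed
        p = parent[v]
        phi[v] = phi_r(parent, cost, root, p) - cost[(p, v)]
        valid[v] = epoch
    return phi[v]
```

Only the potentials on the paths that are actually climbed get computed, which is the point: if few eligible edges are examined, few potentials are touched.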
As we have emphasized throughout this book, the task of analyzing and comparing graph algorithms is complex. With the network simplex algorithm, the task is further complicated by the variety of different implementation approaches and the broad array of types of applications that we might encounter. Which implementation is best? Should we compare implementations based on the worst-case performance bounds that we can prove? How accurately can we quantify performance differences of various implementations for specific applications? Should we use multiple implementations, each tailored to specific applications? Readers are encouraged to gain computational experience with various network simplex implementations and to address some of these questions by running empirical studies of the kind that we have emphasized throughout this book. When seeking to solve mincost-flow problems, we are faced with the familiar fundamental challenges, but the experience that we have gained in tackling increasingly difficult problems throughout this book provides ample background to develop efficient implementations that can effectively solve a broad variety of important practical problems. Some such studies are described in the exercises at the end of this and the next section, but those exercises should be viewed as a starting point. Each reader can craft a new empirical study that sheds light on some particular implementation/application pair of interest. The potential to improve performance dramatically for critical applications through proper deployment of classic data structures and algorithms (or the development of new ones) for basic tasks makes the study of network simplex implementations a fruitful research area, and there is a large literature on network simplex implementations. Progress in this research has been crucial because it helps to reduce the huge cost of solving mincost-flow problems.
People tend to rely on carefully crafted libraries to attack these problems, and that is still appropriate in many situations. However, it is difficult for such libraries to keep up with recent research and to adapt to the variety of problems that arise in new applications. With the speed and size of modern computers, accessible implementations like Programs 22.12 and 22.13 can be starting points for the development of effective problem-solving tools for numerous applications.

Exercises
