Our study of graph algorithms appropriately culminates in the study of network-flow algorithms, for four reasons. First, the network-flow model validates the practical utility of the graph abstraction in countless applications. Second, the maxflow and mincost-flow algorithms that we have examined are natural extensions of graph algorithms that we studied for simpler problems. Third, the implementations exemplify the important role of fundamental algorithms and data structures in achieving good performance. Fourth, the maxflow and mincost-flow models illustrate the utility of developing increasingly general problem-solving models and using them to solve broad classes of problems. Our ability to develop efficient algorithms that solve these problems leaves the door open for us to develop still more general models and to seek algorithms that solve those problems.

Before considering these issues in further detail, we set the context by listing important problems that we have not covered in this chapter, even though they are closely related to familiar ones.

Maximum matching: In a graph with edge weights, find a maximum-weight subset of edges in which no vertex appears more than once. With unit edge weights, the maximum-cardinality matching problem for unweighted graphs immediately reduces to this problem. The assignment problem and the maximum-cardinality bipartite-matching problem reduce to maximum matching for general graphs (a sketch following this list of problems recalls the flavor of such reductions). On the other hand, maximum matching does not reduce to mincost flow, so the algorithms that we have considered do not apply. The problem is tractable, although the computational burden of solving it for huge graphs remains significant. Treating the many techniques that have been tried for matching on general graphs would fill an entire volume: the problem is one of those studied most extensively in graph theory. We have drawn the line in this tutorial at mincost flow, but we revisit maximum matching in Part 8.

Multicommodity flow: Suppose that we need to compute a second flow such that the sum of an edge's two flows is limited by that edge's capacity, both flows are in equilibrium, and the total cost is minimized. This change models the presence of two different types of material in the merchandise-distribution problem; for example, should we put more hamburger or more potatoes in the truck bound for the fast-food restaurant? It also makes the problem much more difficult and requires more advanced algorithms than those considered here; for example, no analogue of the maxflow-mincut theorem is known to hold for the general case. Formulating the problem as an LP problem is a straightforward extension of the LP example shown earlier in this chapter, so the problem is tractable (because LP is tractable).

Convex and nonlinear costs: The simple cost functions that we have been considering are linear combinations of variables, and our algorithms for solving them depend in an essential way on the simple mathematical structure underlying these functions. Many applications call for more complicated functions; for example, when we minimize distances, we are led to sums of squares of costs. Such problems cannot be formulated as LP problems, so they require problem-solving models that are even more powerful. Many such problems are not tractable.

Scheduling: We have presented a few scheduling problems as examples. They are barely representative of the hundreds of different scheduling problems that have been posed.
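To recall the flavor of such reductions, here is a minimal sketch of the classical reduction from maximum-cardinality bipartite matching to maxflow. The class name, vertex numbering, and the simple adjacency-matrix Ford-Fulkerson solver are illustrative choices, not code from this chapter: we add a source with unit-capacity edges to every left vertex, a sink fed by unit-capacity edges from every right vertex, give each original edge capacity 1, and read the matching size off the maxflow value.

```java
public class BipartiteMatching {
    private final int[][] cap;   // residual capacities between all vertex pairs
    private final int V;         // vertex count: source, n left, m right, sink

    public BipartiteMatching(int n, int m, int[][] edges) {
        V = n + m + 2;
        cap = new int[V][V];
        int s = 0, t = V - 1;
        for (int i = 1; i <= n; i++) cap[s][i] = 1;             // source -> each left vertex
        for (int j = 1; j <= m; j++) cap[n + j][t] = 1;         // each right vertex -> sink
        for (int[] e : edges) cap[1 + e[0]][n + 1 + e[1]] = 1;  // original edge, capacity 1
    }

    // DFS for an augmenting path in the residual graph; pushes one unit if found
    private boolean augment(int v, int t, boolean[] seen) {
        if (v == t) return true;
        seen[v] = true;
        for (int w = 0; w < V; w++)
            if (!seen[w] && cap[v][w] > 0 && augment(w, t, seen)) {
                cap[v][w]--;    // use one unit of forward capacity
                cap[w][v]++;    // leave residual capacity to undo the choice later
                return true;
            }
        return false;
    }

    public int size() {          // maxflow value = size of a maximum matching
        int matched = 0;
        while (augment(0, V - 1, new boolean[V])) matched++;
        return matched;
    }

    public static void main(String[] args) {
        // left vertices {0,1,2}, right vertices {0,1}; a maximum matching has size 2
        int[][] edges = { {0, 0}, {0, 1}, {1, 0}, {2, 1} };
        System.out.println(new BipartiteMatching(3, 2, edges).size());  // prints 2
    }
}
```

With unit capacities, each augmenting path adds exactly one matched pair, so the loop runs at most min(n, m) times; this is the sense in which the matching problem inherits an efficient algorithm from the more general flow model.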
The research literature is replete with studies of the relationships among these scheduling problems and with algorithms and implementations that solve them (see reference section). Indeed, we might have chosen to use scheduling rather than network-flow algorithms to develop the idea of defining general problem-solving models and implementing reductions to solve particular problems (the same might be said of matching). Many scheduling problems reduce to the mincost-flow model; the sketch at the end of this discussion illustrates one such reduction. The scope of combinatorial computing is vast indeed, and the study of problems of this sort is certain to occupy researchers for many years to come. We revisit many of these problems in Part 8, in the context of coping with intractability.

We have presented only a fraction of the algorithms that have been studied for maxflow and mincost-flow problems. As indicated in the exercises throughout this chapter, combining the many options available for different parts of various generic algorithms leads to a large number of different algorithms. Algorithms and data structures for basic computational tasks play a significant role in the efficacy of many of these approaches; indeed, some of the important general-purpose algorithms that we have studied were developed in the quest for efficient implementations of network-flow algorithms. This topic is still being studied by many researchers. The development of better algorithms for network-flow problems certainly depends on intelligent use of basic algorithms and data structures.

The broad reach of network-flow algorithms and our extensive use of reductions to extend this reach make this section an appropriate place to consider some implications of the concept of reduction. For a large class of combinatorial algorithms, these problems represent a watershed in our study of algorithms: we stand between the study of efficient algorithms for particular problems and the study of general problem-solving models. There are important forces pulling in both directions.

We are drawn to develop as general a model as possible, because the more general the model, the more problems it encompasses, and thus the more attractive an efficient algorithm for the model becomes, since every problem that reduces to the model can exploit it. Developing such algorithms may be a significant challenge. We may seek algorithms that are guaranteed to be reasonably efficient, or we may be satisfied with algorithms that perform well for specific classes of problems that are of interest. When specific analytic results are elusive, we may have persuasive empirical evidence. Indeed, practitioners typically will try the most general model available (or one that has a well-developed solution package) and will look no further if algorithms for the model work in reasonable time for the problem at hand. Still, we should strive to avoid using overly general models that lead us to spend excessive amounts of time solving problems for which more specialized models can be effective.

We are also drawn to seek better algorithms for important specific problems, particularly for huge problems or huge numbers of instances of smaller problems where computational resources are a critical bottleneck. As we have seen for numerous examples throughout this tutorial and in Parts 1 through 4, we often can find a clever algorithm that reduces resource costs by factors of hundreds or thousands or more, which is extremely significant if we are measuring costs in hours or dollars.
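As promised, here is a minimal sketch of one classical scheduling-to-mincost-flow reduction: scheduling n unit-length jobs on one machine to minimize total weighted completion time. The class name, the Edge record, and the vertex numbering are illustrative assumptions, not code from this chapter; the sketch only builds the network, on the understanding that any mincost-flow solver (such as a network simplex implementation) can then be applied. The job that runs in slot k completes at time k, so assigning job j to slot k costs weight[j] * k.

```java
import java.util.ArrayList;
import java.util.List;

public class SchedulingAsFlow {
    // one directed edge of the flow network: v -> w with a capacity and a cost
    record Edge(int v, int w, int cap, int cost) { }

    public static List<Edge> buildNetwork(int[] weight) {
        int n = weight.length;
        int source = 0, sink = 2 * n + 1;          // jobs are 1..n, slots are n+1..2n
        List<Edge> net = new ArrayList<>();
        for (int j = 1; j <= n; j++)
            net.add(new Edge(source, j, 1, 0));    // each job must be scheduled once
        for (int j = 1; j <= n; j++)
            for (int k = 1; k <= n; k++)           // job j finishing in slot k
                net.add(new Edge(j, n + k, 1, weight[j - 1] * k));
        for (int k = 1; k <= n; k++)
            net.add(new Edge(n + k, sink, 1, 0));  // each slot holds at most one job
        return net;
    }

    public static void main(String[] args) {
        // three unit jobs with weights 3, 1, 2: an optimal schedule runs them in
        // decreasing weight order, for total cost 3*1 + 2*2 + 1*3 = 10
        List<Edge> net = buildNetwork(new int[] { 3, 1, 2 });
        System.out.println(net.size() + " edges");  // 3 + 9 + 3 = 15 edges
    }
}
```

A mincost maxflow in this network saturates every source and sink edge, so it selects exactly one slot per job and at most one job per slot, choosing the assignment of least total cost; assignment-style scheduling problems of this kind are therefore mincost-flow instances.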
The general outlook described in Chapter 2, which we have used successfully in so many domains, remains extremely valuable when computational resources are such a critical bottleneck, and we can look forward to the development of clever algorithms throughout the spectrum of graph algorithms and combinatorial algorithms. Perhaps the most important drawback of depending too heavily on a specialized algorithm is that a small change to the model will often invalidate the algorithm. When we use an overly general model and an algorithm that gets our problem solved, we are less vulnerable to this defect.

Software libraries that encompass the algorithms that we have addressed may be found in many programming environments, and such libraries certainly are important resources to consider for specific problems. However, libraries may be difficult to use, obsolete, or poorly matched to the problem at hand. Experienced programmers know the importance of weighing the advantage of using a library resource against the risk of becoming overly dependent on that resource (or subject to its premature obsolescence). Some of the implementations that we have considered are efficient, simple to develop, and broad in scope. Adapting and tuning such implementations to address problems at hand can be the proper approach in many situations.

The tension between theoretical studies that are restricted to what we can prove and empirical studies that are relevant only to the problems at hand becomes ever more pronounced as the difficulty of the problems that we address increases. The theory provides the guidance that we need to gain a foothold on a problem, and practical experience provides the guidance that we need to develop implementations. Moreover, experience with practical problems suggests new directions for the theory, perpetuating the cycle that expands the class of practical problems that we can solve.

Ultimately, whichever approach we pursue, the goal is the same: we want a broad spectrum of problem-solving models, effective algorithms for solving problems within those models, and efficient implementations of those algorithms that can handle practical problems. The development of increasingly general problem-solving models (such as the shortest-paths, maxflow, and mincost-flow problems) and of increasingly powerful generic algorithms (such as the Bellman-Ford algorithm for the shortest-paths problem, the augmenting-path algorithm for the maxflow problem, and the network simplex algorithm for the mincost-flow problem) has brought us a long way toward this goal. Much of this work was done in the 1950s and 1960s. The subsequent emergence of fundamental data structures (Parts 1 through 4) and of algorithms that provide effective implementations of these generic methods (this tutorial) has been an essential force leading to our current ability to solve such a large class of huge problems.