Reduction

It turns out that shortest-paths problems—particularly the general case, where negative weights are allowed (the topic of the next section)—represent a general mathematical model that we can use to solve a variety of other problems that seem unrelated to graph processing. This model is the first among several such general models that we encounter. As we move to more difficult problems and increasingly general models, one of the challenges that we face is to characterize precisely the relationships among various problems. Given a new problem, we ask whether we can solve it easily by transforming it to a problem that we know how to solve. If we place restrictions on the problem, will we be able to solve it more easily? To help answer such questions, we digress briefly in this section to discuss the technical language that we use to describe these types of relationships among problems.

Definition 21.3 We say that a problem A reduces to another problem B if we can use an algorithm that solves B to develop an algorithm that solves A, in a total amount of time that is, in the worst case, no more than a constant times the worst-case running time of the algorithm that solves B. We say that two problems are equivalent if they reduce to each other.

We postpone until Part 8 a rigorous definition of what it means to "use" one algorithm to "develop" another. For most applications, we are content with the following simple approach. We show that A reduces to B by demonstrating that we can solve any instance of A in three steps:

- Transform the instance of A to an instance of B.
- Solve that instance of B.
- Transform the solution of B back to a solution of A.
As long as we can perform the transformations (and solve B) efficiently, we can solve A efficiently. To illustrate this proof technique, we consider two examples.

Property 21.12 The transitive-closure problem reduces to the all-pairs shortest-paths problem with nonnegative weights.

Proof: We have already pointed out the direct relationship between Warshall's algorithm and Floyd's algorithm. Another way to consider that relationship, in the present context, is to imagine that we need to compute the transitive closure of digraphs using a library class that computes all shortest paths in networks. To do so, we add self-loops if they are not present in the digraph; then, we build a network directly from the adjacency matrix of the digraph, with an arbitrary weight (say 0.1) corresponding to each 1 and the sentinel weight corresponding to each 0. Then, we invoke the all-pairs shortest-paths method. Next, we can easily compute the transitive closure from the all-pairs shortest-paths matrix that the method computes: Given any two vertices u and v, there is a path from u to v in the digraph if and only if the length of the shortest path from u to v in the network is not the sentinel value (see figure).

Figure: Transitive-closure reduction. Given a digraph (left), we can transform its adjacency matrix (with self-loops) into an adjacency matrix representing a network by assigning an arbitrary weight to each edge (left matrix). As usual, blank entries in the matrix represent a sentinel value that indicates the absence of an edge. Given the all-pairs shortest-path-lengths matrix of that network (center matrix), the transitive closure of the digraph (right matrix) is simply the matrix formed by substituting 0 for each sentinel and 1 for all other entries.

This property is a formal statement that the transitive-closure problem is no more difficult than the all-pairs shortest-paths problem.
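The reduction in the proof of Property 21.12 can be sketched directly in code. In this illustrative Java sketch (class and method names are ours, not the book's library classes), Floyd's algorithm stands in for the all-pairs shortest-paths "library" call; the sentinel weight is represented by infinity, each edge and self-loop gets the arbitrary weight 0.1, and the closure has a 1 wherever the computed path length is not the sentinel:

```java
public class TransitiveClosureByAPSP {
    static final double SENTINEL = Double.POSITIVE_INFINITY;

    // Floyd's algorithm plays the role of the all-pairs shortest-paths library method.
    static double[][] allPairsShortestPaths(double[][] wt) {
        int V = wt.length;
        double[][] d = new double[V][V];
        for (int i = 0; i < V; i++) d[i] = wt[i].clone();
        for (int k = 0; k < V; k++)
            for (int i = 0; i < V; i++)
                for (int j = 0; j < V; j++)
                    if (d[i][k] + d[k][j] < d[i][j]) d[i][j] = d[i][k] + d[k][j];
        return d;
    }

    static boolean[][] transitiveClosure(boolean[][] adj) {
        int V = adj.length;
        double[][] wt = new double[V][V];
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                // self-loops (i == j) and digraph edges get an arbitrary positive
                // weight; absent edges get the sentinel
                wt[i][j] = (i == j || adj[i][j]) ? 0.1 : SENTINEL;
        double[][] d = allPairsShortestPaths(wt);
        boolean[][] tc = new boolean[V][V];
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                tc[i][j] = d[i][j] != SENTINEL;  // any finite length means a path exists
        return tc;
    }
}
```

For a digraph with edges 0-1 and 1-2, the computed closure contains 0-2 (via the length-0.2 path) but not 2-0.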
Since we happen to know algorithms for transitive closure that are even faster than the algorithms that we know for all-pairs shortest-paths problems, this information is no surprise. Reduction is more interesting when we use it to establish a relationship between problems that we do not know how to solve, or between such problems and other problems that we can solve.

Property 21.13 In networks with no constraints on edge weights, the longest-path and shortest-path problems (single-source or all-pairs) are equivalent.

Proof: Given a shortest-path problem, negate all the weights. A longest path (a path with the highest weight) in the modified network is a shortest path in the original network. An identical argument shows that the shortest-path problem reduces to the longest-path problem.

This proof is trivial, but this property also illustrates that care is justified in stating and proving reductions, because it is easy to take reductions for granted and thus to be misled. For example, it is decidedly not true that the longest-path and shortest-path problems are equivalent in networks with nonnegative weights.

At the beginning of this chapter, we outlined an argument that shows that the problem of finding shortest paths in undirected weighted graphs reduces to the problem of finding shortest paths in networks, so we can use our algorithms for networks to solve shortest-paths problems in undirected weighted graphs. Two further points about this reduction are worth contemplating in the present context. First, the converse does not hold: Knowing how to solve shortest-paths problems in undirected weighted graphs does not help us to solve them in networks. Second, we saw a flaw in the argument: If edge weights could be negative, the reduction gives networks with negative cycles, and we do not know how to find shortest paths in such networks.
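The negation argument in Property 21.13 is simple enough to demonstrate in a few lines. In this illustrative sketch (not the book's code), a single relaxation-based shortest-paths routine for DAGs, whose vertices are assumed to be numbered in topological order, does double duty: negating the weights and then negating the answers yields longest paths.

```java
public class NegationReduction {
    // Distances from vertex 0 in a DAG whose vertices 0..V-1 are in topological
    // order; edges[v] lists {w, weight} pairs for edges v->w.
    static double[] shortestFromSource(double[][][] edges, int V) {
        double[] dist = new double[V];
        java.util.Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[0] = 0;
        for (int v = 0; v < V; v++)              // relax edges in topological order
            for (double[] e : edges[v]) {
                int w = (int) e[0];
                if (dist[v] + e[1] < dist[w]) dist[w] = dist[v] + e[1];
            }
        return dist;
    }

    static double[][][] negate(double[][][] edges) {
        double[][][] neg = new double[edges.length][][];
        for (int v = 0; v < edges.length; v++) {
            neg[v] = new double[edges[v].length][];
            for (int i = 0; i < edges[v].length; i++)
                neg[v][i] = new double[]{edges[v][i][0], -edges[v][i][1]};
        }
        return neg;
    }

    // Longest paths are shortest paths in the negated network, negated back
    // (assumes every vertex is reachable from vertex 0).
    static double[] longestFromSource(double[][][] edges, int V) {
        double[] d = shortestFromSource(negate(edges), V);
        for (int i = 0; i < V; i++) d[i] = -d[i];
        return d;
    }
}
```

On the three-vertex DAG with edges 0-1 (weight 1), 0-2 (weight 4), and 1-2 (weight 1), the shortest path to vertex 2 has length 2, while the longest has length 4.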
Even though the reduction fails, it turns out to be still possible to find shortest paths in undirected weighted graphs with no negative cycles, with an unexpectedly complicated algorithm (see reference section). Since this problem does not reduce to the directed version, this algorithm does not help us to solve the shortest-path problem in general networks.

The concept of reduction essentially describes the process of using one ADT to implement another, as is done routinely by modern systems programmers. If two problems are equivalent, we know that if we can solve either of them efficiently, we can solve the other efficiently. We often find simple one-to-one correspondences, such as the one in Property 21.13, that show two problems to be equivalent. In this case, we have not yet discussed how to solve either problem, but it is useful to know that if we could find an efficient solution to one of them, we could use that solution to solve the other one. We saw another example earlier: When faced with the problem of determining whether or not a graph has an odd cycle, we noted that the problem is equivalent to determining whether or not the graph is two-colorable.

Reduction has two primary applications in the design and analysis of algorithms. First, it helps us to classify problems according to their difficulty at an appropriate abstract level without necessarily developing and analyzing full implementations. Second, we often do reductions to establish lower bounds on the difficulty of solving various problems, to help indicate when to stop looking for better algorithms. We have seen examples of these uses in Sections 19.3 and 20.7; we see others later in this section. Beyond these direct practical uses, the concept of reduction also has widespread and profound implications for the theory of computation; these implications are important for us to understand as we tackle increasingly difficult problems.
We discuss this topic briefly at the end of this section and consider it in full formal detail in Part 8.

The constraint that the cost of the transformations should not dominate is a natural one and often applies. In many cases, however, we might choose to use reduction even when the cost of the transformations does dominate. One of the most important uses of reduction is to provide efficient solutions to problems that might otherwise seem intractable, by performing a transformation to a well-understood problem that we know how to solve efficiently. Reducing A to B, even if computing the transformations is much more expensive than is solving B, may give us a much more efficient algorithm for solving A than we could otherwise devise. There are many other possibilities. Perhaps we are interested in expected cost rather than the worst case. Perhaps we need to solve two problems B and C to solve A. Perhaps we need to solve multiple instances of B. We leave further discussion of such variations until Part 8, because all the examples that we consider before then are of the simple type just discussed.

In the particular case where problem A is a restricted version of problem B, we know that A reduces to B, but not necessarily vice versa. For example, selection reduces to sorting because we can find the kth smallest element in a file by sorting the file and then indexing (or scanning) to the kth position, but this fact certainly does not imply that sorting reduces to selection. In the present context, the shortest-paths problem for weighted DAGs and the shortest-paths problem for networks with positive weights both reduce to the general shortest-paths problem. This use of reduction corresponds to the intuitive notion of one problem being more general than another.
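The selection-to-sorting reduction just mentioned takes only a few lines to write down (names are ours, for illustration):

```java
public class SelectBySorting {
    // Find the kth smallest element (k is 0-based) by reduction to sorting:
    // sort a copy, then index to position k.
    static int kthSmallest(int[] a, int k) {
        int[] b = a.clone();              // keep the caller's array intact
        java.util.Arrays.sort(b);
        return b[k];
    }
}
```

Note that this reduction costs a full sort, even though faster direct selection algorithms exist; that is exactly the sense in which selection is no harder than sorting, while telling us nothing in the other direction.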
Any sorting algorithm solves any selection problem, and, if we can solve the shortest-paths problem in general networks, we certainly can use that solution for networks with various restrictions; but the converse is not necessarily true.

This use of reduction is helpful, but the concept becomes more useful when we use it to gain information about the relationships between problems in different domains. For example, consider the following problems, which seem at first blush to be far removed from graph processing. Through reduction, we can develop specific relationships between these problems and the shortest-paths problem.

Job scheduling A large set of jobs, of varying durations, needs to be performed. We can be working on any number of jobs at a given time, but a set of precedence relationships specifies, for a set of pairs of jobs, that the first must be completed before the second can be started. What is the minimum amount of time required to complete all the jobs while satisfying all the precedence constraints? Specifically, given a set of jobs (with durations) and a set of precedence constraints, schedule the jobs (find a start time for each) so as to achieve this minimum.

The figure depicts an example instance of the job-scheduling problem. It uses a natural network representation, which we use in a moment as the basis for a reduction. This version of the problem is perhaps the simplest of literally hundreds of versions that have been studied—versions that involve other job characteristics and other constraints, such as the assignment of personnel or other resources to the jobs, other costs associated with specific jobs, deadlines, and so forth. In this context, the version that we have described is commonly called precedence-constrained scheduling with unlimited parallelism; we use the term job scheduling as shorthand.
Figure: Job scheduling. In this network, vertices represent jobs to be completed (with weights indicating the amount of time required), and edges represent precedence relationships between them. For example, the edges from 7 to 8 and 3 mean that job 7 must be finished before job 8 or job 3 can be started. What is the minimum amount of time required to complete all the jobs?

To help us to develop an algorithm that solves the job-scheduling problem, we consider the following problem, which is widely applicable in its own right:

Difference constraints Assign nonnegative values to a set of variables x_{0} through x_{n} that minimize the value of x_{n} while satisfying a set of difference constraints on the variables, each of which specifies that the difference between two of the variables must be greater than or equal to a given constant.

The figure depicts an example instance of this problem. It is a purely abstract mathematical formulation that can serve as the basis for solving numerous practical problems (see reference section).

Figure: Difference constraints. Finding an assignment of nonnegative values to the variables that minimizes the value of x_{10} subject to this set of inequalities is equivalent to the job-scheduling problem instance illustrated in the previous figure. For example, the inequality x_{8} ≥ x_{7} + .32 means that job 8 cannot start until job 7 is completed.

The difference-constraints problem is a special case of a much more general problem where we allow general linear combinations of the variables in the equations.

Linear programming Assign nonnegative values to a set of variables x_{0} through x_{n} that minimize the value of a specified linear combination of the variables, subject to a set of constraints on the variables, each of which specifies that a given linear combination of the variables must be greater than or equal to a given constant.
Linear programming is a widely used general approach to solving a broad class of optimization problems that we will not consider in detail until Part 8. Clearly, the difference-constraints problem reduces to linear programming, as do many other problems. For the moment, our interest is in the relationships among the difference-constraints, job-scheduling, and shortest-paths problems.

Property 21.14 The job-scheduling problem reduces to the difference-constraints problem.

Proof: Add a dummy job and a precedence constraint for each job saying that the job must finish before the dummy job starts. Given a job-scheduling problem, define a system of difference equations where each job i corresponds to a variable x_{i}, and the constraint that j cannot start until i finishes corresponds to the inequality x_{j} ≥ x_{i} + c_{i}, where c_{i} is the length of job i. The solution to the difference-constraints problem gives precisely a solution to the job-scheduling problem, with the value of each variable specifying the start time of the corresponding job.

The figure illustrates the system of difference equations created by this reduction for the job-scheduling problem shown earlier. The practical significance of this reduction is that we can solve job-scheduling problems with any algorithm that can solve difference-constraints problems.

It is instructive to consider whether we can use this construction in the opposite way: Given a job-scheduling algorithm, can we use it to solve difference-constraints problems? The answer to this question is that the correspondence in the proof of Property 21.14 does not help us to show that the difference-constraints problem reduces to the job-scheduling problem, because the systems of difference equations that we get from job-scheduling problems have a property that does not necessarily hold in every difference-constraints problem. Specifically, if two equations have the same second variable, then they have the same constant.
Therefore, an algorithm for job scheduling does not immediately give a direct way to solve a system of difference equations that contains two equations x_{i} − x_{j} ≥ a and x_{k} − x_{j} ≥ b, where a ≠ b. When proving reductions, we need to be aware of situations like this: A proof that A reduces to B must show that we can use an algorithm for solving B to solve any instance of A.

By construction, the constants in the difference-constraints problems produced by the construction in the proof of Property 21.14 are always nonnegative. This fact turns out to be significant.

Property 21.15 The difference-constraints problem with positive constants is equivalent to the single-source longest-paths problem in an acyclic network.

Proof: Given a system of difference equations, build a network where each variable x_{i} corresponds to a vertex i and each inequality x_{i} − x_{j} ≥ c corresponds to an edge j-i of weight c. For example, assigning to each edge in the digraph of the job-scheduling figure the weight of its source vertex gives the network corresponding to the set of difference equations in the figure. Add a dummy vertex to the network, with a zero-weight edge to every other vertex. If the network has a cycle, the system of difference equations has no solution (because the positive weights imply that the values of the variables strictly increase as we move along a path, and, therefore, a cycle would imply that some variable is greater than itself), so report that fact. Otherwise, the network has no cycle, so solve the single-source longest-paths problem from the dummy vertex. There exists a longest path for every vertex because the network is acyclic. Assign to each variable the length of the longest path to the corresponding vertex in the network from the dummy vertex. For each variable, this path is evidence that its value satisfies the constraints and that no smaller value does so.
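A compact sketch of the construction in Property 21.15 (illustrative, not the book's code): each constraint x_i − x_j ≥ c is an edge j-i of weight c, the dummy vertex's zero-weight edges make every value start at 0, and repeatedly relaxing every constraint edge amounts to computing longest paths from the dummy vertex, which terminates with the minimal feasible values because the network is assumed acyclic. Cycle detection (the infeasible case) is omitted here.

```java
public class DifferenceConstraints {
    // constraints[k] = {i, j, c} meaning x_i - x_j >= c, with c > 0.
    // Returns the minimal nonnegative solution, assuming the constraint
    // network is acyclic (feasible).
    static double[] solve(int n, double[][] constraints) {
        double[] x = new double[n];       // dummy source gives every x_i >= 0
        // n rounds of relaxing every edge suffice in an acyclic network
        for (int round = 0; round < n; round++)
            for (double[] t : constraints) {
                int i = (int) t[0], j = (int) t[1];
                if (x[j] + t[2] > x[i]) x[i] = x[j] + t[2];  // longest-path relaxation
            }
        return x;
    }
}
```

For the constraints x_1 − x_0 ≥ .41 and x_2 − x_1 ≥ .51, the minimal solution is x_0 = 0, x_1 = .41, x_2 = .92, matching the longest paths 0, .41, and .92 from the dummy vertex.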
Unlike the proof of Property 21.14, this proof does extend to show that the two problems are equivalent, because the construction works in both directions. We have no constraint that two equations with the same second variable must have the same constant, and no constraint that edges leaving any given vertex in the network must have the same weight. Given any acyclic network with positive weights, the same correspondence gives a system of difference constraints with positive constants whose solution directly yields a solution to the single-source longest-paths problem in the network. Details of this proof are left as an exercise (see Exercise 21.90).
The network in the figure depicts this correspondence for our sample problem, and the computation of the longest paths in the network uses Program 21.6 (the dummy start vertex is implicit in the implementation). The schedule that is computed in this way is shown in the following figure.

Figure: Job schedule. This figure illustrates the solution to the job-scheduling problem shown earlier, derived from the correspondence between longest paths in weighted DAGs and job schedules. The longest-path lengths in the wt array that is computed by the longest-paths algorithm in Program 21.6 are precisely the required job start times (top, right column). We start jobs 0 and 5 at time 0, jobs 1, 7, and 9 at time .41, jobs 4 and 6 at time .70, and so forth.

Program 21.8 is an implementation that shows the application of this theory in a practical setting. It transforms any instance of the job-scheduling problem into an instance of the longest-path problem in acyclic networks, then uses Program 21.6 to solve it. We have been implicitly assuming that a solution exists for any instance of the job-scheduling problem; however, if there is a cycle in the set of precedence constraints, then there is no way to schedule the jobs to meet them. Before looking for longest paths, we should check for this condition by determining whether the corresponding network has a cycle (see Exercise 21.100). Such a situation is typical, and a specific technical term is normally used to describe it.

Definition 21.4 A problem instance that admits no solution is said to be infeasible.

In other words, for job-scheduling problems, the question of determining whether a job-scheduling problem instance is feasible reduces to the problem of determining whether a digraph is acyclic. As we move to ever-more-complicated problems, the question of feasibility becomes an ever-more-important (and ever-more-difficult!) part of our computational burden. We have now considered three interrelated problems.
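In the same spirit as Program 21.8 (but not the book's code; names are illustrative), this sketch carries the whole chain through: each precedence constraint becomes a network edge weighted by the duration of the earlier job, and repeated relaxation computes the longest paths that are the job start times. It assumes the precedence digraph is acyclic; the feasibility check of Exercise 21.100 is omitted.

```java
public class JobScheduling {
    // duration[i] is the length of job i; prec[k] = {i, j} means job i must
    // finish before job j starts. Returns the earliest start time of each job.
    static double[] startTimes(double[] duration, int[][] prec) {
        int n = duration.length;
        double[] start = new double[n];   // implicit dummy source: all times >= 0
        for (int round = 0; round < n; round++)   // acyclic, so n rounds suffice
            for (int[] p : prec)
                // longest-path relaxation: j cannot start before i finishes
                if (start[p[0]] + duration[p[0]] > start[p[1]])
                    start[p[1]] = start[p[0]] + duration[p[0]];
        return start;
    }

    // The schedule length is the latest finish time over all jobs.
    static double finishTime(double[] duration, double[] start) {
        double t = 0;
        for (int i = 0; i < duration.length; i++)
            t = Math.max(t, start[i] + duration[i]);
        return t;
    }
}
```

For three jobs of durations 2, 3, and 1 with 0 before 1 and 1 before 2, the start times are 0, 2, and 5, and the whole schedule finishes at time 6.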
We might have shown directly that the job-scheduling problem reduces to the single-source longest-paths problem in acyclic networks, but we have also shown that we can solve any difference-constraints problem (with positive constants) in a similar manner (see Exercise 21.94), as well as any other problem that reduces to a difference-constraints problem or a job-scheduling problem. We could, alternatively, develop an algorithm to solve the difference-constraints problem and use that algorithm to solve the other problems, but we have not shown that a solution to the job-scheduling problem would give us a way to solve the others.

These examples illustrate the use of reduction to broaden the applicability of proven implementations. Indeed, modern systems programming emphasizes the need to reuse software by developing new interfaces and using existing software resources to build implementations. This important process, which is sometimes referred to as library programming, is a practical realization of the idea of reduction. Library programming is extremely important in practice, but it represents only part of the story of the implications of reduction. To illustrate this point, we consider the following version of the job-scheduling problem:

Job scheduling with deadlines Allow an additional type of constraint in the job-scheduling problem, to specify that a job must begin before a specified amount of time has elapsed, relative to another job. (Conventional deadlines are relative to the start of the schedule.)

Such constraints are commonly needed in time-critical manufacturing processes and in many other applications, and they can make the job-scheduling problem considerably more difficult to solve. Suppose that we need to add a constraint to our example of Figures 21.22 through 21.24 that job 2 must start earlier than a certain number c of time units after job 4 starts.
If c is greater than .53, then the schedule that we have computed fits the bill, since it says to start job 2 at time 1.23, which is .53 after the start time of job 4 (which starts at .70). If c is less than .53, we can shift the start time of job 4 to a later time to meet the constraint. If job 4 were a long job, this change could increase the finish time of the whole schedule. Worse, if there are other constraints on job 4, we may not be able to shift its start time. Indeed, we may find ourselves with constraints that no schedule can meet: For instance, we could not satisfy a constraint in our example that job 2 must start earlier than d time units after the start of job 6, for d less than .53, because the constraints that 2 must follow 8 and 8 must follow 6 imply that 2 must start later than .53 time units after the start of 6.

If we add both of the two constraints described in the previous paragraph to the example, then both of them affect the time that job 4 can be scheduled, the finish time of the whole schedule, and whether a feasible schedule exists, depending on the values of c and d. Adding more constraints of this type multiplies the possibilities and turns an easy problem into a difficult one. Therefore, we are justified in seeking the approach of reducing the problem to a known problem.

Property 21.16 The job-scheduling-with-deadlines problem reduces to the shortest-paths problem (with negative weights allowed).

Proof: Convert precedence constraints to inequalities using the same reduction described in Property 21.14. For any deadline constraint, add an inequality x_{i} − x_{j} ≤ d_{j}, or, equivalently, x_{j} − x_{i} ≥ −d_{j}, where d_{j} is a positive constant. Convert the set of inequalities to a network using the same reduction described in Property 21.15. Negate all the weights. By the same construction given in the proof of Property 21.15, any shortest-path tree rooted at 0 in the network corresponds to a schedule.
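The full reduction in Property 21.16 can be sketched as follows, with the negated network solved by a Bellman-Ford-style relaxation (the negative-weights method developed in the next section; this is not the book's code, and the names are illustrative). In the negated network, a precedence constraint "job u, of duration c, before job v" becomes an edge {u, v, −c}, a deadline "v must start within d of u" becomes an edge {v, u, d}, and each job's start time is the negation of its shortest-path distance from the implicit dummy source. A null return signals an infeasible instance, detected as a negative cycle.

```java
public class SchedulingWithDeadlines {
    // edges[k] = {u, v, w}: constraint dist[v] <= dist[u] + w in the negated
    // network; the implicit dummy source reaches every vertex at distance 0.
    static double[] bellmanFord(int n, double[][] edges) {
        double[] dist = new double[n];        // 0 = dummy's zero-weight edges
        for (int round = 0; round < n; round++)
            for (double[] e : edges) {
                int u = (int) e[0], v = (int) e[1];
                if (dist[u] + e[2] < dist[v]) dist[v] = dist[u] + e[2];
            }
        for (double[] e : edges)              // one more pass: still relaxing?
            if (dist[(int) e[0]] + e[2] < dist[(int) e[1]])
                return null;                  // negative cycle: infeasible
        return dist;
    }

    // Start time of job i is the negation of its shortest-path distance.
    static double[] startTimes(int n, double[][] edges) {
        double[] d = bellmanFord(n, edges);
        if (d == null) return null;
        double[] x = new double[n];
        for (int i = 0; i < n; i++) x[i] = -d[i];
        return x;
    }
}
```

For example, "job 0 (duration 2) before job 1" alone gives start times 0 and 2; adding the deadline "job 1 must start within 1 time unit of job 0" creates a negative cycle of weight −1, and the instance is reported infeasible.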
This reduction takes us to the realm of shortest paths with negative weights. It says that if we can find an efficient solution to the shortest-paths problem with negative weights, then we can find an efficient solution to the job-scheduling problem with deadlines. (Again, the correspondence in the proof of Property 21.16 does not establish the converse (see Exercise 21.91).)

Adding deadlines to the job-scheduling problem corresponds to allowing negative constants in the difference-constraints problem and negative weights in the shortest-paths problem. (This change also requires that we modify the difference-constraints problem to properly handle the analog of negative cycles in the shortest-paths problem.) These more general versions of these problems are more difficult to solve than the versions that we first considered, but they are also likely to be more useful as more general models. A plausible approach to solving all of them would seem to be to seek an efficient solution to the shortest-paths problem with negative weights. Unfortunately, there is a fundamental difficulty with this approach, and it illustrates the other part of the story in the use of reduction to assess the relative difficulty of problems. We have been using reduction in a positive sense, to expand the applicability of solutions to general problems; but it also applies in a negative sense, to show the limits on such expansion.

The difficulty is that the general shortest-paths problem is too hard to solve. We see next how the concept of reduction helps us to make this statement with precision and conviction. Earlier in the book, we discussed a set of problems, known as the NP-hard problems, that we consider to be intractable because all known algorithms for solving them require exponential time in the worst case. We show here that the general shortest-paths problem is NP-hard.
As mentioned briefly earlier and discussed in detail in Part 8, we generally take the fact that a problem is NP-hard to mean not just that no efficient algorithm is known that is guaranteed to solve the problem, but also that we have little hope of finding one. In this context, we use the term efficient to refer to algorithms whose running time is bounded by some polynomial function of the size of the input, in the worst case. We assume that the discovery of an efficient algorithm to solve any NP-hard problem would be a stunning research breakthrough. The concept of NP-hardness is important in identifying problems that are difficult to solve, because it is often easy to prove that a problem is NP-hard, using the following technique:

Property 21.17 A problem is NP-hard if there is an efficient reduction to it from any NP-hard problem.

Proof: This property depends on the precise meaning of an efficient reduction from one problem A to another problem B. We defer such definitions to Part 8 (two different definitions are commonly used). For the moment, we simply use the term to cover the case where we have efficient algorithms both to transform an instance of A to an instance of B and to transform a solution of B to a solution of A. Now, suppose that we have an efficient reduction from an NP-hard problem A to a given problem B. The proof is by contradiction: If we had an efficient algorithm for B, then we could use it to solve any instance of A in polynomial time, by reduction (transform the given instance of A to an instance of B, solve that problem, then transform the solution). But no known algorithm can make such a guarantee for A (because A is NP-hard), so the assumption that there exists a polynomial-time algorithm for B is incorrect: B is also NP-hard.
This technique is extremely important because people have used it to show a huge number of problems to be NP-hard, giving us a broad variety of problems from which to choose when we want to develop a proof that a new problem is NP-hard. For example, we encountered one of the classic NP-hard problems earlier in the book. The Hamilton-path problem, which asks whether there is a simple path containing all the vertices in a given graph, was one of the first problems shown to be NP-hard (see reference section). It is easy to formulate as a shortest-paths problem, so Property 21.17 implies that the shortest-paths problem itself is NP-hard.

Property 21.18 In networks with edge weights that could be negative, shortest-paths problems are NP-hard.

Proof: Our proof consists of reducing the Hamilton-path problem to the shortest-paths problem. That is, we show that we could use any algorithm that can find shortest paths in networks with negative edge weights to solve the Hamilton-path problem. Given an undirected graph, we build a network with edges in both directions corresponding to each edge in the graph and with all edges having weight −1. The shortest (simple) path starting at any vertex in this network is of length 1 − V if and only if the graph has a Hamilton path. Note that this network is replete with negative cycles. Not only does every cycle in the graph correspond to a negative cycle in the network, but also every edge in the graph corresponds to a cycle of weight −2 in the network.

The implication of this construction is that the shortest-paths problem is NP-hard, because if we could develop an efficient algorithm for the shortest-paths problem in networks, then we would have an efficient algorithm for the Hamilton-path problem in graphs. One response to the discovery that a given problem is NP-hard is to seek versions of that problem that we can solve.
For shortest-paths problems, we are caught between having a host of efficient algorithms for acyclic networks or for networks in which edge weights are nonnegative, and having no good solution for networks that could have cycles and negative weights. Are there other kinds of networks that we can address? That is the subject of the next section. There, for example, we see that the job-scheduling-with-deadlines problem reduces to a version of the shortest-paths problem that we can solve efficiently. This situation is typical: As we address ever-more-difficult computational problems, we find ourselves working to identify the versions of those problems that we can expect to solve.

As these examples illustrate, reduction is a simple technique that is helpful in algorithm design, and we use it frequently. Either we can solve a new problem by proving that it reduces to a problem that we know how to solve, or we can prove that the new problem will be difficult by proving that a problem that we know to be difficult reduces to the problem in question.

Table 21.3 gives us a more detailed look at the various implications of reduction results among the four general problem classes that we discussed earlier. Note that there are several cases where a reduction provides no new information; for example, although selection reduces to sorting and the problem of finding longest paths in acyclic networks reduces to the problem of finding shortest paths in general networks, these facts shed no new light on the relative difficulty of the problems. In other cases, the reduction may or may not provide new information; in still other cases, the implications of a reduction are truly profound. To develop these concepts, we need a precise and formal description of reduction, as we discuss in detail in Part 8; here, we summarize informally the most important uses of reduction in practice, with examples that we have already seen.

Table 21.3. Reduction implications
Exercises
