
Improvements to the Basic Algorithm

As we saw with quicksort, we can improve most recursive algorithms by handling small cases differently. The recursion guarantees that the method will be used often for small cases, so improvements in handling them lead to improvements in the whole algorithm. Thus, just as it did with quicksort, switching to insertion sort for small subfiles will improve the running time of a typical mergesort implementation by 10 to 15 percent.
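The cutoff strategy can be sketched as follows. This is a minimal illustration using a plain int array and a straightforward merge rather than the book's ITEM type and Program 8.1; the CUTOFF value of 10 is an assumed figure to be tuned empirically.

```java
import java.util.Arrays;

// Sketch of mergesort with an insertion-sort cutoff for small subfiles.
// The CUTOFF value is an assumption; tune it for your implementation and
// machine (the text suggests a gain of 10 to 15 percent is typical).
public class CutoffMergesort {
    static final int CUTOFF = 10;   // assumed value, not from the book

    public static void sort(int[] a) {
        int[] aux = new int[a.length];
        sort(a, aux, 0, a.length - 1);
    }

    static void sort(int[] a, int[] aux, int l, int r) {
        if (r - l < CUTOFF) { insertionSort(a, l, r); return; }
        int m = (l + r) / 2;
        sort(a, aux, l, m);
        sort(a, aux, m + 1, r);
        merge(a, aux, l, m, r);
    }

    // Straight insertion sort on a[l..r]; fast for small subfiles.
    static void insertionSort(int[] a, int l, int r) {
        for (int i = l + 1; i <= r; i++) {
            int v = a[i], j = i;
            while (j > l && a[j - 1] > v) { a[j] = a[j - 1]; j--; }
            a[j] = v;
        }
    }

    // Stable merge of sorted a[l..m] and a[m+1..r], via the auxiliary array.
    static void merge(int[] a, int[] aux, int l, int m, int r) {
        for (int k = l; k <= r; k++) aux[k] = a[k];
        int i = l, j = m + 1;
        for (int k = l; k <= r; k++)
            if      (i > m)           a[k] = aux[j++];
            else if (j > r)           a[k] = aux[i++];
            else if (aux[j] < aux[i]) a[k] = aux[j++];
            else                      a[k] = aux[i++];
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0, 12, 11, 10};
        sort(a);
        System.out.println(Arrays.toString(a));
    }
}
```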

A second improvement that is reasonable to consider for mergesort is to eliminate the time taken to copy to the auxiliary array used for merging. To do so, we arrange the recursive calls such that the computation switches the roles of the input array and the auxiliary array at each level. One way to proceed is to implement two versions of the routines—one taking its input in aux and its output in a, and the other taking its input in a and its output in aux—then have the two versions call each other. A different approach is shown in Program 8.4, which makes one copy of the array at the beginning, then uses Program 8.1 and switches arguments in the recursive calls to eliminate the explicit array copy operation. Instead, we switch back and forth between putting the merged output in the auxiliary array and putting it in the input array. (This program is a tricky one.)

This technique eliminates the array copy at the expense of putting back into the inner loop the tests for whether the input arrays are exhausted. (Recall that our technique for eliminating those tests in Program 8.2 involved making the array bitonic during the copy.) That cost can be recovered via a recursive implementation of the same idea: We implement routines for both merge and mergesort, one each for putting arrays in increasing order and in decreasing order. With this strategy, it is possible to bring back the bitonic strategy and thus to arrange that the inner loop for the merge never needs sentinels.
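The bitonic copy that makes sentinels unnecessary can be sketched as follows. This is a hedged adaptation of the Program 8.2 idea to a plain int array (names here are illustrative): the first subfile is copied in ascending order and the second in reverse, so the largest element of each half guards the other and the inner loop needs no exhaustion tests.

```java
import java.util.Arrays;

// Sketch of the sentinel-free ("bitonic") merge.  Copying the first
// subfile in order and the second in reverse makes aux[l..r] bitonic:
// the largest element of each half acts as a sentinel for the other,
// so the inner loop needs no tests for exhausted subfiles.
public class BitonicMerge {
    // Merge sorted a[l..m] and sorted a[m+1..r] in place, using aux.
    public static void merge(int[] a, int l, int m, int r) {
        int[] aux = new int[a.length];
        for (int i = l; i <= m; i++)
            aux[i] = a[i];                      // first half, in order
        for (int j = m + 1; j <= r; j++)
            aux[j] = a[m + 1 + r - j];          // second half, reversed
        int i = l, j = r;
        for (int k = l; k <= r; k++)            // scan inward from both ends
            if (aux[j] < aux[i]) a[k] = aux[j--];
            else                 a[k] = aux[i++];
    }

    public static void main(String[] args) {
        int[] a = {1, 4, 7, 2, 3, 9};
        merge(a, 0, 2, 5);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 7, 9]
    }
}
```

Note that this version trades the exhaustion tests for stability: because the second half is scanned in reverse, equal keys may not keep their relative order.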

Given that it uses up to four copies of the basic routines and some mind-bending recursive argument switchery, this superoptimization is recommended only for experts (or students!), but it does speed up mergesort considerably. The experimental results that we discuss later indicate that the combination of all these improvements speeds up mergesort by about 40 percent but still leaves mergesort about 25 percent slower than quicksort. These numbers are dependent on the implementation and on the machine, but similar results are likely in a variety of situations.

Program 8.4 Mergesort with no copying

This recursive program is set up to sort b, leaving the result in a. Thus, the recursive calls are written to leave their result in b, and we use Program 8.1 to merge those files from b into a. In this way, all the data movement is done during the course of the merges.

static ITEM[] aux;

static void mergesortABr(ITEM[] a, ITEM[] b, int l, int r)
  {
    if (r <= l) return;
    int m = (r+l)/2;
    mergesortABr(b, a, l, m);
    mergesortABr(b, a, m+1, r);
    mergeAB(a, l, b, l, m, b, m+1, r);
  }

static void mergesortAB(ITEM[] a, int l, int r)
  {
    aux = new ITEM[a.length];
    for (int i = l; i <= r; i++) aux[i] = a[i];
    mergesortABr(a, aux, l, r);
  }
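Program 8.4 depends on the book's ITEM type and on mergeAB from Program 8.1, so it is not runnable on its own. The following self-contained adaptation for int arrays, with a plain merge (including exhaustion tests) standing in for mergeAB, may help in tracing the argument switching; it is a sketch, not the book's code.

```java
import java.util.Arrays;

// Self-contained int adaptation of the no-copy scheme of Program 8.4.
// mergeAB here is an ordinary merge with exhaustion tests, standing in
// for Program 8.1.
public class NoCopyMergesort {
    public static void sort(int[] a) {
        int[] aux = a.clone();                 // the single copy, made up front
        mergesortABr(a, aux, 0, a.length - 1);
    }

    // Sort b[l..r], leaving the result in a[l..r].  The recursive calls
    // switch the arguments, so each level alternates which array holds
    // the merged output and no further copying is needed.  The base case
    // can simply return because a[l..r] still equals b[l..r] there.
    static void mergesortABr(int[] a, int[] b, int l, int r) {
        if (r <= l) return;
        int m = (l + r) / 2;
        mergesortABr(b, a, l, m);
        mergesortABr(b, a, m + 1, r);
        mergeAB(a, l, b, l, m, b, m + 1, r);
    }

    // Merge sorted b[lb..rb] with sorted c[lc..rc] into a, starting at a[l].
    static void mergeAB(int[] a, int l, int[] b, int lb, int rb,
                        int[] c, int lc, int rc) {
        int i = lb, j = lc, k = l;
        while (i <= rb && j <= rc)
            a[k++] = (c[j] < b[i]) ? c[j++] : b[i++];
        while (i <= rb) a[k++] = b[i++];
        while (j <= rc) a[k++] = c[j++];
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2, 7};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 5, 7, 8, 9]
    }
}
```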

Other implementations of merging that involve an explicit test for the first file being exhausted may lead to greater (though not much greater) variation in running time, depending on the input. For random files, the size of the other subfile when the first subfile exhausts will be small, and the cost of moving to the auxiliary array still will be proportional to the subfile size. We might consider improving the performance of mergesort when a great deal of order is present in the file by skipping the call on merge when the file is already in sorted order, but this strategy is not effective for many types of files.

Exercises

8.16 Implement an abstract in-place merge that uses extra space proportional to the size of the smaller of the two arrays to be merged. (Your method should cut in half the space requirement for mergesort.)

8.17 Run mergesort for large random files, and make an empirical determination of the average length of the other subfile when the first subfile exhausts, as a function of N (the sum of the two subfile sizes for a given merge).

8.18 Suppose that Program 8.3 is modified to skip the call on merge when a[m]<a[m+1]. How many comparisons does this alternative save when the file to be sorted is already in sorted order?

8.19 Run the modified algorithm suggested in Exercise 8.18 for large random files. Determine empirically the average number of times the merge is skipped, as a function of N (the original file size for the sort).

8.20 Suppose that mergesort is to be run on h-sorted files for small h. How would you change the merge routine to take advantage of this property of the input? Experiment with shellsort–mergesort hybrids based on this routine.

8.21 Develop a merge implementation that reduces the extra space requirement to max(M,N/M), based on the following idea. Divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as records with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth.

8.22 Prove that the method of Exercise 8.21 runs in linear time.

8.23 Implement bitonic mergesort with no copying.
