
The quicksort family of algorithms that we studied earlier is based on the selection operation: finding the kth smallest element in a file. We saw that performing selection is akin to dividing a file into two parts, the k smallest elements and the N - k largest elements. In this chapter, we examine a family of sorting algorithms based on a complementary process, merging: combining two ordered files to make one larger ordered file. Merging is the basis for a straightforward divide-and-conquer sorting algorithm and for a bottom-up counterpart, both of which are easy to implement.
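To make the merge operation concrete, here is a minimal sketch in Python; the function name and the list-based representation are illustrative choices, not from the text:

```python
def merge(a, b):
    """Merge two sorted lists a and b into one sorted list."""
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        # Take from a on ties (<=), so earlier elements come out first.
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    # One run is exhausted; the remainder of the other is already in order.
    result.extend(a[i:])
    result.extend(b[j:])
    return result
```

Each comparison emits exactly one element, so merging two files of total size N takes at most N - 1 comparisons.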

Selection and merging are complementary operations in the sense that selection splits a file into two independent files, whereas merging joins two independent files to make one file. The contrast between these operations also becomes evident when we apply the divide-and-conquer paradigm to create a sorting method. We can rearrange the file such that, when two parts are sorted, the whole file is ordered; alternatively, we can break the file into two parts to be sorted and then combine the ordered parts to make the whole ordered file. We have already seen what happens in the first instance: that is quicksort, which consists of a selection procedure followed by two recursive calls. In this chapter, we shall look at mergesort, which is quicksort's complement in that it consists of two recursive calls followed by a merging procedure.
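The "two recursive calls followed by a merging procedure" structure can be sketched as follows; this is a simple top-down version with the merge written inline so the example is self-contained (names and the return-a-new-list style are illustrative):

```python
def merge_sort(a):
    """Sort a list: recursively sort both halves, then merge them."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # first recursive call
    right = merge_sort(a[mid:])   # second recursive call
    # Merge step: both halves are sorted, so interleave them in order.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Note the contrast with quicksort: here no work is done before the recursive calls, and all the rearrangement happens afterward, in the merge.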

One of mergesort's most attractive properties is that it sorts a file of N elements in time proportional to N log N, no matter what the input. In a later chapter, we shall see another algorithm that is guaranteed to finish in time proportional to N log N; it is called heapsort. The prime disadvantage of mergesort is that extra space proportional to N is needed in straightforward implementations. We can overcome this handicap, but doing so is sufficiently complicated and costly that it is generally not worthwhile in practice, particularly in light of the heapsort alternative. Mergesort is no more difficult to code than is heapsort, and the length of the inner loop is between those of quicksort and heapsort, so mergesort is worth considering if speed is of the essence, bad worst-case performance cannot be tolerated, and extra space is available.

A guaranteed N log N running time can also be a liability. For example, as we saw earlier, there are methods that can adapt to run in linear time in certain special situations, such as when there is a significant amount of order in the file, or when there are only a few distinct keys. By contrast, the running time of mergesort depends primarily on the number of input keys and is virtually insensitive to their order.

Mergesort is a stable sort, and this feature tips the balance in its favor for applications where stability is important. Competitive methods such as quicksort and heapsort are not stable. Various techniques to make such methods stable tend to require extra space; mergesort's extra-space requirement thus becomes less significant if stability is a prime consideration.
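Stability follows from one detail of the merge: on equal keys, take from the left (earlier) run. The small demonstration below sorts records by key only and checks that records with equal keys keep their original relative order; the record layout and function names are made up for illustration:

```python
def merge_sort_by_key(items, key):
    """Stable mergesort of items, comparing only key(item)."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort_by_key(items[:mid], key)
    right = merge_sort_by_key(items[mid:], key)
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        # <= : on a tie, the element from the left (earlier) run wins,
        # which is exactly what makes the sort stable.
        if key(left[i]) <= key(right[j]):
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

# Records tagged with their input position; equal keys keep input order.
records = [("b", 1), ("a", 2), ("b", 3), ("a", 4)]
by_key = merge_sort_by_key(records, key=lambda r: r[0])
```

With a strict `<` comparison instead, the algorithm would still sort correctly but could reverse equal-keyed records, losing stability.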

Another feature of mergesort that is important in certain situations is that mergesort is normally implemented such that it accesses the data primarily sequentially (one item after the other). For example, mergesort is the method of choice for sorting a linked list, where sequential access is the only kind of access available. For similar reasons, as we shall see, merging is often chosen as the basis for sorting on special-purpose and high-performance machines, because it is often the case that sequential access to data is fastest in such environments.
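To see why merging suits linked lists so well, note that the merge below walks each input list strictly front to back and relinks nodes in place, with no random access and no auxiliary array; the `Node` class and helper names are illustrative:

```python
class Node:
    """A singly linked list node."""
    def __init__(self, key, next=None):
        self.key = key
        self.next = next

def merge_lists(a, b):
    """Merge two sorted linked lists using only sequential access."""
    dummy = Node(None)          # placeholder head to simplify relinking
    tail = dummy
    while a is not None and b is not None:
        if a.key <= b.key:      # <= keeps the merge stable
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a if a is not None else b   # splice on the leftover run
    return dummy.next

def from_list(keys):
    """Build a linked list from a Python list (helper for testing)."""
    head = None
    for k in reversed(keys):
        head = Node(k, head)
    return head

def to_list(node):
    """Collect linked-list keys into a Python list (helper for testing)."""
    out = []
    while node is not None:
        out.append(node.key)
        node = node.next
    return out
```

Because nodes are relinked rather than copied, a linked-list mergesort built on this routine needs no extra space proportional to N, sidestepping the main drawback of the array version.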
