
FURTHER READING

Sites [1992] discusses several kinds of instruction- and data-cache alignment optimizations. Efficient approximation algorithms for the traveling salesman problem (TSP) can be applied to basic-block ordering, to minimize the instruction-fetch penalties of branches [Young et al. 1997]. Mowry et al. [1992] describe an algorithm for inserting prefetch instructions in for-loops, taking care not to insert prefetches (which do, after all, have an instruction-issue cost) where the data in question is likely to be in cache already. The Lisp Machine's garbage collector used depth-first search to group related objects on the same page, to minimize page faults [Moon 1984]. Koopman et al. [1992] describe prefetching for a garbage-collected system. Diwan et al. [1994], Reinhold [1994], and Gonçalves and Appel [1995] analyze the cache locality of programs that use copying garbage collection. For mark-sweep collectors, Boehm et al. [1991] suggest that (to improve page-level locality) new objects should not be allocated into mostly full pages containing old objects, and that the sweep phase should be done incrementally so that pages and cache blocks are "touched" by the sweep just before they are allocated by the program.

The techniques for optimizing the memory locality of programs with nested loops have much in common with techniques for parallelizing loops. For example, in a parallel implementation of matrix multiplication, having each processor compute one row of the C matrix requires that processor to have N elements of A and all N^2 elements of B, or O(N^2) words of interprocessor communication. Instead, each processor should compute one block of C (where the block size is √N × √N); then each processor requires N√N words of A and of B, which is only O(N^1.5) words of communication.
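The same block decomposition has a uniprocessor analogue: blocking (tiling) the loop nest so that each tile of A and B is reused while it is still in cache. A minimal hand-blocked sketch in Java (the class name and the small block size are illustrative only; a real compiler would choose the block size from the cache parameters of the target machine):

```java
// Blocked (tiled) matrix multiplication: a hand-applied sketch of the
// blocking transformation discussed in this chapter. Illustrative only.
public class BlockedMatmul {
    static final int BLOCK = 2;  // block size; in practice chosen so that
                                 // a BLOCK x BLOCK tile fits in the cache

    // Computes C = A * B for n x n matrices, visiting C in BLOCK x BLOCK
    // tiles so each tile of A and B is reused BLOCK times while it is hot.
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i0 = 0; i0 < n; i0 += BLOCK)
            for (int j0 = 0; j0 < n; j0 += BLOCK)
                for (int k0 = 0; k0 < n; k0 += BLOCK)
                    // multiply the (i0,k0) tile of A by the (k0,j0) tile of B
                    for (int i = i0; i < Math.min(i0 + BLOCK, n); i++)
                        for (int k = k0; k < Math.min(k0 + BLOCK, n); k++) {
                            double aik = a[i][k];  // hoisted out of the j loop
                            for (int j = j0; j < Math.min(j0 + BLOCK, n); j++)
                                c[i][j] += aik * b[k][j];
                        }
        return c;
    }
}
```

The `Math.min` bounds handle matrices whose size is not a multiple of the block size, so the same code works for any n.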
Many of the compilers that use blocking and loop-nest optimizations to generate the most memory-efficient code for uniprocessors are parallelizing compilers - with the parallelization turned off! To generate good parallel code - or to perform many of the loop optimizations described in this chapter, such as blocking and interchange - it's necessary to analyze how array accesses are data-dependent on each other. Array dependence analysis is beyond the scope of this tutorial, but is covered well by Wolfe [1996]. Callahan et al. [1990] show how to do scalar replacement; Carr and Kennedy [1994] show how to calculate the right amount of unroll-and-jam for a loop based on the characteristics of the target machine. Wolf and Lam [1991] describe a compiler optimization algorithm that uses blocking, tiling (like blocking but where the tiles can be skewed instead of rectangular), and loop interchange to achieve locality improvements on many kinds of nested loops.
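The scalar replacement of Callahan et al. can be illustrated by hand: a repeated array reference in an inner loop is carried in a scalar temporary, which the register allocator can then keep in a register instead of reloading and re-storing it from memory on every iteration. A hypothetical before/after sketch (the method names are mine, not from the cited paper):

```java
// Scalar replacement, hand-applied: sum[i] is invariant in address across
// the inner loop, so it can live in a scalar for the loop's duration.
public class ScalarReplacement {
    // Before: a load and a store of sum[i] on every inner-loop iteration.
    static double[] rowSumsNaive(double[][] a) {
        double[] sum = new double[a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[i].length; j++)
                sum[i] = sum[i] + a[i][j];   // memory traffic each iteration
        return sum;
    }

    // After scalar replacement: the running sum is carried in the scalar s,
    // with a single store of sum[i] per row.
    static double[] rowSums(double[][] a) {
        double[] sum = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            double s = 0.0;                  // scalar temporary for sum[i]
            for (int j = 0; j < a[i].length; j++)
                s += a[i][j];
            sum[i] = s;                      // one store per row
        }
        return sum;
    }
}
```

Both methods compute the same result; the transformed version simply trades repeated memory references for a register-resident scalar.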

The textbook by Wolfe [1996] covers almost all the techniques described in this chapter, with particular emphasis on automatic parallelization but also with some treatment of improving memory locality.

