
FURTHER READING

The IBM Fortran H compiler used dominators to identify loops in control-flow graphs of basic blocks of machine instructions [Lowry and Medlock 1969]. Lengauer and Tarjan [1979] developed the near-linear-time algorithm for finding dominators in a directed graph, and proved the related theorems mentioned in this chapter. It is common to use this algorithm while mentioning the existence [Harel 1985] of a more complicated linear-time algorithm. Finding the "best" node above a given spanning-forest node is an example of a union-find problem; analyses of balanced path-compression algorithms for union-find (such as the "sophisticated" version of the Lengauer-Tarjan algorithm) can be found in many algorithms textbooks (e.g., Sections 22.3-22.4 of Cormen et al. [1990]); a minimal sketch of such a balanced union-find structure appears below.
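For readers unfamiliar with the data structure, the following is a minimal Java sketch of union-find with union by rank and path compression. It is not the EVAL/LINK structure of the Lengauer-Tarjan algorithm itself (which additionally tracks, along each compressed path, the ancestor with the smallest semidominator number); it only shows the balancing and path-compression ideas that the cited analyses apply to.

```java
// Minimal union-find with union by rank and path compression (illustrative only).
class UnionFind {
    private final int[] parent, rank;

    UnionFind(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;   // each node starts as its own root
    }

    // find with path compression: every node on the search path
    // is re-linked directly to the root of its set.
    int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    // union by rank keeps the trees shallow; together with path compression
    // this gives the near-linear (inverse-Ackermann) bound analyzed in the
    // textbooks cited above.
    void union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return;
        if (rank[ra] < rank[rb])      parent[ra] = rb;
        else if (rank[ra] > rank[rb]) parent[rb] = ra;
        else                          { parent[rb] = ra; rank[ra]++; }
    }
}
```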
Static single-assignment form was developed by Wegman, Zadeck, Alpern, and Rosen [Alpern et al. 1988; Rosen et al. 1988] for efficient computation of dataflow problems such as global value numbering, congruence of variables, aggressive dead-code removal, and constant propagation with conditional branches [Wegman and Zadeck 1991]. Control-dependence was formalized by Ferrante et al. [1987] for use in an optimizing compiler for vector parallel machines. Cytron et al. [1991] describe the efficient computation of SSA and control-dependence graphs using dominance frontiers and prove several of the theorems mentioned in this chapter. Wolfe [1996] describes several optimization algorithms on SSA (which he calls factored use-def chains), including induction-variable analysis.

It is useful to perform several transformations on the flowgraph before conversion to SSA form. These include the conversion of while-loops to repeat-loops, and the insertion of loop preheader nodes (see page 382), postbody nodes [Wolfe 1996] (Exercise 18.6), and landing pads for loop-exit edges [Rosen et al. 1988] (edge-splitting effectively accomplishes the insertion of landing pads). Such transformations provide locations into which statements (such as loop-invariant computations or common subexpressions) may be placed.

Varieties of functional intermediate representations

Functional intermediate forms are all based on lambda-calculus, more or less, but they differ in three important respects:
  1. Some are strict and some are lazy; a small thunk-based sketch following this list illustrates the difference.

  2. Some have arbitrary nesting of subexpressions; some have atomic arguments; and some have atomic arguments + λ, meaning that all arguments except anonymous functions are atomic.
  3. Some permit nontail calls (direct style) and some support only tail calls (continuation-passing style).
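
As a rough illustration of distinction (1), here is a small Java sketch (not from the book, and only approximate, since explicit thunks give call-by-name whereas lazy languages memoize their thunks, i.e., call-by-need): under strict evaluation an argument is evaluated before the call; under lazy evaluation it is evaluated only if and when the callee demands it.

```java
import java.util.function.Supplier;

class StrictVersusLazy {
    // Strict: both argument expressions are evaluated before first() is entered.
    static int first(int x, int y) { return x; }

    // "Lazy" via explicit thunks: an argument is evaluated only when forced with get().
    static int firstLazy(Supplier<Integer> x, Supplier<Integer> y) {
        return x.get();                          // y is never forced
    }

    static int diverges() { throw new IllegalStateException("evaluated!"); }

    public static void main(String[] args) {
        System.out.println(firstLazy(() -> 1, () -> diverges()));  // prints 1; diverges() never runs
        // System.out.println(first(1, diverges()));               // strict version would throw
    }
}
```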

Distinction (1) ceases to matter in continuation-passing style. The design space of these options has been well explored, as this table shows:

                                    | Direct style: Strict                     | Direct style: Lazy  | Continuation-passing
Arbitrarily nested subexpressions   | Cardelli [1984], Cousineau et al. [1985] | Augustsson [1984]   |
Atomic arguments + λ                | Flanagan et al. [1993]                   |                     | Steele [1978], Kranz et al. [1986]
Atomic arguments                    | Tarditi [1997]                           | Peyton Jones [1992] | Appel [1992]

The functional intermediate form shown in Figure 19.18 fits in the lower left-hand corner, along with Tarditi [1997]. Kelsey [1995] shows how to convert between SSA and continuation-passing style.
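
To make distinctions (2) and (3) concrete, and without attempting to reproduce any particular intermediate form from the table, the following Java sketch (the helper methods add and mul are hypothetical) writes one computation three ways: with arbitrarily nested subexpressions, with atomic arguments, and in continuation-passing style, where every intermediate result is passed to a continuation and every call is a tail call.

```java
import java.util.function.IntConsumer;

class StyleSketch {
    // Direct style, arbitrarily nested subexpressions:
    // the operands of * are themselves compound expressions.
    static int nested(int a, int b, int c) {
        return (a + b) * (c + 1);
    }

    // Direct style, atomic arguments: every operator is applied only to
    // variables or constants, so each intermediate value gets a name.
    static int atomic(int a, int b, int c) {
        int t1 = a + b;
        int t2 = c + 1;
        return t1 * t2;
    }

    // Continuation-passing style: nothing returns a value; each primitive
    // hands its result to a continuation k, and only tail calls occur.
    static void cps(int a, int b, int c, IntConsumer k) {
        add(a, b, t1 ->
            add(c, 1, t2 ->
                mul(t1, t2, k)));
    }

    // Hypothetical CPS primitives used above.
    static void add(int x, int y, IntConsumer k) { k.accept(x + y); }
    static void mul(int x, int y, IntConsumer k) { k.accept(x * y); }

    public static void main(String[] args) {
        cps(2, 3, 4, result -> System.out.println(result));   // prints 25
    }
}
```

In the continuation-passing version the order of evaluation is fixed by the nesting of continuations, which is why distinction (1) ceases to matter there.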

