## CONVERTING BACK FROM SSA FORM

After program transformations and optimization, a program in static single-assignment form must be translated into some executable representation without φ-functions. The definition *y* ← φ(*x*_{1}, *x*_{2}, *x*_{3}) can be translated as "move *y* ← *x*_{1} if arriving along predecessor edge 1, move *y* ← *x*_{2} if arriving along predecessor edge 2, and move *y* ← *x*_{3} if arriving along predecessor edge 3." To "implement" this definition in an edge-split SSA form, for each *i* we insert the move *y* ← *x*_{i} at the end of the *i*th predecessor of the block containing the φ-function.

The *unique successor or predecessor* property prevents redundant moves from being inserted: in Image 19.2b (without the property), block 2 would need a move *a*_{3} ← *a*_{1} that is redundant if the *then* branch is taken; but in Image 19.2c, the move *a*_{3} ← *a*_{1} would be in block 5, and never executed redundantly.

Now we can do register allocation on this program, as described in . Although it is tempting simply to assign *x*_{1} and *x*_{2} the same register if they were derived from the same variable *x* in the original program, program transformations on the SSA form may have made their live ranges interfere (see Exercise 19.11). Thus, we ignore the original derivation of the different SSA variables, and we rely on coalescing (copy propagation) in the register allocator to eliminate almost all of the move instructions.
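The move-insertion step can be sketched as follows. This is a minimal illustration, not the book's code: the `Block` class, its `preds`/`phis`/`code` fields, and the string form of the moves are all hypothetical, and the sketch assumes the CFG is already edge-split so that each predecessor edge has a unique place to hold the move.

```python
# Hypothetical CFG representation for illustration only.
class Block:
    def __init__(self, name):
        self.name = name
        self.preds = []   # predecessor blocks, in phi-argument order
        self.phis = []    # list of (dest, [src_1, ..., src_n]) pairs
        self.code = []    # ordinary instructions, here just strings

def eliminate_phis(blocks):
    """Replace each y <- phi(x_1, ..., x_n) with a move y <- x_i
    at the end of the i-th predecessor (assumes edge-split form)."""
    for b in blocks:
        for dest, srcs in b.phis:
            for pred, src in zip(b.preds, srcs):
                pred.code.append(f"{dest} <- {src}")
        b.phis = []

# Example: block b3 merges a1 (from b1) and a2 (from b2) into a3.
b1, b2, b3 = Block("b1"), Block("b2"), Block("b3")
b3.preds = [b1, b2]
b3.phis = [("a3", ["a1", "a2"])]
eliminate_phis([b1, b2, b3])
print(b1.code)  # ['a3 <- a1']
print(b2.code)  # ['a3 <- a2']
```

Because the moves land at the ends of distinct predecessor blocks, neither move is executed on a path that does not actually reach the φ-function along that edge.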

### LIVENESS ANALYSIS FOR SSA

We can efficiently construct the interference graph of an SSA program, just prior to converting the φ-functions to move instructions. For each variable *v*, Algorithm 19.17 walks backward from each use, stopping when it reaches *v*'s definition. The *dominance property of SSA form* ensures that the algorithm will always stay in the region dominated by the definition of *v*. For many variables this region is small; contrast this with the situation in Image 19.14 (a non-SSA program), where the algorithm applied to variable *x*_{1} would walk upwards through the 1 → 3 edge and traverse the entire program. Because this algorithm processes only the blocks where *v* is live, its running time is proportional to the size of the interference graph that it constructs (see Exercise 19.12).

Algorithm 19.17 as shown uses recursion (when *LiveInAtStatement* calls *LiveOutAtBlock*), and also *tail recursion* (when *LiveInAtStatement* calls *LiveOutAtStatement*, when *LiveOutAtStatement* calls *LiveInAtStatement*, and when *LiveOutAtBlock* calls *LiveOutAtStatement*). Some programming languages or compilers can compile tail recursion very efficiently as a *goto* - see . But when implementing this algorithm in compilers that do not support efficient tail calls, it may be best to replace the tail recursion with explicit gotos, or to use work lists for *LiveOutAtStatement* and *LiveInAtStatement*.
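The work-list transformation can be sketched as follows. Again the data structures and the `"in"`/`"out"` task tags are illustrative assumptions: each pending tail call becomes a tagged task on an explicit stack, so the walk runs in a loop with bounded call depth instead of recursing.

```python
from collections import deque

# Hypothetical IR, as before: statements record their definitions.
class Stmt:
    def __init__(self, defs):
        self.defs = set(defs)
        self.block = None

class Block:
    def __init__(self, stmts, preds=()):
        self.stmts = stmts
        self.preds = list(preds)
        for s in stmts:
            s.block = self

def liveness_worklist(v, uses, interferences):
    """Work-list version of the backward walk: each task is a pending
    LiveInAtStatement ("in") or LiveOutAtStatement ("out") call."""
    visited_blocks = set()
    work = deque(("in", s) for s in uses)
    while work:
        kind, s = work.pop()
        if kind == "out":
            for w in s.defs:              # v live-out at s: interference
                if w != v:
                    interferences.add(frozenset((v, w)))
            if v in s.defs:
                continue                   # reached v's definition; stop
            kind = "in"                    # tail call becomes fall-through
        # kind == "in": v is live-in at s
        i = s.block.stmts.index(s)
        if i == 0:
            for p in s.block.preds:        # LiveOutAtBlock, memoized
                if p not in visited_blocks:
                    visited_blocks.add(p)
                    work.append(("out", p.stmts[-1]))
        else:
            work.append(("out", s.block.stmts[i - 1]))

# Same example as before: x defined at s1, y at s2, x used at s3.
s1, s2, s3 = Stmt({"x"}), Stmt({"y"}), Stmt(set())
A = Block([s1, s2])
B = Block([s3], preds=[A])
interf = set()
liveness_worklist("x", [s3], interf)
# x and y interfere, as in the recursive version.
```

The `visited_blocks` set plays the same role as the membership test in *LiveOutAtBlock*: each block is entered at most once per variable, which is what keeps the running time proportional to the size of the interference graph.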
