
## EXERCISES

• 19.1 Write an algorithm, using depth-first search, to number the nodes of a tree in depth-first order and to annotate each node with the number of its highest-numbered descendant. Show how these annotations can be used (once your preprocessing algorithm has been run on a dominator tree) to answer a query of the form "does node i dominate node j?" in constant time.
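
One way to realize this preprocessing (a sketch in Python, not the book's solution; `tree` is a hypothetical map from each node to its list of children):

```python
# Number the nodes of a tree in depth-first order, and record for each node
# the highest number assigned anywhere in its subtree.
def annotate(tree, root):
    dfnum, maxdesc = {}, {}
    counter = 0
    def dfs(n):
        nonlocal counter
        dfnum[n] = counter
        counter += 1
        for c in tree.get(n, []):
            dfs(c)
        # every descendant of n was numbered between dfnum[n] and counter-1
        maxdesc[n] = counter - 1
    dfs(root)
    return dfnum, maxdesc

# If the tree is a dominator tree, i dominates j exactly when j's number
# falls inside i's descendant interval: a constant-time test.
def dominates(dfnum, maxdesc, i, j):
    return dfnum[i] <= dfnum[j] <= maxdesc[i]
```

The query is correct because the nodes numbered dfnum[i] through maxdesc[i] are exactly i's subtree, and in a dominator tree the subtree of i is precisely the set of nodes that i dominates.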

• 19.2 Use Algorithm 19.9 to calculate the dominators of the flowgraph of Exercise 18.1, showing the semidominators and spanning forest at various stages.
• 19.3 For each of the graphs of Image 18.1 and Image 18.2, calculate the immediate dominator tree (using either Algorithm 19.9 or the algorithm in ), and for each node n calculate DFlocal[n], DFup[n], and DF[n].
• *19.4 Prove that, for any node v, Algorithm 19.9+19.10b always initializes best[v] ← v (in the Link function) before calling AncestorWithLowestSemi(v).
• 19.5 Calculate the dominance frontier of each node in each of these graphs:
1. The graph of Image 2.8.

2. The graph of Exercise 2.3a.
3. The graph of Exercise 2.5a.
4. The graph of Image 3.27.
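
A direct transcription of the dominance-frontier definition can be used to check hand-calculated answers (a sketch; `succ` maps every node to its CFG successors and `idom` maps every node to its immediate dominator, with the root mapped to itself; both are hypothetical inputs):

```python
def dominators_of(idom, n):
    # the chain of immediate dominators from n up to the root, inclusive
    doms = {n}
    while idom[n] != n:
        n = idom[n]
        doms.add(n)
    return doms

def dominance_frontier(succ, idom, n):
    # DF[n] = { y : some edge u -> y has n dominating u,
    #           but n does not strictly dominate y }
    df = set()
    for u in succ:                        # u ranges over all nodes
        if n in dominators_of(idom, u):   # n dominates u
            for y in succ[u]:
                if n not in dominators_of(idom, y) or y == n:
                    df.add(y)
    return df
```

On a simple diamond (1 branches to 2 and 3, which both reach 4) this yields DF[2] = DF[3] = {4} and DF[1] = {}; on a loop, the loop header appears in its own dominance frontier.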
• **19.6 Prove that

DF[n] = DFlocal[n] ∪ ⋃ { DFup[Z] : Z a child of n in the dominator tree }

as follows:

1. Show that DFlocal[n]⊆ DF[n];

2. Show that for each child Z of n, DFup[Z]⊆ DF[n];
3. If there is a node Y in DF[n], then there is an edge U → Y such that n dominates U but does not strictly dominate Y. Show that if U = n, then Y ∈ DFlocal[n], and if U ≠ n, then Y ∈ DFup[Z] for some child Z of n.
4. Combine these lemmas into a proof of the theorem.
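
The theorem is also the basis of the standard bottom-up computation over the dominator tree; a sketch (with hypothetical inputs: `succ` is the CFG, `idom` and `children` describe the dominator tree):

```python
def strictly_dominates(idom, a, b):
    # a strictly dominates b iff a is a proper ancestor of b in the dom tree
    while idom[b] != b:
        b = idom[b]
        if b == a:
            return True
    return False

def compute_df(succ, idom, children, n, df):
    s = set()
    for y in succ[n]:                  # DFlocal[n]: successors of n
        if idom[y] != n:               # ... that n does not strictly dominate
            s.add(y)
    for z in children.get(n, []):      # DFup[Z] for each dom-tree child Z
        compute_df(succ, idom, children, z, df)
        for w in df[z]:
            if not strictly_dominates(idom, n, w):
                s.add(w)
    df[n] = s
```

Each node's frontier is assembled exactly as the equation states: its local part, plus whatever passes up from its dominator-tree children.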
• 19.7 Convert this program to SSA form:

Show your work after each stage:

1. Add a start node containing initializations of all variables.

2. Draw the dominator tree.
3. Calculate dominance frontiers.
4. Insert φ-functions.
5. Add subscripts to variables.
6. Use Algorithm 19.17 to build the interference graph.
7. Convert back from SSA form by inserting move instructions in place of φ-functions.
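
Step 4 above, φ-function insertion, can be sketched with the usual iterated-dominance-frontier worklist (hypothetical inputs: `df` maps each block to its dominance frontier, `defsites` maps each variable to the set of blocks that define it):

```python
def place_phis(df, defsites):
    phis = {}                           # block -> set of variables needing a phi
    for var, sites in defsites.items():
        work = list(sites)
        has_phi = set()
        while work:
            n = work.pop()
            for y in df[n]:
                if y not in has_phi:
                    phis.setdefault(y, set()).add(var)
                    has_phi.add(y)
                    if y not in sites:  # a phi is itself a definition of var
                        work.append(y)
    return phis
```

The worklist re-enqueues each block that receives a φ, because the φ is a new definition whose own dominance frontier may demand further φ-functions.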
• 19.8 This C (or Java) program illustrates an important difference between def-use chains and SSA form:
```c
int f(int i, int j) {
    int x, y;
    switch (i) {
    case 0: x = 3; break;
    case 1: x = 1; break;
    case 2: x = 4; break;
    case 3: x = 1; break;
    case 4: x = 5; break;
    default: x = 9;
    }
    switch (j) {
    case 0: y = x + 2; break;
    case 1: y = x + 7; break;
    case 2: y = x + 1; break;
    case 3: y = x + 8; break;
    case 4: y = x + 2; break;
    default: y = x + 8;
    }
    return y;
}
```

1. Draw the control-flow graph of this program.

2. Draw the use-def and def-use data structures of the program: For each definition site, draw a linked-list data structure pointing to each use site, and vice versa.
3. Starting from the CFG of part (a), convert the program to SSA form. Draw data structures representing the uses, defs, and φ-functions, as described at the beginning of .
4. Count the total number of data-structure nodes in the use-def data, and the total number in the SSA data structure. Compare.
5. Approximate the total sizes of the use-def data structures, and the SSA data structures, if there were N cases in each switch instead of 6.
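
For part 5, a rough count (my own back-of-the-envelope model, not the book's data structures): every one of the N definitions of x can reach every one of the N uses of x in the second switch, whereas SSA funnels all the definitions through a single N-ary φ-function.

```python
def duchain_links(n):
    # def-use chains: one link per (definition, use) pair of x
    return n * n

def ssa_links(n):
    # SSA: n defs feed one n-ary phi for x, whose single result has n uses
    return n + 1 + n
```

So the def-use representation grows quadratically in the number of switch arms while the SSA representation grows only linearly.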
• *19.9 Suppose the graph of Exercise 2.3a is the control-flow graph of a program, and in block 1 there is an assignment to a variable v.
1. Convert the graph to SSA form (insert φ-functions for v).

2. Show that for any N, there is a "ladder" CFG with O(N) blocks, O(N) edges, and O(N) assignment statements (all in the first block!), such that the number of φ-functions in the SSA form is N².
3. Write a program whose CFG looks like this.
4. Show that a program containing deeply nested repeat-until loops can have the same N² blowup of φ-functions.
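
For part 2, here is one hypothetical ladder shape (my own construction, not necessarily the intended one): an entry E feeds two rails A and B, every block on either rail branches to both blocks of the next rung, and all the variables are assigned in block A1. Because E, not A1, dominates the later rungs, a definition in A1 cascades φ-functions down the whole ladder. The sketch below brute-forces dominators and frontiers to count the φ-functions:

```python
def dominator_sets(succ, entry):
    nodes = set(succ)
    preds = {n: [p for p in succ if n in succ[p]] for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def frontiers(succ, dom):
    df = {n: set() for n in succ}
    for u in succ:
        for y in succ[u]:
            for n in dom[u]:            # n dominates u
                if n not in dom[y] or n == y:   # n does not strictly dom y
                    df[n].add(y)
    return df

def count_phis(succ, entry, defblock, nvars):
    df = frontiers(succ, dominator_sets(succ, entry))
    total = 0
    for _ in range(nvars):              # each variable is defined in defblock
        has_phi, work = set(), [defblock]
        while work:
            n = work.pop()
            for y in df[n]:
                if y not in has_phi:
                    has_phi.add(y)
                    total += 1
                    work.append(y)      # the phi is itself a definition
    return total

def ladder(n):
    succ = {"E": ["A1", "B1"]}
    for i in range(1, n + 1):
        nxt = [f"A{i+1}", f"B{i+1}"] if i < n else []
        succ[f"A{i}"] = list(nxt)
        succ[f"B{i}"] = list(nxt)
    return succ
```

With N variables and N rungs this construction places 2N(N-1) φ-functions, i.e. Θ(N²), while using only O(N) blocks, edges, and assignment statements.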
• *19.10 Algorithm 19.7 uses a stack for each variable, to remember the current active definition of the variable. This is equivalent to using environments to process nested scopes, as explained for type-checking.
1. Rewrite Algorithm 19.7, calling upon the imperative environments of package Symbol (whose interface is given in Program 5.5) instead of using explicit stacks.

2. Rewrite Algorithm 19.7, using the functional-style symbol tables whose Table class is described on page 110.
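
The equivalence this exercise is after can be seen in miniature (a hypothetical toy renaming problem, not the book's Symbol or Table interfaces): the per-variable stacks of Algorithm 19.7 behave exactly like an environment that is extended on entry to a dominator-tree subtree and restored on exit.

```python
# Style 1: one explicit stack per variable, popped when leaving the subtree.
def rename_stacks(domtree, defs, root):
    stacks, count, out = {}, {}, {}
    def walk(n):
        pushed = []
        for v in defs.get(n, []):
            count[v] = count.get(v, 0) + 1
            stacks.setdefault(v, []).append(count[v])
            pushed.append(v)
        out[n] = {v: s[-1] for v, s in stacks.items() if s}
        for c in domtree.get(n, []):
            walk(c)
        for v in reversed(pushed):      # pop on exit, like leaving a scope
            stacks[v].pop()
    walk(root)
    return out

# Style 2: a functional environment passed down the recursion; no pops needed,
# since the caller's environment is never mutated.
def rename_functional(domtree, defs, root):
    out, count = {}, {}
    def walk(n, env):
        for v in defs.get(n, []):
            count[v] = count.get(v, 0) + 1
            env = {**env, v: count[v]}  # persistent-table style update
        out[n] = env
        for c in domtree.get(n, []):
            walk(c, env)
    walk(root, {})
    return out
```

Both styles compute the same "current definition of each variable" at every node; the stack discipline is simply an imperative implementation of the environment discipline.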
• 19.11 Show that optimization on an SSA program can cause two SSA variables a1 and a2, derived from the same variable a in the original program, to have overlapping live ranges as described on page 428. Hint: Convert this program to SSA, and then do exactly one constant-propagation optimization.
```while c<0 do (b := a; a := M[x]; c := a+b);
return a;
```

• *19.12 Let Vc and Ec be the nodes and edges of the CFG, and Vi and Ei be the nodes and edges of the interference graph produced by Algorithm 19.17. Let N = |Vc| + |Ec| + |Vi| + |Ei|.
1. Show that the run time of Algorithm 19.17 on the following (weird) program is asymptotically proportional to N^1.5:

``` v1 ← 0
v2 ← 0
⋮
vm ← 0
goto L1
L1 : goto L2
L2 : goto L3
⋮
Lm² :
w1 ← v1
w2 ← v2
⋮
wm ← vm
```

2. Show that if every block defines at least one variable, and has no more than c statements and no more than c out-edges (for some constant c), then the time complexity of Algorithm 19.17 is O(N). Hint: Whenever LiveOutAtBlock is called, there will be at most c calls to LiveOutAtStatement, and at least one will add an edge to the interference graph.
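
The intuition for part 1, in a rough cost model of my own (not the book's analysis): m variables are defined before a chain of m² empty blocks and used after it, so each variable's liveness walk traverses the whole chain.

```python
def weird_costs(m):
    # |Vc| + |Ec|: statements and blocks, plus roughly one edge each,
    # plus |Ei|: all m variables are simultaneously live, so they
    # pairwise interfere.
    n = (2 * m + m * m) + (2 * m + m * m) + m * (m - 1) // 2
    work = m * (m * m)   # each variable's walk visits the m^2-block chain
    return n, work
```

Since N is dominated by the m² term, the m³ units of work are on the order of N^1.5, matching the claim.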
