Hidden Surface Removal

The problem with drawing polygons is that you need to draw them in such a way that you don't see polygons that are hidden or are partially obscured by other polygons. This problem is generally solved with a correct hidden surface–removal algorithm or a combination of several hidden surface–removal algorithms. In previous chapters, you used only back-face removal and clipping as your hidden surface–removal algorithms. With back-face removal, you can draw only convex polyhedrons. In this section, we present a few other hidden surface–removal techniques that enable you to draw polygons that form any shape. Keep in mind that different hidden surface–removal algorithms work better for different situations. What works for a static world might not work for a moving 3D object, for example. In this chapter, we chose an algorithm that works better for 3D objects that can move and rotate around the world.

Painter's Algorithm

A basic hidden surface–removal algorithm is the painter's algorithm. When painting on a canvas, a painter first draws the background, then objects closer to the foreground, and so on until the foremost object is painted. With this technique, anything drawn in the foreground is painted over what is already on the canvas. For 3D graphics, the idea translates to drawing polygons in a back-to-front order. To implement the painter's algorithm, all the polygons are first sorted from back to front, relative to the camera. The difficulty here is getting a correct sorting algorithm. You could just sort by the midpoint of each polygon, but this doesn't work in all cases. Consider the two projected polygons in Screenshot, for example.

Screenshot Looking at these two projected polygons, the painter's algorithm says that polygon B should be drawn first because it's behind polygon A.


With the two projected polygons in Screenshot, you can easily tell that polygon A is in front of polygon B, so polygon B should be drawn first. However, a problem arises when measuring the distance to the camera. By using the midpoints of the polygons, the midpoint of polygon A could actually be behind the midpoint of polygon B, in which case polygon A would be drawn first. Obviously, this is the incorrect behavior. Another problem with the painter's algorithm is that drawing back to front can involve a lot of overdraw (drawing each pixel more than once), so it's not the ideal solution for drawing an entire scene.
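The midpoint sort just described can be sketched as follows. This is a minimal illustration using a hypothetical, simplified polygon type (not the book's actual polygon classes), reducing each polygon to the z-coordinate of its midpoint in camera space:

```java
import java.util.Arrays;
import java.util.Comparator;

// A minimal sketch of the painter's algorithm sort. SimplePolygon is a
// made-up stand-in for a real polygon class: it keeps only a name and
// the camera-space depth of its midpoint (larger z = farther away).
public class PainterSort {

    static class SimplePolygon {
        final String name;
        final float midpointZ;
        SimplePolygon(String name, float midpointZ) {
            this.name = name;
            this.midpointZ = midpointZ;
        }
    }

    // Sorts back to front: the polygon with the largest midpoint depth
    // is drawn first, so nearer polygons paint over it.
    public static SimplePolygon[] sortBackToFront(SimplePolygon[] polys) {
        SimplePolygon[] sorted = polys.clone();
        Arrays.sort(sorted,
            Comparator.comparingDouble((SimplePolygon p) -> p.midpointZ)
                      .reversed());
        return sorted;
    }

    public static void main(String[] args) {
        SimplePolygon[] scene = {
            new SimplePolygon("A", 5f),   // close to the camera
            new SimplePolygon("B", 20f),  // far away
            new SimplePolygon("C", 12f),
        };
        for (SimplePolygon p : sortBackToFront(scene)) {
            System.out.println(p.name); // prints B, then C, then A
        }
    }
}
```

This handles the simple case, but as just discussed, sorting by midpoints alone can order polygons incorrectly when the midpoint order doesn't match the order of the visible surfaces.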

Reverse Painter's Algorithm

The problem of overdraw with the painter's algorithm can be fixed with the reverse painter's algorithm. The reverse algorithm draws polygons from front to back rather than back to front. In this algorithm, you don't draw any pixel that's already been drawn, so you can use this algorithm without any overdraw. You just continue drawing polygons until the entire view window is filled. However, the reverse painter's algorithm still needs a polygon-sorting technique that is correct 100% of the time. Also, it works only if polygons completely surround the camera (as with enclosed areas) so that they can fill the view.


Z-Buffering
Z-buffering, also called depth buffering, doesn't have the problems of the painter's algorithm and is fairly easy to implement. With z-buffering, the depth of each pixel in the polygon is recorded in a buffer the same size as the view window. A pixel is drawn only if the depth of that pixel is closer than the depth currently in that location in the z-buffer. This results in a pixel-perfect 3D scene, no matter what order the polygons are drawn. The idea is shown in Screenshot.

Screenshot The z-buffer can be used to draw polygons correctly. The numbers represent the depth of each pixel in the polygons.


In the example in Screenshot, the z-buffer starts off empty, with the depth of each pixel set to infinity. Every pixel of the first polygon is closer than the values in the z-buffer, so every pixel is drawn and the z-buffer is set to the polygon's depth values. The second polygon actually intersects the first, so the part of the second polygon behind the first polygon isn't drawn. Z-buffering does have the drawback of requiring the extra memory for the buffer. With a 16-bit depth, a 1,024x768 screen would require 1.5MB of memory. That's not bad compared to the amount of memory most computers have today. The major drawback is the extra time it takes to draw polygons. Checking the z-buffer for every pixel you draw adds a bit of overhead. This is especially noticeable if z-buffering is used for every pixel on the screen. In my tests with the examples in this tutorial, it's two to three times slower than plain texture mapping if every pixel on the screen is drawn. However, because 3D objects usually take up only a small amount of screen space, the overhead won't be as bad as using z-buffering for the entire world.
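The figure's idea can be reproduced in a one-dimensional sketch. This is a deliberately simplified toy (plain z values in a float array, rather than the 16-bit 1/z buffer built later in this chapter): two overlapping "polygons" are drawn into the same scan line, and each pixel keeps whichever surface is closer, regardless of draw order.

```java
// One-dimensional z-buffer sketch: smaller depth value = closer to the
// camera (plain z here, not yet the 1/z variant used later in the chapter).
public class ZBufferDemo {

    // Draws two overlapping "polygons" into an 8-pixel scan line and
    // returns which polygon won each pixel.
    public static String render() {
        int width = 8;
        float[] zbuf = new float[width];
        char[] frame = new char[width];
        java.util.Arrays.fill(zbuf, Float.POSITIVE_INFINITY); // "empty" buffer
        java.util.Arrays.fill(frame, '.');

        // Polygon A covers pixels 0..5 at a constant depth of 10.
        for (int x = 0; x <= 5; x++) {
            drawPixel(zbuf, frame, x, 10f, 'A');
        }
        // Polygon B covers pixels 3..7, sloping from depth 8 to 12,
        // so it pierces through A partway across the scan.
        float depth = 8f;
        for (int x = 3; x <= 7; x++) {
            drawPixel(zbuf, frame, x, depth, 'B');
            depth += 1f;
        }
        return new String(frame);
    }

    // Draws a pixel only if it is closer than what the z-buffer holds.
    static void drawPixel(float[] zbuf, char[] frame, int x,
            float depth, char color) {
        if (depth < zbuf[x]) {
            zbuf[x] = depth;
            frame[x] = color;
        }
    }

    public static void main(String[] args) {
        System.out.println(render()); // prints AAABBABB
    }
}
```

Note that this sketch breaks ties in favor of the polygon drawn first; the checkDepth() method in Listing 9.1 does the opposite, accepting equal depths.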

Z-Buffer Using 1/z

One problem with z-buffering is that the z-buffer has the same accuracy no matter how far the polygon is from the camera. For example, if your z-buffer has 1-unit (or 1-texel) accuracy, overlapping polygons close to the camera won't merge very well: a pixel whose true depth is 10.7 gets rounded to 11. You really need something more granular for polygons close to the camera. You can fix this in a couple of ways:

  • Use 32-bit floats (or 32-bit fixed points) for z-buffer depth. This is more accurate but takes up twice the memory and can be slower.
  • Use the inverse of the depth, or 1/z.

When using z-buffering, you don't really need the exact depth value for every pixel in a polygon. All you really need is to compare depths between pixels correctly, and using 1/z still lets you do that. Also, 1/z is linear in screen space, as shown with the example polygon in Screenshot. This gives you a couple advantages:

  • Using 1/z gives you high depth precision for objects close to the camera, where you need it most.
  • Because 1/z changes linearly with screen space, as opposed to the actual z-depth, it's easy to set the depth for a horizontal scan—easier than texture mapping (which is not linear with screen space). You don't have to interpolate every 16 pixels like you did with texture mapping, so it's fast and accurate.
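A quick numeric check illustrates the linearity claim. Assume a hypothetical planar wall satisfying x + z = 10 in camera space and a view-window distance d = 1, so sx = x/z and therefore 1/z = (sx + 1)/10, which is linear in screen x even though z itself is not:

```java
public class InverseZLinear {

    // Depth z of the hypothetical plane x + z = 10 at screen coordinate sx,
    // assuming a view-window distance d = 1 (so sx = x / z).
    public static double depthAt(double sx) {
        return 10.0 / (sx + 1.0);
    }

    public static void main(String[] args) {
        double zLeft = depthAt(0.0);   // 10.0
        double zRight = depthAt(1.0);  // 5.0
        double zMid = depthAt(0.5);    // true depth at the middle pixel

        // Interpolating 1/z linearly across the scan recovers the true depth:
        double invZInterp = (1.0 / zLeft + 1.0 / zRight) / 2.0;
        System.out.println(1.0 / invZInterp - zMid); // ~0.0

        // Interpolating z itself does not (7.5 instead of ~6.67):
        System.out.println((zLeft + zRight) / 2.0 - zMid);
    }
}
```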

Screenshot 1/z is linear in screen space.


Z-buffering using 1/z is the technique used for the 3D objects in this tutorial. Because you're using 16-bit integer values for the depth, you'll want to keep the depth between 0 and Short.MAX_VALUE. Assume that z >= 1, and use something like this:

Short.MAX_VALUE / z
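
For instance, here is a minimal sketch of this mapping (the method name toDepthValue() is made up for illustration):

```java
public class DepthMapping {

    // Converts a depth z >= 1 to a 16-bit 1/z value: larger result = closer.
    public static short toDepthValue(float z) {
        return (short) (Short.MAX_VALUE / z);
    }

    public static void main(String[] args) {
        System.out.println(toDepthValue(1));    // 32767 (closest possible)
        System.out.println(toDepthValue(2));    // 16383
        System.out.println(toDepthValue(1000)); // 32 (far away)
    }
}
```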

Using this formula means that the greater the value, the closer the pixel; 0 is infinite depth. Now you'll implement a z-buffer. The ZBuffer class, shown in Listing 9.1, creates a basic 16-bit z-buffer designed to work with 1/z values.

Listing 9.1 ZBuffer.java
package com.brackeen.javagamebook.graphics3D;

/**
    The ZBuffer class implements a z-buffer, or depth-buffer,
    that records the depth of every pixel in a 3D view window.
    The value recorded for each pixel is the inverse of the
    depth (1/z), so there is higher precision for close objects
    and lower precision for far-away objects (where high
    depth precision is not as visually important).
*/
public class ZBuffer {

    private short[] depthBuffer;
    private int width;
    private int height;

    /**
        Creates a new z-buffer with the specified width and height.
    */
    public ZBuffer(int width, int height) {
        depthBuffer = new short[width*height];
        this.width = width;
        this.height = height;
    }

    /**
        Gets the width of this z-buffer.
    */
    public int getWidth() {
        return width;
    }

    /**
        Gets the height of this z-buffer.
    */
    public int getHeight() {
        return height;
    }

    /**
        Clears the z-buffer. All depth values are set to 0.
    */
    public void clear() {
        for (int i=0; i<depthBuffer.length; i++) {
            depthBuffer[i] = 0;
        }
    }

    /**
        Sets the depth of the pixel at the specified offset,
        overwriting its current depth.
    */
    public void setDepth(int offset, short depth) {
        depthBuffer[offset] = depth;
    }

    /**
        Checks the depth at the specified offset, and if the
        specified depth is lower (that is, if the 1/z value is
        greater than or equal to the value currently at the
        specified offset), the depth is set and this method
        returns true. Otherwise, no action occurs and this
        method returns false.
    */
    public boolean checkDepth(int offset, short depth) {
        if (depth >= depthBuffer[offset]) {
            depthBuffer[offset] = depth;
            return true;
        }
        else {
            return false;
        }
    }
}
Note that the ZBuffer class has two methods to manipulate the z-buffer at a particular offset: setDepth() and checkDepth(). The setDepth() method just sets the depth at the specified offset, erasing whatever value is already there. The checkDepth() method sets the depth only if the new value is greater than or equal to the current depth at that offset (remember, depths are stored as 1/z, so a greater value is closer); it returns true if this is the case and false otherwise. Finally, the clear() method sets every value in the z-buffer to 0, the equivalent of infinite depth, so you need to call clear() at the start of every frame.
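Here is a short usage sketch of these methods. To keep it self-contained, it includes a trimmed copy of the class (named SimpleZBuffer so it doesn't collide with Listing 9.1) with just the methods exercised:

```java
// Trimmed copy of the ZBuffer class from Listing 9.1, just enough
// to demonstrate how checkDepth() and clear() behave.
class SimpleZBuffer {
    private short[] depthBuffer;
    SimpleZBuffer(int width, int height) {
        depthBuffer = new short[width * height];
    }
    public void clear() {
        java.util.Arrays.fill(depthBuffer, (short) 0);
    }
    public boolean checkDepth(int offset, short depth) {
        if (depth >= depthBuffer[offset]) {
            depthBuffer[offset] = depth;
            return true;
        }
        return false;
    }
}

public class ZBufferUsage {
    public static void main(String[] args) {
        SimpleZBuffer zBuffer = new SimpleZBuffer(320, 240);
        int offset = 0;

        // Depth values are 1/z, so a larger value means a closer pixel.
        boolean drewFar = zBuffer.checkDepth(offset, (short) 100);    // true
        boolean drewNear = zBuffer.checkDepth(offset, (short) 200);   // true
        boolean drewBehind = zBuffer.checkDepth(offset, (short) 50);  // false
        System.out.println(drewFar + " " + drewNear + " " + drewBehind);

        zBuffer.clear(); // back to "infinite depth" for the next frame
        System.out.println(zBuffer.checkDepth(offset, (short) 50)); // true
    }
}
```

Note how the second, closer write succeeds and the farther one fails, and how clear() resets every pixel to infinite depth so the next frame starts fresh.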

Calculating the Z-Depth

Next, you'll derive the equations for calculating the depth of every point in the polygon. Actually, you already have the equation. As you might recall from "Texture Mapping and Lighting," you needed Pz, the depth of the polygon at a specific point on the view window, W, to figure out how to texture-map:

Pz = d(UxV•O)/(UxV•W)

Here, d is the distance from the camera to the view window, U and V are the texture x and y directions, and O is the texture origin (or any point on the polygon's plane). So, you're all set. (Aren't you glad you derived the equations for texture mapping in that chapter?) The inverse of this equation becomes this:

1/z = (UxV•W)/(d(UxV•O))

Most of this you can calculate once for the entire polygon rather than for every pixel:

float w = SCALE * MIN_DISTANCE * Short.MAX_VALUE /
 (viewWindow.getDistance() *
 c.getDotProduct(textureBounds.getOrigin()));
Remember that the vector C is set to UxV, so you don't have to recalculate UxV. In the equation, the value is multiplied by SCALE to get it ready to convert to fixed-point. It's also multiplied by MIN_DISTANCE, the minimum distance at which you want z-buffering to work; this value is set to 12. This gives a little more accuracy for objects drawn far away, but it makes objects drawn at a distance of less than 12 have incorrect depths. This is okay because you'll never see objects closer than a distance of 12 once you implement collision detection in "Collision Detection." Finally, before you texture-map a horizontal scan, calculate the depth of the first pixel (depth) and the change in depth per pixel (dDepth):

float z = c.getDotProduct(viewPos);
float dz = c.x;
int depth = (int)(w*z);
int dDepth = (int)(w*dz);

Then, for every pixel in the scan, check the depth, set the pixel if the depth check passes, and increment the depth:

if (zBuffer.checkDepth(offset, (short)(depth >> SCALE_BITS))) {
 doubleBufferData[offset] =
 texture.getColor(tx >> SCALE_BITS, ty >> SCALE_BITS);
}
depth+=dDepth;
That's it for drawing with the z-buffer. The process of calculating the depth for every pixel in a polygon is summed up in the ZBufferedRenderer class, a subclass of ShadedSurfacePolygonRenderer. This class is just like its parent class, except that it checks the depth of every pixel it draws. It also clears the z-buffer at the start of every frame (or creates a new one if the size of the view window changed):

public void startFrame(Graphics2D g) {
 // initialize depth buffer
 if (zBuffer == null ||
 zBuffer.getWidth() != viewWindow.getWidth() ||
 zBuffer.getHeight() != viewWindow.getHeight())
 {
 zBuffer = new ZBuffer(
 viewWindow.getWidth(), viewWindow.getHeight());
 }
 else if (clearViewEveryFrame) {
 zBuffer.clear();
 }
}

That's it for z-buffering. Now we move on to actually taking advantage of it by creating 3D objects that move around the world.