
## Hidden Surface Removal

The problem with drawing polygons is that you need to draw them in such a way that you don't see polygons that are hidden or partially obscured by other polygons. This problem is generally solved with a correct hidden surface–removal algorithm, or a combination of several hidden surface–removal algorithms. In previous chapters, you used only back-face removal and clipping as your hidden surface–removal algorithms. With back-face removal alone, you can draw only convex polyhedrons. In this section, we present a few other hidden surface–removal techniques that enable you to draw polygons that form any shape.

Keep in mind that different hidden surface–removal algorithms work better for different situations. What works for a static world might not work for a moving 3D object, for example. In this chapter, we chose an algorithm that works well for 3D objects that can move and rotate around the world.

## Painter's Algorithm

A basic hidden surface–removal algorithm is the painter's algorithm. When painting on a canvas, a painter first draws the background, then objects closer to the foreground, and so on until the foremost object is painted. With this technique, anything drawn in the foreground is painted over what is already on the canvas. For 3D graphics, the idea translates to drawing polygons in back-to-front order.

To implement the painter's algorithm, all the polygons are first sorted from back to front, relative to the camera. The difficulty here is getting a correct sorting algorithm. You could just sort by the midpoint of each polygon, but this doesn't work in all cases. Consider the two projected polygons in Screenshot, for example.

## Screenshot

Looking at these two projected polygons, the painter's algorithm says that polygon B should be drawn first because it's behind polygon A.

With the two projected polygons in Screenshot, you can easily tell that polygon A is in front of polygon B, so polygon B should be drawn first.
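The back-to-front sorting step can be sketched as follows. This is a minimal illustration, not the tutorial's actual code: `SimplePolygon` is a hypothetical stand-in type that only records the depth of its midpoint (its distance from the camera along the view axis).

```java
import java.util.Comparator;
import java.util.List;

/**
 * A minimal sketch of the painter's algorithm sort. SimplePolygon is a
 * hypothetical stand-in for a real polygon class: it records only the
 * depth of the polygon's midpoint. Sorting places the farthest polygon
 * first, so drawing the list in order paints near polygons last.
 */
public class PainterSort {

    /** Hypothetical polygon stand-in: just a midpoint depth. */
    static class SimplePolygon {
        final float midpointDepth;
        SimplePolygon(float midpointDepth) {
            this.midpointDepth = midpointDepth;
        }
    }

    /** Sorts polygons back to front (largest midpoint depth first). */
    static void sortBackToFront(List<SimplePolygon> polygons) {
        polygons.sort(Comparator
            .comparingDouble((SimplePolygon p) -> p.midpointDepth)
            .reversed());
    }
}
```

Sorting by midpoints like this is exactly the shortcut that fails in the situation shown in the screenshot: two polygons can overlap so that no midpoint ordering draws them correctly.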
However, a problem arises when measuring the distance to the camera. By using the midpoints of the polygons, the midpoint of polygon A could actually be behind the midpoint of polygon B, in which case polygon A would be drawn first. Obviously, this is incorrect behavior. Another problem with the painter's algorithm is that drawing back to front can involve a lot of overdraw (drawing each pixel more than once), so it's not the ideal solution for drawing an entire scene.

## Reverse Painter's Algorithm

The problem of overdraw with the painter's algorithm can be fixed with the reverse painter's algorithm, which draws polygons from front to back rather than back to front. In this algorithm, you never draw a pixel that has already been drawn, so there is no overdraw. You just continue drawing polygons until the entire view window is filled. However, the reverse painter's algorithm still needs a polygon-sorting technique that is correct 100% of the time. Also, it works only if polygons completely surround the camera (as with enclosed areas) so that they can fill the view.

## Z-Buffer

Z-buffering, also called depth buffering, doesn't have the problems of the painter's algorithm and is fairly easy to implement. With z-buffering, the depth of each pixel in the polygon is recorded in a buffer the same size as the view window. A pixel is drawn only if its depth is closer than the depth currently recorded at that location in the z-buffer. This results in a pixel-perfect 3D scene, no matter what order the polygons are drawn in. The idea is shown in Screenshot.

## Screenshot

The z-buffer can be used to draw polygons correctly. The numbers represent the depth of each pixel in the polygons.

In the example in Screenshot, the z-buffer starts off empty, with the depth of each pixel set to infinity.
Every pixel of the first polygon is closer than the values in the z-buffer, so every pixel is drawn and the z-buffer is set to the polygon's depth values. The second polygon actually intersects the first, so the part of the second polygon behind the first polygon isn't drawn.

Z-buffering does have the drawback of requiring extra memory for the buffer. With a 16-bit depth, a 1,024x768 screen requires 1.5MB of memory. That's not bad compared to the amount of memory most computers have today. The major drawback is the extra time it takes to draw polygons: checking the z-buffer for every pixel you draw adds a bit of overhead. This is especially noticeable if z-buffering is used for every pixel on the screen. In my tests with the examples in this tutorial, it's two to three times slower than plain texture mapping if every pixel on the screen is drawn. However, because 3D objects usually take up only a small amount of screen space, the overhead won't be as bad as using z-buffering for the entire world.

## Z-Buffer Using 1/z

One problem with z-buffering is that the z-buffer has the same accuracy no matter how far away the polygon is from the camera. For example, if your z-buffer has 1-unit (or 1-texel) accuracy, overlapping polygons close to the camera won't merge very well: a pixel whose true depth is, say, 10.7 can be stored only as 10 or 11. You really need something more granular for polygons close to the camera. You can fix this in a couple of ways:

- Use 32-bit floats (or 32-bit fixed points) for the z-buffer depth. This is more accurate but takes up twice the memory and can be slower.
- Use the inverse of the depth, or 1/z.
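The precision benefit of the second option can be seen with a short sketch. It assumes depths are encoded into a 16-bit short as `Short.MAX_VALUE / z`, the same formula used later in this chapter; the specific depth values are made up for illustration.

```java
/**
 * Sketch: encoding depth as the inverse (1/z) in a 16-bit value,
 * here as Short.MAX_VALUE / z. Close depths map to well-separated
 * codes; distant depths collapse to the same code, where the loss
 * of precision is not visually important.
 */
public class InverseDepth {

    /** Encodes a depth z (in world units, z >= 1) as a 16-bit 1/z code. */
    static short encode(float z) {
        return (short) (Short.MAX_VALUE / z);
    }
}
```

For example, depths 10 and 11 get clearly different codes, while depths 1000 and 1010 share one, which is exactly the distribution of precision you want.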
When using z-buffering, you don't really need the exact depth value for every pixel in a polygon. All you really need is to compare depths between pixels correctly, and using 1/z still lets you do that. Also, 1/z is linear in screen space, as shown with the example polygon in Screenshot. This gives you a couple of advantages:

- Using 1/z gives you high depth precision for objects close to the camera, where you need it most.
- Because 1/z changes linearly with screen space, as opposed to the actual z-depth, it's easy to set the depth for a horizontal scan—easier than texture mapping (which is not linear with screen space). You don't have to interpolate every 16 pixels like you did with texture mapping, so it's fast and accurate.
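The linearity claim above can be checked numerically. In the sketch below, the ray through screen pixel (x, y) is W = (x, y, d), and the depth of the point where that ray meets a polygon's plane is computed directly from the plane equation. The plane normal and constant are made-up values chosen so the depth varies across the scan line; they are not from the tutorial's code.

```java
/**
 * Sketch: 1/z is linear in screen space while z is not.
 * The plane is given by NORMAL . P = K, and the ray through screen
 * pixel (x, y) is W = (x, y, D). Intersecting ray and plane gives
 * the depth z of the visible point. NORMAL and K are made-up values.
 */
public class InverseZLinearity {

    static final double D = 256.0;                  // camera-to-view-window distance
    static final double[] NORMAL = {1.0, 0.0, 2.0}; // made-up plane normal
    static final double K = 2000.0;                 // made-up plane constant

    /** Depth along the view axis of the plane point seen at screen (x, y). */
    static double depthAt(double x, double y) {
        // Dot product of the plane normal with the ray W = (x, y, D).
        double dot = NORMAL[0] * x + NORMAL[1] * y + NORMAL[2] * D;
        double t = K / dot;   // ray parameter where the ray meets the plane
        return t * D;         // z component of the intersection point
    }
}
```

Stepping one pixel at a time in x, 1/z changes by a constant increment while z itself does not, which is why a horizontal scan can interpolate 1/z exactly with one addition per pixel.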
## Screenshot

1/z is linear in screen space.

Z-buffering using 1/z is the technique used for the 3D objects in this tutorial. Because you're using 16-bit integer values for the depth, you'll want to keep the depth between 0 and Short.MAX_VALUE, so the stored depth for a pixel is calculated as Short.MAX_VALUE / z. Using this formula means that the greater the stored value is, the closer the pixel is; a stored depth of 0 represents a pixel that is infinitely far away. The ZBuffer class is shown in Listing 9.1.

## Listing 9.1 ZBuffer.java

```java
package com.brackeen.javagamebook.graphics3D;

/**
    The ZBuffer class implements a z-buffer, or depth-buffer,
    that records the depth of every pixel in a 3D view window.
    The value recorded for each pixel is the inverse of the
    depth (1/z), so there is higher precision for close objects
    and lower precision for far-away objects (where high depth
    precision is not as visually important).
*/
public class ZBuffer {

    private short[] depthBuffer;
    private int width;
    private int height;

    /**
        Creates a new z-buffer with the specified width and height.
    */
    public ZBuffer(int width, int height) {
        depthBuffer = new short[width*height];
        this.width = width;
        this.height = height;
        clear();
    }

    /**
        Gets the width of this z-buffer.
    */
    public int getWidth() {
        return width;
    }

    /**
        Gets the height of this z-buffer.
    */
    public int getHeight() {
        return height;
    }

    /**
        Clears the z-buffer. All depth values are set to 0.
    */
    public void clear() {
        for (int i=0; i<depthBuffer.length; i++) {
            depthBuffer[i] = 0;
        }
    }

    /**
        Sets the depth of the pixel at the specified offset,
        overwriting its current depth.
    */
    public void setDepth(int offset, short depth) {
        depthBuffer[offset] = depth;
    }

    /**
        Checks the depth at the specified offset, and if the
        specified depth is closer (is greater than or equal to
        the current depth at the specified offset), then the
        depth is set and this method returns true. Otherwise,
        no action occurs and this method returns false.
    */
    public boolean checkDepth(int offset, short depth) {
        if (depth >= depthBuffer[offset]) {
            depthBuffer[offset] = depth;
            return true;
        }
        else {
            return false;
        }
    }
}
```

Note that the z-buffer is cleared to 0 rather than to a maximum value: because the stored depth is proportional to 1/z, a value of 0 represents a pixel that is infinitely far away.

## Calculating the Z-Depth

Next, you'll get the equations for calculating the depth for every point in the polygon. Actually, you already have the equation. As you might recall from the chapter "Texture Mapping and Lighting," the depth z of the point on the polygon that appears at view-window location W is

z = d(UxV•O)/(UxV•W)

Here, d is the distance from the camera to the view window, U and V are the texture x and y direction vectors, O is the texture origin (or any point on the polygon's plane), and W is the vector from the camera through the pixel's location on the view window. So, you're all set. (Aren't you glad you derived the equations for texture mapping in that chapter?) The inverse of this equation becomes this:

1/z = (UxV•W)/(d(UxV•O))

Most of this you can calculate for the entire polygon rather than for every pixel:

```java
float w = SCALE * MIN_DISTANCE * Short.MAX_VALUE /
    (viewWindow.getDistance() *
    c.getDotProduct(textureBounds.getOrigin()));
```

Remember that the vector C is set to UxV, so you don't have to recalculate UxV. The only per-pixel part of the equation is the dot product UxV•W, which changes linearly along a horizontal scan, so you compute it once at the start of each scan along with its per-pixel increment (c.x, the change in the dot product for each step in the screen x direction), multiplying each by the per-polygon value w:

```java
float z = c.getDotProduct(viewPos);
float dz = c.x;
...
int depth = (int)(w*z);
int dDepth = (int)(w*dz);
```

Then, for every pixel in the scan, check the depth, set the pixel if the depth check passes, and increment the depth:

```java
...
if (zBuffer.checkDepth(offset,
    (short)(depth >> SCALE_BITS)))
{
    doubleBufferData[offset] =
        texture.getColor(tx >> SCALE_BITS, ty >> SCALE_BITS);
}
depth+=dDepth;
...
```

That's it for drawing with the z-buffer. Finally, the z-buffer must be created, and cleared every frame, before any polygons are drawn; the startFrame() method takes care of this:

```java
public void startFrame(Graphics2D g) {
    super.startFrame(g);
    // initialize depth buffer
    if (zBuffer == null ||
        zBuffer.getWidth() != viewWindow.getWidth() ||
        zBuffer.getHeight() != viewWindow.getHeight())
    {
        zBuffer = new ZBuffer(
            viewWindow.getWidth(), viewWindow.getHeight());
    }
    else if (clearViewEveryFrame) {
        zBuffer.clear();
    }
}
```

That's it for z-buffering.
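To see the buffer's compare-and-set semantics in isolation, here is a condensed, package-free copy of the checkDepth() logic from Listing 9.1. Remember that larger stored values mean closer pixels, since the stored depth is proportional to 1/z.

```java
/**
 * A condensed, package-free copy of the z-buffer logic from
 * Listing 9.1. Stored depths are proportional to 1/z, so larger
 * values are closer, and the cleared value 0 means "infinitely
 * far away".
 */
public class MiniZBuffer {

    private short[] depthBuffer;

    public MiniZBuffer(int width, int height) {
        // A new short[] is zero-filled, which is the cleared state.
        depthBuffer = new short[width * height];
    }

    /** Records the pixel only if it is at least as close as what's there. */
    public boolean checkDepth(int offset, short depth) {
        if (depth >= depthBuffer[offset]) {
            depthBuffer[offset] = depth;
            return true;
        }
        return false;
    }
}
```

A far pixel draws over the cleared buffer, a closer pixel draws over it in turn, and a pixel behind both is rejected, regardless of the order the polygons arrive in.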
Now we move on to actually taking advantage of it by creating 3D objects that move around the world.
