ImageFilter
Image filters provide another way to modify images. An ImageFilter is used in conjunction with a FilteredImageSource object. The ImageFilter, which implements ImageConsumer (and Cloneable), receives data from an ImageProducer and modifies it; the FilteredImageSource, which implements ImageProducer, sends the modified data to the new consumer. As Figure 12.1 shows, an image filter sits between the original ImageProducer and the ultimate ImageConsumer.
The ImageFilter class implements a "null" filter that does nothing to the image. To modify an image, you must use a subclass of ImageFilter, either by writing one yourself or by using a subclass provided with AWT, like the CropImageFilter. Another ImageFilter subclass provided with AWT is the RGBImageFilter; it is useful for filtering an image on the basis of a pixel's color. Unlike the CropImageFilter, RGBImageFilter is an abstract class, so you need to create your own subclass to use it. Java 1.1 introduces two more image filters, AreaAveragingScaleFilter and ReplicateScaleFilter. Other filters must be created by subclassing ImageFilter and providing the methods needed to modify the image.
ImageFilters tend to work on a pixel-by-pixel basis, so large Image objects can take a considerable amount of time to filter, depending on the complexity of the filtering algorithm. In the simplest case, filters generate new pixels based upon the color value and location of the original pixel. Such filters can start delivering data before they have loaded the entire image. More complex filters may use internal buffers to store an intermediate copy of the image so the filter can use adjacent pixel values to smooth or blend pixels together. These filters may need to load the entire image before they can deliver any data to the ultimate consumer.
To use an ImageFilter, you pass it to the FilteredImageSource constructor, which serves as an ImageProducer to pass the new pixels to their consumer. The following code runs the image logo.jpg through an image filter, SomeImageFilter, to produce a new image. The constructor for SomeImageFilter is called within the constructor for FilteredImageSource, which in turn is the only argument to createImage().
Image image = getImage (new URL ("http://www.ora.com/images/logo.jpg"));
Image newOne = createImage (new FilteredImageSource (image.getSource(), new SomeImageFilter()));
ImageFilter Methods
Variables
- protected ImageConsumer consumer;
- The actual ImageConsumer for the image. It is initialized automatically for you by the getFilterInstance() method.
- public ImageFilter ()
- The only constructor for ImageFilter is the default one, which takes no arguments. Subclasses can provide their own constructors if they need additional information.
- public void setDimensions (int width, int height)
- The setDimensions() method of ImageFilter is called when the width and height of the original image are known. It calls consumer.setDimensions() to tell the next consumer the dimensions of the filtered image. If you subclass ImageFilter and your filter changes the image's dimensions, you should override this method to compute and report the new dimensions.
- public void setProperties (Hashtable properties)
- The setProperties() method is called to provide the image filter with the property list for the original image. The image filter adds the property filters to the list and passes it along to the next consumer. The value given for the filters property is the result of the image filter's toString() method; that is, the String representation of the current filter. If filters is already set, information about this ImageFilter is appended to the end. Subclasses of ImageFilter may add other properties.
- public void setColorModel (ColorModel model)
- The setColorModel() method is called to give the ImageFilter the color model used for most of the pixels in the original image. It passes this color model on to the next consumer. Subclasses may override this method if they change the color model.
- public void setHints (int hints)
- The setHints() method is called to give the ImageFilter hints about how the producer will deliver pixels. This method passes the same set of hints to the next consumer. Subclasses must override this method if they need to provide different hints; for example, if they are delivering pixels in a different order.
- public void setPixels (int x, int y, int width, int height, ColorModel model, byte pixels[], int offset, int scansize)
public void setPixels (int x, int y, int width, int height, ColorModel model, int pixels[], int offset, int scansize)
- The setPixels() method receives pixel data from the ImageProducer and passes all the information on to the ImageConsumer. (x, y) is the top left corner of the bounding rectangle for the pixels. The bounding rectangle has size width x height. The ColorModel for the new image is model. pixels is the byte or integer array of the pixel information, starting at offset (usually 0), with scan lines of size scansize (usually width).
- public void imageComplete (int status)
- The imageComplete() method receives the completion status from the ImageProducer and passes it along to the ImageConsumer.
If you subclass ImageFilter, you will probably override the setPixels() methods. For simple filters, you may be able to modify the pixel array and deliver the result to consumer.setPixels() immediately; a minimal sketch of such a filter follows this list. For more complex filters, you will have to build a buffer containing the entire image; in this case, the call to imageComplete() will probably trigger filtering and pixel delivery.
- public Object clone ()
- The clone() method creates a clone of the ImageFilter. The getFilterInstance() method uses this method to create a copy of the ImageFilter. Cloning allows the same filter instance to be used with multiple Image objects.
- public ImageFilter getFilterInstance (ImageConsumer ic)
- FilteredImageSource calls getFilterInstance() to register ic as the ImageConsumer for an instance of this filter; to do so, it sets the instance variable consumer. In effect, this method inserts the ImageFilter between the image's producer and the consumer. You have to override this method only if there are special requirements for the insertion process. This default implementation just calls clone().
- public void resendTopDownLeftRight (ImageProducer ip)
- The resendTopDownLeftRight() method tells the ImageProducer ip to try to resend the image data in top-down, left-to-right order. If you override this method and your ImageFilter has saved the image data internally, you may want your ImageFilter to resend the data itself, rather than asking the ImageProducer. Otherwise, your subclass may ignore the request or pass it along to the ImageProducer ip.
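Before moving on to the real examples, here is the simple streaming case mentioned above: a minimal sketch of a filter that converts each incoming pixel to the default RGB model, inverts its color, and hands the result straight to consumer.setPixels() without buffering anything. The InvertFilter class is our own illustration, not part of AWT.

import java.awt.image.*;

// A hypothetical streaming filter: each pixel is inverted and passed along
// immediately, so the filter never needs to buffer the whole image.
public class InvertFilter extends ImageFilter {
    private static ColorModel defaultCM = ColorModel.getRGBdefault();

    public void setColorModel (ColorModel model) {
        consumer.setColorModel (defaultCM);   // we always deliver default RGB pixels
    }

    public void setPixels (int x, int y, int width, int height,
                           ColorModel cm, byte pixels[], int offset, int scansize) {
        consumer.setPixels (x, y, width, height, defaultCM,
                            invert (cm, pixels, width, height, offset, scansize), 0, width);
    }

    public void setPixels (int x, int y, int width, int height,
                           ColorModel cm, int pixels[], int offset, int scansize) {
        consumer.setPixels (x, y, width, height, defaultCM,
                            invert (cm, pixels, width, height, offset, scansize), 0, width);
    }

    // Convert each delivered pixel to default RGB and invert its color components,
    // leaving the alpha component untouched.
    private int[] invert (ColorModel cm, Object pixels, int width, int height,
                          int offset, int scansize) {
        int newPixels[] = new int [width * height];
        boolean bytearray = (pixels instanceof byte[]);
        for (int yy = 0; yy < height; yy++)
            for (int xx = 0; xx < width; xx++) {
                int raw = bytearray ? (((byte[])pixels)[offset + yy*scansize + xx] & 0xff)
                                    : ((int[])pixels)[offset + yy*scansize + xx];
                int rgb = cm.getRGB (raw);
                newPixels[yy*width + xx] = (rgb & 0xff000000) | (~rgb & 0x00ffffff);
            }
        return newPixels;
    }
}

The filters developed in the rest of this chapter elaborate on exactly this pattern, adding buffering when a pixel's neighbors are needed.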
When you subclass ImageFilter, there are very few restrictions on what you can do. We will create a few subclasses that show some of the possibilities. The first, a blurring filter, generates a new pixel by averaging the pixels around it. The result is a blurred version of the original. To implement this filter, we have to save all the pixel data into a buffer; we can't start delivering pixels until the entire image is in hand. Therefore, we override setPixels() to build the buffer; we override imageComplete() to produce the new pixels and deliver them.
Before looking at the code, here are a few hints about how the filter works; it uses a few tricks that may be helpful in other situations. We need to provide two versions of setPixels(): one for integer arrays and the other for byte arrays. To avoid duplicating code, both versions call a single method, setThePixels(), which takes an Object as an argument instead of a pixel array; thus it can be called with either kind of pixel array. Within the method, we check whether the pixels argument is an instance of byte[] or int[]. The body of this method uses another trick: when it reads the byte[] version of the pixel array, it ANDs the value with 0xff. This prevents the byte value, which is signed, from being converted to a negative int when used as an argument to cm.getRGB().
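To see why the mask matters, here is the conversion in isolation (a standalone illustration, not part of the filter):

byte pixel = (byte) 0xC8;   // a pixel value of 200 stored in a byte array
int wrong = pixel;          // sign extension yields -56
int right = pixel & 0xff;   // masking recovers the intended value, 200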
The logic inside imageComplete() gets a bit hairy. This method does the actual filtering, after all the data has arrived. Its job is basically simple: compute an average value of the pixel and the eight pixels surrounding it (i.e., a 3x3 rectangle with the current pixel in the center). The problem lies in taking care of the edge conditions. We don't always want to average nine pixels; in fact, we may want to average as few as four. The if statements figure out which surrounding pixels should be included in the average. The pixels we care about are placed in sumArray[], which has nine elements. We keep track of the number of elements that have been saved in the variable sumIndex and use a helper method, avgPixels(), to compute the average. The code might be a little cleaner if we used a Vector, which automatically counts the number of elements it contains, but it would probably be much slower.
Example 12.7 shows the code for the blurring filter.
Example 12.7: Blur Filter Source
import java.awt.*;
import java.awt.image.*;

public class BlurFilter extends ImageFilter {
    private int savedWidth, savedHeight, savedPixels[];
    private static ColorModel defaultCM = ColorModel.getRGBdefault();

    public void setDimensions (int width, int height) {
        savedWidth = width;
        savedHeight = height;
        savedPixels = new int [width*height];
        consumer.setDimensions (width, height);
    }
We override setDimensions() to save the original image's height and width, which we use later.
    public void setColorModel (ColorModel model) {
        // Change color model to model you are generating
        consumer.setColorModel (defaultCM);
    }

    public void setHints (int hintflags) {
        // Set new hints, but preserve SINGLEFRAME setting
        consumer.setHints (TOPDOWNLEFTRIGHT | COMPLETESCANLINES | SINGLEPASS
                           | (hintflags & SINGLEFRAME));
    }
This filter always generates pixels in the same order, so it sends the hint flags TOPDOWNLEFTRIGHT, COMPLETESCANLINES, and SINGLEPASS to the consumer, regardless of what the image producer says. It sends the SINGLEFRAME hint only if the producer has sent it.
    private void setThePixels (int x, int y, int width, int height,
                               ColorModel cm, Object pixels, int offset, int scansize) {
        int sourceOffset = offset;
        int destinationOffset = y * savedWidth + x;
        boolean bytearray = (pixels instanceof byte[]);
        for (int yy=0; yy<height; yy++) {
            for (int xx=0; xx<width; xx++)
                if (bytearray)
                    savedPixels[destinationOffset++] =
                        cm.getRGB (((byte[])pixels)[sourceOffset++]&0xff);
                else
                    savedPixels[destinationOffset++] =
                        cm.getRGB (((int[])pixels)[sourceOffset++]);
            sourceOffset += (scansize - width);
            destinationOffset += (savedWidth - width);
        }
    }
setThePixels() saves the pixel data for the image in the array savedPixels[]. Both versions of setPixels() call this method. It doesn't pass the pixels along to the image consumer, since this filter can't process the pixels until the entire image is available.
    public void setPixels (int x, int y, int width, int height,
                           ColorModel cm, byte pixels[], int offset, int scansize) {
        setThePixels (x, y, width, height, cm, pixels, offset, scansize);
    }

    public void setPixels (int x, int y, int width, int height,
                           ColorModel cm, int pixels[], int offset, int scansize) {
        setThePixels (x, y, width, height, cm, pixels, offset, scansize);
    }

    public void imageComplete (int status) {
        if ((status == IMAGEABORTED) || (status == IMAGEERROR)) {
            consumer.imageComplete (status);
            return;
        } else {
            int pixels[] = new int [savedWidth];
            int position, sumArray[], sumIndex;
            sumArray = new int [9];  // maxsize - vs. Vector for performance
            for (int yy=0; yy<savedHeight; yy++) {
                position = 0;
                int start = yy * savedWidth;
                for (int xx=0; xx<savedWidth; xx++) {
                    sumIndex = 0;
                    sumArray[sumIndex++] = savedPixels[start+xx];              // center center
                    if (yy != (savedHeight-1))                                 // center bottom
                        sumArray[sumIndex++] = savedPixels[start+xx+savedWidth];
                    if (yy != 0)                                               // center top
                        sumArray[sumIndex++] = savedPixels[start+xx-savedWidth];
                    if (xx != (savedWidth-1))                                  // right center
                        sumArray[sumIndex++] = savedPixels[start+xx+1];
                    if (xx != 0)                                               // left center
                        sumArray[sumIndex++] = savedPixels[start+xx-1];
                    if ((yy != 0) && (xx != 0))                                // left top
                        sumArray[sumIndex++] = savedPixels[start+xx-savedWidth-1];
                    if ((yy != (savedHeight-1)) && (xx != (savedWidth-1)))     // right bottom
                        sumArray[sumIndex++] = savedPixels[start+xx+savedWidth+1];
                    if ((yy != 0) && (xx != (savedWidth-1)))                   // right top
                        sumArray[sumIndex++] = savedPixels[start+xx-savedWidth+1];
                    if ((yy != (savedHeight-1)) && (xx != 0))                  // left bottom
                        sumArray[sumIndex++] = savedPixels[start+xx+savedWidth-1];
                    pixels[position++] = avgPixels (sumArray, sumIndex);
                }
                consumer.setPixels (0, yy, savedWidth, 1, defaultCM, pixels, 0, savedWidth);
            }
            consumer.imageComplete (status);
        }
    }
imageComplete() does the actual filtering after the pixels have been delivered and saved. If the producer reports that an error occurred, this method passes the error flags to the consumer and returns. If not, it builds a new array, pixels[], which contains the filtered pixels, and delivers these to the consumer.
Previously, we gave an overview of how the filtering process works. Here are some details. (xx, yy) represents the current point's x and y coordinates. The point (xx, yy) must always fall within the image; otherwise, our loops are constructed incorrectly. Therefore, we can copy (xx, yy) into sumArray[] for averaging without any tests. For each of the point's eight neighbors, we check whether the neighbor falls in the image; if so, we add it to sumArray[]. For example, the point just below (xx, yy) is at the bottom center of the 3x3 rectangle of points we are averaging. We know that xx falls within the image; the row below falls within the image as long as yy doesn't equal savedHeight-1. We do similar tests for the other points.
Even though we're working with a rectangular image, our arrays are all one-dimensional, so we have to convert a coordinate pair (xx, yy) into a single array index. To help us do the bookkeeping, we use the local variable start to keep track of the start of the current scan line. Then start + xx is the current point; start + xx + savedWidth is the point immediately below; start + xx + savedWidth - 1 is the point below and to the left; and so on.
avgPixels() is our helper method for computing the average value that we assign to the new pixel. For each pixel in the pixels[] array, it extracts the red, green, blue, and alpha components, averages them separately, and returns a new ARGB value.
    private int avgPixels (int pixels[], int size) {
        float redSum=0, greenSum=0, blueSum=0, alphaSum=0;
        for (int i=0; i<size; i++)
            try {
                int pixel = pixels[i];
                redSum   += defaultCM.getRed (pixel);
                greenSum += defaultCM.getGreen (pixel);
                blueSum  += defaultCM.getBlue (pixel);
                alphaSum += defaultCM.getAlpha (pixel);
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println ("Ooops");
            }
        int redAvg   = (int)(redSum / size);
        int greenAvg = (int)(greenSum / size);
        int blueAvg  = (int)(blueSum / size);
        int alphaAvg = (int)(alphaSum / size);  // computed but unused; the pixel returned below is fully opaque
        return ((0xff << 24) | (redAvg << 16) | (greenAvg << 8) | (blueAvg << 0));
    }
}

Producing many images from one: dynamic ImageFilter
The ImageFilter framework is flexible enough to allow you to return a sequence of images based on an original. You can send back one frame at a time, calling the following when you are finished with each frame:
consumer.imageComplete(ImageConsumer.SINGLEFRAMEDONE);
After you have generated all the frames, you can tell the consumer that the sequence is finished with the STATICIMAGEDONE constant. In fact, this is exactly what the new animation capabilities of MemoryImageSource use.
In Example 12.8, the DynamicFilter lets the consumer display an image. After the image has been displayed, the filter gradually overwrites the image with a specified color by sending additional image frames. The end result is a solid colored rectangle. Not too exciting, but it's easy to imagine interesting extensions: you could use this technique to implement a fade from one image into another. The key points to understand are:
- This filter does not override setPixels(), so it is extremely fast. In this case, we want the original image to reach the consumer, and there is no reason to save the image in a buffer.
- Filtering takes place in the image-fetching thread, so it is safe to put the filter-processing thread to sleep if the image is coming from disk. If the image is in memory, filtering should not sleep, because there will be a noticeable performance lag in your program if it does. The DynamicFilter class has a delay parameter in its constructor that lets you control this behavior.
- This subclass overrides setDimensions() to save the image's dimensions for its own use. It needs to override setHints() because it sends pixels to the consumer in a nonstandard order: it sends the original image, then goes back and starts sending overlays. Likewise, this subclass overrides resendTopDownLeftRight() to do nothing, because there is no way the original ImageProducer can replace all the changes with the original Image.
- imageComplete() is where all the fun happens. Take a special look at the status flags that are returned.
Example 12.8: DynamicFilter Source
import java.awt.*;
import java.awt.image.*;

public class DynamicFilter extends ImageFilter {
    Color overlapColor;
    int delay;
    int imageWidth;
    int imageHeight;
    int iterations;

    DynamicFilter (int delay, int iterations, Color color) {
        this.delay = delay;
        this.iterations = iterations;
        overlapColor = color;
    }

    public void setDimensions (int width, int height) {
        imageWidth = width;
        imageHeight = height;
        consumer.setDimensions (width, height);
    }

    public void setHints (int hints) {
        consumer.setHints (ImageConsumer.RANDOMPIXELORDER);
    }

    public void resendTopDownLeftRight (ImageProducer ip) {
    }

    public void imageComplete (int status) {
        if ((status == IMAGEERROR) || (status == IMAGEABORTED)) {
            consumer.imageComplete (status);
            return;
        } else {
            int xWidth = imageWidth / iterations;
            if (xWidth <= 0)
                xWidth = 1;
            int newPixels[] = new int [xWidth*imageHeight];
            int iColor = overlapColor.getRGB();
            for (int x=0; x<(xWidth*imageHeight); x++)
                newPixels[x] = iColor;
            int t=0;
            for (; t<(imageWidth-xWidth); t+=xWidth) {
                consumer.setPixels (t, 0, xWidth, imageHeight,
                    ColorModel.getRGBdefault(), newPixels, 0, xWidth);
                consumer.imageComplete (ImageConsumer.SINGLEFRAMEDONE);
                try {
                    Thread.sleep (delay);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            int left = imageWidth-t;
            if (left > 0) {
                consumer.setPixels (imageWidth-left, 0, left, imageHeight,
                    ColorModel.getRGBdefault(), newPixels, 0, xWidth);
                consumer.imageComplete (ImageConsumer.SINGLEFRAMEDONE);
            }
            consumer.imageComplete (STATICIMAGEDONE);
        }
    }
}
The DynamicFilter relies on the default setPixels() method to send the original image to the consumer. When the original image has been transferred, the image producer calls this filter's imageComplete() method, which does the real work. Instead of relaying the completion status to the consumer, imageComplete() starts generating its own data: solid rectangles that are all in the overlapColor specified in the constructor. It sends these rectangles to the consumer by calling consumer.setPixels(). After each rectangle, it calls consumer.imageComplete() with the SINGLEFRAMEDONE flag, meaning that it has just finished one frame of a multi-frame sequence. When the rectangles have completely covered the image, the method imageComplete() finally notifies the consumer that the entire image sequence has been transferred by sending the STATICIMAGEDONE flag.
The following code is a simple applet that uses this image filter to produce a new image:
import java.applet.*;
import java.awt.*;
import java.awt.image.*;

public class DynamicImages extends Applet {
    Image i, j;

    public void init () {
        i = getImage (getDocumentBase(), "rosey.jpg");
        j = createImage (new FilteredImageSource (i.getSource(),
                new DynamicFilter (250, 10, Color.red)));
    }

    public void paint (Graphics g) {
        g.drawImage (j, 10, 10, this);
    }
}
One final curiosity: the DynamicFilter doesn't make any assumptions about the color model used for the original image. It sends its overlays with the default RGB color model. Therefore, this is one case in which an ImageConsumer may see calls to setPixels() that use different color models.
RGBImageFilter
RGBImageFilter is an abstract subclass of ImageFilter that provides a shortcut for building the most common kind of image filter: filters that independently modify the pixels of an existing image, based only on each pixel's position and color. Because RGBImageFilter is an abstract class, you must subclass it before you can do anything. The only method your subclass must provide is filterRGB(), which produces a new pixel value based on the original pixel and its location. A handful of additional methods are in this class; most of them provide the behind-the-scenes framework for funneling each pixel through the filterRGB() method.
If the filtering algorithm you are using does not rely on pixel position (i.e., the new pixel is based only on the old pixel's color), AWT can apply an optimization for images that use an IndexColorModel: rather than filtering individual pixels, it can filter the image's color map. In order to tell AWT that this optimization is okay, add a constructor to the class definition that sets the canFilterIndexColorModel variable to true. If canFilterIndexColorModel is false (the default) and an IndexColorModel image is sent through the filter, nothing happens to the image.
Variables
- protected boolean canFilterIndexColorModel
- Setting the canFilterIndexColorModel variable permits the ImageFilter to filter IndexColorModel images. The default value is false. When this variable is false, IndexColorModel images are not filtered. When this variable is true, the ImageFilter filters the colormap instead of the individual pixel values.
- protected ColorModel newmodel
- The newmodel variable is used to store the new ColorModel when canFilterIndexColorModel is true and the ColorModel actually is of type IndexColorModel. Normally, you do not need to access this variable, even in subclasses.
- protected ColorModel origmodel
- The origmodel variable stores the original color model when filtering an IndexColorModel. Normally, you do not need to access this variable, even in subclasses.
- public RGBImageFilter () -- called by subclass
- The only constructor for RGBImageFilter is the implied constructor with no parameters. In most subclasses of RGBImageFilter, the constructor has to initialize only the canFilterIndexColorModel variable.
- public void setColorModel (ColorModel model)
- The setColorModel() method changes the ColorModel of the filter to model. If canFilterIndexColorModel is true and model is of type IndexColorModel, a filtered version of model is used instead.
- public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int off, int scansize)
public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int off, int scansize)
- If necessary, the setPixels() method converts the pixels buffer to the default RGB ColorModel and then filters them with filterRGBPixels(). If model has already been converted, this method just passes the pixels along to the consumer's setPixels().
The only method you care about here is filterRGB(). All subclasses of RGBImageFilter must override this method. It is very difficult to imagine situations in which you would override (or even call) the other methods in this group. They are helper methods that funnel pixels through filterRGB().
- public void substituteColorModel (ColorModel oldModel, ColorModel newModel)
- substituteColorModel() is a helper method for setColorModel(). It initializes the protected variables of RGBImageFilter. The origmodel variable is set to oldModel and the newmodel variable is set to newModel.
- public IndexColorModel filterIndexColorModel (IndexColorModel icm)
- filterIndexColorModel() is another helper method for setColorModel(). It runs the entire color table of icm through filterRGB() and returns the filtered ColorModel for use by setColorModel().
- public void filterRGBPixels (int x, int y, int width, int height, int pixels[], int off, int scansize)
- filterRGBPixels() is a helper method for setPixels(). It filters each element of the pixels buffer through filterRGB(), converting pixels to the default RGB ColorModel first. This method changes the values in the pixels array.
- public abstract int filterRGB (int x, int y, int rgb)
- filterRGB() is the one method that RGBImageFilter subclasses must implement. The method takes the rgb pixel value at position (x, y) and returns the converted pixel value in the default RGB ColorModel. Coordinates of (-1, -1) signify that a color table entry is being filtered instead of a pixel.
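For instance, a position-independent filterRGB() might simply halve each color component to darken the image. The DarkenFilter class below is our own sketch, not an AWT class:

import java.awt.image.*;

// Hypothetical example: darken an image by halving each color component.
// Because the result depends only on the pixel's color, the constructor can
// safely enable the color-map optimization.
public class DarkenFilter extends RGBImageFilter {
    public DarkenFilter () {
        canFilterIndexColorModel = true;
    }
    public int filterRGB (int x, int y, int rgb) {
        int a = (rgb >> 24) & 0xff;
        int r = ((rgb >> 16) & 0xff) / 2;
        int g = ((rgb >>  8) & 0xff) / 2;
        int b = ( rgb        & 0xff) / 2;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }
}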
Creating your own RGBImageFilter is fairly easy. One of the more common applications for an RGBImageFilter is to make images transparent by setting the alpha component of each pixel. To do so, we extend the abstract RGBImageFilter class. The filter in Example 12.9 makes the entire image translucent, based on a percentage passed to the class constructor. Filtering is independent of position, so the constructor can set the canFilterIndexColorModel variable. A constructor with no arguments uses a default alpha value of 0.75.
Example 12.9: TransparentImageFilter Source
import java.awt.image.*;

class TransparentImageFilter extends RGBImageFilter {
    float alphaPercent;

    public TransparentImageFilter () {
        this (0.75f);
    }

    public TransparentImageFilter (float aPercent) throws IllegalArgumentException {
        if ((aPercent < 0.0) || (aPercent > 1.0))
            throw new IllegalArgumentException();
        alphaPercent = aPercent;
        canFilterIndexColorModel = true;
    }

    public int filterRGB (int x, int y, int rgb) {
        int a = (rgb >> 24) & 0xff;
        a *= alphaPercent;
        return ((rgb & 0x00ffffff) | (a << 24));
    }
}
CropImageFilter
The CropImageFilter is an ImageFilter that crops an image to a rectangular region. When used with FilteredImageSource, it produces a new image that consists of a portion of the original image. The cropped region must be completely within the original image. It is never necessary to subclass this class. Also, using the 10- or 11-argument version of Graphics.drawImage() introduced in Java 1.1 eliminates the need to use this filter, unless you need to save the resulting cropped image.
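For comparison, here is a sketch of that drawImage() alternative: the 10-argument form draws a sub-rectangle of the source directly, so no cropped Image object is ever created. The Graphics object g, the Image img, and the coordinates are illustrative placeholders.

// Draw the 100x100 region of img whose top left corner is (50, 50)
// into a 100x100 rectangle at (10, 10); no intermediate Image is created.
g.drawImage (img, 10, 10, 110, 110,   // destination corners
             50, 50, 150, 150,        // source corners within img
             this);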
If you crop an image and then send the result through a second ImageFilter, the pixel array received by the filter will be the size of the original Image, with the offset and scansize set accordingly. The width and height are set to the cropped values; the result is a smaller Image with the same amount of data. CropImageFilter keeps the full pixel array around, partially empty.
Constructors
- public CropImageFilter (int x, int y, int width, int height)
- The constructor for CropImageFilter specifies the rectangular area of the old image that makes up the new image. The (x, y) coordinates specify the top left corner for the cropped image; width and height must be positive or the resulting image will be empty. If the (x, y) coordinates are outside the original image area, the resulting image is empty. If (x, y) starts within the image but the rectangular area of size width x height goes beyond the original image, the part that extends outside will be black. (Remember that the color black has pixel values of 0 for red, green, and blue.)
- public void setProperties (Hashtable properties)
- The setProperties() method adds the croprect image property to the properties list. The bounding Rectangle, specified by the (x, y) coordinates and width x height size, is associated with this property. After updating properties, this method sets the properties list of the consumer.
- public void setDimensions (int width, int height)
- The setDimensions() method of CropImageFilter ignores the width and height parameters passed to it. Instead, it relies on the size parameters given to the constructor.
- public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int offset, int scansize)
public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int offset, int scansize)
- These setPixels() methods check to see what portion of the pixels array falls within the cropped area and pass those pixels along.
Example 12.10 uses a CropImageFilter to extract the center third of a larger image. No subclassing is needed; the CropImageFilter is complete in itself. The output is displayed in Figure 12.7.
Example 12.10: Crop Applet Source
import java.applet.*;
import java.awt.*;
import java.awt.image.*;

public class Crop extends Applet {
    Image i, j;

    public void init () {
        MediaTracker mt = new MediaTracker (this);
        i = getImage (getDocumentBase(), "rosey.jpg");
        mt.addImage (i, 0);
        try {
            mt.waitForAll();
            int width = i.getWidth(this);
            int height = i.getHeight(this);
            j = createImage (new FilteredImageSource (i.getSource(),
                    new CropImageFilter (width/3, height/3, width/3, height/3)));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public void paint (Graphics g) {
        g.drawImage (i, 10, 10, this);      // regular
        if (j != null) {
            g.drawImage (j, 10, 90, this);  // cropped
        }
    }
}
Figure 12.7: Image cropping example output.
TIP:
You can use CropImageFilter to help improve your animation performance or just the general download time of images. Without CropImageFilter, you can use Graphics.clipRect() to clip each image of an image strip when drawing. Instead of clipping each Image (each time), you can use CropImageFilter to create a new Image for each cell of the strip. Or, for times when an image strip is inappropriate, you can put all your images within one image file (in any order whatsoever) and use CropImageFilter to get each out as an Image.
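As a sketch of the image-strip case, assuming a horizontal strip of equal-width cells running inside a component (strip, frames, frameWidth, and frameHeight are placeholders you would define yourself):

// Split a horizontal image strip into one Image per cell.
Image cells[] = new Image [frames];
for (int n = 0; n < frames; n++)
    cells[n] = createImage (new FilteredImageSource (strip.getSource(),
            new CropImageFilter (n * frameWidth, 0, frameWidth, frameHeight)));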
ReplicateScaleFilter
Back in Simple Graphics, we introduced you to the getScaledInstance() method. This method uses a new image filter that is provided with Java 1.1. The ReplicateScaleFilter and its subclass, AreaAveragingScaleFilter, allow you to scale images before calling drawImage(). This can greatly speed your programs because you don't have to wait for the call to drawImage() before performing scaling.
The ReplicateScaleFilter is an ImageFilter that scales by duplicating or removing rows and columns. When used with FilteredImageSource, it produces a new image that is a scaled version of the original. As you can guess, ReplicateScaleFilter is very fast, but the results aren't particularly pleasing aesthetically. It is great if you want to magnify a checkerboard but not that useful if you want to scale an image of your Aunt Polly. Its subclass, AreaAveragingScaleFilter, implements a more time-consuming algorithm that is more suitable when image quality is a concern.
Constructor
- public ReplicateScaleFilter (int width, int height)
- The constructor for ReplicateScaleFilter specifies the size of the resulting image. If either parameter is -1, the resulting image maintains the same aspect ratio as the original image.
- public void setProperties (Hashtable properties)
- The setProperties() method adds the rescale image property to the properties list. The value of the rescale property is a quoted string showing the image's new width and height, in the form <width>x<height>, where the width and height are taken from the constructor. After updating properties, this method sets the properties list of the consumer.
- public void setDimensions (int width, int height)
- The setDimensions() method of ReplicateScaleFilter passes the new width and height from the constructor along to the consumer. If either of the constructor's parameters is negative, the size is recalculated proportionally. If both are negative, the size becomes width x height.
- public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int offset, int scansize)
public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int offset, int scansize)
- The setPixels() method of ReplicateScaleFilter checks to see which rows and columns of pixels to pass along.
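As a usage sketch, the following produces a half-size copy of an existing image while preserving its aspect ratio; image is assumed to be a loaded Image, and the code runs inside a component so createImage() is available.

// Scale to half the original width; -1 keeps the original aspect ratio.
Image small = createImage (new FilteredImageSource (image.getSource(),
        new ReplicateScaleFilter (image.getWidth (this) / 2, -1)));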
AreaAveragingScaleFilter
The AreaAveragingScaleFilter subclasses ReplicateScaleFilter to provide a better scaling algorithm. Instead of just dropping or adding rows and columns, AreaAveragingScaleFilter tries to blend pixel values when creating new rows and columns. The filter works by replicating rows and columns to generate an image that is a multiple of the original size. Then the image is resized back down by an algorithm that blends the pixels around each destination pixel.
AreaAveragingScaleFilter methods
Because this filter subclasses ReplicateScaleFilter, the only methods it includes are those that override methods of ReplicateScaleFilter.
Constructors
- public AreaAveragingScaleFilter (int width, int height)
- The constructor for AreaAveragingScaleFilter specifies the size of the resulting image. If either parameter is -1, the resulting image maintains the same aspect ratio as the original image.
- public void setHints (int hints)
- The setHints() method of AreaAveragingScaleFilter checks to see if some optimizations can be performed based upon the value of the hints parameter. If they can't, the image filter has to cache the pixel data until it receives the entire image.
- public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int offset, int scansize)
public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int offset, int scansize)
- The setPixels() method of AreaAveragingScaleFilter accumulates the pixels or passes them along based upon the available hints. If setPixels() accumulates the pixels, this filter passes them along to the consumer when appropriate.
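In practice you rarely construct these filters yourself; getScaledInstance() chooses between them based on the hints constant you pass, roughly as in this sketch (image is again assumed to be a loaded Image):

// SCALE_REPLICATE requests the fast ReplicateScaleFilter behavior;
// SCALE_AREA_AVERAGING (or SCALE_SMOOTH) requests the slower, smoother filter.
Image quick  = image.getScaledInstance (100, 100, Image.SCALE_REPLICATE);
Image smooth = image.getScaledInstance (100, 100, Image.SCALE_AREA_AVERAGING);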
Cascading Filters
It is often a good idea to perform complex filtering operations by using several filters in a chain. This technique requires the system to perform several passes through the image array, so it may be slower than using a single complex filter; however, cascading filters yield code that is easier to understand and quicker to write--particularly if you already have a collection of image filters from other projects.
For example, assume you want to make a color image transparent and then render the image in black and white. The easy way to do this task is to apply a filter that converts color to a gray value and then apply the TransparentImageFilter we developed in Example 12.9. Using this strategy, we have to develop only one very simple filter. Example 12.11 shows the source for the GrayImageFilter; Example 12.12 shows the applet that applies the two filters in a daisy chain.
Example 12.11: GrayImageFilter Source
import java.awt.image.*;

public class GrayImageFilter extends RGBImageFilter {

    public GrayImageFilter () {
        canFilterIndexColorModel = true;
    }

    public int filterRGB (int x, int y, int rgb) {
        int gray = (((rgb & 0xff0000) >> 16) +
                    ((rgb & 0x00ff00) >> 8) +
                     (rgb & 0x0000ff)) / 3;
        return (0xff000000 | (gray << 16) | (gray << 8) | gray);
    }
}
Example 12.12: DrawingImages Source
import java.applet.*;
import java.awt.*;
import java.awt.image.*;

public class DrawingImages extends Applet {
    Image i, j, k, l;

    public void init () {
        i = getImage (getDocumentBase(), "rosey.jpg");
        GrayImageFilter gif = new GrayImageFilter ();
        j = createImage (new FilteredImageSource (i.getSource(), gif));
        TransparentImageFilter tf = new TransparentImageFilter (.5f);
        k = createImage (new FilteredImageSource (j.getSource(), tf));
        l = createImage (new FilteredImageSource (i.getSource(), tf));
    }

    public void paint (Graphics g) {
        g.drawImage (i, 10, 10, this);              // regular
        g.drawImage (j, 270, 10, this);             // gray
        g.drawImage (k, 10, 110, Color.red, this);  // gray - transparent
        g.drawImage (l, 270, 110, Color.red, this); // transparent
    }
}
Granted, neither the GrayImageFilter nor the TransparentImageFilter is very complex, but consider the savings you would get if you wanted to blur an image, crop it, and then render the result in grayscale. Writing a filter that does all three is not a task for the faint of heart; remember, you can't subclass RGBImageFilter or CropImageFilter because the result does not depend purely on each pixel's color and position. However, you can solve the problem easily by cascading the filters developed in this chapter.
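A sketch of that chain, run inside an applet or other component where createImage() is available and i is a loaded Image (the crop rectangle is a placeholder):

// Blur, then crop, then convert to grayscale by chaining three filters.
Image blurred = createImage (new FilteredImageSource (i.getSource(), new BlurFilter ()));
Image cropped = createImage (new FilteredImageSource (blurred.getSource(),
        new CropImageFilter (20, 20, 100, 100)));
Image gray = createImage (new FilteredImageSource (cropped.getSource(), new GrayImageFilter ()));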