NIO Package

The java.nio package was introduced in Java 1.4. The name NIO stands for "new I/O," which may seem to imply that it is intended as a replacement for the java.io package. In fact, much of the NIO functionality overlaps with existing APIs. NIO was added primarily to address specific scalability issues for large systems, especially in networked applications. That said, NIO also provides several new features that Java lacked in basic I/O, so you'll want to look at some of the tools here even if you aren't planning to write any large or high-performance services. The following sections outline the primary features of NIO.

Asynchronous I/O

Most of the need for the NIO package was driven by the desire to add nonblocking and selectable I/O to Java. Prior to NIO, most read and write operations in Java were bound to threads and were forced to block for unpredictable amounts of time. Although certain APIs such as sockets (which we'll see later) provided specific means to limit how long an I/O call could take, this was a workaround to compensate for the lack of a more general mechanism. In many languages, even before the introduction of threads, I/O could be done efficiently by setting I/O streams to a nonblocking mode and testing them for their readiness to send or receive data. In a nonblocking mode, a read or write does only as much work as can be done immediately, filling or emptying a buffer and then returning. Combined with the ability to test for readiness, this allows a single thread to service many channels continuously and efficiently. The main thread "selects" a stream that is ready and works with it until it blocks, then moves on to another. On a single-processor system, this is fundamentally equivalent to using multiple threads. Even now, this style of processing has scalability advantages when using a pool of threads (rather than just one). We'll discuss this in detail in a later chapter when we talk about networking and building servers that can handle many clients simultaneously.

In addition to nonblocking and selectable I/O, the NIO package enables closing and interrupting I/O operations asynchronously. As discussed earlier, prior to NIO there was no reliable way to stop or wake up a thread blocked in an I/O operation. With NIO, threads blocked in I/O operations always wake up when interrupted or when the channel is closed by anyone. Additionally, if you interrupt a thread while it is blocked in an NIO operation, its channel is automatically closed. (Closing the channel because the thread is interrupted might seem too strong, but usually it's the right thing to do.)

Performance

Channel I/O is designed around the concept of buffers, which are a sophisticated form of array, tailored to working with communications. The NIO package supports the concept of direct buffers: buffers that maintain their memory outside the Java VM, in the host operating system. Since all real I/O operations ultimately have to work with the host OS, maintaining the buffer space there makes some operations much more efficient. Data can be transferred without first copying it into Java and back out.

Mapped and Locked Files

NIO provides two general-purpose file-related features not found in java.io: memory-mapped files and file locking. We'll discuss memory-mapped files later, but suffice it to say that they allow you to work with file data as if it were all magically resident in memory. File locking supports the concept of shared and exclusive locks on regions of files, which is useful for concurrent access by multiple applications.

Channels

While java.io deals with streams, java.nio works with channels. A channel is an endpoint for communication. Although in practice channels are similar to streams, the underlying notion of a channel is more abstract and primitive. Whereas streams in java.io are defined in terms of input or output with methods to read and write bytes, the basic channel interface says nothing about how communications happen. It simply defines whether the channel is open or closed via the methods isOpen( ) and close( ). Implementations of channels for files, network sockets, or arbitrary devices then add their own methods for operations, such as reading, writing, or transferring data. The following channels are provided by NIO:

  • FileChannel
  • Pipe.SinkChannel, Pipe.SourceChannel
  • SocketChannel, ServerSocketChannel, DatagramChannel

We'll cover FileChannel in this chapter. The Pipe channels are simply the channel equivalents of the java.io pipe facilities. We'll talk about socket and datagram channels in a later chapter. All these basic channels implement the ByteChannel interface, designed for channels that have read and write methods like I/O streams. ByteChannels read and write ByteBuffers, however, not plain byte arrays. In addition to these channel types, you can bridge channels with java.io streams, readers, and writers for interoperability. However, if you mix these features, you may not get the full benefits and performance offered by the NIO package.
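The bridging mentioned above is done with the java.nio.channels.Channels utility class. Here is a minimal sketch wrapping a plain InputStream as a channel and reading it into a ByteBuffer (the sample data and buffer size are just for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class BridgeDemo {
    // Wrap a plain InputStream as a channel and read it into a ByteBuffer.
    static int readThroughChannel(byte[] data) throws Exception {
        InputStream in = new ByteArrayInputStream(data);
        ReadableByteChannel ch = Channels.newChannel(in);
        ByteBuffer buf = ByteBuffer.allocate(64);
        int n = ch.read(buf);   // fills the buffer from the stream
        ch.close();
        return n;               // number of bytes transferred
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readThroughChannel("hello".getBytes("US-ASCII")));
    }
}
```

Channels also provides the reverse bridges (newInputStream( ), newReader( ), and so on) for treating a channel as a stream or reader.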

Buffers

Most of the utilities of the java.io and java.net packages operate on byte arrays. The corresponding tools of the NIO package are built around ByteBuffers (with another type of buffer, CharBuffer, serving the text world). Byte arrays are simple, so why are buffers necessary? They serve several purposes:

  • They formalize usage patterns for buffered data, provide for things like read-only buffers, and keep track of read/write positions and limits within the buffer space. They also provide a mark/reset facility like that of BufferedInputStream.
  • They provide additional APIs for working with raw data representing primitive types. You can create buffers that "view" your byte data as a series of larger primitives, such as shorts, ints, or floats. The most general type of data buffer, ByteBuffer, includes methods that let you read and write all primitive types just like DataOutputStream does for streams.
  • They abstract the underlying storage of the data, allowing for special optimizations by Java. Specifically, buffers may be allocated as direct buffers that use native buffers of the host operating system instead of arrays in Java's memory. The NIO Channel facilities that work with buffers can recognize direct buffers automatically and try to optimize I/O to use them. For example, a read from a file channel into a Java byte array normally requires Java to copy the data for the read from the host operating system into Java's memory. With a direct buffer, the data can remain in the host operating system, outside Java's normal memory space.

Buffer operations

All buffers are subtypes of the java.nio.Buffer class. The base Buffer class is something like an array with state. It does not specify what type of elements it holds (that is for subtypes to decide), but it does define functionality common to all data buffers. A Buffer has a fixed size called its capacity. Although all the standard buffers provide "random access" to their contents, a Buffer generally expects to be read and written sequentially, so Buffers maintain the notion of a position where the next element is read or written. In addition to position, a Buffer can maintain two other pieces of state information: a limit, which is a position that is a "soft" limit to the extent of a read or write, and a mark, which can be used to remember an earlier position for future recall.

Implementations of Buffer add specific, typed get and put methods that read and write the buffer contents. For example, ByteBuffer is a buffer of bytes, and it has get( ) and put( ) methods that read and write bytes and arrays of bytes (along with many other useful methods we'll discuss later). Getting from and putting to the Buffer changes the position marker, so the Buffer keeps track of its contents somewhat like a stream. Attempting to read or write past the limit marker generates a BufferUnderflowException or BufferOverflowException, respectively. The mark, position, limit, and capacity values always obey the formula:

 mark <= position <= limit <= capacity

The position for reading and writing the Buffer is always between the mark, which serves as a lower bound, and the limit, which serves as an upper bound. The capacity represents the physical extent of the buffer space. You can set the position and limit markers explicitly with the position( ) and limit( ) methods. Several convenience methods are provided for the common usage patterns. The reset( ) method sets the position back to the mark. If no mark has been set, an InvalidMarkException is thrown.
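Here is a small sketch of mark( ) and reset( ) in action:

```java
import java.nio.ByteBuffer;

public class MarkDemo {
    // Walk a buffer sequentially, mark a spot, then reset( ) back to it.
    static int markAndReset() {
        ByteBuffer buf = ByteBuffer.allocate(8);      // capacity 8
        buf.put((byte) 1).put((byte) 2).put((byte) 3);
        buf.flip();                                   // limit = 3, position = 0
        buf.get();                                    // position = 1
        buf.mark();                                   // mark = 1
        buf.get();                                    // position = 2
        buf.reset();                                  // position back to the mark
        return buf.position();                        // 1 again
    }

    public static void main(String[] args) {
        System.out.println(markAndReset());
    }
}
```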
The clear( ) method resets the position to 0 and makes the limit the capacity, readying the buffer for new data (the mark is discarded). The clear( ) method does not actually do anything to the data in the buffer; it simply changes the position markers. The flip( ) method is used for the common pattern of writing data into the buffer and then reading it back out. flip makes the current position the limit and then resets the current position to 0 (any mark is thrown away), which saves having to keep track of how much data was read. Another method, rewind( ), simply resets the position to 0, leaving the limit alone. You might use it to write the same size data again. Here is a snippet of code that uses these methods to read data from a channel and write it to two channels:

 ByteBuffer buff = ...
 while ( inChannel.read( buff ) > 0 ) { // position = ?
     buff.flip( );      // limit = position; position = 0
     outChannel.write( buff );
     buff.rewind( );    // position = 0
     outChannel2.write( buff );
     buff.clear( );     // position = 0; limit = capacity
 }


This might be confusing the first time you look at it because here a read from the channel is actually a write to the buffer and vice versa. Because this example writes all the available data up to the limit, flip( ) and rewind( ) have the same effect in this case.

Buffer types

As stated earlier, various buffer types add get and put methods for reading and writing specific data types. Each of the Java primitive types has an associated buffer type: ByteBuffer, CharBuffer, ShortBuffer, IntBuffer, LongBuffer, FloatBuffer, and DoubleBuffer. Each provides get and put methods for reading and writing its type and arrays of its type. Of these, ByteBuffer is the most flexible. Because it has the "finest grain" of all the buffers, it has been given a full complement of get and put methods for reading and writing all the other data types as well as byte. Here are some ByteBuffer methods:

 byte get( )
 char getChar( )
 short getShort( )
 int getInt( )
 long getLong( )
 float getFloat( )
 double getDouble( )
 void put(byte b)
 void put(ByteBuffer src)
 void put(byte[] src, int offset, int length)
 void put(byte[] src)
 void putChar(char value)
 void putShort(short value)
 void putInt(int value)
 void putLong(long value)
 void putFloat(float value)
 void putDouble(double value)


As we said, all the standard buffers also support random access. For each of the aforementioned methods of ByteBuffer, an additional form takes an index:

 getLong( int index )
 putLong( int index, long value )


But that's not all. ByteBuffer can also provide "views" of itself as any of the coarse-grained types. For example, you can fetch a ShortBuffer view of a ByteBuffer with the asShortBuffer( ) method. The ShortBuffer view is backed by the ByteBuffer, which means that they work on the same data, and changes to either one affect the other. The view buffer's extent starts at the ByteBuffer's current position, and its capacity is a function of the remaining number of bytes, divided by the new type's size. (For example, shorts consume two bytes each, floats four, and longs and doubles take eight.) View buffers are convenient for reading and writing large blocks of a contiguous type within a ByteBuffer. CharBuffers are interesting as well, primarily because of their integration with Strings. Both CharBuffers and Strings implement the java.lang.CharSequence interface. This is the interface that provides the standard charAt( ) and length( ) methods. Because of this, newer APIs (such as the java.util.regex package) allow you to use a CharBuffer or a String interchangeably. In this case, the CharBuffer acts like a modifiable String with user-configurable start and end positions.
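A short sketch shows a view buffer sharing storage with its backing ByteBuffer (a write through one is visible through the other):

```java
import java.nio.ByteBuffer;
import java.nio.ShortBuffer;

public class ViewDemo {
    // A ShortBuffer view shares storage with its backing ByteBuffer.
    static short viewSeesChange() {
        ByteBuffer bytes = ByteBuffer.allocate(8);    // room for four shorts
        ShortBuffer shorts = bytes.asShortBuffer();   // view starts at position 0
        bytes.putShort((short) 0x1234);               // write through the byte buffer
        return shorts.get(0);                         // read through the short view
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(viewSeesChange()));
    }
}
```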

Byte order

Since we're talking about reading and writing types larger than a byte, the question arises: in what order do the bytes of multibyte values (e.g., shorts, ints) get written? There are two camps in this world: "big endian" and "little endian."[*] Big endian means that the most significant bytes come first; little endian is the reverse. If you're writing binary data for consumption by some native application, this is important. Intel-compatible computers use little endian, and many workstations that run Unix use big endian. The ByteOrder class encapsulates the choice. You can specify the byte order to use with the ByteBuffer order( ) method, using the identifiers ByteOrder.BIG_ENDIAN and ByteOrder.LITTLE_ENDIAN like so:

[*] The terms "big endian" and "little endian" come from Jonathan Swift's novel Gulliver's Travels, where it denoted two camps of Lilliputians: those who eat their eggs from the big end and those who eat them from the little end.

 byteBuffer.order( ByteOrder.BIG_ENDIAN );


You can retrieve the native ordering for your platform using the static ByteOrder.nativeOrder( ) method. (I know you're curious.)
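A quick sketch makes the effect of byte order concrete: the first byte of an int written to a buffer depends on the ordering in effect.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OrderDemo {
    // Write the same int under a given byte order and inspect the first byte.
    static byte firstByte(ByteOrder order) {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.order(order);
        buf.putInt(0x01020304);
        return buf.get(0);   // most significant byte first under big endian
    }

    public static void main(String[] args) {
        System.out.println(firstByte(ByteOrder.BIG_ENDIAN));     // 1
        System.out.println(firstByte(ByteOrder.LITTLE_ENDIAN));  // 4
    }
}
```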

Allocating buffers

You can create a buffer either by allocating it explicitly using allocate( ) or by wrapping an existing plain Java array type. Each buffer type has a static allocate( ) method that takes a capacity (size) and also a wrap( ) method that takes an existing array:

 CharBuffer cbuf = CharBuffer.allocate( 64*1024 );
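
The wrap( ) form creates a buffer backed by your own array, so changes made through the buffer show up in the array (and vice versa). A minimal sketch:

```java
import java.nio.ByteBuffer;

public class WrapDemo {
    // A wrapped buffer writes straight through to the backing array.
    static byte wrappedWrite() {
        byte[] data = new byte[4];
        ByteBuffer buf = ByteBuffer.wrap(data);
        buf.put((byte) 42);     // writes into data[0]
        return data[0];
    }

    public static void main(String[] args) {
        System.out.println(wrappedWrite());
    }
}
```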


A direct buffer is allocated in the same way, with the allocateDirect( ) method:

 ByteBuffer bbuf = ByteBuffer.allocateDirect( 64*1024 );


As we described earlier, direct buffers can use operating system memory structures that are optimized for use with some kinds of I/O operations. The tradeoff is that allocating a direct buffer is a little slower than a plain buffer, so you should try to use them for longer-term buffers. (For example, using Java 5.0 on a 400 MHz Sparc Ultra 60, it took about six milliseconds to allocate a 1 MB direct buffer versus two milliseconds for a plain buffer of the same size.)

Character Encoders and Decoders

Character encoders and decoders turn characters into raw bytes and vice versa, mapping from the Unicode standard to particular encoding schemes. Encoders and decoders have always existed in Java for use by Reader and Writer streams and in the methods of the String class that work with byte arrays. However, prior to Java 1.4, there was no API for working with encoding explicitly; you simply referred to encoders and decoders wherever necessary by name as a String. The java.nio.charset package formalizes the idea of a Unicode character set with the Charset class. The Charset class is a factory for Charset instances, which know how to encode character buffers to byte buffers and decode byte buffers to character buffers. You can look up a character set by name with the static Charset.forName( ) method and use it in conversions:

 Charset charset = Charset.forName("US-ASCII");
 CharBuffer charBuff = charset.decode( byteBuff ); // bytes to chars
 ByteBuffer byteBuff = charset.encode( charBuff ); // and back to bytes


You can also test to see if an encoding is available with the static Charset.isSupported( ) method. The following character sets are guaranteed to be supplied:

  • US-ASCII
  • ISO-8859-1
  • UTF-8
  • UTF-16BE
  • UTF-16LE
  • UTF-16

You can list all the encoders available on your platform using the static availableCharsets( ) method:

 Map map = Charset.availableCharsets( );
 Iterator it = map.keySet( ).iterator( );
 while ( it.hasNext( ) )
     System.out.println( it.next( ) );


The result of availableCharsets( ) is a map because character sets may have "aliases" and appear under more than one name. In addition to the buffer-oriented classes of the java.nio package, the InputStreamReader and OutputStreamWriter bridge classes of the java.io package have been updated to work with Charset as well. You can specify the encoding as a Charset object or by name.
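For example, the bridge classes accept a Charset object directly. Here is a small sketch reading ASCII bytes through an InputStreamReader:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

public class ReaderDemo {
    // Pass a Charset object (rather than a name) to an InputStreamReader.
    static String readAscii(byte[] bytes) throws Exception {
        Charset ascii = Charset.forName("US-ASCII");
        InputStreamReader reader =
            new InputStreamReader(new ByteArrayInputStream(bytes), ascii);
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = reader.read()) != -1)
            sb.append((char) c);
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readAscii(new byte[] { 'h', 'i' }));
    }
}
```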

CharsetEncoder and CharsetDecoder

You can get more control over the encoding and decoding process by creating an instance of CharsetEncoder or CharsetDecoder (a codec) with the Charset newEncoder( ) and newDecoder( ) methods. In the previous snippet, we assumed that all the data was available in a single buffer. More often, however, we might have to process data as it arrives in chunks. The encoder/decoder API allows for this by providing more general encode( ) and decode( ) methods that take a flag specifying whether more data is expected. The codec needs to know this because it might have been left hanging in the middle of a multibyte character conversion when the data ran out. If it knows that more data is coming, it does not throw an error on this incomplete conversion. In the following snippet, we use a decoder to read from a ByteBuffer bbuff and accumulate character data into a CharBuffer cbuff:

 CharsetDecoder decoder = Charset.forName("US-ASCII").newDecoder( );
 boolean done = false;
 while ( !done ) {
     bbuff.clear( );
     done = ( in.read( bbuff ) == -1 );
     bbuff.flip( );
     decoder.decode( bbuff, cbuff, done );
 }
 cbuff.flip( );
 // use cbuff...


Here, we look for the end of input condition on the in channel to set the flag done. The encode( ) and decode( ) methods also return a special result object, CoderResult, that can determine the progress of encoding. The methods isError( ), isUnderflow( ), and isOverflow( ) on the CoderResult specify why encoding stopped: for an error, a lack of bytes on the input buffer, or a full output buffer, respectively.
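Here is a small sketch of a CoderResult in action: decoding into an undersized CharBuffer stops early with an "overflow" result rather than an exception.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;

public class CoderDemo {
    // Decoding into a too-small CharBuffer stops with an overflow result.
    static boolean overflows() throws Exception {
        CharsetDecoder decoder = Charset.forName("US-ASCII").newDecoder();
        ByteBuffer in = ByteBuffer.wrap("hello".getBytes("US-ASCII"));
        CharBuffer out = CharBuffer.allocate(2);   // room for only two chars
        CoderResult result = decoder.decode(in, out, true);
        return result.isOverflow();                // output buffer filled up
    }

    public static void main(String[] args) throws Exception {
        System.out.println(overflows());
    }
}
```

In real code you would drain (or enlarge) the output buffer and call decode( ) again until the result reports underflow on the final input.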

FileChannel

Now that we've covered the basics of channels and buffers, it's time to look at a real channel type. The FileChannel is the NIO equivalent of the java.io.RandomAccessFile, but it provides several basic new features in addition to some performance optimizations. Use a FileChannel in place of a plain java.io file stream if you wish to use file locking, memory-mapped file access, or highly optimized data transfer between files or between file and network channels. A FileChannel is constructed from a FileInputStream, FileOutputStream, or RandomAccessFile:

 FileChannel readOnlyFc = new FileInputStream("file.txt").getChannel( );
 FileChannel readWriteFc =
     new RandomAccessFile("file.txt", "rw").getChannel( );


FileChannels for file input and output streams are read-only and write-only, respectively. To get a read/write FileChannel, you must construct a RandomAccessFile with read/write permissions, as in the previous example. Using a FileChannel is just like using a RandomAccessFile, but it works with ByteBuffers instead of byte arrays:

 bbuf.clear( );
 readOnlyFc.position( index );
 readOnlyFc.read( bbuf );
 bbuf.flip( );
 readWriteFc.write( bbuf );


You can control how much data is read and written either by setting the buffer's position and limit markers or by using another form of read/write that takes a buffer starting position and length. You can also read and write at a random position using:

 readWriteFc.read( bbuf, index );
 readWriteFc.write( bbuf, index2 );


In each case, the actual number of bytes read or written depends on several factors. The operation tries to read or write to the limit of the buffer, and the vast majority of the time that is what happens with local file access. The operation is guaranteed to block only until at least one byte has been processed. Whatever happens, the number of bytes processed is returned, and the buffer position is updated accordingly. This is one of the conveniences of working with buffers; they can manage the count for you. Like standard streams, the channel read( ) method returns -1 upon reaching the end of input. The size of the file is always available with the size( ) method. It can change if you write past the end of the file. Conversely, you can truncate the file to a specified length with the truncate( ) method.
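The size( ) and truncate( ) behavior can be seen with a small sketch (using a temporary file):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class TruncateDemo {
    // Write past the end of a file, then truncate( ) it back down.
    static long writeAndTruncate() throws Exception {
        File tmp = File.createTempFile("nio", ".tmp");
        tmp.deleteOnExit();
        FileChannel fc = new RandomAccessFile(tmp, "rw").getChannel();
        fc.write(ByteBuffer.wrap(new byte[100]));  // size( ) is now 100
        fc.truncate(10);                           // chop it to 10 bytes
        long size = fc.size();
        fc.close();
        return size;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(writeAndTruncate());
    }
}
```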

Concurrent access

FileChannels are safe for use by multiple threads and guarantee that data "viewed" by them is consistent across channels in the same VM. No guarantees are made about how quickly writes are propagated to the storage mechanism. If you need to be sure that data is safe before moving on, you can use the force( ) method to flush changes to disk. The force( ) method takes a boolean argument indicating whether or not file metadata, including timestamp and permissions, must be written. Some systems keep track of reads on files as well as writes, so you can save a lot of updates if you set the flag to false, which indicates that you don't care about syncing that metadata immediately. As with all channels, a FileChannel may be closed by any thread. Once closed, all its read/write and position-related methods throw a ClosedChannelException.

File locking

FileChannels support exclusive and shared locks on regions of files through the lock( ) method:

 FileLock fileLock = fileChannel.lock( );
 long start = 0, len = fileChannel2.size( );
 FileLock readLock = fileChannel2.lock( start, len, true );


Locks may be either shared or exclusive. An exclusive lock prevents others from acquiring a lock of any kind on the specified file or file region. A shared lock allows others to acquire overlapping shared locks but not exclusive locks. These are useful as write and read locks, respectively. When you are writing, you don't want others to be able to write until you're done, but when reading, you need only to block others from writing, not reading concurrently. The simple lock( ) method in the previous example attempts to acquire an exclusive lock for the whole file. The second form accepts a starting and length parameter as well as a flag indicating whether the lock should be shared (or exclusive). The FileLock object returned by the lock( ) method can be used to release the lock:

 fileLock.release( );


Note that file locks are a cooperative API; they do not necessarily prevent anyone from reading or writing to the locked file contents. In general, the only way to guarantee that locks are obeyed is for both parties to attempt to acquire the lock and use it. Also, shared locks are not implemented on some systems, in which case all requested locks are exclusive. You can test whether a lock is shared with the isShared( ) method.
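FileChannel also offers tryLock( ), a nonblocking variant that returns null rather than waiting when the lock is unavailable. A minimal sketch using a temporary file:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockDemo {
    // Acquire an exclusive lock without blocking, then release it.
    static boolean lockAndRelease() throws Exception {
        File tmp = File.createTempFile("nio", ".lock");
        tmp.deleteOnExit();
        FileChannel fc = new RandomAccessFile(tmp, "rw").getChannel();
        FileLock lock = fc.tryLock();   // null if someone else holds the lock
        boolean acquired = (lock != null && !lock.isShared());
        if (lock != null)
            lock.release();
        fc.close();
        return acquired;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(lockAndRelease());
    }
}
```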

Memory-mapped files

One of the most interesting features offered through FileChannel is the ability to map a file into memory. When a file is memory-mapped, like magic it becomes accessible through a single ByteBuffer, as if the entire file were read into memory at once. The implementation of this is extremely efficient, generally among the fastest ways to access the data. For working with large files, memory mapping can save a lot of resources and time.

This may seem counterintuitive; we're getting a conceptually easier way to access our data, and it's also faster and more efficient? What's the catch? There really is no catch. The reason is that all modern operating systems are based on the idea of virtual memory. In a nutshell, that means the operating system makes disk space act like memory by continually paging (swapping blocks called "pages," typically 4 KB) between memory and disk, transparently to applications. Operating systems are very good at this; they efficiently cache the data the application is using and let go of what is not in use. Memory-mapping a file really just takes advantage of what the OS is doing internally.

A good example of where a memory-mapped file would be useful is in a database. Imagine a 100 MB file containing records indexed at various positions. By mapping the file, we can work with a standard ByteBuffer, reading and writing data at arbitrary positions and letting the native operating system read and write the underlying data in fine-grained pages as necessary. We could emulate this behavior with RandomAccessFile or FileChannel, but we would have to explicitly read and write data into buffers first, and the implementation would almost certainly not be as efficient. A mapping is created with the FileChannel map( ) method. For example:

 FileChannel fc = new RandomAccessFile("index.db", "rw").getChannel( );
 MappedByteBuffer mappedBuff =
     fc.map( FileChannel.MapMode.READ_WRITE, 0, fc.size( ) );


The map( ) method returns a MappedByteBuffer, which is simply the standard ByteBuffer with a few additional methods relating to the mapping. The most important is force( ), which ensures that any data written to the buffer is flushed out to permanent storage on the disk. The READ_ONLY and READ_WRITE constant identifiers of the FileChannel.MapMode static inner class specify the type of access. Read/write access is available only when mapping a read/write file channel. Data read through the buffer is always consistent within the same Java VM. It may also be consistent across apps on the same host machine, but this is not guaranteed. Again, a MappedByteBuffer acts just like a ByteBuffer. Continuing with the previous example, we could decode the buffer with a character decoder and search for a pattern like so:

 CharBuffer cbuff = Charset.forName("US-ASCII").decode( mappedBuff );
 Matcher matcher = Pattern.compile("abc*").matcher( cbuff );
 while ( matcher.find( ) )
     System.out.println( matcher.start( ) + ": " + matcher.group(0) );


Here, we have implemented something like the Unix grep command in about five lines of code (thanks to the Regular Expression API working with our CharBuffer as a CharSequence). Of course, in this example, the CharBuffer allocated by the decode( ) method is as large as the mapped file and must be held in memory. More generally, we can use the CharsetDecoder shown earlier to iterate through a large mapped space.

Direct transfer

The final feature of FileChannel that we'll look at is performance optimization. FileChannel supports two highly optimized data transfer methods: transferFrom( ) and transferTo( ), which move data between the file channel and another channel. These methods can take advantage of direct buffers internally to move data between the channels as fast as possible, often without copying the bytes into Java's memory space at all. The following example is currently the fastest way to implement a file copy in Java:

 import java.io.*;
 import java.nio.channels.*;

 public class CopyFile {
     public static void main( String [] args ) throws Exception {
         String fromFileName = args[0];
         String toFileName = args[1];
         FileChannel in = new FileInputStream( fromFileName ).getChannel( );
         FileChannel out = new FileOutputStream( toFileName ).getChannel( );
         in.transferTo( 0, in.size( ), out );
         in.close( );
         out.close( );
     }
 }


Scalable I/O with NIO

We've laid the groundwork for using the NIO package in this chapter, but left out some of the important pieces. In the next chapter, we'll see more of the real motivation for java.nio when we talk about nonblocking and selectable I/O. In addition to the performance optimizations that can be made through direct buffers, these capabilities make possible a design for network servers that uses fewer threads and can scale well to large systems. We'll also look at the other significant Channel types: SocketChannel, ServerSocketChannel, and DatagramChannel.
