OO Design Recommendations for J2EE apps

It's possible to design a J2EE app so badly that, even if it contains beautifully written Java code at an individual object level, it will still be deemed a failure. A J2EE app with an excellent overall design but poor implementation code will be an equally miserable failure. Unfortunately, many developers spend too much time grappling with the J2EE APIs and too little ensuring they adhere to good coding practice. All of Sun's J2EE sample apps seem to reflect this. In my experience, it isn't pedantry to insist on adherence to good OO principles: it brings real benefits.


OO design is more important than any particular implementation technology (such as J2EE, or even Java). Good coding practices and sound OO design underpin good J2EE apps. Bad Java code is bad J2EE code.

Some "coding standards" issues – especially those relating to OO design – are on the borderline between design and implementation: for example, the use of design patterns. The following section covers some issues that I've seen cause problems in large code bases, especially issues that I haven't seen covered elsewhere. This is a huge area, so this section is by no means complete. Some issues are matters of opinion, although I'll try to convince you of my position.


Take every opportunity to learn from the good (and bad) code of others, inside and outside your organization. Useful sources in the public domain include successful open source projects and the code in the core Java libraries. License permitting, it may be possible to decompile interesting parts of commercial products. A professional programmer or architect cares more about learning and discovering the best solution than the buzz of finding their own solution to a particular problem.

Achieving Loose Coupling with Interfaces

The "first principle of reusable object-oriented design" advocated by the classic Gang of Four design patterns book is: "Program to an interface, not an implementation". Fortunately, Java makes it very easy (and natural) to follow this principle.


Program to interfaces, not classes. This decouples interfaces from their implementations. Using loose coupling between objects promotes flexibility. To gain maximum flexibility, declare instance variables and method parameters to be of the least specific type required.

Using interface-based architecture is particularly important in J2EE apps, because of their scale. Programming to interfaces rather than concrete classes adds a little complexity, but the rewards far outweigh the investment. There is a slight performance penalty for calling an object through an interface, but this is seldom an issue in practice. A few of the many advantages of an interface-based approach include:

Adopting interface-based architecture is also the best way to ensure that a J2EE app is portable, yet is able to leverage vendor-specific optimizations and enhancements.

Interface-based architecture can be effectively combined with the use of reflection for configuration (see below).
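As a trivial sketch of this principle (the class and method names are my own, not from the text), the following class exposes only interface types in its field and parameter declarations, so the ArrayList could later be replaced by, say, a LinkedList without affecting any caller:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Illustrative class: the field is declared as List, not ArrayList, and
// the parameter as Collection, the least specific types this code needs.
public class InterfaceTypingDemo {

    private List names = new ArrayList();

    // Callers may pass any Collection implementation.
    public void addAll(Collection newNames) {
        names.addAll(newNames);
    }

    public int size() {
        return names.size();
    }
}
```

Only the declaration of the `names` field would need to change to switch implementations; every caller compiles unchanged.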

Prefer Object Composition to Concrete Inheritance

The second basic principle of object-oriented design emphasized in the GoF book is "Favor object composition over class inheritance". Few developers appreciate this wise advice.

Unlike many older languages, such as C++, Java distinguishes at a language level between concrete inheritance (the inheritance of method implementations and member variables from a superclass) and interface inheritance (the implementation of interfaces). Java allows concrete inheritance from only a single superclass, but a Java class may implement any number of interfaces (including, of course, those interfaces implemented by its ancestors in a class hierarchy). While there are rare situations in which multiple concrete inheritance (as permitted in C++) is the best design approach, Java is much better off avoiding the complexity that may arise from permitting these rare legitimate uses.

Concrete inheritance is enthusiastically embraced by most developers new to OO, but has many disadvantages. Class hierarchies are rigid: it's impossible to change part of a class's implementation. By contrast, if that part is encapsulated in an interface (using delegation and the Strategy design pattern, which we'll discuss below), this problem can be avoided. Object composition (in which new functionality is obtained by assembling or composing objects) is more flexible than concrete inheritance, and Java interfaces make delegation natural. Object composition allows the behavior of an object to be altered at run time, by delegating part of its behavior to an interface and allowing callers to set the implementation of that interface. The Strategy and State design patterns rely on this approach.

To clarify the distinction, let's consider what we want to achieve by inheritance. Abstract inheritance enables polymorphism: the substitutability of objects with the same interface at run time. This delivers much of the value of object-oriented design.
Concrete inheritance enables both polymorphism and more convenient implementation. Code can be inherited from a superclass. Thus concrete inheritance is an implementation, rather than purely a design, issue. Concrete inheritance is a valuable feature of any OO language; but it is easy to overuse. Common mistakes with concrete inheritance include:

Interfaces are most valuable when kept simple. The more complex an interface, the less valuable modeling it as an interface becomes, as developers will be forced to extend an abstract or concrete implementation to avoid writing excessive amounts of code. Correct interface granularity is therefore vital; interface hierarchies may be separate from class hierarchies, so that a particular class need only implement the exact interface it needs.


Interface inheritance (that is, the implementation of interfaces, rather than inheritance of functionality from concrete classes) is much more flexible than concrete inheritance.

Does this mean that concrete inheritance is a bad thing? Absolutely not; concrete inheritance is a powerful way of achieving code reuse in OO languages. However, it's best considered an implementation approach, rather than a high-level design approach. It's something we should choose to use, rather than be forced to use by an app's overall design.
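To make the distinction concrete, here is a minimal, hypothetical sketch of composition with delegation (all names are invented for illustration): the Greeter class acquires its behavior from a strategy interface rather than from a superclass, so callers can change that behavior at run time:

```java
// Hypothetical example: Greeter delegates to a GreetingStrategy rather
// than inheriting behavior, so its behavior can be swapped at run time.
public class CompositionDemo {

    public interface GreetingStrategy {
        String greet(String name);
    }

    public static class Greeter {

        private GreetingStrategy strategy;

        // Callers set (and may later replace) the delegate.
        public void setStrategy(GreetingStrategy strategy) {
            this.strategy = strategy;
        }

        public String greet(String name) {
            // Delegation: behavior comes from the composed object.
            return strategy.greet(name);
        }
    }
}
```

With concrete inheritance, the greeting behavior would be frozen into a subclass at compile time; here, a single Greeter instance can be reconfigured whenever a caller installs a different GreetingStrategy.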

The Template Method Design Pattern

One good use of concrete inheritance is to implement the Template Method design pattern. The Template Method pattern (GoF) addresses a common problem: we know the steps of an algorithm, and the order in which they should be performed, but don't know how to perform all of the steps. The solution is to encapsulate the individual steps we don't know how to perform as abstract methods, and provide an abstract superclass that invokes them in the correct order. Concrete subclasses of this abstract superclass implement the abstract methods that perform the individual steps.

The key concept is that it is the abstract base class that controls the workflow. Public superclass methods are usually final; the abstract methods deferred to subclasses are protected. This helps to reduce the likelihood of bugs: all subclasses are required to do is fulfill a clear contract.

The centralization of workflow logic in the abstract superclass is an example of inversion of control. Unlike in traditional class libraries, where user code invokes library code, in this approach framework code in the superclass invokes user code. This is also known as the Hollywood principle: "Don't call me, I'll call you". Inversion of control is fundamental to frameworks, which tend to use the Template Method pattern heavily (we'll discuss frameworks later).

For example, consider a simple order processing system. The business logic involves calculating the purchase price, based on the price of individual items; checking whether the customer is allowed to spend this amount; and applying any discount if necessary. Some persistent storage, such as an RDBMS, must be updated to reflect a successful purchase, and queried to obtain price information. However, it's desirable to separate this from the steps of the business logic.
The AbstractOrderEJB superclass implements the business logic, which includes checking that the customer isn't trying to exceed their spending limit, and applying a discount to large orders. The public placeOrder() method is final, so that this workflow can't be modified (or corrupted) by subclasses:

 public final Invoice placeOrder(int customerId, InvoiceItem[] items)
     throws NoSuchCustomerException, SpendingLimitViolation {

   int total = 0;
   for (int i = 0; i < items.length; i++) {
     total += getItemPrice(items[i]) * items[i].getQuantity();   // template method
   }
   int limit = getSpendingLimit(customerId);                     // template method
   if (total > limit) {
     getSessionContext().setRollbackOnly();
     throw new SpendingLimitViolation(total, limit);
   } else if (total > DISCOUNT_THRESHOLD) {
     // Apply discount to total...
   }
   int invoiceId = placeOrder(customerId, total, items);         // template method
   return new InvoiceImpl(invoiceId, total);
 }

The three lines of code in this method that invoke protected abstract "template methods" are the calls to getItemPrice(), getSpendingLimit(), and placeOrder(); these must be implemented by subclasses, and will be defined in AbstractOrderEJB as follows:

 protected abstract int getItemPrice(InvoiceItem item);

 protected abstract int getSpendingLimit(int customerId)
     throws NoSuchCustomerException;

 protected abstract int placeOrder(int customerId, int total,
     InvoiceItem[] items);

Subclasses of AbstractOrderEJB merely need to implement these three methods. They don't need to concern themselves with business logic. For example, one subclass might implement these three methods using JDBC, while another might implement them using SQLJ or JDO. Such uses of the Template Method pattern offer good separation of concerns. Here, the superclass concentrates on business logic; the subclasses concentrate on implementing primitive operations (for example, using a low-level API such as JDBC). As the template methods are protected, rather than public, callers are spared the details of the class's implementation. As it's usually better to define types in interfaces rather than classes, the Template Method pattern is often used as a strategy to implement an interface.
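The shape of the pattern is easy to see when stripped of EJB and database specifics. The following self-contained sketch (the class name, prices, and 10% discount are invented for illustration) shows an abstract class that owns the workflow in a final method, deferring the primitive operations to subclasses:

```java
// Simplified, hypothetical Template Method sketch: the abstract class
// controls the workflow; subclasses supply only the primitive operations.
public abstract class AbstractPriceCalculator {

    // Final: subclasses cannot alter the workflow itself.
    public final int calculateTotal(int[] itemIds) {
        int total = 0;
        for (int i = 0; i < itemIds.length; i++) {
            total += getItemPrice(itemIds[i]);   // template method call
        }
        if (total > getDiscountThreshold()) {    // template method call
            total = total - (total / 10);        // apply a 10% discount
        }
        return total;
    }

    // Protected abstract steps: the clear contract subclasses must fulfill.
    protected abstract int getItemPrice(int itemId);

    protected abstract int getDiscountThreshold();
}
```

A JDBC-based subclass would look up prices in a database; a test subclass can hard-code them, which also makes the workflow easy to unit test.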


Abstract superclasses are also often used to implement some, but not all, methods of an interface. The remaining methods – which vary between concrete implementations – are left unimplemented. This differs from the Template Method pattern in that the abstract superclass doesn't handle workflow.


Use the Template Method design pattern to capture an algorithm in an abstract superclass, but defer the implementation of individual steps to subclasses. This has the potential to head off bugs, by getting tricky operations right once and simplifying user code. When implementing the Template Method pattern, the abstract superclass must factor out those methods that may change between subclasses and ensure that the method signatures enable sufficient flexibility in implementation.

Always make the abstract parent class implement an interface. The Template Method design pattern is especially valuable in framework design (discussed towards the end of this chapter).

The Template Method design pattern can be very useful in J2EE apps to help us to achieve as much portability as possible between app servers and databases while still leveraging proprietary features. We've seen how we can sometimes separate business logic from database operations above. We could equally use this pattern to enable efficient support for specific databases. For example, we could have an OracleOrderEJB and a DB2OrderEJB that implemented the abstract template methods efficiently in the respective databases, while business logic remains free of proprietary code.

The Strategy Design Pattern

An alternative to the Template Method is the Strategy design pattern, which factors the variant behavior into an interface. Thus, the class that knows the algorithm is not an abstract base class, but a concrete class that uses a helper that implements an interface defining the individual steps. The Strategy design pattern takes a little more work to implement than the Template Method pattern, but it is more flexible. The advantage of the Strategy pattern is that it need not involve concrete inheritance. The class that implements the individual steps is not forced to inherit from an abstract template superclass. Let's look at how we could use the Strategy design pattern in the above example. The first step is to move the template methods into an interface, which will look like this:

 public interface DataHelper {

   int getItemPrice(InvoiceItem item);

   int getSpendingLimit(int customerId) throws NoSuchCustomerException;

   int placeOrder(int customerId, int total, InvoiceItem[] items);
 }

Implementations of this interface don't need to subclass any particular class; we have the maximum possible freedom. Now we can write a concrete OrderEJB class that depends on an instance variable of this interface. We must also provide a means of setting this helper, either in the constructor or through a bean property. In the present example I've opted for a bean property:

 private DataHelper dataHelper;

 public void setDataHelper(DataHelper newDataHelper) {
   this.dataHelper = newDataHelper;
 }

The implementation of the placeOrder() method is almost identical to the version using the Template Method pattern, except that it invokes the operations it doesn't know how to perform on the instance of the helper interface:

 public final Invoice placeOrder(int customerId, InvoiceItem[] items)
     throws NoSuchCustomerException, SpendingLimitViolation {

   int total = 0;
   for (int i = 0; i < items.length; i++) {
     total += this.dataHelper.getItemPrice(items[i]) * items[i].getQuantity();
   }
   int limit = this.dataHelper.getSpendingLimit(customerId);
   if (total > limit) {
     getSessionContext().setRollbackOnly();
     throw new SpendingLimitViolation(total, limit);
   } else if (total > DISCOUNT_THRESHOLD) {
     // Apply discount to total...
   }
   int invoiceId = this.dataHelper.placeOrder(customerId, total, items);
   return new InvoiceImpl(invoiceId, total);
 }

This is slightly more complex to implement than the version using concrete inheritance with the Template Method pattern, but is more flexible. This is a classic example of the tradeoff between concrete inheritance and delegation to an interface. I use the Strategy pattern in preference to the Template Method pattern under the following circumstances:

Using Callbacks to Achieve Extensibility

Let's now consider another use of "inversion of control" to parameterize a single operation, while moving control and error handling into a framework. Strictly speaking, this is a special case of the Strategy design pattern; it appears different because the interfaces involved are so simple. The pattern is based around one or more callback methods that are invoked by a method that performs a workflow. I find this pattern useful when working with low-level APIs such as JDBC.

The following example is a stripped-down form of a JDBC utility class, JdbcTemplate, used in the sample app and discussed further later in this book. JdbcTemplate implements a query() method that takes as parameters a SQL query string and an implementation of a callback interface that will be invoked for each row of the result set the query generates. The callback interface is as follows:

 public interface RowCallbackHandler {

   void processRow(ResultSet rs) throws SQLException;
 }

The JdbcTemplate.query() method conceals from calling code the details of getting a JDBC connection, creating and using a statement, and correctly freeing resources, even in the event of errors, as follows:

 public void query(String sql, RowCallbackHandler callbackHandler)
     throws JdbcSqlException {

   Connection con = null;
   PreparedStatement ps = null;
   ResultSet rs = null;
   try {
     con = <code to get connection>
     ps = con.prepareStatement(sql);
     rs = ps.executeQuery();
     while (rs.next()) {
       callbackHandler.processRow(rs);
     }
   } catch (SQLException ex) {
     throw new JdbcSqlException("Couldn't run query [" + sql + "]", ex);
   } finally {
     DataSourceUtils.closeConnectionIfNecessary(this.dataSource, con);
   }
 }
The DataSourceUtils class contains a helper method that can be used to close connections, catching and logging any SQLExceptions encountered.

In this example, JdbcSqlException extends java.lang.RuntimeException, which means that calling code may choose to catch it, but is not forced to. This makes sense in the present situation. If, for example, a callback handler tries to obtain the value of a column that doesn't exist in the ResultSet, catching the resulting exception will do calling code no good. This is clearly a coding error, and JdbcTemplate's behavior of logging the exception and throwing a runtime exception is logical (see the discussion under Exception Handling – Checked or Unchecked Exceptions later).

In this case, I modeled the RowCallbackHandler interface as an inner interface of the JdbcTemplate class. As this interface is only relevant to the JdbcTemplate class, this is logical. Note that implementations of the RowCallbackHandler interface might be inner classes (in trivial cases, anonymous inner classes are appropriate), standard reusable classes, or subclasses of standard convenience classes.

Consider the following implementation of the RowCallbackHandler interface, which collects String values from each row of the result set. Note that the implementation isn't forced to catch SQLExceptions that may be thrown in extracting column values from the result set:

 class StringHandler implements JdbcTemplate.RowCallbackHandler {

   private List l = new LinkedList();

   public void processRow(ResultSet rs) throws SQLException {
     l.add(rs.getString(1));
   }

   public String[] getStrings() {
     return (String[]) l.toArray(new String[l.size()]);
   }
 }

This class can be used as follows:

 StringHandler sh = new StringHandler();
 jdbcTemplate.query("SELECT FORENAME FROM CUSTMR", sh);
 String[] forenames = sh.getStrings();

These three lines show how the code that uses the JdbcTemplate is able to focus on the business problem, without concerning itself with the JDBC API. Any SQLExceptions thrown will be handled by JdbcTemplate. This pattern shouldn't be overused, but can be very useful. The following advantages and disadvantages indicate the tradeoffs involved: Advantages:


This pattern is most valuable when the callback interface is very simple. In the example, because the RowCallbackHandler interface contains a single method, it is very easy to implement, meaning that implementation choices such as anonymous inner classes may be used to simplify calling code.
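The same shape can be seen in a trivial, self-contained analogue (names invented; no database involved): the "template" method owns the iteration, while the caller supplies only the per-item logic, conveniently as an anonymous inner class:

```java
import java.util.Iterator;
import java.util.List;

// Simplified analogue of the callback approach: forEach() controls the
// iteration; the caller supplies only the per-item processing.
public class CallbackDemo {

    public interface ItemCallbackHandler {
        void processItem(String item);
    }

    public static void forEach(List items, ItemCallbackHandler handler) {
        for (Iterator it = items.iterator(); it.hasNext(); ) {
            handler.processItem((String) it.next());
        }
    }
}
```

Because ItemCallbackHandler has a single method, an anonymous inner class at the call site keeps the calling code focused on what to do with each item, not how the items are traversed.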

The Observer Design Pattern

Like the use of interfaces, the Observer design pattern can be used to decouple components and enable extensibility without modification (observing the Open Closed Principle). It also contributes to achieving separation of concerns.
Consider, for example, an object that handles user login. There might be several outcomes from a user's attempt to log in: successful login; failed login due to an incorrect password; failed login due to an incorrect username; or a system error due to failure to connect to the database that holds login information.

Let's imagine that we have a login implementation working in production, but that further requirements mean that the app should e-mail an administrator in the event of a given number of system errors, and should maintain a list of incorrectly entered passwords, along with the correct passwords for the users concerned, to contribute to developing information to help users avoid common errors. We would also like to know the peak periods for user login activity (as opposed to general activity on the web site).

All this functionality could be added to the object that implements login. We should have unit tests to verify that this hasn't broken the existing functionality, but this approach doesn't offer good separation of concerns (why should the object handling login need to know or obtain the administrator's e-mail address, or know how to send an e-mail?). As more features (or aspects) are added, the implementation of the login workflow itself – the core responsibility of this component – will be obscured by the volume of code to handle them.

We can address this problem more elegantly using the Observer design pattern. Observers (or listeners) can be notified of app events. The app must provide (or use a framework that provides) event publication infrastructure. Listeners register to be notified of events; all workflow code must do is publish events that might be of interest. Event publication is similar to generating log messages, in that it doesn't affect the working of app code. In the above example, events would include:

Events normally include timestamps. Now we can achieve clean separation of concerns by using distinct listeners to e-mail the administrator on system errors; react to a failed login (adding it to a list); and gather performance information about login activity.

The Observer design pattern is used in the core Java libraries: for example, JavaBeans can publish property change events. In our own apps, we will use the Observer pattern at a higher level. Events of interest are likely to relate to app-level operations, not low-level operations such as setting a bean property.

Consider also the need to gather performance information about a web app. We could build sophisticated performance monitoring into the code of the web app framework (for example, any controller servlets), but this would require modification to those classes if we required different performance statistics in the future. It's better to publish events such as "request received" and "request fulfilled" (the latter including success or failure status) and leave the implementation of performance monitoring to listeners that are solely concerned with it. This is an example of how the Observer design pattern can be used to achieve good separation of concerns. It amounts to Aspect-Oriented Programming, which we discuss briefly under Using Reflection later.

Don't go overboard with the Observer design pattern: it's only necessary when there's a real likelihood that loosely coupled listeners will need to know about a workflow. If we use the Observer design pattern everywhere, our business logic will disappear under a morass of event publication code, and performance will suffer. Only important workflows (such as the login process of our example) should generate events.

A warning when using the Observer design pattern: it's vital that listeners return quickly. Rogue listeners can lock up an app.
Although it is possible for the event publishing system to invoke observers in a different thread, this is wasteful for the majority of listeners, which will return quickly. In most situations it's a better choice to put the onus on listeners to return quickly, or to spin off long-running tasks into separate threads. Listeners should also avoid synchronization on shared app objects, as this may lead to blocking. Listeners must be threadsafe.

The Observer design pattern is less useful in a clustered deployment than on a single server, as it only allows us to publish events within a single server. For example, it would be unsafe to use the Observer pattern to update a data cache, as such an update would apply only to one server. However, the Observer pattern can still be very useful in a cluster: the applications discussed above would all be valid in a clustered environment. JMS can be used for cluster-wide event publication, at the price of greater API complexity and a much greater performance overhead.

In my experience, the Observer design pattern is more useful in the web tier than in the EJB tier: for example, the EJB specification doesn't permit creating threads in the EJB tier (again, JMS is the alternative).
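A minimal sketch of such event publication might look like the following (the interface, event, and class names are hypothetical, invented for illustration):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Minimal Observer sketch: workflow code publishes an event; loosely
// coupled listeners decide what to do with it.
public class LoginEventDemo {

    public interface LoginListener {
        void loginFailed(String username);
    }

    public static class LoginPublisher {

        private List listeners = new ArrayList();

        public void addListener(LoginListener listener) {
            listeners.add(listener);
        }

        // Workflow code calls this; it knows nothing about e-mail,
        // statistics gathering, or any other listener concern.
        public void publishLoginFailed(String username) {
            for (Iterator it = listeners.iterator(); it.hasNext(); ) {
                ((LoginListener) it.next()).loginFailed(username);
            }
        }
    }
}
```

A listener that e-mails the administrator and a listener that records failed logins would both register with the same publisher; neither requires any change to the login workflow itself.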

Later in this book we look at how to implement the Observer design pattern in an app framework. The app framework infrastructure used in the sample app provides an event publication mechanism, allowing approaches such as those described here to be implemented without the need for an app to implement any "plumbing".

Consider Consolidating Method Parameters

Sometimes it's a good idea to encapsulate multiple parameters to a method into a single object. This may enhance readability and simplify calling code. Consider a method signature like this:

 public void setOptions(Font f, int lineSpacing, int linesPerPage,
 int tabSize);

We could simplify this signature by rolling the multiple parameters into a single object, like this:

 public void setOptions(Options options);

The main advantage is flexibility. We don't need to break signatures to add further parameters: we can add additional properties to the parameter object. This means that we don't have to break code in existing callers that aren't interested in the added parameters. As Java, unlike C++, doesn't offer default parameter values, this can also be a good way to enable clients to simplify calls. Let's suppose that all (or most) of the parameters have default values. In C++ we could code the default values into the method signature, enabling callers to omit some of them, like this:

 void SomeClass::setOptions(Font f, int lineSpacing = 1, int linesPerPage = 25,
 int tabSize = 4);

This isn't possible in Java, but we can populate the object with default values, allowing callers to use syntax like this:

 Options o = new Options();
 o.setTabSize(8);
 setOptions(o);

Here, the Options object's constructor sets all fields to default values, so we need modify only those properties that vary from the default. If necessary, we can even make the parameter object an interface, to allow more flexible implementation.

This approach works particularly well with constructors. It's indicated when a class has many constructors, and subclasses would face excessive work just preserving superclass constructor permutations. Instead, subclasses can use a subclass of the superclass constructor's parameter object. The Command design pattern uses this approach: a command is effectively a consolidated set of parameters, which are much easier to work with together than individually.

The disadvantage of parameter consolidation is the potential creation of many objects, which increases memory usage and garbage collection activity. Objects consume heap space; primitives don't. Whether this matters depends on how often the method will be called.
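A sketch of what the hypothetical Options parameter object might look like, with defaults set in field initializers so that callers override only the properties they care about (the Font property is omitted for brevity):

```java
// Hypothetical parameter object: each field initializer supplies the
// default value, so callers set only the properties that differ.
public class Options {

    private int lineSpacing = 1;
    private int linesPerPage = 25;
    private int tabSize = 4;

    public void setLineSpacing(int lineSpacing) { this.lineSpacing = lineSpacing; }
    public void setLinesPerPage(int linesPerPage) { this.linesPerPage = linesPerPage; }
    public void setTabSize(int tabSize) { this.tabSize = tabSize; }

    public int getLineSpacing() { return lineSpacing; }
    public int getLinesPerPage() { return linesPerPage; }
    public int getTabSize() { return tabSize; }
}
```

Adding a new option later means adding a new field and accessor pair here, with no change to the setOptions() signature or to existing callers.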


Consolidating method parameters in a single object can occasionally cause performance degradation in J2EE apps if the method call is potentially remote (a call on the remote interface of an EJB), as marshaling and unmarshaling several primitive parameters will always be faster than marshaling and unmarshaling an object. However, this isn't a concern unless the method is invoked particularly often (which might indicate poor app partitioning – we don't want to make frequent remote calls if we can avoid it).

Exception Handling – Checked or Unchecked Exceptions

Java distinguishes between two types of exception. Checked exceptions extend java.lang.Exception, and the compiler insists that they are caught or explicitly rethrown. Unchecked or runtime exceptions extend java.lang.RuntimeException, and need not be caught (although they can be caught, and propagate up the call stack in the same way as checked exceptions). Java is the only mainstream language that supports checked exceptions: all C++ and C# exceptions, for example, are equivalent to Java's unchecked exceptions. First, let's consider received wisdom on exception handling in Java. This is expressed in the section on exception handling in the Java Tutorial, which advises the use of checked exceptions in app code:


Because the Java language does not require methods to catch or specify runtime exceptions, it's tempting for programmers to write code that throws only runtime exceptions or to make all of their exception subclasses inherit from RuntimeException. Both of these coding shortcuts allow programmers to write Java code without bothering with all of the nagging errors from the compiler and without bothering to specify or catch any exceptions. While this may seem convenient to the programmer, it sidesteps the intent of Java's catch or specify requirement and can cause problems for the programmers using your classes. Checked exceptions represent useful information about the operation of a legally specified request that the caller may have had no control over and that the caller needs to be informed about – for example, the file system is now full, or the remote end has closed the connection, or the access privileges don't allow this action.

What does it buy you if you throw a RuntimeException or create a subclass of RuntimeException just because you don't want to deal with specifying it? Simply, you get the ability to throw an exception without specifying that you do so. In other words, it is a way to avoid documenting the exceptions that a method can throw. When is this good? Well, when is it ever good to avoid documenting a method's behavior? The answer is "hardly ever".

To summarize Java orthodoxy: checked exceptions should be the norm. Runtime exceptions indicate coding errors.

I used to subscribe to this view. However, after writing and working with thousands of catch blocks, I've come to the conclusion that this appealing theory doesn't always work in practice. I'm not alone. Since developing my own ideas on the subject, I've noticed that Bruce Eckel, author of the classic Thinking in Java, has also changed his mind. Eckel now advocates the use of runtime exceptions as the norm, and wonders whether checked exceptions should be dropped from Java as a failed experiment. Eckel cites the observation that, when one looks at small amounts of code, checked exceptions seem a brilliant idea that promises to avoid many bugs; however, experience tends to indicate the reverse for large code bases. See "Exceptional Java" by Alan Griffiths for another discussion of the problems with checked exceptions. Using checked exceptions exclusively leads to several problems:

Many of these problems can be attributed to code catching exceptions it can't handle, and being forced to rethrow wrapped exceptions. This is cumbersome, error prone (it's easy to lose the stack trace), and serves no useful purpose. In such cases, it's better to use an unchecked exception. This will automatically unwind the call stack, and is the correct behavior for exceptions of the "something went horribly wrong" variety. I take a less heterodox view than Eckel, in that I believe there's a place for checked exceptions. Where an exception amounts to an alternative return value from a method, it should definitely be checked, and it's good that the language helps enforce this. However, I feel that the conventional Java approach greatly overemphasizes checked exceptions.
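When wrapping and rethrowing is genuinely necessary, an unchecked wrapper that preserves the root cause avoids losing the stack trace. A minimal sketch (the class name is invented for illustration; from JDK 1.4 on, Throwable's own cause chaining does the work, as here):

```java
// Hypothetical unchecked wrapper: callers aren't forced to catch it,
// and the original exception survives as the cause for diagnostics.
public class FatalDataAccessException extends RuntimeException {

    public FatalDataAccessException(String message, Throwable cause) {
        super(message, cause);  // preserves the root cause's stack trace
    }
}
```

Code that catches, say, a SQLException it cannot handle can rethrow it wrapped in such an exception, letting the container log the full chain rather than forcing every intermediate caller to declare it.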


Checked exceptions are much superior to error return codes (as used in many older languages). Sooner or later (probably sooner) someone will fail to check an error return value; it's good to use the compiler to enforce correct error handling. Such checked exceptions are as integral to an object's API as parameters and return values.

However, I don't recommend using checked exceptions unless callers are likely to be able to handle them. In particular, checked exceptions shouldn't be used to indicate that something went horribly wrong, which the caller can't be expected to handle.


Use a checked exception if calling code can do something sensible with the exception. Use an unchecked exception if the exception is fatal, or if callers won't gain by catching it. Remember that a J2EE container (such as a web container) can be relied on to catch unchecked exceptions and log them.

I suggest the following guidelines for choosing between checked and unchecked exceptions:



Question: Should all callers handle this problem? Is the exception essentially a second return value for the method?
Example: Spending limit exceeded in a processInvoice() method.
Recommendation (if the answer is yes): Define and use a checked exception, and take advantage of Java's compile-time support.

Question: Will only a minority of callers want to handle this problem?
Example: JDO exceptions.
Recommendation (if the answer is yes): Extend RuntimeException. This leaves callers the choice of catching the exception, but doesn't force all callers to catch it.

Question: Did something go horribly wrong? Is the problem unrecoverable?
Example: A business method fails because it can't connect to the app database.
Recommendation (if the answer is yes): Extend RuntimeException. We know that callers can't do anything useful besides inform the user of the error.

Question: Still not clear?
Recommendation: Extend RuntimeException. Document the exceptions that may be thrown and let callers decide which, if any, they wish to catch.
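As a minimal sketch of these guidelines (all class and method names here are hypothetical, not library code), a spending-limit violation is modeled as a checked exception that every caller must address, while a fatal data-access failure is modeled as a runtime exception that callers may ignore:

```java
// Checked: the exception is effectively a second return value that all
// callers should handle (for example, by routing the invoice for approval).
class SpendingLimitExceededException extends Exception {
    public SpendingLimitExceededException(double requested, double limit) {
        super("Requested " + requested + " exceeds limit " + limit);
    }
}

// Unchecked: something went horribly wrong; callers can't do anything
// useful besides report the error, so we don't force them to catch it.
class FatalDataAccessException extends RuntimeException {
    public FatalDataAccessException(String msg) {
        super(msg);
    }
}

public class InvoiceProcessor {

    private final double spendingLimit;

    public InvoiceProcessor(double spendingLimit) {
        this.spendingLimit = spendingLimit;
    }

    // The checked exception is as much a part of this method's API
    // as its parameter and return type.
    public void processInvoice(double amount)
            throws SpendingLimitExceededException {
        if (amount > spendingLimit) {
            throw new SpendingLimitExceededException(amount, spendingLimit);
        }
        // ... process the invoice ...
    }
}
```

The compiler now forces every caller of processInvoice() to decide what to do about a spending-limit violation, while nothing forces callers to write a catch block for the fatal case.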


Decide at a package level whether each package will use checked or unchecked exceptions. Document the decision to use unchecked exceptions, as many developers will not expect it. The only danger in using unchecked exceptions is that the exceptions may be inadequately documented. When using unchecked exceptions, be sure to document all exceptions that may be thrown from each method, allowing calling code to choose to catch even exceptions that you expect will be fatal. Ideally, the compiler would enforce Javadoc-ing of all exceptions, checked and unchecked.

If allocating resources such as JDBC connections that must be released under all circumstances, remember to use a finally block to ensure cleanup, whether or not you need to catch checked exceptions. Remember that a finally block can be used even without a catch block.
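To make the point concrete, here is a minimal sketch (with a stand-in Resource class, since a real JDBC example would need a database): the finally block runs whether or not the work throws, and no catch block is required at all.

```java
public class CleanupDemo {

    static final StringBuffer log = new StringBuffer();

    // Stand-in for a resource such as a JDBC connection.
    static class Resource {
        void use() { log.append("use;"); }
        void release() { log.append("release;"); }
    }

    // The finally block guarantees release() runs even if use() throws.
    // Note that there is no catch block: any exceptions simply
    // propagate to the caller after cleanup.
    public static void withResource() {
        Resource r = new Resource();
        try {
            r.use();
        } finally {
            r.release();
        }
    }
}
```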

One reason sometimes advanced for avoiding runtime exceptions is that an uncaught runtime exception will kill the current thread of execution. This is a valid argument in some situations, but it isn't normally a problem in J2EE apps, as we seldom control threads, but leave this up to the app server. The app server will catch and handle runtime exceptions not caught in app code, rather than let them bubble up to the JVM. An uncaught runtime exception within the EJB container will cause the container to discard the current EJB instance. However, if the error is fatal, this usually makes sense.


Ultimately, whether to use checked or unchecked exceptions is a matter of opinion. Thus it's vital not only to document the approach taken, but also to respect the practice of others. While I prefer to use unchecked exceptions in general, when maintaining or enhancing code written by others who favor exclusive use of checked exceptions, I follow their style.

Good Exception Handling Practices

Whether we use checked or unchecked exceptions, we'll still need to address the issue of "nesting" exceptions. Typically this happens when we're forced to catch a checked exception we can't deal with, but want to rethrow it, respecting the interface of the current method. This means that we must wrap the original, "nested" exception within a new exception. Some standard library exceptions, such as javax.servlet.ServletException, offer such wrapping functionality. But for our own app exceptions, we'll need to define (or use existing) custom exception superclasses that take a "root cause" exception as a constructor argument, expose it to code that requires it, and override the printStackTrace() methods to show the full stack trace, including that of the root cause. Typically we need two such base exceptions, one for checked and one for unchecked exceptions.


This is no longer necessary in Java 1.4, which supports exception nesting for all exceptions. We'll discuss this important enhancement below.

In the generic infrastructure code accompanying our sample app, the respective classes are com.interface21.core.NestedCheckedException and com.interface21.core.NestedRuntimeException. Apart from being derived from java.lang.Exception and java.lang.RuntimeException respectively, these classes are almost identical. Both these exceptions are abstract classes; only subtypes have meaning to an app. The following is a complete listing of NestedRuntimeException:

 package com.interface21.core;

 import java.io.PrintStream;
 import java.io.PrintWriter;

 public abstract class NestedRuntimeException extends RuntimeException {

     private Throwable rootCause;

     public NestedRuntimeException(String s) {
         super(s);
     }
     public NestedRuntimeException(String s, Throwable ex) {
         super(s);
         rootCause = ex;
     }
     public Throwable getRootCause() {
         return rootCause;
     }
     public String getMessage() {
         if (rootCause == null) {
             return super.getMessage();
         } else {
             return super.getMessage() + "; nested exception is: \n\t" + rootCause;
         }
     }
     public void printStackTrace(PrintStream ps) {
         if (rootCause == null) {
             super.printStackTrace(ps);
         } else {
             ps.println(this);
             rootCause.printStackTrace(ps);
         }
     }
     public void printStackTrace(PrintWriter pw) {
         if (rootCause == null) {
             super.printStackTrace(pw);
         } else {
             pw.println(this);
             rootCause.printStackTrace(pw);
         }
     }
     public void printStackTrace() {
         printStackTrace(System.err);
     }
 }
Java 1.4 introduces welcome improvements in the area of exception handling. There is no longer any need to write chainable exceptions, although existing infrastructure classes like those shown above will continue to work without a problem. New constructors are added to java.lang.Throwable and java.lang.Exception to support chaining, and a new method, Throwable initCause(Throwable cause), is added to java.lang.Throwable to allow a root cause to be specified even after exception construction. This method may be invoked only once, and only if no nested exception was provided in the constructor. Java 1.4-aware exceptions should implement a constructor taking a Throwable nested exception and invoking the corresponding Exception constructor. This means that we can always create and throw them in a single line of code, as follows:

 catch (RootCauseException ex) {
     throw new MyJava14Exception("Detailed message", ex);
 }

If an exception does not provide such a constructor (for example, because it was written for a pre Java 1.4 environment), we are guaranteed to be able to set a nested exception using a little more code, as follows:

 catch (RootCauseException ex) {
     MyJava13Exception mex = new MyJava13Exception("Detailed message");
     mex.initCause(ex);
     throw mex;
 }

When using nested exception solutions such as NestedRuntimeException, discussed above, follow their own conventions, rather than Java 1.4 conventions, to ensure correct behavior.

Exceptions in J2EE

There are a few special issues to consider in J2EE apps. Distributed apps will encounter many checked exceptions. This is partly because of the conscious decision made at Sun in the early days of Java to make remote calling explicit. Since all RMI calls – including EJB remote interface invocations – throw java.rmi.RemoteException, local-remote transparency is impossible. This decision was probably justified, as local-remote transparency is dangerous, especially to performance. However, it means that we often have to write code to deal with checked exceptions that amount to "something went horribly wrong, and it's probably not worth retrying".

It's important to protect interface code – such as that in servlets and JSP pages – from J2EE "system-level" exceptions such as java.rmi.RemoteException. Many developers fail to recognize this issue, with unfortunate consequences, such as creating unnecessary dependency between architectural tiers and preventing any chance of retrying operations that might have been retried had they been caught at a low enough level. Amongst developers who do recognize the problem, I've seen two approaches:

- Catching J2EE system exceptions such as java.rmi.RemoteException in UI-tier code, and handling them there.
- Catching such exceptions in a client-side facade fronting the EJB or remote tier, and rethrowing them as exceptions (checked or unchecked) meaningful to client code.
I believe that the second of these approaches is superior. It provides a clean separation of architectural tiers, allows a choice of checked or unchecked exceptions and does not allow the use of EJB and remote invocation to intrude too deeply into app design. We'll discuss this approach in more detail in .
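As a sketch of this second approach (all names here are hypothetical), a client-side delegate implements a business interface that is free of java.rmi.RemoteException, and translates remote failures into an unchecked exception, keeping the UI tier ignorant of remote invocation:

```java
import java.rmi.RemoteException;

// Hypothetical unchecked exception used by the facade to signal fatal
// remote failures; UI code need neither catch nor declare it.
class RemoteAccessException extends RuntimeException {
    public RemoteAccessException(String msg) {
        super(msg);
    }
}

// Hypothetical business interface, free of J2EE system exceptions.
interface OrderService {
    double orderTotal(String orderId);
}

// The delegate implements the business interface, hiding the remote tier.
public class OrderServiceDelegate implements OrderService {

    public double orderTotal(String orderId) {
        try {
            return remoteOrderTotal(orderId);
        } catch (RemoteException ex) {
            // The delegate could retry the call here before giving up.
            throw new RemoteAccessException("Remote failure invoking orderTotal: "
                + ex.getMessage());
        }
    }

    // Stands in for an invocation on an EJB remote interface.
    protected double remoteOrderTotal(String orderId) throws RemoteException {
        return 42.0;
    }
}
```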

Making Exceptions Informative

It's vital to ensure that exceptions are useful both to code and to the humans developing, maintaining, and administering an app.

Consider the case of exceptions of the same class reflecting different problems, but distinguished only by their message strings. These are unhelpful to Java code catching them. Exception message strings are of limited value: they may help explain problems when they appear in log files, but they won't enable the calling code to react appropriately, if different reactions are required, and they can't be relied on for display to users. When different problems may require different actions, the corresponding exceptions should be modeled as separate subclasses of a common superclass (which may sometimes be abstract). Calling code is then free to catch exceptions at the relevant level of detail.

The second problem – display to users – should be handled by including error codes in exceptions. Error codes may be numeric or strings (string codes have the advantage that they can make sense to readers), and can drive runtime lookup of display messages that are held outside the exception. Unless we are able to use a common base class for all exceptions in an app – something that isn't possible if we mix checked and unchecked exceptions – we will need to make our exceptions implement an ErrorCoded or similarly named interface that defines a method such as this:

 String getErrorCode();

The com.interface21.core.ErrorCoded interface from the infrastructure code discussed in includes this single method. With this approach, we are able to distinguish between error messages intended for end users and those intended for developers. Messages inside exceptions (returned by the getMessage() method) should be used for logging, and targeted to developers.


Separate error messages for display to users from exception code, by including an error code with exceptions. When it's time to display the exception, the code can be resolved: for example, from a properties file.
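A minimal sketch of this technique (the ErrorCoded interface is as described in the text; the exception class, error code, and in-memory message map are hypothetical stand-ins for a properties file):

```java
import java.util.HashMap;
import java.util.Map;

interface ErrorCoded {
    String getErrorCode();
}

// Hypothetical app exception carrying an error code rather than display text.
class SpendingLimitException extends Exception implements ErrorCoded {
    public SpendingLimitException(String internalMessage) {
        super(internalMessage); // for logs, targeted at developers
    }
    public String getErrorCode() {
        return "invoice.spendingLimitExceeded";
    }
}

public class ErrorMessageResolver {

    private final Map messages = new HashMap();

    public ErrorMessageResolver() {
        // In a real app these entries would be loaded from a properties
        // file, possibly one per locale.
        messages.put("invoice.spendingLimitExceeded",
            "This invoice exceeds your spending limit.");
    }

    // Resolve a user-displayable message at the point of display.
    public String resolve(ErrorCoded ex) {
        String msg = (String) messages.get(ex.getErrorCode());
        return (msg != null) ? msg : "An internal error occurred.";
    }
}
```

The developer-oriented message travels with the exception for logging, while the user-oriented message is looked up by code only when the exception must be displayed.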

If the exception isn't for a user, but for an administrator, it's less likely that we'll need to worry about formatting messages or internationalization (internationalization might, however, still be an issue in some situations: for example, if we are developing a framework that may be used by non-English speaking developers).

As we've already discussed, there's little point in catching an exception and throwing a new exception unless we add value. However, occasionally the need to produce the best possible error message is a good reason for catching and wrapping. For example, the following error message contains little useful information:

WebappContext failed to load config

Exception messages like this typically indicate developer laziness in writing messages or (worse still) use of a single catch block to catch a wide variety of exceptions (meaning that the code that caught the exception had as little idea what went wrong as the unfortunate reader of the message). It's better to include details about the operation that failed, as well as preserving the stack trace. For example, the following message is an improvement:

WebappContext failed to load config: cannot instantiate class

Better still is a message that gives precise information about what the process was trying to do when it failed, and information about what might be done to correct the problem:

WebappContext failed to load config from file '/WEB-INF/appContext.xml': cannot instantiate class '' attempting to load bean element with name 'too': check that this class has a public no arg constructor


Include as much context information as possible with exceptions. If an exception probably results from a coding error, try to include information on how to rectify the problem.

Using Reflection

The Java Reflection API enables Java code to discover information about loaded classes at runtime, and to instantiate and manipulate objects. Many of the coding techniques discussed in this chapter depend on reflection: this section considers some of the pros and cons of reflection.


Many design patterns can best be expressed by use of reflection. For example, there's no need to hard-code class names into a Factory if classes are JavaBeans, and can be instantiated and configured via reflection. Only the names of classes – for example, different implementations of an interface – need be supplied in configuration data.

Java developers seem divided about the use of reflection. This is a pity, as reflection is an important part of the core API, and forms the basis for many technologies, such as JavaBeans, object serialization (crucial to J2EE) and JSP. Many J2EE servers, such as JBoss and Orion, use reflection (via Java 1.3 dynamic proxies) to simplify J2EE deployment by eliminating the need for container-generated stubs and skeletons. This means that every call to an EJB is likely to involve reflection, whether we're aware of it or not. Reflection is a powerful tool for developing generic solutions.


Used appropriately, reflection can enable us to write less code. Code using reflection can also minimize maintenance by keeping itself up to date. As an example, consider the implementation of object serialization in the core Java libraries. Since it uses reflection, there's no need to update serialization and deserialization code when fields are added to or removed from an object. At a small cost to efficiency, this greatly reduces the workload on developers using serialization, and eliminates many coding errors.

Two misconceptions are central to reservations about reflection:

- Reflection is too slow to use in production code.
- Reflection makes code too complex, harming readability and maintainability.
Each of these misconceptions is based on a grain of truth, but amounts to a dangerous oversimplification. Let's look at each in turn.

Code that uses reflection is usually slower than code that uses normal Java object creation and method calls. However, this seldom matters in practice, and the gap is narrowing with each generation of JVMs. The performance difference is slight, and the overhead of reflection is usually far outweighed by the time taken by the operations the invoked methods actually perform. Most of the best uses of reflection have no performance implications: for example, it's largely immaterial how long it takes to instantiate and configure objects on system startup. As we'll see in , most optimization is unnecessary, and unnecessary optimization that prevents us from choosing superior designs is downright harmful. Similarly, the overhead added by the use of reflection to populate a JavaBean when handling a web request (the approach taken by Struts and most other web app frameworks) won't be detectable. Regardless of whether performance matters in a particular situation, reflection has far from the disastrous impact on performance that many developers imagine, as we'll see in . In fact, in some cases, such as its use to replace a lengthy chain of if/else statements, reflection will actually improve performance.

The Reflection API is relatively difficult to use directly. Exception handling, especially, can be cumbersome. However, similar reservations apply to many important Java APIs, such as JDBC. The solution is to avoid using those APIs directly, by using a layer of helper classes at the appropriate level of abstraction, not to avoid the functionality they exist to provide. If we use reflection via an appropriate abstraction layer, it will actually simplify app code.


Used appropriately, reflection won't degrade performance. Using reflection appropriately should actually improve code maintainability. Direct use of reflection should be limited to infrastructure classes, not scattered through app objects.

Reflection Idioms

The following idioms illustrate appropriate use of reflection.

Reflection and Switches

Chains of if/else statements and large switch statements should alarm any developer committed to OO principles. Reflection provides two good ways of avoiding them:

- Using reflection to instantiate the appropriate class by name, rather than switching on a type code (discussed further under the Factory design pattern below).
- Using reflection to invoke the appropriate method by name, rather than selecting it with a chain of if/else statements.
Let's look at the second approach in practice. Consider the following code fragment from an implementation of the java.beans.VetoableChangeListener interface. A PropertyChangeEvent received contains the name of the property in question. The obvious implementation will perform a chain of if/else statements to identify the validation method to invoke within the class (the vetoableChange() method will become huge if all validation rules are included inline):

 public void vetoableChange(PropertyChangeEvent e) throws PropertyVetoException {
     if (e.getPropertyName().equals("email")) {
         String email = (String) e.getNewValue();
         validateEmail(email, e);
     } else if (e.getPropertyName().equals("age")) {
         int age = ((Integer) e.getNewValue()).intValue();
         validateAge(age, e);
     } else if (e.getPropertyName().equals("surname")) {
         String surname = (String) e.getNewValue();
         validateSurname(surname, e);
     } else if (e.getPropertyName().equals("forename")) {
         String forename = (String) e.getNewValue();
         validateForename(forename, e);
     }
 }

At four lines per bean property, adding another 10 bean properties will add 40 lines of code to this method. This if/else chain will need updating every time we add or remove bean properties. Consider the following alternative. The individual validator now extends AbstractVetoableChangeListener, an abstract superclass that provides a final implementation of the vetoableChange() method. The AbstractVetoableChangeListener's constructor examines methods added by subclasses that fit a validation signature:

 void validate<bean property name>(<new value>, PropertyChangeEvent)
 throws PropertyVetoException

The constructor is the most complex piece of code. It looks at all methods declared in the class that fit the validation signature. When it finds a valid validator method, it places it in a hash table, validationMethodHash, keyed by the property name, as indicated by the name of the validator method:

 public AbstractVetoableChangeListener() throws SecurityException {
     Method[] methods = getClass().getMethods();
     for (int i = 0; i < methods.length; i++) {
         if (methods[i].getName().startsWith(VALIDATE_METHOD_PREFIX) &&
                 methods[i].getParameterTypes().length == 2 &&
                 PropertyChangeEvent.class.isAssignableFrom(
                     methods[i].getParameterTypes()[1])) {
             // We've found a potential validator
             Class[] exceptions = methods[i].getExceptionTypes();
             // We don't care about the return type, but we must ensure that
             // the method throws only one checked exception, PropertyVetoException
             if (exceptions.length == 1 &&
                     PropertyVetoException.class.isAssignableFrom(exceptions[0])) {
                 // We have a valid validator method
                 // Ensure it's accessible (for example, it might be a method on an
                 // inner class)
                 methods[i].setAccessible(true);
                 String propertyName = Introspector.decapitalize(methods[i].getName().
                     substring(VALIDATE_METHOD_PREFIX.length()));
                 validationMethodHash.put(propertyName, methods[i]);
                 System.out.println(methods[i] + " is validator for property " +
                     propertyName);
             }
         }
     }
 }

The implementation of vetoableChange() does a hash table lookup for the relevant validator method for each property changed, and invokes it if one is found:

 public final void vetoableChange(PropertyChangeEvent e)
         throws PropertyVetoException {
     Method m = (Method) validationMethodHash.get(e.getPropertyName());
     if (m != null) {
         try {
             Object val = e.getNewValue();
             m.invoke(this, new Object[] { val, e });
         } catch (IllegalAccessException ex) {
             System.out.println("WARNING: can't validate. " +
                 "Validation method '" + m + "' isn't accessible");
         } catch (InvocationTargetException ex) {
             // We don't need to catch runtime exceptions
             if (ex.getTargetException() instanceof RuntimeException)
                 throw (RuntimeException) ex.getTargetException();
             // Must be a PropertyVetoException if it's a checked exception
             PropertyVetoException pex =
                 (PropertyVetoException) ex.getTargetException();
             throw pex;
         }
     }
 }

For a complete listing of this class, or to use it in practice, see the com.interface21.bean.AbstractVetoableChangeListener class under the /framework/src directory of the download accompanying this tutorial.

Now subclasses merely need to implement validation methods with the same signatures as in the first example. The difference is that a subclass's logic will automatically be updated when a validation method is added or removed. Note also that we've used reflection to automatically convert parameter types to validation methods. Clearly it's a coding error if, say, the validateAge() method expects a String rather than an int, but this will be indicated in a stack trace at runtime. Obvious bugs pose little danger; most serious problems result from subtle bugs that don't occur every time the app runs and don't result in clear stack traces.

Interestingly, the reflective approach will actually be faster on average than the if/else approach if there are many bean properties. String comparisons are slow, whereas the reflective approach uses a single hash table lookup to find the validation method to call.

Certainly, the AbstractVetoableChangeListener class is more conceptually complex than the if/else block. However, this is framework code. It will be debugged once, and verified by a comprehensive set of test cases. What's important is that the app code – individual validator classes – is much simpler because of the use of reflection. Furthermore, the AbstractVetoableChangeListener class is still easy to read for anyone with a sound grasp of Java reflection. The whole of the version of this class I use – including full Javadoc, implementation comments, and logging statements – amounts to a modest 136 lines.
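The idiom can be seen end to end in the following compact, self-contained variant (a simplified stand-in for AbstractVetoableChangeListener; the CustomerValidator subclass and its age rule are hypothetical):

```java
import java.beans.Introspector;
import java.beans.PropertyChangeEvent;
import java.beans.PropertyVetoException;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for AbstractVetoableChangeListener: validateXxx
// methods declared by subclasses are found reflectively at construction
// time and dispatched by property name.
abstract class ReflectiveValidator {

    private final Map validators = new HashMap();

    protected ReflectiveValidator() {
        Method[] methods = getClass().getMethods();
        for (int i = 0; i < methods.length; i++) {
            if (methods[i].getName().startsWith("validate")
                    && methods[i].getParameterTypes().length == 2) {
                String property = Introspector.decapitalize(
                    methods[i].getName().substring("validate".length()));
                validators.put(property, methods[i]);
            }
        }
    }

    public final void vetoableChange(PropertyChangeEvent e)
            throws PropertyVetoException {
        Method m = (Method) validators.get(e.getPropertyName());
        if (m == null) {
            return; // no validator declared for this property
        }
        try {
            m.invoke(this, new Object[] { e.getNewValue(), e });
        } catch (IllegalAccessException ex) {
            throw new RuntimeException("Validator isn't accessible: " + m);
        } catch (InvocationTargetException ex) {
            if (ex.getTargetException() instanceof PropertyVetoException) {
                throw (PropertyVetoException) ex.getTargetException();
            }
            throw (RuntimeException) ex.getTargetException();
        }
    }
}

// A subclass needs only to declare validation methods.
public class CustomerValidator extends ReflectiveValidator {
    public void validateAge(Object newValue, PropertyChangeEvent e)
            throws PropertyVetoException {
        int age = ((Integer) newValue).intValue();
        if (age < 0 || age > 120) {
            throw new PropertyVetoException("Invalid age: " + age, e);
        }
    }
}
```

Adding a validateSurname() method to CustomerValidator would enable surname validation with no change to any dispatch logic.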


Reflection is a core feature of Java, and any serious J2EE developer should have a strong grasp of the Reflection API. Although reflective idioms may seem puzzling at first, they're as much a part of the language's design as more familiar constructs, and it's vital to be able to read and understand them easily.

Reflection and the Factory Design Pattern

I seldom use the Factory design pattern in its simplest form, which requires all classes created by the factory to be known to the implementation of the factory. This severely limits extensibility: the factory object cannot create objects (even objects that implement a known interface) unless it knows their concrete class. The following method (a simplified version of the "bean factory" approach discussed in ) shows a more flexible approach, which is extensible without any code changes. It's based on using reflection to instantiate classes by name. The class names can come from any configuration source:

 public Object getObject(String classname, Class requiredType)
         throws FactoryException {
     try {
         Class clazz = Class.forName(classname);
         Object o = clazz.newInstance();
         if (!requiredType.isAssignableFrom(clazz))
             throw new FactoryException("Class '" + classname +
                 "' not of required type " + requiredType);
         // Configure the object...
         return o;
     } catch (ClassNotFoundException ex) {
         throw new FactoryException("Couldn't load class '" + classname + "'", ex);
     } catch (IllegalAccessException ex) {
         throw new FactoryException("Couldn't construct class '" + classname +
             "': is the no arg constructor public?", ex);
     } catch (InstantiationException ex) {
         throw new FactoryException("Couldn't construct class '" + classname +
             "': does it have a no arg constructor?", ex);
     }
 }

This method can be invoked like this (assuming it's exposed on a factory object, and given a hypothetical implementation class name):

 MyInterface mo = (MyInterface)
     factory.getObject("com.mycompany.MyImplementation", MyInterface.class);

Like the other reflection example, this approach conceals complexity in a framework class. It is true that this code cannot be guaranteed to work: the class name may be erroneous, or the class may not have a no arg constructor, preventing it being instantiated. However, such failures will be readily apparent at runtime, especially as the getObject() method produces good error messages (when using reflection to implement low-level operations, be very careful to generate helpful error messages). Deferring operations till runtime does involve trade-offs (such as the need to cast), but the benefits may be substantial.


Such use of reflection can best be combined with the use of JavaBeans. If the objects to be instantiated expose JavaBean properties, it's easy to hold initialization information outside Java code.

This is a very powerful idiom. Performance is unaffected, as it is usually used only at app startup; the difference between loading and initializing, say, ten objects by reflection and creating the same objects using the new operator and initializing them directly is undetectable. On the other hand, the benefit in terms of truly flexible design may be enormous. Once we have the objects, we invoke them without further use of reflection.

There is a particularly strong synergy between using reflection to load classes by name and set their properties outside Java code, and the J2EE philosophy of declarative configuration. For example, servlets, filters, and web app listeners are instantiated from fully qualified class names specified in the web.xml deployment descriptor. Although they are not bean properties, ServletConfig initialization parameters are set in XML fragments in the same deployment descriptor, allowing the behavior of servlets at runtime to be altered without the need to modify their code.


Using reflection is one of the best ways to parameterize Java code. Using reflection to choose, instantiate, and configure objects dynamically allows us to exploit the full power of loose coupling using interfaces. Such use of reflection is consistent with the J2EE philosophy of declarative configuration.
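The combination of by-name instantiation and JavaBean property configuration can be sketched as follows (BeanConfigurer and ExampleBean are hypothetical; in a real app the class name, property name, and value would come from a configuration file):

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Hypothetical bean to be instantiated and configured by name.
class ExampleBean {
    private String name;
    public void setName(String name) { this.name = name; }
    public String getName() { return name; }
}

public class BeanConfigurer {

    // Instantiate a class by name and set a single bean property
    // reflectively, as configuration data might drive it.
    public static Object createAndConfigure(String classname,
            String property, Object value) throws Exception {
        Class clazz = Class.forName(classname);
        Object bean = clazz.newInstance();
        BeanInfo info = Introspector.getBeanInfo(clazz);
        PropertyDescriptor[] pds = info.getPropertyDescriptors();
        for (int i = 0; i < pds.length; i++) {
            if (pds[i].getName().equals(property)) {
                pds[i].getWriteMethod().invoke(bean, new Object[] { value });
                return bean;
            }
        }
        throw new IllegalArgumentException("No such property: " + property);
    }
}
```

Nothing in BeanConfigurer depends on ExampleBean: any class with a no arg constructor and a matching property can be configured the same way.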

Java 1.3 Dynamic Proxies

Java 1.3 introduced dynamic proxies: special classes that can implement interfaces at runtime, without declaring that they implement them at compile time. Dynamic proxies can't be used to proxy for a class (rather than an interface), but this isn't a problem if we use interface-based design.

Dynamic proxies are used internally by many app servers, typically to avoid the need to generate and compile stubs and skeletons. They are usually used to intercept calls to a delegate that actually implements the interface in question. Such interception can be useful to handle the acquisition and release of resources, add additional logging, and gather performance information (especially about remote calls in a distributed J2EE app). There will, of course, be some performance overhead, but its impact will vary depending on what the delegate actually does.

One good use of dynamic proxies is to abstract the complexity of invoking EJBs. We'll see an example of this in . The com.interface21.beans.DynamicProxy class included in the infrastructure code with the sample app is a generic dynamic proxy that fronts a real implementation of the interface in question, designed to be subclassed by dynamic proxies that add custom behavior.

Dynamic proxies can also be used to implement Aspect Oriented Programming (AOP) concepts in standard Java. AOP is an emerging paradigm built on identifying crosscutting aspects of a system: a separation of concerns. For example, the addition of logging capabilities just mentioned is a crosscut that addresses the logging concern in a central place. It remains to be seen whether AOP will generate anything like the interest of OOP, but it's possible that it will at least grow to complement OOP. For more information on AOP, see the following sites:
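A minimal sketch of the interception idiom (the Greeter interface and the call log are hypothetical; a real interceptor might time the call or acquire and release a resource instead):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class LoggingProxyDemo {

    // Hypothetical business interface.
    public interface Greeter {
        String greet(String name);
    }

    // The real implementation the proxy fronts.
    public static class GreeterImpl implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public static final List calls = new ArrayList();

    // Returns a dynamic proxy implementing Greeter that records each
    // method call before delegating to the target implementation.
    public static Greeter loggingProxy(final Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class[] { Greeter.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args)
                        throws Throwable {
                    calls.add(method.getName()); // interception point
                    try {
                        return method.invoke(target, args);
                    } catch (InvocationTargetException ex) {
                        // Rethrow the target's own exception, not the wrapper
                        throw ex.getTargetException();
                    }
                }
            });
    }
}
```

Callers hold only a Greeter reference; they can't tell (and needn't care) whether they're talking to GreeterImpl directly or through the proxy.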


See the reflection guide with your JDK for detailed information about dynamic proxies.


A warning: I feel dangerously good after I've made a clever use of reflection. Excessive cleverness reduces maintainability. Although I'm a firm believer that reflection, used appropriately, is beneficial, don't use reflection if a simpler approach might work equally well.

Using JavaBeans to Achieve Flexibility

Where possible, app objects – except very fine-grained objects – should be JavaBeans. This maximizes configuration flexibility (as we've seen above), as beans allow easy property discovery and manipulation at runtime. There's little downside to using JavaBeans, as there's no need to implement a special interface to make an object a bean. When using beans, consider whether the following standard beans machinery can be used to implement functionality:


Designing objects to be JavaBeans has many benefits. Most importantly, it enables objects to be instantiated and configured easily using configuration data outside Java code.


Thanks to Gary Watson, my colleague at, for convincing me of the many merits of JavaBeans.

Avoid a Proliferation of Singletons by Using an app Registry

The Singleton design pattern is widely useful, but the obvious implementation can be dangerous. The obvious way to implement a singleton in Java is to use a static instance variable containing the singleton instance, a public static method to return the singleton instance, and a private constructor to prevent instantiation:

 public class MySingleton {

     /** Singleton instance */
     private static MySingleton instance;

     // Static block to instantiate the singleton in a threadsafe way
     static {
         instance = new MySingleton();
     } // static initializer

     /**
      * Enforces singleton design pattern. Returns the instance of this object.
      * @return the singleton instance of this class
      */
     public static MySingleton getInstance() {
         return instance;
     }

     /** Private constructor to enforce singleton design pattern */
     private MySingleton() {
     }

     // Business methods on instance
 }

Note the use of a static initializer to initialize the singleton instance when the class is loaded. This prevents the race conditions that are possible if the singleton is lazily instantiated in the getInstance() method when the instance is null (a common cause of errors). It's also possible for the static initializer to catch any exceptions thrown by the singleton's constructor, which can be rethrown in the getInstance() method. However, this common idiom leads to several problems:

- Callers are tied to the MySingleton concrete class: the singleton's type can't be defined in an interface, so its implementation can't be varied between deployments or replaced for testing.
- Each singleton must handle its own configuration, so configuration-reading code tends to be duplicated across singletons.
A slightly more sophisticated approach is to use a factory, which may use different implementation classes for the singleton. However, this only solves some of these problems.


I don't much like static variables in general. They break OO by introducing dependency on a specific class. The usual implementation of the Singleton design pattern exhibits this problem.

In my view, it's a much better solution to have one object that can be used to locate other objects. I call this an app context object, although I've also seen it termed a "registry" or "app toolbox". Any object in the app needs only to get a reference to the single instance of the context object to retrieve the single instances of any app object. Objects are normally retrieved by name. This context object doesn't even need to be a singleton. For example, it's possible to use the Servlet API to place the context in a web app's ServletContext, or we can bind the context object in JNDI and access it using standard app server functionality. Such approaches don't require code changes to the context object itself, just a little bootstrap code. The context object itself will be generic framework code, reusable between multiple apps. The advantages of this approach include:

- App code depends only on the generic context object and the relevant interfaces, never on the concrete class of each "singleton".
- Configuration can be handled centrally by the context, so managed objects need only expose bean properties.
- The context is generic, reusable framework code that works outside an app server, simplifying testing.
The following code fragments illustrate the use of this approach. The context object itself will be responsible for loading configuration. The context object may register itself (for example, with the ServletContext of a web app, or JNDI), or a separate bootstrap class may handle this. Objects needing to use "singletons" must first look up the context object. For example:

 appContext app = (appContext )

The appContext instance can be used to obtain any "singleton":

 MySingleton mySingleton = (MySingleton )

In we'll look at how to implement this superior alternative to the Singleton design pattern. Note that it isn't limited to managing "singletons": this is a valuable piece of infrastructure that can be used in many ways.


Why not use JNDI – a standard J2EE service – instead of using additional infrastructure to achieve this result? Each "singleton" could be bound to the JNDI context, allowing other components running in the app server to look them up.

Using JNDI adds complexity (JNDI lookups are verbose) and is significantly less powerful than the app context mechanism described above. For example, each "singleton" would be left on its own to handle its configuration, as JNDI offers only a lookup mechanism, not a means of externalizing configuration. Another serious objection is that this approach would be wholly dependent on app server services, making testing outside an app server unnecessarily difficult. Finally, some kind of bootstrap service would be required to bind the objects into JNDI, meaning that we'd probably need to implement most of the code in the app context approach anyway. Using an app context, we can choose to bind individual objects with JNDI if it proves useful.


Avoid a proliferation of singletons, each with a static getInstance() method. Using a factory to return each singleton is better, but still inflexible. Instead, use a single "app context" object or registry that returns a single instance of each class. The generic app context implementation will normally (but not necessarily) be based on the use of reflection, and should take care of configuring the object instances it manages. This has the advantage that app objects need only expose bean properties for configuration, and never need to look up configuration sources such as properties files.
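A minimal sketch of such a registry (the AppContext class name and its methods are hypothetical; a real implementation would populate itself from configuration, instantiating beans reflectively by class name as described above):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical app context / registry. A real implementation would
// load object definitions from configuration at startup, instantiate
// each class by name, and set its bean properties reflectively.
public class AppContext {

    private final Map objects = new HashMap();

    // In a real implementation this would be driven by configuration,
    // not called by app code.
    public void addObject(String name, Object object) {
        objects.put(name, object);
    }

    // App code retrieves "singletons" by name, depending only on this
    // context class and the relevant business interface.
    public Object getObject(String name) {
        Object o = objects.get(name);
        if (o == null) {
            throw new IllegalArgumentException("No object named '" + name + "'");
        }
        return o;
    }
}
```

Because callers name only the object and cast to an interface, the implementation class of each managed object can change without any change to calling code.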


Refactoring

Refactoring, according to Martin Fowler in Refactoring: Improving the Design of Existing Code from Addison-Wesley (), is "the process of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure. It's a disciplined way to clean up code that minimizes the chances of introducing bugs". See for more information and resources on refactoring. Most of the refactoring techniques Fowler describes are second nature to good developers. However, the discussion is useful, and Fowler's naming is being widely adopted (for example, the Eclipse IDE uses these names on menus).


Be prepared to refactor to eliminate code duplication and ensure that a system is well implemented at each point in time.

It's helpful to use an IDE that supports refactoring. Eclipse is particularly good in this respect. I believe that refactoring can be extended beyond functional code. For example, we should continually seek to improve in the following areas: