
Tools for Testing Performance and Throughput

To test app performance and throughput, at whatever stage in the development process, we'll need to use appropriate tools and have a clear test plan. It's vital that tests are repeatable and that test results can be filed for future reference. Usually we'll begin with load testing, which will also indicate performance in the form of response times for each concurrent test. However, we may also need to profile individual requests, so as to be able to optimize or eliminate slow operations.

Preparing to Benchmark

Benchmarking is a form of experimentation, so it's vital to adopt a good experimental method to ensure that benchmarks are as accurate as possible and are repeatable. For example, the app must be configured as it will be in production.

It's also vital to ensure that there are no confounding factors that may affect the running of the tests.

Important 

Remember that benchmarking isn't a precise science. Strive to eliminate confounding factors, but remember not to read too much into any particular number. In my experience, it's common to see variations of 20-30% between successive test runs on J2EE apps, especially where load testing is involved, because of the sheer number of variables involved; the same variables will also apply in production.

Web Test Tools

One of the easiest ways to establish whether a J2EE web app performs satisfactorily and delivers sufficient throughput under load is to load-test its web interface. Since the web interface will be the user's experience of the app, non-functional requirements should provide a clear definition of the performance and concurrency required.

Microsoft Web Application Stress Tool

There are many tools for testing the performance of web apps. My preferred tool is Microsoft's free Web Application Stress (WAS) Tool (http://webtool.rte.microsoft.com/). For a platform-neutral, Java-based alternative, consider Apache JMeter (available at http://jakarta.apache.org/jmeter/index.html) or the Grinder (discussed below); however, these tools are less intuitive and harder to set up. Since it's generally best to run load-testing software on a separate machine to the app server, there is usually no problem in finding a Windows machine to run the Microsoft tool.

Configuring WAS is very easy: it simply involves creating one or more scripts, which can also be "recorded" using Internet Explorer. Scripts consist of one or more definitions of app URLs to be load-tested, including GET or POST data if necessary, and WAS can use a range or set of parameter values for each request. The following screenshot illustrates configuring WAS to request the "Display Show" page in the sample app. Remember to change the port from the default of 80 if necessary for each URL; in this example, it is the JBoss/Jetty default of 8080:

[Screenshot: WAS script configured to request the "Display Show" page]

Each script has global settings for the number of concurrent threads to use, the delays between requests issued by each thread, and options such as whether to follow redirects and whether to simulate user access via a slow modem link. It's also possible to configure cookie and session behavior. Each script is configured via the following screen:

[Screenshot: WAS script settings screen]

Once a script has been run, reports can be viewed via the Reports option on the View menu. Reports are stored in a database so that they can be viewed at any time. Reports include the number of requests per second, the amount of data transmitted and received, and the average wait to receive the first and last byte of the response:

[Screenshot: WAS report]

Using the Web Application Stress Tool or any comparable product, we can quickly establish the performance and throughput of a whole web app, indicating where further, more detailed analysis may be needed and whether any performance tuning is required at all.

Non-Web Testing Tools

Sometimes testing through the web interface is all that's required. Performance and load testing isn't like unit testing; there's no need to have performance tests for every class. If we can easily set up a performance test of an entire system and are satisfied with the results, there's no need to spend further time writing performance or scalability tests.

However, not all J2EE apps have a web interface (and even in web apps, we may need a more detailed breakdown of the architectural layers where an app spends most of its time). This means that we need the ability to load-test and performance-test individual Java classes, which in turn may test app resources such as databases.

There are many open source tools available for such testing, such as the Grinder (http://sf.net/projects/grinder), an extensible load tester first developed for a Wrox tutorial on WebLogic, and Apache JMeter. Personally, I find most of these tools unnecessarily complex. For example, it's not easy to write test cases for the Grinder, which also requires multicast to be enabled to support communication between load-test processes and its console. Unlike JUnit, there is a learning curve involved.

Instead, I use the following simple framework, which I originally developed for a client a couple of years ago and which I've found to meet nearly all requirements with a minimum of effort in writing load tests. The code is included with the sample app download, under the /framework/test directory. Unlike JMeter or the Grinder, it doesn't provide a GUI console. I did write one for an early version of the tool, but found it was less useful than file reports, especially as the tests were often run on a server without a display.

Like most load-testing tools, the test framework in the com.interface21.load package, described below, is based on the concepts of a test suite that coordinates a number of concurrent test threads, each of which executes a configurable number of test passes, optionally sharing a test fixture that provides the objects and data under test.

Periodic reports are made to the console, and a report can be written to a file after the completion of a test run. The following UML class diagram illustrates the framework classes involved, and how an app-specific test thread class (circled) can extend the AbstractTest convenience class. The framework supplies a test suite implementation, which provides a standard way to coordinate all the app-specific tests:

[UML class diagram: framework classes in com.interface21.load, with an app-specific test thread class extending AbstractTest]
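
The framework source isn't reproduced here, but to make the contract concrete, the following is a rough sketch of what the AbstractTest convenience class might look like, inferred from the usage described in this section. Everything except the setFixture() and runPass() signatures is an assumption; the real class also cooperates with the test suite for timing and error reporting:

 // A sketch only, not the actual framework source
 public abstract class AbstractTest implements Runnable {

     private int passes;
     private long maxPause;

     // Bean properties: assumed to be populated by the test suite from
     // the suite.passes and suite.maxPause definitions shown later
     public void setPasses(int passes) {
         this.passes = passes;
     }

     public void setMaxPause(long maxPause) {
         this.maxPause = maxPause;
     }

     // Override to receive the shared fixture; no-op by default
     public void setFixture(Object fixture) {
     }

     // Final Runnable implementation: subclasses implement runPass() only
     public final void run() {
         for (int i = 0; i < passes; i++) {
             try {
                 runPass(i);
             }
             catch (Exception ex) {
                 // The real framework logs this as an error in the report;
                 // the thread continues with further test passes
             }
             pause();
         }
     }

     // Random pause between test cases, up to maxPause milliseconds
     private void pause() {
         try {
             Thread.sleep((long) (Math.random() * maxPause));
         }
         catch (InterruptedException ex) {
             // ignore and continue
         }
     }

     // Implement to run a single test case; the pass index is supplied
     protected abstract void runPass(int i) throws Exception;
 }
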

The only code required to implement a load test is an extension of the framework AbstractTest class, as shown below. This involves implementing at most two methods, as the AbstractTest class provides a final implementation of the java.lang.Runnable interface:

 import com.interface21.load.AbstractTest;

 public class MyTestThread extends AbstractTest {

     private MyFixture fixture;


The framework calls the following method on subclasses of AbstractTest to make the shared test fixture (the app object to test) available to each thread. Tests that don't require a fixture don't need to override this method:

     public void setFixture(Object fixture) {
         this.fixture = (MyFixture) fixture;
     }


The following abstract method must be implemented to run each test case. The index of the test pass is passed as an argument in case it's needed:

     protected void runPass(int i) throws Exception {
         // do something with fixture
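         // For example (hypothetical fixture method, for illustration):
         //   if (!fixture.checkAvailability()) {
         //       throw new Exception("Availability check failed");
         //   }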
     }
 }


Typically the runPass() method will be implemented to select random test data made available by the fixture and use it to invoke one or more methods on the class being load-tested. As with JUnit test cases, we need only catch exceptions resulting from normal execution scenarios: uncaught exceptions will be logged as errors by the test suite and included in the final report (the test thread will continue to run further tests). Exceptions can also be thrown to indicate failures if assertions are not satisfied.

This tool uses the bean-based approach to configuration used consistently in the app framework discussed in this tutorial. Each test uses its own properties file, which enables easy parameterization. This file is read by the PropertiesTestSuiteLoader class, which takes the filename as an argument and creates and initializes a test suite object of type BeanFactoryTestSuite from the bean definitions contained in the properties file. The following definitions configure the test suite, including its reporting format, how often it reports to the console during test runs, and where it writes its report files. If the reportFile bean property isn't set, there's no file output:

 suite.class=com.interface21.load.BeanFactoryTestSuite
 suite.name=Availability check
 suite.reportIntervalSeconds=10
 suite.longReports=false
 suite.doubleFormat=###.#
 suite.reportFile=c:\\reports\\results1.txt


The following keys control how many threads are run, how many passes or test cases each thread runs, and the maximum delay (in milliseconds) between test cases in each test thread:

 suite.threads=50
 suite.passes=40
 suite.maxPause=23


The following properties show how an app-specific test fixture can be made available to the test suite, and configured via its JavaBean properties. The test suite will invoke the setFixture() method on each test thread to enable all test thread instances to share this fixture:

 suite.fixture(ref)=fixture
 fixture.class=com.interface21.load.AvailabilityFixture
 fixture.timeout=10
 fixture.minDelay=60
 fixture.maxDelay=120
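
A fixture like this is just a JavaBean: the bean factory sets each property listed above via the corresponding setter. Based on the property names shown, the relevant part of such a fixture class might look like the following sketch (the actual AvailabilityFixture source may differ):

 public class AvailabilityFixture {

     private int timeout;
     private int minDelay;
     private int maxDelay;

     // Each setter corresponds to a key in the properties file:
     // for example, fixture.timeout=10 results in setTimeout(10)
     public void setTimeout(int timeout) {
         this.timeout = timeout;
     }

     public void setMinDelay(int minDelay) {
         this.minDelay = minDelay;
     }

     public void setMaxDelay(int maxDelay) {
         this.maxDelay = maxDelay;
     }
 }
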


Finally, we must include bean definitions for one or more test threads. Each of these will be independent at run time, so this bean definition must not be a singleton; we override the bean factory's default behavior in the second of the following definitions:

 availabilityTest.class=com.interface21.load.AvailabilityCheckTest
 availabilityTest.(singleton)=false



The default behavior is for each thread to take its number of passes and maximum pause value from that of the test suite, although this can be overridden for each test thread. It's also possible to run several different test threads concurrently, each with a different weighting. The test suite can be run using an Ant target like this:

 <target name="loadtest">
   <java
       classname="com.interface21.load.PropertiesTestSuiteLoader"
       fork="yes"
       dir="src">
     <classpath location="classpath"/>
     <arg file="path/mytest.properties"/>
   </java>
 </target>



The classpath location should be changed as necessary to ensure that both the com.interface21.load package and the app-specific test fixture and test threads are available on the classpath. Reports will show the number of test runs completed, the number of errors, the number of hits per second achieved by each test thread and overall, and the average response time:

 AvailabilityCheckTest-0 40/40 errs=0 125hps avg=8ms
 AvailabilityCheckTest-1 40/40 errs=0 95hps avg=10ms
 AvailabilityCheckTest-2 40/40 errs=0 90.7hps avg=11ms
 AvailabilityCheckTest-3 40/40 errs=0 99.8hps avg=10ms
 AvailabilityCheckTest-4 40/40 errs=0 110hps avg=9ms
 *********** Total hits=200
 *********** HPS=521.3
 *********** Average response=9


The most important setting is the number of test threads. By increasing this, we can establish at what point throughput begins to deteriorate, which is usually the point of the exercise. Modern JVMs can cope with very high numbers of concurrent threads; I've successfully tested with several hundred concurrent threads. However, it's important to remember that if we run too many concurrent test threads, the work of switching execution between the test threads may become great enough to distort the results. It's also possible to use this tool to establish how the app copes with prolonged load, by specifying a very high number of passes to be executed by each thread; a configuration for such a soak test is sketched below.
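
For example, reusing the suite keys shown earlier, a soak-test configuration might keep a moderate thread count but raise the pass count dramatically (the values here are purely illustrative):

 suite.threads=20
 suite.passes=100000
 suite.maxPause=50
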

This tool can be used for web testing as well, by providing an AbstractTest implementation that requests web resources, as sketched below. However, the Web Application Stress Tool is easier to use and provides all the flexibility required in most cases.
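
As an illustration only, such a test thread might use java.net.HttpURLConnection to fetch URLs supplied by its fixture. The WebFixture class and its getRandomUrl() method are hypothetical, not part of the framework:

 import java.io.InputStream;
 import java.net.HttpURLConnection;
 import java.net.URL;

 import com.interface21.load.AbstractTest;

 // Hypothetical test thread that load-tests web resources
 public class WebGetTest extends AbstractTest {

     private WebFixture fixture;

     public void setFixture(Object fixture) {
         this.fixture = (WebFixture) fixture;
     }

     protected void runPass(int i) throws Exception {
         URL url = new URL(fixture.getRandomUrl());
         HttpURLConnection conn = (HttpURLConnection) url.openConnection();
         try {
             // Fail the pass (logged as an error by the suite) on a bad status
             if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                 throw new Exception("Bad response code: " + conn.getResponseCode());
             }
             // Read and discard the body so the whole response is timed
             InputStream in = conn.getInputStream();
             byte[] buffer = new byte[4096];
             while (in.read(buffer) != -1) {
                 // discard
             }
             in.close();
         }
         finally {
             conn.disconnect();
         }
     }
 }
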
