Testing

The technical team should be able to design, set up, and perform appropriate testing for the project. The aim of this section is not to create a checklist for the technical team, but rather to describe how business users can assist in creating test cases and performing some of the testing.

Use Cases

An excellent technique for testing apps is the awkwardly named "use cases." A use case is, very simply, a short scenario describing a piece of real-world functionality. For instance, Table 9.1 is a use case for identifying a customer who calls a help desk for assistance.

Table 9.1. Use Case Example for a Customer Assistance Call

Use Case:         Perform customer search
Actors:           Help desk rep
Purpose:          Find a customer based on information provided to the support rep.
Overview:         From the Main screen, enter the search information in the customer area and press the "Search" button. Is all the information visible (name, building, etc.)? Repeat the test after adding a new customer to the database: is the new customer visible? Repeat the test after removing (deactivating) a customer: is the customer no longer visible? Repeat the test after changing a customer record: is the change reflected?
Cross References: Search for a customer case.

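A use case like this one translates almost line for line into an automated regression test that can be rerun at every milestone. Below is a minimal sketch in Python using pytest; the helpdesk module and every function in it (search_customers, add_customer, and so on) are hypothetical stand-ins for whatever client API the actual system exposes, not calls from any real product.

```python
# Minimal pytest sketch of the "Perform customer search" use case
# (Table 9.1). The helpdesk module and all of its functions are
# hypothetical stand-ins for the system under test.
import helpdesk  # hypothetical client library, for illustration only


def test_existing_customer_is_found():
    # Search from the Main screen and check that all the expected
    # fields come back (name, building, etc.).
    results = helpdesk.search_customers(name="Smith")
    assert results, "expected at least one match"
    assert all(r.name and r.building for r in results)


def test_new_customer_appears_in_search():
    # Repeat the test after adding a new customer to the database.
    new = helpdesk.add_customer(name="Pat Tester", building="HQ-3")
    results = helpdesk.search_customers(name="Pat Tester")
    assert any(r.id == new.id for r in results)


def test_deactivated_customer_is_hidden():
    # Repeat the test after removing (deactivating) a customer.
    temp = helpdesk.add_customer(name="Temp Customer", building="HQ-3")
    helpdesk.deactivate_customer(temp.id)
    assert not helpdesk.search_customers(name="Temp Customer")


def test_changed_record_is_reflected():
    # Repeat the test after changing a customer record.
    cust = helpdesk.add_customer(name="Move Me", building="HQ-3")
    helpdesk.update_customer(cust.id, building="HQ-5")
    (found,) = helpdesk.search_customers(name="Move Me")
    assert found.building == "HQ-5"
```

Business users still own the wording of the use case; a script like this merely keeps its checks repeatable from one milestone to the next.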
The purpose of use cases is to define the required functionality of the system, before it is implemented, as tasks to be performed with specific outcomes. There should be a use case for each piece of required functionality, so you are looking at dozens of use cases for fairly simple systems and hundreds for larger ones. Together they should capture all the different tasks that end-users perform with the system. Going back to the idea of user roles discussed in the section about the kickoff meeting, each role will have a number of use cases specific to that role (and, to be sure, some use cases will be shared across roles).

Use cases are, of course, used for testing, but they are also very useful to the developers to confirm the exact functionality required. Defining use cases ahead of time makes it easier to resist scope creep, and also to get user buy-in, since testing occurs against the very tests that the users designed, not some mysterious technical requirements.

Use cases should be created as part of the project specifications document, and they must be created and approved by the user community, not by the technical team. They are rarely created during the kickoff workshop, since they are too detailed for that setting, and they cannot be finalized until the high-level technical design is completed anyway. They are usually the very first action item for the business users immediately following the kickoff workshop. During the workshop itself you should be able to list the titles of the use cases (since they match the various business processes discussed there), and for simple projects you may even get all the way to a first draft.
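
With dozens or hundreds of use cases per role to keep track of, even a simple structured record per use case keeps coverage visible alongside the specifications document. The sketch below shows one possible shape, mirroring the fields of Table 9.1; it is not a prescribed format, and the roles and titles are invented examples.

```python
# One possible machine-trackable shape for a use-case catalog,
# mirroring the fields of Table 9.1. Roles and titles are examples.
from dataclasses import dataclass, field


@dataclass
class UseCase:
    title: str            # e.g. "Perform customer search"
    actors: list[str]     # roles that perform this task
    purpose: str
    overview: str         # the step-by-step scenario
    cross_refs: list[str] = field(default_factory=list)
    approved_by_users: bool = False  # must be True before testing starts


CATALOG = [
    UseCase(
        title="Perform customer search",
        actors=["Help desk rep"],
        purpose="Find a customer based on information provided to the rep.",
        overview="From the Main screen, enter search info, press Search...",
        cross_refs=["Search for a customer case"],
    ),
    # ...dozens more for a simple system, hundreds for a large one
]


def cases_for_role(role: str) -> list[UseCase]:
    """List the use cases a given role is responsible for testing."""
    return [uc for uc in CATALOG if role in uc.actors]
```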

Functionality Testing

User testing should occur at each milestone, using the use cases appropriate for the portion of the tool being delivered at that milestone. Super-users should conduct the testing against the use cases, although other end-users may be recruited as well. The technical team should run through the use cases before delivery, but the technical staffers, not knowing the business purpose behind the cases (and, to be sure, knowing entirely too much about the tool), often miss subtle and not-so-subtle issues that business users spot within seconds. It's also very useful to have multiple individuals perform the tests: untrained testers have inconsistent approaches and may miss crucial misbehaviors, which is also true of trained testers but to a much smaller degree.

Whenever possible, hold the testing in a central lab with some developers in attendance to observe the testers, answer questions as needed, take notes on testing outcomes, and even, whenever possible, make immediate changes based on the feedback, allowing an immediate opportunity to retest. Testers should not be required to file copious reports; they should just demonstrate the problem behaviors to the developers. Make it easy for them!

Although testing should concentrate on whether the objective tests are successful, much interesting information about usability also emerges during the testing sessions. The developers in attendance should therefore observe the users and note any hesitation or confusion. Are users confused about which screen to use? Are they overwhelmed by a particular screen? This is where having testers who are not super-users is helpful, since super-users know the tool "too well" to be objective by that point. Usability testing is best done by simply observing users performing their tasks rather than interviewing them afterwards about the experience. Users have their pride, so they may not admit that something was difficult. Or they may forget! In the same vein, an unobtrusive observer does better than videotaping or recording users, which makes them nervous and distorts the results. Stick to informal observations and invest the recording money elsewhere.
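
Keeping reports out of the testers' way does not mean keeping no record at all; the note-taking developer can tally outcomes in seconds. The sketch below shows one hypothetical way to do that with a simple CSV log; none of the column names are prescribed.

```python
# A lightweight way for the note-taking developer to record outcomes
# during a lab session: one CSV row per tester per use case.
# The column names are illustrative only, not a required format.
import csv
import os
from datetime import date

FIELDS = ["date", "tester", "use_case", "outcome", "notes"]


def log_result(path, tester, use_case, outcome, notes=""):
    """Append one outcome, e.g. 'pass', 'fail', or 'retested-ok'."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # first row of a fresh log file
        writer.writerow({
            "date": date.today().isoformat(),
            "tester": tester,
            "use_case": use_case,
            "outcome": outcome,
            "notes": notes,  # e.g. "hesitated on the search screen"
        })
```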

Load Testing

Load testing is mostly a technical issue; from the business users' point of view, however, it's critical that the system be tested under realistic loads (numbers of users and transactions). Much of that testing can be done through automated tools, so you should not have to ask end-users to all bang on the system at the same time, as we did in the olden days. Just make sure that the load testing is done, that it's done as early as possible to give the team time to address issues, and that it's done with the actual customizations in the product. Experience shows that a product may perform well right out of the box, but crawl miserably under inadequate customizations. Integrations are another notorious cause of performance issues.
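
To illustrate how far automated tools take this, here is a minimal sketch using the open-source Locust load-testing tool to simulate many concurrent help desk reps. The endpoints and payloads are hypothetical placeholders; what matters, per the point above, is that the scripted transactions exercise the actual customizations and integrations, not just the out-of-the-box paths.

```python
# Minimal Locust sketch simulating many concurrent help desk reps.
# The /customers and /cases endpoints are hypothetical stand-ins;
# real scripts should hit the customized, integrated transactions.
from locust import HttpUser, task, between


class HelpDeskRep(HttpUser):
    wait_time = between(1, 5)  # think time between actions, in seconds

    @task(3)  # searches are the most common action, so weight them higher
    def search_customer(self):
        self.client.get("/customers", params={"name": "Smith"})

    @task(1)
    def open_case(self):
        self.client.post("/cases", json={"customer_id": 42,
                                         "summary": "printer jam"})


# Run with, e.g.:  locust -f loadtest.py --host https://staging.example.com
# then ramp up the number of simulated users from the Locust web UI.
```

Running something like this early, against the customized and integrated environment, is what surfaces the crawls-miserably scenario while there is still time to fix it.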
