
Wednesday, March 29, 2006

User Acceptance Testing Is Hardly Testing

A good project team relies on many kinds of testing to help ensure a high-quality release. Developers use unit and integration tests to verify the functionality of their individual modules. Quality assurance personnel use performance and reliability testing to verify service levels of the system as a whole. While each of these techniques serves a unique purpose on the project, they all share a common goal: finding defects. That, however, is something your user acceptance tests should never do.
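
To ground that distinction, here is a minimal sketch of the kind of developer-level check meant above. The module and function names are hypothetical and exist only for illustration; the point is simply that this sort of defect-finding belongs with the developer, long before anything reaches UAT.

    import unittest

    # Hypothetical module-level function, used only to illustrate the kind of
    # check a developer writes; a real project would test its own code.
    def calculate_invoice_total(line_items, tax_rate):
        """Sum line-item amounts and apply a flat tax rate."""
        subtotal = sum(item["amount"] for item in line_items)
        return round(subtotal * (1 + tax_rate), 2)

    class CalculateInvoiceTotalTest(unittest.TestCase):
        def test_applies_tax_to_subtotal(self):
            items = [{"amount": 100.00}, {"amount": 50.00}]
            self.assertEqual(calculate_invoice_total(items, 0.08), 162.00)

        def test_empty_invoice_totals_zero(self):
            self.assertEqual(calculate_invoice_total([], 0.08), 0.00)

    if __name__ == "__main__":
        unittest.main()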

Numerous studies have shown that fixing defects found late in the delivery cycle carries an extremely high cost. If serious problems surface during User Acceptance Testing (UAT), they can easily translate into significant budget overruns, schedule slips, or even project cancellation. Issues must be identified and addressed as early as possible. By the time UAT rolls around, there should be nothing major left to uncover. User acceptance testing should be little more than a pat on the back to your development team for a job well done.

A wide array of techniques can be used throughout a project to keep serious issues from surfacing in UAT. Management tools, comprehensive automated tests, and other strategies all have their place, but on their own they cannot guarantee success. Requirements engineering is particularly important, yet use cases and specifications can only take your project so far. Ultimately, there is no substitute for getting a working system into the hands of users. The key is to turn that hands-on access into a regular feedback loop between the customer and the delivery team. This feedback drives refinement of the requirements and provides ongoing validation that the system will meet customer expectations.

One practical way to forge a strong feedback loop is through frequent internal releases. Internal releases give the customer hands-on evaluation of system functionality and usability before deployment to a live environment. For example, consider delivering interim builds once per week. Whatever timeframe you choose, the success of the feedback loop depends on the following criteria:


  • High Quality Builds: the builds must be robust enough for users to interact with the system and provide meaningful feedback. In fact, an interim release that crashes or suffers from performance problems can do more harm than good. Besides losing valuable feedback, a low-quality build can cause users to lose interest in the feedback process altogether. (A minimal smoke-test sketch that gates interim builds appears after this list.)

  • Customer Commitment: the customer must commit to reviewing each interim build and providing feedback to the development team. The review process is lightweight, but feedback must be gathered and analyzed quickly for incorporation in the subsequent release.

  • End User Involvement: end users of the system must be involved in the review process. Business owners should participate in the evaluation, but they should not act as surrogates for the end users.

  • Review Environment: the builds should be deployed to a staging environment that is as close to a replica of the production environment as possible. Although feedback from any functional version of the system can provide some value, the integrity of the feedback will be highest when users work with a "live" system.
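
As an illustration of the "High Quality Builds" criterion, one lightweight option is to gate each interim build behind an automated smoke test before announcing it to reviewers. The sketch below is only that, a sketch: the staging URL and the pages it probes are assumptions, and a real project would check its own critical paths.

    import sys
    import urllib.request

    # Hypothetical staging URL and pages; substitute your own environment and
    # the screens your reviewers will touch first.
    STAGING_URL = "http://staging.example.com"
    SMOKE_CHECKS = ["/health", "/login", "/reports"]

    def smoke_test(base_url, paths):
        """Return a list of (path, reason) pairs for every check that failed."""
        failures = []
        for path in paths:
            try:
                with urllib.request.urlopen(base_url + path, timeout=10) as response:
                    if response.status != 200:
                        failures.append((path, "HTTP %d" % response.status))
            except Exception as exc:
                failures.append((path, str(exc)))
        return failures

    if __name__ == "__main__":
        problems = smoke_test(STAGING_URL, SMOKE_CHECKS)
        for path, reason in problems:
            print("FAILED %s: %s" % (path, reason))
        # A non-zero exit keeps the interim build from being announced to reviewers.
        sys.exit(1 if problems else 0)

A build that fails even these basic checks never reaches reviewers, which protects both the feedback and the reviewers' goodwill.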



The actual review process varies between teams. Some teams can be very productive with a simple list of functionality to be examined. This is all they need to conduct ad-hoc testing, pick out user-interface issues, or identify business concerns. Other teams may require a more structured approach. In either case, the feedback should be reviewed and incorporated into the next weekly build. Over the course of the project, each interim build should deliver more functionality and move closer to the "final vision" for the system (or at least to the next production release).

A natural extension of frequent internal releases is frequent production releases. For example, if a project is producing internal releases once per week, a production release might be made roughly once per month. By the time the production build is assembled, any significant issues should already have been resolved through the weekly reviews. This makes UAT more about customer sign-off than exploratory testing at the 11th hour of your project. Iterative delivery and feedback keep your UAT cycle focused on user acceptance. Leave the testing at the door.