
Wednesday, March 29, 2006

User Acceptance Testing Is Hardly Testing

A good project team relies on many kinds of testing to help ensure a high-quality release. Developers use unit and integration tests to verify the functionality of their individual modules. Quality assurance personnel use performance and reliability testing to verify service levels of the system as a whole. While each of these techniques serves a unique purpose on the project, they all share a common goal: finding defects. That, however, is something your user acceptance tests should never do.

Numerous studies have shown that fixing defects found late in a delivery cycle can carry an extremely high cost. If serious problems are found during User Acceptance Testing (or UAT), they can easily translate into significant budget overruns, slips of the schedule, or even project cancellation. Issues must be identified and addressed as early as possible. By the time UAT rolls around, there should be nothing major left to uncover. User acceptance testing should be little more than a pat on the back of your development team for a job well done.

A wide array of techniques can be used throughout a project to eliminate serious issues before UAT. Management tools, comprehensive automated tests, and other strategies all have their place, but these alone are not enough to guarantee success. Requirements engineering is particularly important, but use cases and specifications can only take your project so far. Ultimately, there is no substitute for getting a working system into the hands of users. The key is to leverage this in a regular feedback loop between the customer and the delivery team. This feedback drives refinement of the requirements and provides ongoing validation that the system will meet customer expectations.

One practical way to forge a strong feedback loop is through frequent internal releases. Internal releases give the customer hands-on experience with system functionality and usability before deployment in a live environment. For example, consider delivering interim builds once per week. Whatever timeframe you choose, the success of the feedback loop depends on the following criteria:


  • High Quality Builds: the builds must be robust enough for users to interact with the system and provide meaningful feedback. In fact, an interim release that crashes or suffers from performance problems can do more harm than good: besides losing valuable feedback, a low-quality build can cause users to lose interest in the feedback process altogether. (A sketch of one way to gate build quality follows this list.)

  • Customer Commitment: the customer must commit to reviewing each interim build and providing feedback to the development team. The review process is lightweight, but feedback must be gathered and analyzed quickly for incorporation in the subsequent release.

  • End User Involvement: end users of the system must be involved in the review process. Business owners should participate in the evaluation, but they should not act as surrogates for the end-users.

  • Review Environment: the builds should be deployed to a staging environment that is as close to a replica of the production environment as possible. Although feedback from any functional version of the system can provide some value, the integrity of the feedback will be highest when users work with a "live" system.
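
As a concrete illustration of the first criterion, here is a minimal sketch of a build gate that promotes an interim build to the staging environment only when the automated tests pass. The commands and script names are hypothetical placeholders for whatever your build and deployment stack actually uses.

```python
import subprocess
import sys

# Hypothetical commands; substitute your real build, test, and deploy steps.
BUILD_CMD = ["make", "build"]
TEST_CMD = ["make", "test"]
DEPLOY_CMD = ["./deploy_to_staging.sh"]  # placeholder deployment script

def run(cmd):
    """Run a command and report whether it succeeded."""
    return subprocess.run(cmd).returncode == 0

# Gate each step so a broken build never reaches the reviewers.
if not run(BUILD_CMD):
    sys.exit("Build failed - nothing is promoted this week.")
if not run(TEST_CMD):
    sys.exit("Tests failed - a broken interim build costs more feedback than it earns.")
if not run(DEPLOY_CMD):
    sys.exit("Deployment to staging failed.")
print("Interim build promoted to staging for this week's review.")
```

The point is not the tooling but the gate itself: users should never receive an interim build that has not already cleared the team's own quality bar.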



The actual review process varies between teams. Some teams can be very productive with a simple list of functionality to be examined. This is all they need to conduct ad-hoc testing, pick out user-interface issues, or identify business concerns. Other teams may require a more structured approach. In either case, the feedback should be reviewed and incorporated into the next weekly build. Over the course of the project, each interim build should have more functionality and be closer to the "final vision" for the system (or at least the next production release).

A natural extension of frequent internal releases is frequent production releases. For example, if a project is producing internal releases once per week, a production release might be made approximately once per month. By the time the production build is assembled, any significant issues should have been resolved through the weekly reviews. UAT then becomes more about customer sign-off than exploratory testing at the eleventh hour of your project. Iterative delivery and feedback can keep your UAT cycle focused on user acceptance. Leave the testing at the door.

Sunday, March 26, 2006

People, Systems and Data Analysis

When looking at an existing application ecosystem for the first time, whether for a functionality upgrade or for a complete system migration, it can be difficult to know where to begin in your quest to understand the ecosystem. To make matters worse, the most common state we see is that the existing ecosystem is not documented at all. One method you can use to ensure that you understand the ecosystem is to look at people, systems and data (PSD). Examining these three areas and the interplay between them creates a boundary around the ecosystem that will help you fully understand how it functions. The key value of this method is that it relies on elements that are easy for people to recall, yet still form complete boundaries around the ecosystem. For example, once you use an organizational chart or even the actual list of logins to identify the users, you can be sure that there are no other users, because no one else has access to the system.

When looking at people (actors), the place to start is organizational charts and interviews with actors to find other actors. Identifying all of the actors is relatively easy because people are used to thinking about the people they work with. Organizational charts include not only those who use the system, but also those who operate it and those who built or maintain it. A representative lead from each group should be identified who can act as a single decision maker for that user population and who can help identify those with special knowledge of a facet of the system. You also need to ensure that any subgroups are identified. For example, if there are two types of sales reps that use the system in different ways, they should be identified as different actors. Other ways to bound the population of actors include looking at the database of all users who have access to the system, identifying all developers who have checked code into the change control system, and listing all operations team members who maintain the system. Once you have identified the complete user population in this way, you have created a firm boundary that limits the unknowns.

The next step is to begin meeting with the various actors and figuring out how they interact with the system. These interviews will result in high-level use cases. By their very nature, these use cases will not be a comprehensive description of the system; they are simply a guide to how people do their jobs in relation to the system. Chances are you will not get all of the use cases on the first try, but in conjunction with looking at systems and data, you will follow an iterative process that gets you to a complete model of how the system works today.
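
As a toy illustration of the bounding idea, suppose you can export user lists from each of the sources above (the names and sources here are hypothetical). The actor population is simply the union of those lists, and anyone outside it can be ruled out:

```python
# Hypothetical exports from the sources discussed above.
login_accounts = {"jsmith", "mlee", "rpatel", "kwong"}   # application login database
committers = {"rpatel", "dnguyen"}                       # change control system
operations_staff = {"kwong", "tbrooks"}                  # operations team roster

# The union of every source bounds the actor population: no one outside
# this set has access to, or a hand in, the system.
actor_population = login_accounts | committers | operations_staff

print(f"{len(actor_population)} actors to interview: {sorted(actor_population)}")
```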

When looking at systems, the goal is to identify all of the major applications that participate in the ecosystem. Generally speaking, it is easy for users, operators and maintainers to list the systems; however, it is much harder for them to detail all of the interactions between systems without some organized structure. By creating system context diagrams, you create a visual framework that focuses attention on the relationship between any two systems, so that only one pairwise relationship has to be considered at a time. This makes it much easier to capture all of the information about that relationship. The system context diagram simultaneously shows all systems and, at a high level, the interactions between them, independent of sequence. The list of all systems provides another boundary that is easy to establish. Once all the pertinent systems have been identified, screen shots from those systems can be used to drive discussions around the specific business rules and functionality of the systems. In addition, direct access to the existing systems can be very helpful. Finally, to ensure that all the business rules are captured, examination of the source code is generally required. However, as this step is very time consuming, it should be used to validate an existing list of business rules rather than to generate the list in the first place.

Finally, users oftentimes think in terms of the data they create and the data they use. Data is only ever created, transformed or consumed, so once you have identified a data object, you can ask how it got created, how it is used, and what happens to it between the time it was created and the time it was used. Looking at data entry screens and reports helps to identify how the data is created and used. Since there are a finite number of these screens and reports, you can be certain that they form a complete boundary around the data in the system. The next step is to determine the data flows through the system, the data dictionaries, and the business rules that govern the entry and transformation of the data. You can use the system context diagrams to help generate the data flows: identify the systems in which data is created, then follow the flow of the data through the various applications until it is used.
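
To illustrate, here is a minimal sketch that represents a system context diagram as a set of directed flows and follows a data object from the system that creates it to the systems that consume it. The system names and interfaces are invented for the example:

```python
# Directed edges of a hypothetical system context diagram:
# (source system, destination system, data object carried on that interface).
flows = [
    ("OrderEntry", "Billing", "order"),
    ("OrderEntry", "Warehouse", "order"),
    ("Warehouse", "Shipping", "shipment"),
    ("Billing", "Reporting", "invoice"),
]

def trace(data_object, origin):
    """Follow a data object from its originating system through the ecosystem."""
    frontier, visited = [origin], set()
    while frontier:
        system = frontier.pop()
        if system in visited:
            continue
        visited.add(system)
        for source, destination, obj in flows:
            if source == system and obj == data_object:
                print(f"{obj}: {source} -> {destination}")
                frontier.append(destination)

trace("order", "OrderEntry")  # prints the two interfaces that carry the order
```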

The key value of looking at people, systems and data lies in the interplay between the three. Data flows will help to identify use cases (i.e. how does someone use this data?). System context diagrams help to drive discovery of the data flows. Screen shots help to trigger memories of business rules that apply to the data. Use cases or reports may help to identify missing systems or discover missed data objects. By working iteratively through people, systems and data you can feel confident that you have a completely bounded understanding of your application ecosystem.

Thursday, March 23, 2006

Requirements Documentation Standards

We use a set of requirements documentation standards to generate consistency in our writing styles. They should be applied to the documentation produced in the requirements phase of a project, including the requirements, use cases, diagrams and process flows. Instead of writing in individual preferred styles, the entire team writes in the style defined by the standards.

The notion of requirements documentation standards is similar to that of coding standards for a development team. A development team, for example, might agree on conventions such as avoiding the underscore character in variable names, keeping data private, and what code comments should contain.

Requirements documentation standards can be valuable to a product management team’s overall goals of producing concise, readable, easy to digest, quality requirements. The standards promote consistency in the documentation efforts of a team. When a multi-person team is producing documentation for a customer, the use of standards can result in a multi-author document that appears to have been written by a single person.

Here are a few examples of what I mean by requirements documentation standards:
  • Requirements should take the form of “(subject) shall (action verb) (observable result)”. For example, “System shall display a list of all containers of the requested chemical that are currently in the chemical stockroom”.
  • Use the words “User” and “System” as subjects in active voice sentences.
  • Use case names should be of the format (Perform Action on Object). For example, “Add Item to Cart”. If the same functionality is described in multiple use cases because the actor is different in each, then add “for (actor)” to the end of the use case names.
  • Each step in the use case should describe a single, discrete action by the user or the system. Multiple actions should not be combined into a single step.
The consistency gained from standards improves readability, including how quickly someone can understand what is written. Anyone on the receiving end of the documentation can become familiar with one style, so time spent reading is focused on content rather than on learning each author’s language. Similarly, our team has a practice of reviewing each other’s documentation, and shared writing standards across teams simplify that review process. Writing in similar styles allows someone new to the project context to review a document and focus on the content as opposed to the language choices.

As with code, the original authors of requirements documentation are often not the ones to maintain it throughout the project. Therefore, if written in a consistent style, it will be easier for a new product manager to understand the documentation.

Developing documentation standards can lead to additional benefits for the overall team. The team can work together to develop the standards for a given organization or project. Once the standards are decided, time is not wasted on silly debates over things like the use of active or passive voice, the number of spaces between sentences, or how to name use cases. The more productive approach is to have the debate once (or periodically), decide as a team on a standard, and move on to implement it.

There are some potential downsides to consider when developing a requirements documentation standard. First, enforcing such standards takes project time, and in practice they may simply get ignored. One benefit claimed for internal reviews is that effort can be focused on reviewing content over language, grammar and format; however, if the standard is not followed, excess time is spent in the review because the document is being checked against the standard as well as for content.
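One way to reclaim some of that enforcement time is to automate the mechanical checks. The sketch below is a toy example that tests requirements against the “shall” format from the standards above; a real standard would of course need a richer set of rules:

```python
import re

# Matches the agreed format: "(subject) shall (action verb) (observable result)",
# with "User" or "System" as the only permitted subjects.
REQUIREMENT_PATTERN = re.compile(r"^(The )?(User|System) shall \w+ .+\.$")

def check(requirements):
    """Flag requirements that do not follow the team's documented format."""
    for req_id, text in requirements:
        if not REQUIREMENT_PATTERN.match(text):
            print(f"{req_id}: does not match the standard -> {text}")

check([
    ("REQ-01", "System shall display a list of all containers in the stockroom."),
    ("REQ-02", "A list of containers is displayed."),  # passive voice, no subject
])
```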

Realistically, such standards can create more headaches than they are worth if not applied in a pragmatic manner. With this in mind, it is important to realize that standards should not be viewed as immutable; a given team needs to adapt them to what is most appropriate for its project. The project team also needs to realize that even if each person makes a “standards” mistake 2% of the time, it is not going to lead to project failure.

The smaller the team, the less extensive and strictly enforced the standards need to be. Even for one person working on a project’s deliverables, there can still be benefits to documentation standards, particularly if the audience also reads other product managers’ documentation (such as for internal team reviews). In that case, the most important thing is that the individual is consistent within the documents.

There are a few suggestions I can make for developing a requirements documentation standard. First, develop the standards as a team, so that they are not forced on anyone who has not bought into them. Review them periodically and update them as appropriate for the state of the team. Keep the standards simple and useful, which leads to the most important tip: provide a rationale for each one. If there is obvious value in each standard, it is far more likely to be followed.

Friday, March 17, 2006

Requirements Model 4 - The Data Dictionary

Data dictionaries have been around for quite a long time. I have a book on analysis that was written in 1979 that covers them in great detail. Unfortunately, most of the techniques in that text focus on how to collate and manage the data rather than what data to include. There’s nothing quite like reading about how to organize your data dictionary using a card catalog file to make you realize how far we’ve come in terms of data management and organization.

The concept of a data dictionary is just as valid today as it was 27 years ago. A data dictionary is a model that allows you to look at the properties of all the data in your system in a very analytical and structured fashion.

To construct a data dictionary for your system, you first need to identify the business objects. This is a crucial step in the process. It is tempting for some users (and analysts) to think in terms of system objects and database tables, but that is not what you should be focusing on here. Instead, you should think about the real-world objects your system deals with.

For example, you might be working on a shipping management system. This system focuses on the tracking and routing of packages. Those packages are real-world objects that have tangible attributes such as weight, dimensions, recipient address, and return address. By focusing on these objects, you drive towards the real business requirements and not some predefined implementation concept.

To construct the table itself, you need three things for each business object – the attributes (phone number), the characteristics of each attribute (length of phone number), and the values of each characteristic (phone numbers must be at least 10 characters in length).
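
As a minimal sketch of what one entry in such a table might look like in structured form (the attributes, characteristics, and values below are illustrative, not a complete dictionary):

```python
# A data dictionary entry: business object -> attribute -> characteristic -> value.
data_dictionary = {
    "Package": {
        "weight": {
            "type": "decimal",
            "unit": "kilograms",
            "decimal places": 2,   # precision matters for monitoring and reporting
            "minimum": 0.01,
        },
        "recipient phone number": {
            "type": "string",
            "minimum length": 10,  # "at least 10 characters in length"
            "format": "digits only",
        },
    },
}

# Once the characteristics are defined for one attribute, filling in values for
# the rest of the object - or reusing them for other objects - is quick.
for attribute, characteristics in data_dictionary["Package"].items():
    print(attribute, "->", characteristics)
```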

This step is where the value in producing a model becomes clear. It takes some extra time to sit down and think of all of the characteristics you want to specify for each attribute, but the payoff is that once you have these defined for one object, you can quickly fill out the values without having to worry about each individual attribute. You can also use the characteristics for one object as a starting point for the other business objects. For example, my business users have told me that the number of decimal places of each weight value tracked by the system is very important for monitoring and reporting. It stands to reason that other objects and attributes might require the same level of specification. If you figure it out once, you can use it in many places.

Why use these tables? It all goes back to the idea of structure. Without the table, you aren’t thinking in terms of objects, attributes, characteristics, and values. Everything is ad-hoc and that can lead to gaps and mistakes.

Would I use these on every project? No way. This kind of information is way too detailed for many applications. When your system is very data driven, however, a thorough analysis of data properties can lead you to many hard-to-find requirements.

Monday, March 13, 2006

Patterns for two-year-olds

I was out walking with my two-year-old son last week when we crossed the street next to a person on a motorcycle. From the running commentary in the stroller emerged “the man on the motorcycle is going to school.”

Umm…..huh? “Hey bubba, how do you know the man is going to school?”

“Because he is wearing his backpack.”

Patterns, a topic covered on this blog just a few weeks ago, were having their power illustrated to me firsthand. My son, who loves wearing his brand-new backpack to school, was vocalizing one of the key patterns from his world. Patterns are so simple that a two-year-old can pick up on them with zero prompting, yet so complex that hordes of engineers are still spending millions of man-hours trying to teach them to computers.

Although I enjoyed the post that discussed Yahoo’s design patterns, it made me realize that my understanding of patterns is based purely on hearsay as I have never read any source material on the subject. I went searching for a decent primer on the topic and came across this post by Brad Appleton.

Brad starts his patterns essay with what I think is an excellent definition of the concept:
A pattern is a named nugget of instructive information that captures the essential structure and insight of a successful family of proven solutions to a recurring problem that arises within a certain context and system of forces.
Brad goes on to provide some characteristics of design patterns, this time quoting from James Coplien’s work on the subject. According to Cope, a good pattern does the following:
  • It solves a problem
  • It is a proven concept
  • The solution isn’t obvious
  • It describes a relationship
  • The pattern has a significant human component
From there, Brad dives deep into the source material and key players behind the pattern revolution including the GoF, GoV, and numerous other engineers who have advanced the cause. It is clear that what started out as architect Christopher Alexander’s grand attempt to enable every citizen to design and construct their own home has morphed into a driving concept in fields ranging from computer science to organizational behavior. I will not attempt to go into the source material in any greater depth here because I know I cannot do it justice, but I would urge anyone that is interested in the world of design patterns to use Brad’s paper as a jumping off point.

Why are patterns so powerful? It’s simple – because this is the way we are wired as human beings. Pattern recognition is the first higher order skill we use as infants when we learn to recognize our mother’s face, and it is arguably the core mental ability that plays the biggest role in molding our adult selves from the lump of clay that represents our childhood experience.

What does this mean from a requirements perspective? The power of patterns ensures they will stay on my list of ongoing research topics, but that same strength can often hurt us in our software requirements efforts. Where is the danger? Unfortunately, the human mind is adept at identifying patterns even where there are none. How many times has a requirements engineer heard the phrase “this is exactly like XYZ”, only to learn through careful elicitation that it is exactly like XYZ 10% of the time but nothing like XYZ the other 90%? As we strive to create a library of requirements patterns, we must pay great heed to Brad’s observation that “documenting good patterns can be an extremely difficult task” and not fall into the soothing hype of patterns being “sugar and spice and everything nice.”

Tuesday, March 07, 2006

Offshore Development Part 1

Your company has decided to take the leap and begin migrating some of your development offshore. If you are a business analyst or product manager, I have good news and bad news for you. The good news is that management will probably realize how important your job really is. The bad news is that your job just got a lot harder.

In recent history, most software was specified and implemented by colocated teams. The developers could just walk down the hall to the product managers, say "hey, look at this", and the product managers would get an instant demo. This isn't necessarily a bad way of doing software development, and it is at the heart of Extreme Programming. However, even when teams were colocated, this model never worked very well on very large projects. With distributed teams, communication between the business and the development teams is much harder, and this difficulty has led to the early failure of many companies' attempts to move development offshore. The good news is that this makes software requirements (and you) more indispensable than ever. Extreme Programming threatened to do away with business analysts altogether; thankfully, most companies are realizing that XP doesn't work for large, industrial-strength projects or with distributed teams.

The bad news is that your job just got a lot harder. With offshore development, everyone knows that communication is a challenge. This has a tendency to drive teams away from the better iterative models and toward a waterfall model instead. They develop the software requirements (or, even worse, rely on the developers to create the requirements) and throw them over the wall. The developers design and code and throw their product over the wall to QA. Once QA is done with it, it comes back to product management for validation. This is a surefire recipe for disaster. As we have discussed in other blog posts, the waterfall model does not work well even in colocated situations, and even the iterative models that we prefer become significantly more difficult with offshore teams.

Here are some of the key software requirements related issues:
1) Lack of developer feedback in evaluating the requirements.
2) Difficulty communicating the requirements due to subtle language or cultural barriers.
3) Written requirements that might be fine for onshore projects are insufficient for offshore projects.
4) Disconnect of the development team with the business stakeholders.
5) Slow turnaround in responding to requirements issues.

In Part 2 we will discuss ways to improve the way you develop and use requirements with offshore development teams.

Here are several articles discussing the difficulties of offshore development:

Lifetime fitness offshoring failure
http://www.networkcomputing.com/showitem.jhtml?docid=1421f3

Cost savings of offshore development
http://www.cioinsight.com/article2/0,1540,1837328,00.asp

Powerpoint discussing reasons for offshore failure
http://www.unanet.com/news/events/2004/LeverPoint-Unanet%20Seminar%20v16.ppt

Joel on Software discussion about offshore projects
http://discuss.fogcreek.com/joelonsoftware/default.asp?cmd=show&ixPost=51759

Bertrand Meyer article on offshoring
http://se.ethz.ch/~meyer/publications/computer/outsourcing.pdf

Thursday, March 02, 2006

Gathering Requirements for Migration Projects (Part 2)

On Tuesday I began an overview of the process of gathering requirements for a migration project and how these activities differ from those for a system with new functionality. I began by talking about scope, understanding business needs and working with end users. Although these activities have a lot in common with the approach for a new system, some key differences emerged. Today, I am going to finish the discussion by drilling into a few additional areas that differ significantly, starting with the end-to-end functionality and working with IT.

Discovering the end-to-end functionality:
As with any project, it is important to capture the entire set of functionality. One of the great advantages of a migration project is that you do have an existing system to work from, including user and system interfaces. For user interfaces, the menus can be used to decompose the functionality. The existing screens can be clicked through to discover the alternate paths through the system. For system interfaces, you can get a complete list of the interfaces and what data is passed between them to ensure they are covered in the requirements.

In meeting with the end users to understand how they use the system, you can easily capture use cases. With new functionality, use cases are a crucial model for discovering all the functional requirements, though that advantage is less notable in migration projects. The use cases are still valuable because they give the full picture of how the functionality fits together to accomplish user goals, and they provide an organizational structure for the requirements. The significant difference in writing the use cases is that they should rely more heavily on the existing user interface design.

In the case where no one knows exactly how something works, there is a convenient option: look at the code and the database schema to understand the business rules as actually implemented. In fact, theoretically, the development team could just use the original code to develop the new system, though realistically that approach is dull and error-prone for a developer.

Working with IT:
In a migration project, you should expect far greater involvement from IT in the requirements phase. When discovering new functionality needs, IT may be kept out of the early stages of the project to avoid stifling the creativity of the business users. In a migration project, however, it is crucial to engage them as early as possible, and there is little risk in that involvement.

The development team can help with a technical decomposition of the system functionality. They will have a clear view of the data elements and can ensure those are all well understood in the existing system. For example, in the migration of a financial system, there are many underlying matrices of data and business rules. The business users cannot recall all of the specific, detailed business rules around those matrices; it is far more efficient for a developer to look at the existing code to work out the specific matrices and the associated behavior.

One of the great risk mitigations in a migration project is that the development team will be looking at the code of the existing system, so the code can serve as one last checkpoint to ensure functionality is not overlooked.

Discovering what is not used:
The scope of a migration project should include identifying functionality that is not used, so that the development team does not waste cycles building it. This is precisely why defining the requirements is not as simple as just reusing the original requirements specification or having the development team build it straight from the existing system code.

A suspect list of unused functionality will emerge from the user interviews. For example, if users are asked “What happens if you select this option on the screen?” and the consistent answer is “I don’t know, I never do that”, then the functionality should be flagged for removal. In an ideal world, IT may have data on which screens are never accessed or which data elements are never updated in the existing system.
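
Where such data exists, even a crude analysis can seed the suspect list. Here is a minimal sketch, assuming the application writes an access log with one screen name per line; the log format and screen inventory are hypothetical:

```python
from collections import Counter

# Hypothetical inventory of screens, decomposed from the existing system's menus.
all_screens = {"OrderEntry", "OrderSearch", "BulkReprice", "AuditExport"}

# Hypothetical access log: one screen name per line.
with open("screen_access.log") as log:
    hits = Counter(line.strip() for line in log)

# Screens that appear in the menus but never in the log are suspects -
# candidates to confirm as unused with every identified user group.
suspects = all_screens - set(hits)
print("Flag for removal (pending user confirmation):", sorted(suspects))
```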

It may be difficult for the business to sign off on saying that a piece of functionality is not used. However, that is exactly what is necessary. Once there is that list of target functionality to be removed in the new system, every single one of the identified user groups of the system must be interviewed to confirm the list.

Resource skill set:
I think any great product manager can identify the requirements for a migration project. However, this is one type of project where a product manager with a technical background can be a great advantage. Such a product manager will have the capability to actually read the code and architecture diagrams if required to gather the requirements.

In the end, certainly there are many similarities in the requirements efforts of migration projects and new functionality projects. However, there are some important differences, and it is critical to the project’s success that these variations are recognized before starting the requirements phase of the project.