
Wednesday, March 31, 2010

Software Requirements Models that Work Well Together – Decision Trees and Process Flow Diagrams

As Requirements Analysts, we use Models to paint a picture of the Application we are defining for the developers and, eventually, to derive the final requirements they will build to. We use many Models to define different aspects of the system: the People who influence the development of the application and eventually use it, the Data that is consumed, transformed, and created by the application, and the different Hardware and Software Systems that will be used to build it.

Some Model pairs, like the Org Chart and the Data Flow Diagram, are useful in their own right but not necessarily complementary. The Org Chart is useful for identifying the Actors and Influencers of the Application. The Data Flow Diagram depicts the flow and transformation of data as it moves from one process to another in the Application. While both Models are useful and possibly required for your analysis, they describe different portions of the Application, and using them together does not necessarily enhance your understanding of either one.

Other Model pairs work very well together in enhancing one’s understanding of the Application. One Model pair that I often use is Process Flows and Decision Trees. Decision Trees work very well in conjunction with Process Flows for the following reasons:

1. They help simplify Process Flow diagrams by removing complex decision logic from the main flow.

2. They keep the reader’s focus on the overall flow without getting distracted by the details of one portion of it.

3. They make Process Flows easier to create and more “legible”. Flows with many decisions and branches are time-consuming to create and harder to understand.

A common scenario for pairing Decision Trees and Process Flows is when the flows have validation steps. Validation steps are very common in transactional process flows like order creation, quote creation, product configuration, and so on. The validation step itself is likely to be complex: it will typically require the application to check that all the required data is provided and properly formed. For example, an order entry validation step will check for proper Customer Information, Address, Payment Information, and so on. It is certainly not incorrect to put the validation details into the Process Flow, but doing so can make the flow very busy and messy at that step in the process. I have found that readers do one of two things when confronted with this in a Process Flow, and both are bad: they either focus on the “complicated part” and skim through the rest of the flow, or they do the exact opposite. The net result is that the overall review suffers.

These days, I prefer to keep the branching in my Process Flows to a minimum and use Decision Trees in conjunction with them. As a general rule of thumb, if the Process Flow I am working on is reasonably complex, I try to restrict myself to just one Decision Box in a sequence. In the Order Validation example above, I would have had just one Decision Box at the validation step, titled “Order Valid”, and moved forward from there. The individual validation checks within the “Order Valid” decision I would have depicted in a Decision Tree.
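
To make this concrete, here is a minimal sketch in Python of the kind of branching logic that would live behind a single “Order Valid” Decision Box. All field names and checks are hypothetical, invented purely for illustration; the point is that these branches go into the Decision Tree, leaving the Process Flow with one clean Decision Box.

```python
# Hypothetical sketch of the decision logic behind one "Order Valid" box.
# In the Process Flow this is a single Decision Box; the branches below are
# what the companion Decision Tree would depict. All field names are invented.

def validate_order(order: dict) -> tuple[bool, str]:
    """Walk the validation branches and return (is_valid, reason)."""
    # Branch: is the customer information present?
    if not order.get("customer_name"):
        return False, "missing customer information"
    # Branch: is the address complete?
    address = order.get("address", {})
    if not all(address.get(field) for field in ("street", "city", "postal_code")):
        return False, "incomplete address"
    # Branch: is payment information supplied?
    if not order.get("payment_method"):
        return False, "missing payment information"
    # Every branch passed: the Process Flow continues down the "Yes" path.
    return True, "order valid"
```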

The benefits I have found from doing this are as follows.

1. The Process Flows are easier to create.

2. The Decision Trees enable me to depict complex logic easily.

3. Splitting out these two aspects enables me to get both right.

4. Lastly, deriving requirements is easier.

Try it and let me know what you think.


Monday, March 29, 2010

Software Project Issue Metrics

I love numbers, so I’m always interested in what metrics we can pull from projects. Sometimes I pull them just because I’m curious, sometimes they help support opinions and gut feelings, and most often they are useful for making decisions on a project.

On a recent project, I pulled metrics from our issue tracking system to look at the types of issues that were logged. I will mention first that we actually start tracking issues much earlier than most projects do - in the requirements phase.

For context, this snapshot was taken about 1 week before the initial deployment of the project.

The issue types included:
  • Requirements questions - for any open questions about the requirements
  • Configuration - these are most relevant for out-of-the-box solutions that are heavily configured instead of developed
  • Test case - when QA test cases need to be updated based on misunderstanding of requirements, design, or changes to either
  • Functional - the functionality developed doesn't work as expected
  • Data - migrated or created data has errors, fields missing, or the wrong set was extracted from an existing system
  • Training - when defects are logged because a user or tester did not understand how the system should work
  • Change requests - every time a new enhancement is requested it is logged
  • Development ignored requirements - an interesting category that came out of the fact that some developers did not read the requirements when they developed
  • Test data - we had a lot of errors in the test data that was created, which made functionality look like it was misbehaving when it was not
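
As an aside, pulling this kind of summary out of a tracker is straightforward. Here is a minimal sketch in Python, assuming a hypothetical CSV export with category and status columns; the real tracker and its field names will differ.

```python
# Minimal sketch: tallying issues by (category, status) from a hypothetical
# CSV export of an issue tracker. The column names are assumptions.

import csv
from collections import Counter

def tally_issues(path: str) -> Counter:
    """Count issues per (category, status) pair, e.g. ("Data", "Open")."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["category"], row["status"])] += 1
    return counts

# Usage: print the counts behind a chart like the one discussed below.
# for (category, status), n in sorted(tally_issues("issues.csv").items()):
#     print(f"{category:35} {status:8} {n}")
```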

A few interesting observations about the chart:
  1. You will never close all your issues, so it makes sense that there are still open issues at deployment time (yellow).
  2. The good news is that relatively few requirements questions, test case issues, and test data issues remain open at deployment (yellow).
  3. Also good news in the closed issues (green): very few change requests were fixed and closed by deployment, relative to functional, data, and configuration issues.
  4. While a number of user experience issues are open at deployment (yellow), a large number were fixed as well (green). It is good to see this - it suggests (hopefully) that the most critical ones were the ones closed.
  5. Of the overall issues (blue), the high number of requirements questions is explained by the fact that we started tracking them early in the project.
  6. The overall number of test case issues is actually concerning (blue). That's a sign either that they were poorly written from the beginning or that the system changed significantly after they were developed - and notice, however, that the number of closed change requests here was low (green).
  7. Also among the overall issues, the number of change requests is extremely high relative to the number of functional issues (blue).
  8. In this project, the number of user experience improvements is high, but that is not surprising, since we didn't put much of the UI design in place until late in the project (blue).
  9. Finally, the number of data defects open at deployment is probably too high (yellow), even though it's not a huge percentage of the total defects (blue). The issue on this project was that the data-code merge wasn't done until very late, so the data issues spiked just before deployment.


Friday, March 26, 2010

Software Requirements Traceability – How Low Do We Go?

Tracing software requirements throughout the project is considered a good practice in software engineering. Numerous studies have shown the benefits of traceability and how it can help avoid defects in software projects.

The generally accepted practice of tracing software requirements includes elements such as architecture and other design components, source code modules, tests, help files and documentation. But how low should traceability go?

On a recent project of mine, the testing organization asked for traceability of the software requirements. The functional requirements were traced to the business requirements, and the business rules and UI data elements were traced to the functional requirements. As the project progresses, additional mappings to design, documentation, and test cases will be added. All of the tracing is done in MS Excel, which is also problematic, but that is a different issue for a different post.

While I have traced software requirements before, I have never traced down to the UI data element level. When I asked the testing organization why they wanted this level of tracing, their answer was the standard one: “to make sure that we have test cases that cover everything”. But is there really value in tracing down to this level?

I did some research and could not find any direct references to tracing down to this level. My research did show that some military and defense contracts may call for tracing to this level, but for projects outside the military and defense world, it is rare.

Tracing to this level is expensive, no doubt about that, especially if the tracing is done properly. For example, a single UI data element could be valid for a number of software requirements. Is it enough to trace it to the first software requirement where it is valid, or is it more proper to trace it to every software requirement where it is valid? Given a basic UI data element such as customer name, it could trace to quite a few software requirements in a large system.

What if the software requirements change? Maintaining valid traceability can be challenging, especially if the tracing is done manually (which, in this case, it is). Every time a software requirement is changed, or a UI data element is added, deleted, or renamed, the traceability matrix should be updated. The larger the system, the more complex and cumbersome this becomes.
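
To illustrate the maintenance burden, here is a minimal sketch in Python of a traceability matrix as a simple mapping. All identifiers are invented, and the real matrix lived in Excel; even in this toy form, every rename or requirement change means touching the structure by hand.

```python
# Hypothetical sketch: a traceability matrix mapping UI data elements to the
# software requirements they trace to. All identifiers are invented; the
# real project kept this in an MS Excel workbook, maintained by hand.

trace = {
    "customer_name": {"SR-101", "SR-214", "SR-350"},  # one element, many requirements
    "order_total": {"SR-102"},
}

def rename_element(old: str, new: str) -> None:
    """A rename: one of the manual updates every change forces."""
    trace[new] = trace.pop(old)

def untraced_elements(all_elements: set) -> set:
    """Find UI data elements with no requirement coverage at all."""
    return {e for e in all_elements if not trace.get(e)}

rename_element("order_total", "order_amount")
print(untraced_elements({"customer_name", "order_amount", "ship_date"}))
# -> {'ship_date'}: an element the test organization would flag as untested
```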

So while I applaud the test organization’s desire to ensure that every UI data element within the system is tested and valid per a software requirement, is it worth the cost?


Wednesday, March 24, 2010

How to Shoot Yourself in the Foot

7 ways to do software requirements poorly to set your project up for failure and what to do instead.

Why spend precious project cycle time on software requirements rather than just getting down to business on design and implementation? Back in the 1970s, Barry Boehm published extensive data from large projects showing that an error created early in the project, for example during requirements specification, costs 50 to 1,000 times as much to correct late in the project as it does to correct (or avoid) close to the point where it was originally created. His landmark study has been confirmed over and over again on countless projects in the decades since.


Why are errors so much more costly to correct downstream? There are two main reasons: project artifacts to be corrected increase over time and more people need to get involved in defect correction as the life cycle progresses. A poor requirement or an unneeded requirement will become the ‘tip of the iceberg’ for problems later in the project. One software requirement can easily turn into several design diagrams. One design diagram can turn into hundreds of lines of source code, scores of test cases, many pages of end-user documentation, help screens, instructions for technical support personnel, marketing collateral, etc. It is obvious that correcting a mistake at requirements time will be much less costly than correcting all the manifestations of the requirements problem downstream. Even more expensive is developing all of the downstream artifacts for an unneeded requirement and then debugging problems with it before realizing it had little or no business value for the release.


A requirement defect found and fixed during requirements specification involves only those eliciting and designing the requirements and their customer contacts. A requirement defect found and fixed after the product has shipped involves many more groups and people: the call center, customer support, field support, testing, development, design, and requirements. Just think of all the forms to fill out, emails, phone messages, and meetings that a problem at the end of the life cycle can involve compared to one at the beginning.


You might say, “I am in an Agile shop and we send a release every 4 weeks to our customers, so we can’t be hurt as much as on a waterfall project.” Well, Agile project management approaches can certainly help limit the damage through short-duration time-boxing, but even there the cost is greater at the end of an iteration than at the beginning. Blowing two weeks of staff time in an Agile sprint on a requirements bug that could have been corrected or avoided on day 2 is still expensive and best avoided.

Over my next few blog posts I will go into the following 7 ways to do software requirements poorly and set your project up for failure, and what you could do instead:


  1. Short-change the time on requirements or don’t do them at all
  2. Don’t listen to your customer’s needs
  3. Don’t use models
  4. Use weasel words
  5. Don’t do requirements traceability
  6. Don’t prioritize requirements
  7. Don’t baseline and change control requirements

Don’t shoot yourself in the foot on your next project!



Monday, March 22, 2010

So you want to get a job as a Business Analyst? Here are the things we look for.

We’ve written a few blog posts about how to be a successful BA.

But people often ask: what do we look for in a BA?

One of our biggest challenges is finding good Business Analysts and Product Managers. We interview a wide variety of individuals, but it’s really tough to find good hires. Since we specialize in such a detailed field, we have to look for specific skills. Relevant experience and consulting experience alone do not suffice.

Our interview process is also a bit more unusual than most companies’. We start by having the candidate fill out a questionnaire, then we conduct a phone interview. After that, we hold two onsite interviews that focus on the specific skills required for the business analyst and product management jobs.

This process is more thorough than most; however, we feel we get a better sense of a candidate’s background and culture fit by running this series of evaluations.

During this whole process we look for certain skills at each step. If a candidate does poorly at any of these steps, we won’t move him/her on through the remainder of the process.

All throughout the interview process we ask ourselves:

Do they have strong analytical skills?
Have they written software requirements documents before?
Are they detail oriented?
Are they a good cultural fit?
Do they have any consulting experience?
Are they coachable/ trainable?
Are they personable?
Are they professional?
Would I like to work with this person?

Answering ‘yes’ to all of these is a rare find.
Most important to us: we have to have BAs and Product Managers who possess strong analytical skills, are very detail oriented, have solid technical skills, and are people we would like to work with.
