
Wednesday, October 21, 2009

Using Use Cases To Create Test Cases

As part of my "Live from BAWorld: Boston" series, I attended a talk Monday by Matthew Leach of Doreen Evan Associates called "Leveraging Multi-Level Use Cases for Testing and Other Ways to Obtain Greater ROI on your Business Analysis Investment".

His talk went into great depth about how you can use use cases on your project in multiple ways, looking at different levels of detail in use cases. He quoted a study from VokeStream indicating that 76% of people surveyed still build their test cases manually.

This is the one point I also want to emphasize: re-use your use cases to generate test cases, particularly user acceptance test scripts (UAT scripts). At some level this seems obvious to me, but based on the study above, Matthew's experiences, and my own, it apparently isn't all that obvious after all! On a recent project I worked on, the business came to us to talk about how awful the integration and unit test cases were - that they just would not work for UAT. My immediate thought was "well of course not, those aren't meant for UAT". Apparently QA had told them to write their UAT scripts from those test cases. That's almost as challenging as writing them from code! So we walked them through how we could take the use cases we had written (which the existing test cases were generated from) and easily translate those into UAT scripts.

If you think of your use case as having a happy path and alternative paths, you would want to blow out each of those paths into at least one test case by adding concrete data to the use case (a small sketch of this translation follows the list below). So for example, if there is a step for the user to input "shipping information", then in the UAT script you would want to supply actual shipping information, including a specific name and address from the test data set. It's important during UAT to also test the alternate and exception paths - to ensure the not-so-happy paths and errors are handled to the business' satisfaction. That said, it's also unrealistic to think your users have time to test all possible paths. To mitigate this, I have two suggestions.
  1. Pick the most important UAT scripts to test. You have to decide what "important" is, but it would be wise to look at the use cases that add the most business value and/or are most frequently used.
  2. Use your BAs during UAT - particularly for the less important test cases that the users can't get to.
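
To make the path-to-test-case translation concrete, here is a minimal Python sketch. Everything in it (UseCasePath, UatTestCase, build_uat_test_cases, the shipping data) is made up purely for illustration; the point is simply that each path in the use case becomes its own UAT test case once you attach concrete data to it.

    from dataclasses import dataclass

    @dataclass
    class UseCasePath:
        name: str           # e.g. "Happy path" or "Invalid shipping address"
        steps: list[str]    # the abstract steps written in the use case

    @dataclass
    class UatTestCase:
        path_name: str
        steps: list[str]
        test_data: dict     # the concrete values the tester will actually enter

    def build_uat_test_cases(paths, data_sets):
        """Create one UAT test case per use case path, attaching concrete data."""
        return [UatTestCase(p.name, p.steps, data_sets.get(p.name, {})) for p in paths]

    # The "input shipping information" step gets a real name and address from the test data set.
    order_paths = [
        UseCasePath("Happy path",
                    ["Add item to cart", "Input shipping information", "Confirm order"]),
        UseCasePath("Invalid shipping address",
                    ["Add item to cart", "Input shipping information",
                     "System rejects address", "User corrects address"]),
    ]
    test_data = {
        "Happy path": {"name": "Pat Smith", "address": "100 Main St, Austin, TX 78701"},
        "Invalid shipping address": {"name": "Pat Smith", "address": "100 Main St (missing city/state)"},
    }

    for case in build_uat_test_cases(order_paths, test_data):
        print(case.path_name, "-", case.test_data)

Each resulting test case is one UAT script stub: the steps come straight from the use case, and the tester runs them with the concrete data instead of a placeholder.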



Monday, September 08, 2008

Ask Joy: How Do You Know if Your Software Requirements are Testable?

We received a question for the Ask Joy column from our Spring newsletter. Here's the question and response!

Question: “How do you determine if a requirement is testable and please answer in concise, understandable English.”

Answer: Hi Jon, I sat down to answer your question, and I honestly found myself a bit nervous about the second part of it – in which you asked me to write in “concise, understandable English”. My first reaction was that I don’t know any other language well enough to pull off an answer, so the English part is easy. But my nervousness came from the concern that you may not think my answer is concise and understandable.

And therein lies the problem: the question isn’t measurable in itself! Plain English is easy to read and gets the gist across, but it is often open to multiple interpretations.

This is dangerous to software development, because if there are multiple interpretations, there is no guarantee that the solution will be what you asked for. Therefore, we should set out to write requirements that are precise, unambiguous, clearly understood…and testable.

So how can we then know if a requirement really is testable? Simply put, you must determine how you would measure the success of the requirement in the solution, and then decide whether, if you actually measured it, you would have a solid “yes” or “no” answer as to whether it passed or failed. If the answer is not black and white, then it’s not testable.
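
As a rough sketch of that “solid yes or no” test (the requirement wording and function names below are invented for the example), a testable requirement boils down to a measurement compared against a stated limit, while an untestable one leaves nothing to measure:

    def meets_search_response_requirement(measured_seconds):
        """Testable: "Search results return within 2 seconds."
        Measure the response time and compare it to the stated limit - a clear pass/fail."""
        return measured_seconds <= 2.0

    def meets_search_is_fast_requirement():
        """Not testable: "Search should be fast."
        There is no stated criterion to measure against, so any pass/fail call is just opinion."""
        raise NotImplementedError("no measurable criterion in the requirement")

    print(meets_search_response_requirement(1.4))  # True  - the requirement passed
    print(meets_search_response_requirement(3.2))  # False - the requirement failed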

For further related reading:

Sometimes requirements are vague, and you can find more on this topic in a post that references an INCOSE 2008 paper on this very topic.

Roger Cauvin writes about the idea that requirements need to be testable in principle, but that it's less important whether they are actually testable in practice.

Scott at Tyner Blain further discusses why we care about testable requirements.

Just remember: in the end, it’s not really about the requirements and whether they are testable or not; it’s about the solution you get out on the other side. Thank you, Jon, for the question!
Please feel free to comment below if you’d like to continue the conversation.

