
Wednesday, March 31, 2010

Software Requirements Models that Work Well Together – Decision Trees and Process Flow Diagrams

As Requirements Analysts we use Models to paint a picture of the Application we are defining for the developers and eventually to derive the final requirements they will build to. There are many Models that we use to define different aspects of the system – the People who influence the development of the application and eventually use it, the Data that is consumed, transformed and created by the application and the different Hardware and Software Systems that will be used to develop the application.

Some of these Model pairs, like the Org Chart and the Data Flow Diagram, while useful in their own right, are not necessarily complementary. The Org Chart is useful for identifying the Actors and Influencers of the Application. The Data Flow Diagram depicts the flow and transformation of data as it moves from one process to another in the Application. While both these Models are useful and possibly required for your analysis, they are not complementary. They describe different portions of the Application, and using them together does not necessarily enhance your understanding of one or the other.

Other Model pairs work very well together in enhancing one's understanding of the Application. One Model pair that I often use is Process Flows and Decision Trees. Decision Trees work very well in conjunction with Process Flows for the following reasons.

1. They simplify Process Flow diagrams by removing complex decision logic from the main flow.

2. They keep the reader's focus on the overall flow without getting distracted by the details of one portion of the flow.

3. They make the Process Flows easier to create and more "legible". Flows with many decisions and branches are time consuming to create and harder to understand.

A common scenario for pairing Decision Trees and Process Flows is when the flows have validation steps. Validation steps are very common in transactional process flows like order creation, quote creation, product configuration and so on. The validation step itself is likely to be complex and will typically require the application to check that all the required data is provided and properly formed. For example, an order entry validation step will check for proper Customer Information, Address, Payment Information and so on. It is certainly not incorrect to put the validation steps into the Process Flow, but doing so can make the flow very busy and messy at this step in the process. I have found that readers do one of two things when confronted with this in a Process Flow, and both are bad. They either focus on the "complicated part" and skim through the rest of the flow, or they do the exact opposite. The net result is that the overall review suffers.

These days, I prefer to keep the branching in my Process Flows to a minimum and use Decision Trees in conjunction with them. As a general rule of thumb, if the Process Flow I am working on is reasonably complex, I try to restrict myself to just one Decision Box in a sequence. In the Order Validation example above, I would have had just one Decision Box at the validation step, titled "Order Valid", and moved forward from there. The individual validation checks within the "Order Valid" decision I would have depicted in a Decision Tree.
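To make the idea concrete, here is a minimal sketch of what the branching behind a single "Order Valid" Decision Box might look like if expressed as code. The field names (customer, address, payment) are illustrative assumptions, not the actual checks from any real order entry system:

```python
# Hypothetical sketch: the branching logic a Decision Tree would capture
# behind a single "Order Valid" decision box in a Process Flow.
# Field names are invented for illustration.

def validate_order(order: dict) -> tuple[bool, str]:
    """Walk the decision tree top to bottom; return (valid, reason)."""
    if not order.get("customer"):
        return False, "Missing customer information"
    if not order.get("address"):
        return False, "Missing or malformed address"
    payment = order.get("payment", {})
    if payment.get("method") == "card" and not payment.get("card_number"):
        return False, "Card payment selected but no card number"
    return True, "Order valid"

print(validate_order({"customer": "Acme", "address": "1 Main St",
                      "payment": {"method": "invoice"}}))
```

The Process Flow only needs the yes/no answer; the tree of checks lives in its own model, which is exactly the separation described above.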

The benefits I have found from doing this are as follows.

1. The Process Flows are easier to create.

2. The Decision Trees enable me to depict complex logic easily.

3. Splitting out these two aspects enables me to get both right.

4. Lastly, deriving requirements is easier.

Try it and let me know what you think.


Monday, March 15, 2010

3 Basic Tips to Data Migration Requirements for your Software Project

It's the late stage of a project, so it's time to start worrying about your data migration requirements if you haven't already! If you have done it, good for you! You can just stop reading. But so many projects push this to the end and then panic when it doesn't go well. I have seen project after project have late deployments because the data was not properly migrated and tested.

Here are a few tips to consider:

1. Use your data models to identify what data should be migrated. For example, you can use the boxes in Business Data Diagrams (BDDs) and the data stores in Data Flow Diagrams (DFDs).

2. You can actually do a two-way check here - if you get data dumps and have fields that are not in your data models, then you may be missing a requirement.

3. Plan for a lot of time to test this. Surely someone will test the migration scripts themselves and someone is likely testing your code, but you also want to plan for testing the integration of the migrated data with the code. So often properties are not set up correctly, and so the data doesn't show up in your software even though it's in the database behind the scenes.
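The two-way check in tip 2 amounts to comparing two sets of field names. A minimal sketch, with invented field names standing in for a real data dump and a real BDD:

```python
# Hypothetical sketch of the two-way check: compare the fields found in a
# legacy data dump against the attributes captured in your data models.
# All field names here are made up for illustration.

model_fields = {"customer_name", "billing_address", "order_total", "order_date"}
dump_fields = {"customer_name", "billing_address", "order_total", "fax_number"}

# Direction 1: fields in the dump the models never mention -> possible missed requirements
missing_from_model = dump_fields - model_fields

# Direction 2: modeled fields absent from the dump -> data the migration must supply
missing_from_dump = model_fields - dump_fields

print("In dump but not modeled:", sorted(missing_from_model))
print("Modeled but not in dump:", sorted(missing_from_dump))
```

Either non-empty set is a flag for a conversation, not an automatic defect: legacy systems often carry dead fields, and models often carry new ones.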


Friday, June 26, 2009

RML™ Requirements Model 5 - The Data Flow Diagram (DFD)



The Data Flow Diagram (DFD) is a very useful part of the Requirements Modeling Language (RML™). The Structured Analysis Wiki contains a great explanation of how to create a DFD, so I’m not going to cover that information here. Instead, I’m going to provide one answer to the question “How do I know when to use a DFD?” This answer comes from my own (unique) view of the world, so some of you probably won’t relate to it, but others will--I’m sure there is at least one more out there…


Sometimes I have what I think of as an "equation" in my head. In vague terms, I may be thinking “Customer Data + Product Data + Input from Sales Rep + Taxes = Order”. But that's not really right, nor is the equation a good representation of the information.



And, the “equation” also misses a few other particulars about creating an order that I’d like to convey: Sales Reps update customer data, the Finance staff maintains the rules for the tax calculations, and orders flow to the Order Fulfillment system after creation.



The data flow diagram is great for representing all of this. Here’s my “equation,” expressed as a data flow diagram:




I have found that business users, as well as developers, react well to this model—it provides a “big picture” with which to begin a conversation about creating an order. It paints exactly the picture I want to convey and validate when I’m thinking “Customer Data + Product Data + Input from Sales Rep + Taxes = Order”.
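If it helps to see the same picture in another form, the diagram's content can be captured as a list of labeled flows between entities, processes, and stores. The node names below come from the narrative above; the exact set of flows is my assumption about what the diagram contains:

```python
# Rough sketch of the order-creation DFD described in the text, as labeled
# (source, destination, data) flows. The flow list is an assumption based
# on the narrative, not a transcription of the actual diagram.

flows = [
    ("Sales Rep", "Create Order", "order input"),
    ("Sales Rep", "Update Customer Data", "customer updates"),
    ("Customer Data store", "Create Order", "customer data"),
    ("Product Data store", "Create Order", "product data"),
    ("Finance Staff", "Maintain Tax Rules", "tax rule updates"),
    ("Tax Rules store", "Create Order", "tax rules"),
    ("Create Order", "Orders store", "new order"),
    ("Orders store", "Order Fulfillment system", "orders to fulfill"),
]

# Everything feeding the Create Order process -- the terms of the "equation":
inputs = [src for src, dst, _ in flows if dst == "Create Order"]
print(inputs)
```

Filtering on the "Create Order" process recovers exactly the left-hand side of the original "equation": customer data, product data, sales rep input, and taxes.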




Display this picture, and you’ll get some interesting questions and comments:


  • How is customer data populated initially?

  • Does the order fulfillment system update the order store with information about the fulfillment of the orders?

  • Where does the product data come from?

  • Are all of the tax rules manually entered, or is there also an electronic source for them?

  • And, maybe, “No, that’s wrong, updating customer data isn’t a separate process. Customer updates need to automatically flow out of changes made when creating the order.”

One note: it doesn’t have to be technically perfect to be useful. I often provide “conceptual” DFDs, in that I intentionally provide conceptual, but not technical information. For example, conceptually, there is a “products” data store. Technically, there may be multiple stores: product list, product descriptions, etc. The important thing is that they work together to provide product data. Developers and architects are very receptive to this; they understand I’m illustrating the behaviors of the system without defining the implementation (which, after all, is their job). Oh, and how does the audience know it is conceptual? I put the word “conceptual” in the title!



You may notice I didn’t number my processes. That’s because I rarely decompose them and many people tend to take the numbering as ordering. So, for my usage, numbering adds confusion rather than clarity.

Happy diagramming!


Tuesday, June 23, 2009

Live from REFSQ: Deriving Information Requirements from Responsibility Models

Tim Storer from St. Andrews University in Scotland presented this paper. His underlying assumption is that in large scale socio-technical enterprise systems, you are constrained by the design of the platform you select, integration to other systems, and systems of systems factors. He contends that the functional requirements in these projects are more useful in the procurement process where you select the system to implement, and specifically less useful in the actual implementation. The reason for this is that the system is already constrained by behavior based on what you select. While I don’t entirely agree with the de-emphasis of functional requirements he implied, his overall point is absolutely valid in that you often are in a situation where you must configure the system and adapt business processes around the system.


This paper discusses how beyond the functional requirements, you also need what he calls “information requirements” which help you to configure the purchased system for the given context. These requirements tell you:



  • What information is needed

  • Who needs the information

  • Who produces information for the system

  • Flexibility for access to that information

  • Consequences of incorrect information flow

And these requirements influence platform configuration, organizational changes, and system integration.


To identify these information requirements, they use “responsibilities” as a starting point. They have defined “responsibility” as “a duty, held by some agent, to achieve, maintain or avoid some given state, subject to conformance with organizational, social, and cultural norms.” They are not goals, but rather more abstract and less formal. They are not concerned with specific types of agents. Then for each responsibility, they determine the resource needs, ultimately leading to information requirements.


In his example, a self-service check-in terminal has the responsibility "provide boarding pass". Then they can look at the resource needs – blank tickets, a ticket database which is fed by a ticket server, etc.


For the information resources, you then have to look at what happens if the resource is unavailable, inaccurate, incomplete, late, or even early. And ultimately you get to the details mentioned above to form the information requirements.
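That walk-through is mechanical enough to sketch as a checklist generator. The resource names come from the boarding-pass example above; the question wording is my own:

```python
# Hedged sketch: enumerate each failure mode against each information
# resource, producing questions an analyst would take into elicitation.
# Resource names follow the boarding-pass example; phrasing is invented.

failure_modes = ["unavailable", "inaccurate", "incomplete", "late", "early"]
resources = ["blank tickets", "the ticket database"]

questions = [f"What happens if {r} is {m}?"
             for r in resources for m in failure_modes]

for q in questions:
    print(q)
```

Even for two resources the grid yields ten questions, which is the point: the structure forces you to consider combinations you would otherwise skip.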


What I found useful in this paper is that it helped validate some of the thinking we are applying on our own projects, in that we look at something very closely related with data requirements. We've written before about our People, Systems, Data (PSD) approach to discovering all requirements using visual models from RML™. In this case, if we break down the Data component of PSD in combination with the People component of PSD, we have something very similar to what Tim spoke to. Very briefly, we use the People models (e.g. Org Charts) to identify the people using the system and then look at what stories they need to execute in the system (e.g. User Stories). Now we can also look at the top-level data model (e.g. a BDD – Business Data Diagram), and for each data entity in the diagram, we look at how it is:



  • Created

  • Viewed

  • Changed

  • Removed

  • Copied

  • Moved

(And yes, I deliberately did not call out the CRUD because I'm not a big fan of the acronym!). In doing this, we can actually complete the list of user stories and identify system integration points. Similar to the above, we would also use the user stories to cross-verify that the BDD was complete. Our data actions closely map to the list in Tim's presentation:



  • What information is needed -> the BDD entities

  • Who needs the information -> Who views it, changes it, copies it, moves it

  • Who produces information for the system -> How is the data entity created

  • Flexibility for access to that information -> Who views it

  • Consequences of incorrect information flow -> exceptions in the stories that come out of the list

Now in a recent example, we were working with a system that we did not know well. There were six main data objects and a list of about ten user stories for the system. I wanted to validate whether the list of User Stories was complete, so I walked through each of the six actions above against each of the six data objects. Not every data object could have every action performed on it in the system, but it was useful to deliberately check each. In the end we identified about five new User Stories.
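The grid walk described above can be sketched in a few lines. The object names and the "already covered" pairs below are invented placeholders, not the actual project's data objects or user stories:

```python
# Illustrative sketch of the completeness check: walk every data action
# against every data object and surface the combinations the existing
# user stories do not yet cover. All names here are hypothetical.

from itertools import product

actions = ["Created", "Viewed", "Changed", "Removed", "Copied", "Moved"]
objects = ["Order", "Quote", "Customer", "Product", "Invoice", "Shipment"]

# Pretend these pairs are already covered by existing user stories.
covered = {("Order", "Created"), ("Order", "Viewed"), ("Quote", "Created")}

gaps = [(obj, act) for obj, act in product(objects, actions)
        if (obj, act) not in covered]

print(f"{len(gaps)} object/action pairs to review for missing user stories")
```

Most of the surfaced pairs will turn out to be "not applicable in this system", but, as in the project above, a handful usually turn into genuinely missing User Stories.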
