Wednesday, March 31, 2010

Software Requirements Models that Work Well Together – Decision Trees and Process Flow Diagrams

As Requirements Analysts we use Models to paint a picture of the Application we are defining for the developers, and eventually to derive the final requirements they will build to. There are many Models that we use to define different aspects of the system – the People who influence the development of the application and eventually use it, the Data that is consumed, transformed and created by the application, and the different Hardware and Software Systems that will be used to develop the application.

Some Model pairs, like the Org Chart and the Data Flow Diagram, are useful in their own right but not necessarily complementary. The Org Chart is useful for identifying the Actors and Influencers of the Application. The Data Flow Diagram depicts the flow and transformation of data as it moves from one process to another in the Application. While both of these Models are useful and possibly required for your analysis, they are not complementary: they describe different portions of the Application, and using them together does not necessarily enhance your understanding of one or the other.

Other Model pairs work very well together in enhancing one’s understanding of the Application. One pair I often use together is Process Flows and Decision Trees. Decision Trees work very well in conjunction with Process Flows for the following reasons:

1. They help simplify Process Flow diagrams by removing complex decision logic from the main flow.

2. They help keep the reader’s focus on the overall flow without getting distracted by the details of one portion of the flow.

3. They make the Process Flows easier to create and more “legible”. Flows with many decisions and branches are time consuming to create and harder to understand.

A common scenario for pairing Decision Trees and Process Flows is when the flows have validation steps. Validation steps are very common in transactional process flows like order creation, quote creation, product configuration and so on. The validation step itself is likely to be complex and will typically require the application to check that all the required data is provided and properly formed. For example, an order entry validation step will check for proper Customer Information, Address, Payment Information and so on. It is certainly not incorrect to put the validation steps into the Process Flow, but doing so can make the flow very busy and messy at this step in the process. I have found that readers do one of two things when confronted with this in a Process Flow, and both are bad: they either focus on the “complicated part” and skim through the rest of the flow, or they do the exact opposite. The net result is that the overall review suffers.

These days, I prefer to keep the branching in my Process Flows to a minimum and use Decision Trees in conjunction with them. As a general rule of thumb, if the Process Flow I am working on is reasonably complex, I try to restrict myself to just one Decision Box in a sequence. In the Order Validation example above, I would have just one Decision Box at the validation step titled “Order Valid” and move forward from there. The individual validation checks within the “Order Valid” decision I would depict in a Decision Tree, as sketched below.
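
To make this concrete, here is a minimal sketch in Python of what the “Order Valid” Decision Tree might encapsulate. The specific checks (customer information, address, payment) are hypothetical stand-ins, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer_name: str = ""
    shipping_address: str = ""
    payment_info: str = ""

def order_valid(order: Order) -> list[str]:
    """The 'Order Valid' Decision Tree: all of the branching lives here,
    out of the main Process Flow."""
    failures = []
    if not order.customer_name:
        failures.append("missing customer information")
    if not order.shipping_address:
        failures.append("missing or malformed address")
    if not order.payment_info:
        failures.append("missing payment information")
    return failures

# The Process Flow keeps a single Decision Box:
order = Order(customer_name="Acme Corp")
failures = order_valid(order)
if failures:
    print("Order invalid:", ", ".join(failures))  # branch to the error path
else:
    print("Order valid")                          # continue the main flow
```

The main flow asks one question; the tree answers it.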

The benefits I have found from doing this are as follows.

1. The Process Flows are easier to create.

2. The Decision Trees enable me to depict complex logic easily.

3. Splitting out these two aspects enables me to get both right.

4. Lastly, deriving requirements is easier.

Try it and let me know what you think.


Monday, March 29, 2010

Software Project Issue Metrics

I love numbers, so I'm always interested in what metrics we can pull from projects. Sometimes I pull them just because I'm curious, sometimes they help support opinions and gut feelings, and most often they are useful for making decisions on your project.

On a recent project, I pulled metrics from our issue tracking system to look at the types of issues that were logged. I will first mention that we actually start tracking issues much earlier than most projects - we do it in the requirements phase.

For context, this snapshot was taken about 1 week before the initial deployment of the project.

The issue types included:
  • Requirements questions - for any open questions about the requirements
  • Configuration - these are most relevant for out-of-the-box solutions that are heavily configured instead of developed
  • Test case - when QA test cases need to be updated based on misunderstanding of requirements, design, or changes to either
  • Functional - the functionality developed doesn't work as expected
  • Data - migrated or created data has errors, fields missing, or the wrong set was extracted from an existing system
  • Training - when defects are logged because a user or tester did not understand how the system should work
  • Change requests - every time a new enhancement is requested it is logged
  • Development ignored requirements - an interesting category that came out of the fact that some developers did not read the requirements when they developed
  • Test data - we had a lot of errors in created test data that made functionality look like it didn't work
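
Tallies like the ones behind this chart are easy to pull from most trackers. Here is a minimal sketch, assuming a hypothetical CSV export with “category” and “status” columns:

```python
import csv
from collections import Counter

def tally_issues(path: str) -> Counter:
    """Count issues by (category, open/closed) from a tracker export."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            status = "closed" if row["status"].strip().lower() == "closed" else "open"
            counts[(row["category"], status)] += 1
    return counts

if __name__ == "__main__":
    for (category, status), n in sorted(tally_issues("issues_export.csv").items()):
        print(f"{category:<35} {status:<7} {n:>4}")
```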

A few interesting observations about the chart:
  1. You will never close all your issues by deployment time, so it makes sense that there are still open issues (yellow).
  2. Good news: relatively few requirements questions, test case issues, and test data issues remain open at deployment (yellow).
  3. Also good news in the closed issues (green): very few change requests were fixed and closed at deployment, relative to functional, data, and configuration issues.
  4. While a number of user experience issues are open at deployment (yellow), a large number were fixed as well (green). It is good to see this - hopefully an indication that the most critical ones were fixed.
  5. Of the overall issues (blue), the high level of requirements questions is because we started tracking them early in the project.
  6. The overall number of test case issues is actually concerning (blue). That's a sign they were poorly written from the beginning, or that the system changed significantly after they were developed - and notice that the number of change requests that were closed was low here (green).
  7. Also in the overall issues, the number of change requests is extremely high relative to functional issues (blue).
  8. In this project, the number of user experience improvements is high, but that is not surprising, since we didn't put a lot of the UI design in place until late in the project (blue).
  9. Finally, the number of data defects open at deployment is probably too high (yellow), although it's not a huge percentage of the total defects (blue). The issue on that project was that the data-code merge wasn't done until very late, so data issues spiked just before deployment.


Friday, March 26, 2010

Software Requirements Traceability – How Low Do We Go?

Tracing software requirements throughout the project is considered a good practice in software engineering. There are numerous studies that have shown the benefit of traceability and how traceability can avoid defects in software projects.

The generally accepted practice of tracing software requirements includes elements such as architecture and other design components, source code modules, tests, help files and documentation. But how low should traceability go?

On a recent project, the testing organization asked for traceability of the software requirements. The functional requirements were traced to the business requirements, and the business rules and the UI data elements were traced to the functional requirements. As the project progressed, additional mappings to design, documentation and test cases would be added. All of the tracing was done in MS Excel, which is also problematic, but that is a different issue for a different post.

While I have traced software requirements before, I have never traced down to the UI data element level. When I asked the testing organization why they desired this level of tracing, their answer was the standard, “to make sure that we have test cases that cover everything”. But is there really value in tracing down to this level?

I did some research, and could not find any direct references to tracing down to this level. My research did show that some military and defense contracts may call for tracing to this level, but for projects outside of the military or defense world, it is rare.

Tracing to this level is expensive, no doubt about that, especially if it is done properly. For example, a UI data element could be valid for a number of software requirements. Is it enough to trace it to the first software requirement where it is valid, or is it more proper to trace it to every software requirement where it is valid? Given a basic UI data element such as customer name, it could trace to quite a few software requirements in a large system.

What if the software requirements change? Maintaining valid traceability can be challenging, especially if the tracing is done manually (which, in this case, it is). Every time a software requirement is changed, or a UI data element is added, deleted, or renamed, the traceability matrix must be updated. The larger the system, the more complex and cumbersome this becomes.
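
One way to keep a manual matrix honest is to treat it as data and script the consistency checks. Below is a minimal sketch in Python, with hypothetical element names and requirement IDs:

```python
# UI data element -> every software requirement where it is valid
# (all IDs here are hypothetical).
trace = {
    "customer_name": {"SR-101", "SR-214", "SR-377"},
    "order_total":   {"SR-150"},
}
ui_elements = {"customer_name", "order_total", "ship_date"}

# Check 1: every UI element should trace to at least one requirement.
untraced = ui_elements - trace.keys()
if untraced:
    print("UI elements with no requirement trace:", sorted(untraced))

# Check 2: impact analysis when a requirement changes.
changed_req = "SR-101"
impacted = sorted(elem for elem, reqs in trace.items() if changed_req in reqs)
print(f"Changing {changed_req} touches:", impacted)
```

A script like this removes some of the drudgery of keeping the matrix current, though it does not answer the cost question.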

So while I applaud the test organization's desire to ensure that every UI data element within the system is tested and valid per a software requirement, is it worth the cost?


Wednesday, March 24, 2010

How to Shoot Yourself in the Foot

7 ways to do software requirements poorly to set your project up for failure and what to do instead.

Why spend precious project cycle time on software requirements rather than just getting down to business on design and implementation? Back in the 1970s, Barry Boehm published extensive data from large projects showing that an error created early in the project, for example during requirements specification, costs 50 to 1,000 times as much to correct late in the project as it does to correct (or avoid) close to the point where it was originally created. His landmark study has been confirmed over and over again on countless projects in the decades since.


Why are errors so much more costly to correct downstream? There are two main reasons: project artifacts to be corrected increase over time and more people need to get involved in defect correction as the life cycle progresses. A poor requirement or an unneeded requirement will become the ‘tip of the iceberg’ for problems later in the project. One software requirement can easily turn into several design diagrams. One design diagram can turn into hundreds of lines of source code, scores of test cases, many pages of end-user documentation, help screens, instructions for technical support personnel, marketing collateral, etc. It is obvious that correcting a mistake at requirements time will be much less costly than correcting all the manifestations of the requirements problem downstream. Even more expensive is developing all of the downstream artifacts for an unneeded requirement and then debugging problems with it before realizing it had little or no business value for the release.


A requirement defect found and fixed during the requirements specification involves those eliciting and designing the requirements and the customer contacts. A requirement defect found and fixed after the product has shipped will involve many more groups and people such as call center, customer support, field support, testing, development, design and requirements. Just think of all of the forms to fill out, emails, phone messages and meetings that could be involved with a problem at the end of the life cycle as compared to the beginning of it.


You might say, “I am in an Agile shop and we send a release every 4 weeks to our customers, so we can’t be hurt as much as on a waterfall project.” Well, Agile project management approaches can certainly help limit the damage through short-duration time-boxing, but even there the cost is greater at the end of an iteration than at the beginning. Blowing 2 weeks of staff time in an Agile sprint on a requirements bug that could have been corrected or avoided on day 2 is still expensive and to be avoided.

Over my next few blogs I will go into the following 7 ways to do software requirements poorly to set your project up for failure, and what you could do instead:


  1. Short-change the time on requirements or don’t do them at all
  2. Don’t listen to your customer’s needs
  3. Don’t use models
  4. Use weasel words
  5. Don’t do requirements traceability
  6. Don’t prioritize requirements
  7. Don’t baseline and change control requirements
Don’t shoot yourself in the foot on your next project!



Monday, March 22, 2010

So you want to get a job as a Business Analyst? Here are the things we look for.

We’ve written a few blog posts about how to be a successful BA.

But people often ask: what do we look for in a BA?

One of our biggest challenges is finding good Business Analysts and Product Managers. We interview a wide variety of individuals, but it’s really tough to find good hires. Since we specialize in such a detailed field, we really have to look for specific skills; relevant industry experience and consulting experience alone do not suffice.

Our interview process is also a bit more unusual than most companies’. We start by having the candidate fill out a questionnaire, then we conduct a phone interview. After that, we hold two onsite interviews that dig into the specific skills the business analyst and product management jobs require.

This process is more thorough than most; however, we feel we get a better sense of each candidate’s background and culture fit by doing this series of evaluations.

During this whole process we look for certain skills at each step. If a candidate does poorly at any step, we won’t move him/her on through the remainder of the process.

All throughout the interview process we ask ourselves:

Do they have strong analytical skills?
Have they written software requirements documents before?
Are they detail oriented?
Are they a good cultural fit?
Do they have any consulting experience?
Are they coachable/ trainable?
Are they personable?
Are they professional?
Would I like to work with this person?

Answering ‘yes’ to all of these is a rare find.
Most important to us: we have to have BAs and Product Managers who possess strong analytical skills, are very detail oriented, have solid technical skills, and are people we would like to work with.


Friday, March 19, 2010

Scope Management: The Pitfalls of Steering vs. Managing Scope for Software Requirements

When defining software requirements we often have to walk the tightrope of balancing the desires of multiple business stakeholders against the budget and resource limitations of IT and support. The person managing the requirements often becomes the mediator, taking in input from all sides and using that information to guide the team to a consensus on scope.
However, just as an objective reporter shouldn’t ask only leading questions, you shouldn’t steer the group into bad scope decisions based on flimsy preconceived notions.


Here are some classic pitfalls of scope steering:


  • Playing into Your Own Preconceptions – You may come into a project already having an idea of how you think the product should work or what’s important and what’s not. Perhaps you have a lot of knowledge of a pre-existing system or have worked on projects like this before. The information you bring into a project can really help, as long as you don’t allow it to close your mind to what stakeholders and Subject Matter Experts (SMEs) are defining as the current needs.


  • The Squeaky Wheel Stakeholder – Personalities always come into play in project management, and sometimes the loudest person gets the most attention. As the person facilitating requirements gathering, it’s up to you to make sure that requirements from other stakeholders and SMEs are not lost in the process.


  • Defining the Solution Rather than the Need – A classic mistake of any software development project is allowing stakeholders and SMEs to define what they think the system should be before defining the problem they are trying to solve. Any time someone is defining what the system should do, be sure that you also understand why.


  • IT Says No – Okay, so maybe they don’t outright say “No!”, but there are times when IT resources will start rejecting ideas early on because they are considered too big or too difficult. It is important to get sizing estimates from IT throughout a project; however, don’t abandon a feature right off the bat before understanding the requirements. Work with IT to help them understand the needs of the business and provide sizing estimates.

Effective ways to manage scope throughout requirements gathering:




  • Define and Reiterate Objectives – Working with stakeholders to define business objectives is a critical step early in the project. As the requirements gathering process continues, each new requirement should clearly support a business objective. You should always have a clear vision in your mind of how each requirement relates to the objectives. If not, ask.



  • Define the Business Value for Each Feature – This can be the most challenging aspect of any software requirements project, but it should be done to ensure that prioritization of features is based on value rather than gut decisions. How many customers will this bring to the site? How much will revenue increase as a result of this feature? How much will operational expenses be reduced? Background information such as company metrics and industry standards will help to assess the value.



  • Inform Stakeholders of Impacts – As scope discussions are taking place let stakeholders know what the impacts are. Come prepared with IT sizing information so they understand the relative cost of each requirement. Identify dependencies between requirements so that the impacts of changes are understood, as well as any tradeoffs of features.

Managing scope will be a challenge on every project, but you can facilitate agreement among stakeholders by keeping objectives in mind and avoiding the pitfalls.


Wednesday, March 17, 2010

BRD vs. Functional Software Requirements

Business requirements vs Functional requirements

We often get the question, “what is the difference between business requirements and functional requirements?” In prior posts I have discussed that these distinctions are actually artificial: they are artifacts of an organization structure that requires people in “the business” to produce a “business requirements document (BRD)” while some other group produces a “system requirements specification (SRS)”. The difference ultimately is just the level of detail. But given that we do have to live with BRDs and SRSs, how do we decide what goes into each document?

As you may know, at Seilevel we use a model called the Requirements Object Model (ROM), which describes a hierarchy of business needs and features. At the top of the hierarchy we expect a statement of Problems, Objectives and Strategies that tie directly into corporate strategies. At the bottom of the business level, we expect specific Problems, Objectives and Strategies that drive a particular program.




To determine what goes into the BRD and what goes into an SRS, the best place to start is by thinking in terms of who uses the BRD vs. the SRS and what decisions they need to make from it. If you think of a typical process model, executives must be presented with information following each phase to determine if the project should continue. Based on the information gathered in each phase, the team presents a summary to the executive, who decides whether to proceed. This model can easily be incorporated into an agile or iterative approach, which I will cover in another blog post.





Select – Executives are presented with a number of possible projects with conceptual business cases. Based on the corporate strategy and the value of the business cases, they select which projects get tentative funding based on a very high level view of the product concept.


Envision – The team gathers more detail around the problems the business is experiencing, the features to solve them and the final business case. The development team uses this information to construct a high level estimate of the total cost. Based on the cost and the return specified in the business case, executives decide to go ahead to the next phase.


Plan – The team creates a detailed set of specifications and design, along with a release plan for what will be released when. Based on the timing of achieving the value, the executives determine whether the project should proceed.


Build – The team begins building. Exit criteria are based on the level of quality and the value of the features actually implemented. Executives review the business case in the context of the features actually implemented and decide to proceed based on this information.


Deploy – The project can only be declared a success once metrics have been collected to determine if the business value was actually achieved and the business case met.




In this model, the purpose of the BRD is to provide executive stakeholders with enough information to determine if the project has a sound business case. This means that finance needs to understand the features well enough to assign a dollar value on the return and IT needs to understand the features to create a design concept that allows initial sizing of the project.

The purpose of the SRS is for IT to perform an accurate sizing and create a delivery schedule. Based on the delivery schedule, the business case can be reevaluated to determine if the project should go forward.

The BRD should focus on linking features to the business case, to enable executives to determine whether the business value warrants more study and to set overarching priorities of features. The SRS should focus on linking features to the design, so that executives can determine if the value can be achieved in a timely fashion and at an acceptable cost.


Monday, March 15, 2010

3 Basic Tips to Data Migration Requirements for your Software Project

It's the late stage of a project, so it's time to start worrying about your data migration requirements if you haven't already! If you have done it, good for you - you can just stop reading. But so many projects push this to the end and then panic when it doesn't go well. I have seen project after project have late deployments because the data was not properly migrated and tested.

Here are a few tips to consider:

1. Use your data models to identify what data should be migrated. For example, you can use the boxes in Business Data Diagrams (BDDs) and the data stores in Data Flow Diagrams (DFDs).

2. You can actually do a two-way check here - if the data dumps have fields that are not in your data models, you may be missing a requirement; and if your models have fields that never show up in the dumps, your migration mapping may be incomplete (see the sketch after these tips).

3. Plan for a lot of time to test this. Surely someone will test the migration scripts themselves, and someone is likely testing your code, but you also want to plan for testing the integration of the migrated data with the code. So often properties are not set up correctly, and the data doesn't show up in your software even though it's in the database behind the scenes.
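
The two-way check in tip 2 is just a set comparison between the fields in your data models and the fields in the data dumps. A minimal sketch, with hypothetical field names:

```python
# Fields named in the data models (e.g., boxes in the BDD) vs. fields
# found in a legacy data dump - all names here are hypothetical.
model_fields = {"customer_name", "address", "order_date", "order_total"}
dump_fields  = {"customer_name", "address", "order_date", "legacy_flag"}

print("In the dump but not modeled (possible missing requirement):",
      sorted(dump_fields - model_fields))
print("Modeled but absent from the dump (possible missing migration source):",
      sorted(model_fields - dump_fields))
```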


User Adoption Metrics that Matter

If a system is deployed and no one uses it, does it exist? How can you realize the full business value of a system if no one is actually using it correctly? User adoption is, at least indirectly, a critical project success metric. I say "indirectly" because while project success (e.g., increase sales by 10%) can be obtained without strong user adoption, user adoption of the system is almost always a necessary condition for project success.

But how can you tell if your user group is actually using the system? More importantly, how can you tell whether they are using the system as intended in order to meet the project's business objectives? User adoption metrics should not simply measure who is logging into the new system and how many hours a day they are using it. Such statistics are ultimately meaningless if the system's use does not fulfill some business need that justified the project's existence. After all, the user group may be incentivized in some way to "use" the system, when in fact they are logging in and blindly clicking through the system without really fulfilling a business need. Thus, you must look not only at the system, but at factors outside the system to get a true feel for user adoption:

1. Find out what the legacy process is. Most IT projects involve designing systems which replace or augment some legacy process. Hence, it is important to show that not only is the new system being used, but that the old process is phased out. In general, if the legacy process that the system solution was designed to replace is still in use, then the project is a failure. What business owner would want to pay to maintain a legacy process, and also incur extra costs to support a solution that was supposed to replace or enhance the legacy process?

2. Look at KPIs. Most business teams report some Key Performance Indicators back to their management. There should be KPIs for both the legacy process and the new system process, as well as general KPIs which measure performance of the overall business process (e.g., time to process an order, number of order errors, etc.). KPIs related to the legacy process should show evidence of that process being phased out. For example, if a KPI measures the number of orders processed using the legacy system, this number should be trending downward as the number of orders processed using the new system trends upward. If the measure is time to process an order, then the number of orders used to calculate this metric in the old system should decrease, while the number used to calculate it in the new system should increase. The key is that there should be some quantifiable, measurable difference between use of the legacy versus the new process. Measuring the use of the new system alone is not sufficient for determining user adoption. (A quick way to check this crossover is sketched after these points.)

3. Measure project success. A fundamental assumption of every project is (or should be) that the project will result in the satisfaction of some business objective. If the project does not meet the business objective, then it doesn't really matter whether anyone is using the new system. As an example, if the implementation of a project was supposed to result in 10% cost reduction to process an order, and the cost actually increases, then something is wrong somewhere. Either the system is not being used properly or at all, the old system is not being phased out, or some requirement critical to project success was not implemented or captured. Measuring how many people are logging into the system or the number of licenses purchased will not determine whether the system is being used to its full potential.
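
The crossover described in point 2 is easy to monitor once the KPIs are captured as simple series. A minimal sketch, with made-up monthly order counts:

```python
# Hypothetical monthly order counts by system, starting at launch.
legacy_orders = [900, 700, 450, 200, 60]
new_orders    = [100, 300, 550, 800, 940]

for month, (old, new) in enumerate(zip(legacy_orders, new_orders), start=1):
    share = new / (old + new)
    print(f"Month {month}: {share:.0%} of orders on the new system")

# Adoption looks healthy only if the legacy series trends toward zero.
if legacy_orders[-1] < 0.1 * legacy_orders[0]:
    print("Legacy process is being phased out.")
else:
    print("Legacy process still in heavy use - investigate.")
```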

Finally, when measuring user adoption, it is important to be patient. Some legacy systems and business processes (as well as the people executing or managing them) have been in place for years, sometimes decades. Recognize that at product launch, user adoption may be low, and the project will not be successful right away. In the early stages of product launch, the project may actually look like a total failure. Don't give up! Showing the business value of a project takes time. With the right amount of training, communication, and change management, you should begin to see the project trend towards success.






Wednesday, March 10, 2010

Getting Ramped Up

Luckily I have been working fairly consistently with the same client lately. This means that I haven’t had to constantly jot down notes during meetings on things to look up afterward, find corporate maps so I don’t get lost in symmetrical buildings with no windows, or remember not to call the client by the previous engagement’s name.


I was asked recently how I plan on dealing with being dropped into a new environment with a different client. Well, if there were no deadlines and nothing to get done, I would just read documentation all day. Even that would only go so far, though. I have no expectation that existing documentation is comprehensive or understandable. It will most likely be the case that you will be creating the documentation that others use when trying to grasp the environment.


So in this world of yesterday deadlines we have to learn things on the go. This means that when I land, I have to get my hands on a systems context diagram so that I can understand what talks to what. From there I can ask who the SMEs are for the different systems and piece together what the systems are for and what they actually do in implementation. This should be the biggest chunk of ramp time, figuring out what systems do so you can understand why you are doing something to change them.


The other obstacles to starting with a new client are regulations and business practices. You will never be able to know all of them. Because it takes multiple subject matter experts to own these, your best course of action is to familiarize yourself enough so that you can flag potential areas for the SMEs to fill in. As much as possible, try to incorporate these rules into your diagrams.


The key is that you never want to be reading up on things for the sake of reading up on them. You always want to be actively producing a deliverable. So reading the interface documentation for the sales system should be helping you gather the first set of gap requirements.


Monday, March 08, 2010

Use Full Time Requirements Engineers to Manage Change Requests

Change Requests (CRs) from Business Users after an application has been deployed are an integral part of the Development Process. No matter how much care and thought went into the creation of the requirements the application was built on, it is impossible to get everything right the first time. This is particularly true of large and complex enterprise applications.

In most organizations, CRs and Defects are tracked together after the initial deployment of the application. There are benefits to this approach, primary among them being the ability to track these requests together and group them into manageable buckets that Development can deliver at periodic intervals. It is also a practical solution to the reality that, in a lot of cases, the lines that divide Defects from CRs are blurry. So, instead of managing entirely separate processes for Defects and CRs, most organizations have taken the practical step of merging both into one process.

While this is very advantageous for managing and tracking both Defects and CRs, improperly documented CRs end up creating a lot of churn, and the productivity gains of merging both processes into one are quickly lost. The main reason for this is that CRs are documented and tracked in exactly the same manner as Defects, using the same tool. Documenting Defects is relatively straightforward: business users are able to effectively identify the improper behavior and communicate what the correct behavior of the application should be. The tools used to track Defects also do a good job of enabling these simple communications between the users and developers, in addition to tracking status and providing the other functionality needed to manage this process.

However, documenting CRs in the same manner as Defects leads to very suboptimal results. For starters, CRs are really requirements that were either missed or improperly communicated during the initial requirements gathering stage. Unlike Defects, there is no simple baseline of expected behavior versus actually delivered software to call out. The complexity of CRs also varies significantly, from extremely minor changes (typically at the UI level) to full-blown extensions to functionality. Having the testers or end users document these changes results in requirements that are partial, unclear or flat out wrong. This is not a criticism of users but rather an expected outcome: this is not their core competence. They are experts in Finance, Marketing, Sales, Accounting and the myriad other skills found in modern Corporations. They are not experts in creating Software Requirements.

The net result is that CRs often go through multiple iterations of Development as the Developers and Business Users attempt to determine what exactly is needed. This leads to increased delivery times, frustration on both sides and a significant loss in productivity across the entire team. There is also a very real danger of scope creep, as CRs are used to expand the scope of the project incrementally over time, leading to significant cost overruns.

The simple solution to this problem is to devote full time Requirements Engineers to documenting and managing CRs. The number of Requirements Engineers needed to support this effort will vary depending on the complexity of the project, but in my experience, one full time Engineer is more than sufficient to support even a large and complex development effort. The cost argument is made that devoting a full time Requirements Engineer AFTER development is complete is not justified. But when one considers the lost productivity, frustration and real expenses incurred with a never ending stream of CRs, the cost of the Requirements Engineer turns out to be a bargain. Paybacks of 5 to 10 times the cost of the Engineer can easily be reaped on most projects.


Friday, March 05, 2010

The Other 8 Groups in the Software Requirements Audience

Do you know who the audience for your software requirements is? Why, it’s the development organization, of course! And while it is true that development is the main audience for software requirements, there are many other audience members as well.

1. Testing Team. I am sure that most of you also thought of the testing team as another audience for software requirements. The software requirements give the test team a baseline from which to write their test plan, test cases and test scripts. The test team looks not only at whether the software works, but at whether it works the way the user wants it to, according to the software requirements. You may also find that your test team is an excellent group to review your software requirements for clarity. They tend to look at software requirements from the perspective of “how would I test that requirement?” If they cannot think of a way to test a software requirement, it may not be clear and/or concise.

2. End Users. What about the end users, those who are communicating the needs that the software requirements are intended to capture? They are also an audience for your software requirements. We need their reviews to make sure we have documented their needs clearly and accurately.

3. Technical Writing. Technical writing is another group that relies on software requirements for their work. We all hope that the Help content of the product reflects what the software does, and the best place for the technical writing team to start getting that information is the software requirements. Of course they will also confirm their content against the software itself, but they can certainly get started with the software requirements.

4. Translation Teams. Translation teams also rely on software requirements to help them understand what the product is supposed to do, so they can translate not only the software but also the technical documentation appropriately for their language and culture.

5. Training. Organizations that develop training courses for the software also rely on the software requirements. As with many groups, the software requirements help the training department plan and structure their curriculum, so they can develop courses quickly and have them ready for training the end users when the software is implemented.

6. Support Organizations. Support organizations, both internal help desks and product support groups that take the front line calls from customers, also use the software requirements to help troubleshoot reported issues. By looking at the software requirements, Support can help determine if a reported issue is a flaw in the software, or if the software is working as intended.

7. IT. Internal IT groups are also interested in software requirements. Beyond the technical requirements for hardware, software, etc. that will be needed to support the product, IT groups also want to understand any interfaces to other existing software that must be built. They need to understand the support requirements of the software and what sort of administrative tasks will be required of them.

8. Consulting. If you are working in a commercial software environment, your consulting organization would also be an audience for your software requirements. They need to understand the new software that is soon to be released, so they can plan implementation or migration services for your customers.

There are lots of audiences, so make sure you are including them all when you design the format and layout, and that you include them in all the appropriate review meetings.


Wednesday, March 03, 2010

An Actual Case of Software Requirements Re-use

We have been using Caliber for a few years, and we recently decided to research whether there was a requirements management tool that more cleanly supported working offline. I actually wrote a bit about our vendor selection process and our search for a requirements tool a while back, so if you read it, you’ll know we did this project working from software requirements written at the level necessary to do a vendor selection. Well, here we are again a few years later, with a pretty important feature request whose priority we didn’t recognize when we did the first analysis. The issue is that we often work on cellular cards, which doesn’t work well with Caliber – the connection is too slow. So we really need to be able to just work offline and sync. We have a few anecdotes about other vendor tools in the market and how they handle offline support. We even have thoughts of building some offline tool ourselves to sync with Caliber or another backend. In the end, we are trying to follow our vendor selection process. We have done a bit of research on the other tools we know might work – including reading feedback and doing a mini demo ourselves. And at the same time, we are reviewing our existing software requirements from the first time we selected a tool.




Our intent is to review any tools we consider against those original requirements. Ideally we really need to reprioritize those, as our needs may have changed, but for the most part we can select a new tool with MUCH less work than the first time. My estimate is it’ll take a tenth of the time this round through.




The only downsides to this: our old requirements aren’t in a tool (since we didn’t have a tool when we did the first analysis), and I don’t know how easily we can get them into one. But the bigger issue is that there is no organization to them at all – they are about 150 features, not organized by any other useful models (and that’s because we didn’t use models as consistently then!). So, while we do have our software requirements to reuse, we probably should do a pass on improving them too. But even at that, it’s still much less time committed.


Monday, March 01, 2010

Spread the Word: Stop Password Masking

One of my pet peeves is password masking. I find nothing more frustrating than having a log-in rejected and not being able to verify that the password is correct because I can’t see it. It was quite nice to find out I’m not alone. Jakob Nielsen’s article Stop Password Masking makes a very good case for stopping the practice. It also addresses the exceptions.

Thank you, Mr. Nielsen!
