
Wednesday, March 31, 2010

Software Requirements Models that Work Well Together – Decision Trees and Process Flow Diagrams

As Requirements Analysts, we use Models to paint a picture of the Application we are defining for the developers, and eventually to derive the final requirements they will build to. There are many Models that we use to define different aspects of the system: the People who influence the development of the application and eventually use it, the Data that is consumed, transformed and created by the application, and the different Hardware and Software Systems that will be used to develop the application.

Some of these Model pairs, like the Org Chart and the Data Flow Diagram, while useful in their own right, are not necessarily complementary. The Org Chart is useful for identifying the Actors and Influencers of the Application. The Data Flow Diagram depicts the flow and transformation of data as it moves from one process to another in the Application. While both of these Models are useful and possibly required for your analysis, they are not complementary: they describe different portions of the Application, and using them together does not necessarily enhance your understanding of one or the other.

Other Model pairs work very well together in enhancing one’s understanding of the Application. One Model pair that I often use is Process Flows and Decision Trees. Decision Trees work very well in conjunction with Process Flows for the following reasons:

1. They help simplify Process Flow diagrams by removing complex decision logic from the main flow.

2. They help keep the reader’s focus on the overall flow without getting distracted by the details of one portion of the flow.

3. They make the Process Flows easier to create and more “legible”. Flows with many decisions and branches are time consuming to create and harder to understand.

A common scenario for pairing Decision Trees and Process Flows is when the flows have validation steps. Validation steps are very common in transactional process flows like order creation, quote creation, product configuration and so on. The validation step itself is likely to be complex and will typically require the application to check that all the required data is provided and properly formed. For example, an order entry validation step will check for proper Customer Information, Address, Payment Information and so on. It is certainly not incorrect to put the validation steps into the Process Flow, but doing so can make the flow very busy and messy at this step in the process. I have found that readers do one of two things when confronted with this in a Process Flow, and both are bad: they either focus on the “complicated part” and skim through the rest of the flow, or they do the exact opposite. The net result is that the overall review suffers.

These days, I prefer to keep the branching in my Process Flows to a minimum and use Decision Trees in conjunction with them. As a general rule of thumb, if the Process Flow I am working on is reasonably complex, I try to restrict myself to just one Decision Box in a sequence. In the Order Validation example above, I would have just one Decision Box at the validation step titled “Order Valid” and move forward from there. The individual validation checks within the “Order Valid” decision I would depict in a Decision Tree, as in the sketch below.
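To make this concrete, here is a minimal sketch (Python, with invented field names and checks; the post itself describes diagrams, not code) of the kind of decision logic that would live in the “Order Valid” Decision Tree rather than clutter the Process Flow:

```python
# A minimal sketch of the decision logic behind a single "Order Valid"
# box in a Process Flow. All field names and checks are hypothetical.

def validate_order(order):
    """Walk the validation decision tree; report the first failing branch."""
    if not order.get("customer"):
        return False, "missing customer information"
    if not order.get("address"):
        return False, "missing or malformed address"
    payment = order.get("payment") or {}
    if not payment:
        return False, "missing payment information"
    if payment.get("method") == "card" and not payment.get("card_number"):
        return False, "card payment requires a card number"
    return True, "order valid"

# The Process Flow sees only the single yes/no outcome:
ok, reason = validate_order({"customer": "Acme", "address": None})
print(ok, reason)  # False missing or malformed address
```

Each branch corresponds to a leaf of the Decision Tree; the Process Flow itself branches only once, on the yes/no outcome.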

The benefits I have found from doing this are as follows.

1. The Process Flows are easier to create.

2. The Decision Trees enable me to depict complex logic easily.

3. Splitting out these two aspects enables me to get both right.

4. Lastly, deriving requirements is easier.

Try it and let me know what you think.


Monday, March 29, 2010

Software Project Issue Metrics

I love numbers, so I'm always interested in what metrics we can pull from projects. Sometimes I pull them just because I'm curious, sometimes they help support opinions and gut feel, and most often they are useful for making decisions on a project.

On a recent project, I pulled metrics from our issue tracking system to look at the types of issues that were logged. I will first mention that we actually start tracking issues much earlier than most projects - we do it in the requirements phase.

For context, this snapshot was taken about 1 week before the initial deployment of the project.

The issue types included the following (a short sketch of how the open/closed tallies can be computed follows the list):
  • Requirements questions - for any open questions about the requirements
  • Configuration - these are most relevant for out-of-the-box solutions that are heavily configured instead of developed
  • Test case - when QA test cases need to be updated based on misunderstanding of requirements, design, or changes to either
  • Functional - the functionality developed doesn't work as expected
  • Data - migrated or created data has errors, fields missing, or the wrong set was extracted from an existing system
  • Training - when defects are logged because a user or tester did not understand how the system should work
  • Change requests - every time a new enhancement is requested it is logged
  • Development ignored requirements - this is an interesting category that came out of the fact that some developers did not read the requirements when they developed the functionality.
  • Test data - we had a lot of errors in the test data that was created, which caused functionality to look like it didn't behave correctly
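As a rough illustration of how such a chart can be produced, the sketch below (Python) tallies issue records into per-category totals and open/closed counts; these correspond to the blue, yellow, and green bars referred to in the observations that follow. The records here are invented sample data; a real tracker would export something similar as a CSV.

```python
from collections import Counter

# Invented sample records: (category, status).
issues = [
    ("Requirements question", "closed"),
    ("Requirements question", "open"),
    ("Functional", "closed"),
    ("Change request", "open"),
    ("Test case", "closed"),
    ("Data", "open"),
]

total = Counter(cat for cat, _ in issues)                             # "blue"
open_issues = Counter(cat for cat, st in issues if st == "open")      # "yellow"
closed_issues = Counter(cat for cat, st in issues if st == "closed")  # "green"

for cat, n in total.most_common():
    print(f"{cat:22} total={n} open={open_issues[cat]} closed={closed_issues[cat]}")
```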

A few interesting observations about the chart:
  1. You will never close all your issues by deployment time, so it makes sense that there are still open issues (yellow).
  2. The good news is that there are relatively few requirements questions, test case issues, and test data issues open at deployment (yellow).
  3. There is also good news in the closed issues (green), in that very few change requests were fixed and closed at deployment, relative to functional, data, and configuration issues.
  4. While a number of user experience issues are open at deployment (yellow), a large number were fixed as well (green). It is good to see this - it's an indication (hopefully) that only the most critical ones were closed.
  5. Of the overall issues (blue), the high level of requirements questions is because we started tracking them early in the project.
  6. The overall number of test case issues is actually concerning (blue). That's a sign they were poorly written from the beginning or that the system changed significantly after they were developed - and notice, however, that the number of change requests closed is low here (green).
  7. Also in the overall issues, the number of change requests is extremely high relative to functional issues (blue).
  8. In this project, the number of user experience improvements is high, but that is not surprising, since we didn't put a lot of the UI design in place until late in the project (blue).
  9. Finally, the number of data defects open at deployment is probably too high (yellow), though it's not a huge percentage of the total defects (blue). The issue on that project was that the data-code merge wasn't done until very late in the project, so the issues spiked just before deployment.


Friday, March 26, 2010

Software Requirements Traceability – How Low Do We Go?

Tracing software requirements throughout the project is considered a good practice in software engineering. Numerous studies have shown the benefits of traceability and how it can help avoid defects in software projects.

The generally accepted practice of tracing software requirements includes elements such as architecture and other design components, source code modules, tests, help files and documentation. But how low should traceability go?

On a recent project, the testing organization asked for traceability of the software requirements. The functional requirements were traced to the business requirements, and the business rules and UI data elements were traced to the functional requirements. As the project progressed, additional mappings to design, documentation and test cases would be added. All of the tracing is done in MS Excel, which is also problematic, but that is a different issue for a different post.

While I have traced software requirements before, I have never traced down to the UI data element level. When I asked the testing organization why they desired this level of tracing, their answer was the standard “to make sure that we have test cases that cover everything”. But is there really value in tracing down to this level?

I did some research, and could not find any direct references to tracing down to this level. My research did show that some military and defense contracts may call for tracing to this level. But for projects outside of the military or defense world, it is rare.

Tracing to this level is expensive, no doubt about that, especially if the tracing is done properly. For example, a UI data element could be valid for a number of software requirements. Is it enough to trace it to the first software requirement where it is valid, or is it more proper to trace it to every software requirement where it is valid? A basic UI data element such as customer name could trace to quite a few software requirements in a large system.

What if the software requirements change? Maintaining valid traceability can be challenging, especially if the tracing is done manually (which, in this case, it is). Every time a software requirement is changed, or a UI data element is added, deleted, or renamed, the traceability matrix must be updated. The larger the system, the more complex and cumbersome this becomes.
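As a rough sketch of the bookkeeping involved (Python, with invented requirement IDs and element names), the matrix can be thought of as a map from each UI data element to the set of requirements it traces to; even the two most basic consistency checks have to be redone by hand after every change when the matrix lives in Excel:

```python
# Hypothetical traceability map: UI data element -> software requirement IDs.
ui_traces = {
    "customer_name": {"SR-101", "SR-204", "SR-317"},  # one element, many requirements
    "order_total":   {"SR-204"},
    "ship_date":     set(),                           # untraced: a coverage gap
}
active_requirements = {"SR-101", "SR-204"}            # SR-317 was removed by a change

# Check 1: UI elements with no covering requirement.
untraced = [element for element, reqs in ui_traces.items() if not reqs]

# Check 2: traces pointing at requirements that no longer exist.
stale = {element: reqs - active_requirements
         for element, reqs in ui_traces.items()
         if reqs - active_requirements}

print("Untraced UI elements:", untraced)  # ['ship_date']
print("Stale traces:", stale)             # {'customer_name': {'SR-317'}}
```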

So while I applaud the test organization’s desire to ensure that every UI data element within the system is tested and valid per a software requirement, is it worth the cost?


Wednesday, March 24, 2010

How to Shoot Yourself in the Foot

7 ways to do software requirements poorly to set your project up for failure and what to do instead.

Why spend precious project cycle time on software requirements rather than just getting down to business on design and implementation? Back in the 1970s, Barry Boehm published extensive data from large projects showing that an error created early in the project, for example during requirements specification, costs 50 to 1,000 times as much to correct late in the project as it does to correct (or avoid) close to the point where it was originally created. His landmark study has been confirmed over and over again on countless projects in the decades since.


Why are errors so much more costly to correct downstream? There are two main reasons: project artifacts to be corrected increase over time and more people need to get involved in defect correction as the life cycle progresses. A poor requirement or an unneeded requirement will become the ‘tip of the iceberg’ for problems later in the project. One software requirement can easily turn into several design diagrams. One design diagram can turn into hundreds of lines of source code, scores of test cases, many pages of end-user documentation, help screens, instructions for technical support personnel, marketing collateral, etc. It is obvious that correcting a mistake at requirements time will be much less costly than correcting all the manifestations of the requirements problem downstream. Even more expensive is developing all of the downstream artifacts for an unneeded requirement and then debugging problems with it before realizing it had little or no business value for the release.


A requirement defect found and fixed during the requirements specification involves those eliciting and designing the requirements and the customer contacts. A requirement defect found and fixed after the product has shipped will involve many more groups and people such as call center, customer support, field support, testing, development, design and requirements. Just think of all of the forms to fill out, emails, phone messages and meetings that could be involved with a problem at the end of the life cycle as compared to the beginning of it.


You might say, “I am in an Agile shop and we send a release every 4 weeks to our customers so we can’t be hurt as much as on a waterfall project.” Well, Agile project management approaches can certainly help limit the damage by short duration time-boxing, but even there the cost is greater at the end of an iteration than at the beginning. Blowing 2 weeks of staff time on an Agile sprint for a requirements bug that could have been corrected or avoided on day 2 is still expensive and to be avoided.

Over my next few blog posts, I will go into the following 7 ways to do software requirements poorly and set your project up for failure, and what you could do instead:


  1. Short-change the time on requirements or don’t do them at all
  2. Don’t listen to your customer’s needs
  3. Don’t use models
  4. Use weasel words
  5. Don’t do requirements traceability
  6. Don’t prioritize requirements
  7. Don’t baseline and change control requirements
Don’t shoot yourself in the foot on your next project!



Friday, March 19, 2010

Scope Management: The Pitfalls of Steering vs. Managing Scope for Software Requirements

When defining software requirements, we often have to walk the tightrope of balancing the desires of multiple business stakeholders with the budget and resource limitations of IT and support. The person managing the requirements often becomes the mediator, taking in input from all sides and using that information to guide the team to a consensus on scope.
However, just as an objective reporter shouldn’t ask only leading questions, you shouldn’t steer the group into bad scope decisions based on flimsy preconceived notions.


Here are some classic pitfalls of scope steering:


  • Playing into Your Own Preconceptions – You may come into a project already having an idea of how you think the product should work or what’s important and what’s not. Perhaps you have a lot of knowledge of a pre-existing system or have worked on projects like this before. The information you bring into a project can really help, as long as you don’t allow it to close your mind to what stakeholders and Subject Matter Experts (SMEs) are defining as the current needs.


  • The Squeaky Wheel Stakeholder – Personalities always come into play in project management, and sometimes the loudest person gets the most attention. As the person facilitating requirements gathering, it’s up to you to make sure that requirements from other stakeholders and SMEs are not lost in the process.


  • Defining the Solution Rather than the Need – A classic mistake of any software development project is allowing stakeholders and SMEs to define what they think the system should be before defining the problem they are trying to solve. Any time someone is defining what the system should do, be sure that you also understand why.


  • IT Says No – Okay, so maybe they don’t outright say “No!”, but there are times when IT resources will start rejecting ideas early on because they are considered too big or too difficult. It is important to get sizing estimates from IT throughout a project; however, don’t abandon a feature right off the bat before understanding the requirements. Work with IT to help them understand the needs of the business and provide sizing estimates.

Effective ways to manage scope throughout requirements gathering:

  • Define and Re-iterate Objectives – Working with stakeholders to define business objectives is a critical step early on in the project. As the requirements gathering process continues, each new requirement should clearly support a business objective. You should always have a clear vision in your mind of how each requirement relates to the objectives. If not, ask.

  • Define the Business Value for Each Feature – This can be the most challenging aspect of any software requirements project, but it should be done to ensure that prioritization of features is based on value rather than gut decisions. How many customers will this bring to the site? How much will revenue increase as a result of this feature? How much will operational expenses be reduced? Background information such as company metrics and industry standards will help to assess the value.

  • Inform Stakeholders of Impacts – As scope discussions are taking place let stakeholders know what the impacts are. Come prepared with IT sizing information so they understand the relative cost of each requirement. Identify dependencies between requirements so that the impacts of changes are understood, as well as any tradeoffs of features.

Managing scope will be a challenge on every project, but you can facilitate agreement among stakeholders by keeping objectives in mind and avoiding the pitfalls.


Wednesday, March 17, 2010

BRD vs. Functional Software Requirements

Business requirements vs Functional requirements

We often get the question, “What is the difference between business requirements and functional requirements?” In prior posts I have discussed that these distinctions are actually artificial, artifacts of an organizational structure that requires people in “the business” to produce a “business requirements document (BRD)” while some other group produces a “system requirements specification (SRS)”. The difference ultimately is just the level of detail. But given that we do have to live with BRDs and SRSs, how do we decide what goes into each document?

As you may know, at Seilevel we use a model called the Requirements Object Model (ROM), which describes a hierarchy of business needs and features. At the top of the hierarchy we expect a statement of Problems, Objectives and Strategies that tie directly into corporate strategies. At the bottom of the business level, we expect specific Problems, Objectives and Strategies that drive a particular program.

To determine what goes into the BRD and what goes into an SRS, the best place to start is by thinking in terms of who uses the BRD vs. the SRS and what decisions they need to make from each. If you think of a typical process model, executives must be presented with information following each phase to determine if the project should continue. Based on the information gathered in each phase, the team presents a summary to the executive, who decides whether to proceed. This model can easily be incorporated into an agile or iterative approach, which I will cover in another blog post.

Select – Executives are presented with a number of possible projects with conceptual business cases. Based on the corporate strategy and the value of the business cases, they select which projects get tentative funding based on a very high level view of the product concept.


Envision – The team gathers more detail around the problems the business is experiencing, the features to solve them and the final business case. The development team uses this information to construct a high level estimate of the total cost. Based on the cost and the return specified in the business case, executives decide to go ahead to the next phase.


Plan – The team creates a detailed set of specifications and design along with a release plan for what will be released when. Based on the timing of achieving the value, the executives determine whether the project should proceed.


Build – The team begins building. Exit criteria are based on the level of quality and the value of the features actually implemented. Executives review the business case in the context of the features actually implemented and decide to proceed based on this information.


Deploy – The project can only be declared a success once metrics have been collected to determine if the business value was actually achieved and the business case met.

In this model, the purpose of the BRD is to provide executive stakeholders with enough information to determine if the project has a sound business case. This means that finance needs to understand the features well enough to assign a dollar value on the return and IT needs to understand the features to create a design concept that allows initial sizing of the project.

The purpose of the SRS is for IT to perform an accurate sizing and create a delivery schedule. Based on the delivery schedule, the business case can be reevaluated to determine if the project should go forward.

The BRD should focus on linking features to the business case to enable executives to determine whether the business value warrants more study and to set overarching priorities of features. The SRS should focus on linking features to the design so that executives can determine if the value can be achieved in a timely fashion and at an acceptable cost.


Monday, March 15, 2010

3 Basic Tips to Data Migration Requirements for your Software Project


It's late in the project, so it's time to start worrying about your data migration requirements if you haven't already! If you have, good for you! You can just stop reading. But so many projects push this to the end and then panic when it doesn't go well. I have seen project after project have late deployments because the data was not properly migrated and tested.

Here are a few tips to consider:

1. Use your data models to identify what data should be migrated. For example, you can use the boxes in Business Data Diagrams (BDDs) and the data stores in Data Flow Diagrams (DFDs).

2. You can actually do a two-way check here - if you get data dumps and find fields that are not in your data models, then you may be missing a requirement (see the sketch after these tips).

3. Plan for a lot of time to test this. Surely someone will test the migration scripts themselves, and someone is likely testing your code, but you also want to plan for testing the integration of the migrated data with the code. Very often properties are not set up correctly, and the data doesn't show up in your software even though it's in the database behind the scenes.
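Here is a minimal sketch (Python) of the two-way check from tips 1 and 2: compare the fields found in a legacy data dump against the attributes your data models say should exist. All field names are invented for illustration.

```python
# Attributes your BDDs/DFDs say should exist vs. fields found in a data dump.
model_fields = {"customer_id", "customer_name", "order_id", "order_date"}
dump_fields = {"customer_id", "customer_name", "order_id", "legacy_region_code"}

missing_from_dump = model_fields - dump_fields    # modeled but never extracted
missing_from_model = dump_fields - model_fields   # extracted but never modeled:
                                                  # possibly a missing requirement

print("In the model but not the dump:", missing_from_dump)
print("In the dump but not the model:", missing_from_model)
```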


User Adoption Metrics that Matter

If a system is deployed and no one uses it, does it exist? How can you realize the full business value of a system if no one is actually using it correctly? User adoption is, at least indirectly, a critical project success metric. I say "indirectly" because while project success (e.g., increase sales by 10%) can be obtained without strong user adoption, user adoption of the system is almost always a necessary condition for project success.

But how can you tell if your user group is actually using the system? More importantly, how can you tell whether they are using the system as intended in order to meet the project's business objectives? User adoption metrics should not simply measure who is logging into the new system and how many hours a day they are using it. Such statistics are ultimately meaningless if the system's use does not fulfill the business need that justified the project's existence. After all, the user group may be incentivized in some way to "use" the system, when in fact they are logging in and blindly clicking through it without really fulfilling a business need. Thus, you must look not only at the system, but at factors outside the system to get a true feel for user adoption:

1. Find out what the legacy process is. Most IT projects involve designing systems which replace or augment some legacy process. Hence, it is important to show not only that the new system is being used, but that the old process is being phased out. In general, if the legacy process that the system solution was designed to replace is still in use, then the project is a failure. What business owner would want to pay to maintain a legacy process, and also incur extra costs to support a solution that was supposed to replace or enhance that process?

2. Look at KPIs. Most business teams report some Key Performance Indicators back to their management. There should be KPIs for both the legacy process and the new system process, as well as general KPIs which measure performance of the overall business process (e.g., time to process an order, number of order errors, etc.). KPIs related to the legacy process should show evidence of that process being phased out. For example, if a KPI measures the number of orders processed using the legacy system, this number should be trending downward as the number of orders processed using the new system trends upward. If the measure is time to process an order, then the number of orders used to calculate this metric in the old system should decrease, while the number used to calculate it in the new system should increase. The key is that there should be some quantifiable, measurable difference between use of the legacy versus the new process. Measuring the use of the new system alone is not sufficient for determining user adoption (a toy sketch of this check follows the list).

3. Measure project success. A fundamental assumption of every project is (or should be) that the project will result in the satisfaction of some business objective. If the project does not meet the business objective, then it doesn't really matter whether anyone is using the new system. As an example, if the implementation of a project was supposed to result in 10% cost reduction to process an order, and the cost actually increases, then something is wrong somewhere. Either the system is not being used properly or at all, the old system is not being phased out, or some requirement critical to project success was not implemented or captured. Measuring how many people are logging into the system or the number of licenses purchased will not determine whether the system is being used to its full potential.
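As a toy illustration of the KPI pattern in point 2, the sketch below (Python, with invented monthly order counts) checks that legacy-process volume trends down while new-system volume trends up:

```python
# Invented monthly order counts since launch.
legacy_orders = [500, 430, 310, 180, 60]
new_orders = [40, 120, 260, 390, 520]

def trending_down(series):
    return all(a > b for a, b in zip(series, series[1:]))

def trending_up(series):
    return all(a < b for a, b in zip(series, series[1:]))

# Measuring the new system alone is not enough: adoption means the
# legacy process is being phased out at the same time.
adopted = trending_down(legacy_orders) and trending_up(new_orders)
print("Legacy phased out and new system adopted:", adopted)  # True
```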

Finally, when measuring user adoption, it is important to be patient. Some legacy systems and business processes (as well as the people executing or managing them) have been in place for years, sometimes decades. Recognize that at product launch, user adoption may be low, and the project will not be successful right away. In the early stages of product launch, the project may actually look like a total failure. Don't give up! Showing the business value of a project takes time. With the right amount of training, communication, and change management, you should begin to see the project trend towards success.


Wednesday, March 10, 2010

Getting Ramped Up

Luckily I have been working fairly consistently with the same client lately. This means that I haven’t had to constantly jot down notes during meetings on things to look up afterward, find corporate maps so I don’t get lost in symmetrical buildings with no windows, or remember not to call the client by the previous engagement’s name.


I was asked recently how I plan on dealing with being dropped into a new environment with a different client. Well, if there were no deadlines and nothing to get done, I would just read documentation all day. Even that would only go so far though. I have no expectations that existing documentation is comprehensive or understandable. It will most likely be the case that you will be creating documentation that others will use when trying to grasp the environment.


So in this world of yesterday deadlines we have to learn things on the go. This means that when I land, I have to get my hands on a systems context diagram so that I can understand what talks to what. From there I can ask who the SMEs are for the different systems and piece together what the systems are for and what they actually do in implementation. This should be the biggest chunk of ramp time, figuring out what systems do so you can understand why you are doing something to change them.


The other obstacles to starting with a new client are regulations and business practices. You will never be able to know all of them. Because it takes multiple subject matter experts to own these, your best course of action is to familiarize yourself enough so that you can flag potential areas for the SMEs to fill in. As much as possible, try to incorporate these rules into your diagrams.


The key is that you never want to be reading up on things for the sake of reading up on them. You always want to be actively producing a deliverable. So reading the interface documentation for the sales system should be helping you gather the first set of gap requirements.


Friday, March 05, 2010

The Other 8 Groups in the Software Requirements Audience

Do you know who your audience is for your software requirements? Why, it’s the development organization, of course! And while that is true (development is the main audience for software requirements), there are many other audience members as well.

1. Testing Team. I am sure that most of you also thought of the testing team as another audience for software requirements. The software requirements give the test team a baseline from which to write their test plan, test cases and test scripts. The test team looks not only to see if the software works, but whether it works the way the user wants it to, according to the software requirements. You may also find that your test team is an excellent group to review your software requirements for clarity. They tend to look at software requirements from the perspective of “how would I test that requirement?” If they cannot think of a way to test a software requirement, the requirement may not be clear and/or concise.

2. End Users. What about the end users, those who are communicating the needs that the software requirements are intended to capture? They are also an audience for your software requirements. We need their reviews to make sure we have documented their needs clearly and accurately.

3. Technical Writing. Technical writing is another group that relies on software requirements for their work. We all hope that the Help content of the product reflects what the software does, and the best place for the technical writing team to get that information is the software requirements. Of course they will also confirm their content against the software itself, but they can certainly get started with the software requirements.

4. Translation Teams. Translation teams also rely on software requirements to help them understand what the product is supposed to do, so they can translate not only the software but also the technical documentation appropriately for their language and culture.

5. Training. Organizations that develop training courses for the software also rely on the software requirements. The software requirements help the training department plan and structure their curriculum, so they can develop and deliver courses quickly and have them ready for training the end users when the software is implemented.

6. Support Organizations. Support organizations, both internal help desks and product support groups that take the front line calls from customers, also use the software requirements to help troubleshoot reported issues. By looking at the software requirements, Support can help determine if a reported issue is a flaw in the software, or if it is working as intended.

7. IT. Internal IT groups are also interested in software requirements. Beyond the technical requirements for the hardware, software, etc. that will be needed to support the product, IT groups also want to understand any interfaces to other existing software that must be built. They need to understand the support requirements of the software and what sort of administrative tasks will be required of them.

8. Consulting. If you are working in a commercial software environment, your consulting organization would also be an audience for your software requirements. They need to understand the new software that is soon to be released, so they can plan implementation or migration services for your customers.

There are lots of audiences, so make sure you are including them all when you design the format/layout, and include them in all the appropriate review meetings.


Wednesday, March 03, 2010

An Actual Case of Software Requirements Re-use

We have been using Caliber for a few years, and we recently decided to research whether there was a requirements management tool that more cleanly supported working offline. I actually wrote a bit about our vendor selection process and our search for a requirements tool a while back, so if you read it, you’ll know we did this project working from software requirements written at the level necessary to do a vendor selection. Well, here we are again a few years later, with a pretty important feature request that we didn’t recognize the priority of when we did the first analysis. The issue is that we often work on cellular cards, which doesn’t work well with Caliber: the connection is too slow. So we really need to be able to just work offline and sync. We have a few anecdotes about other vendor tools in the market and how they handle offline support. We even have thoughts of building some offline tool ourselves to sync with Caliber or another backend. In the end, we are trying to follow our vendor selection process. We have done a bit of research on the other tools we know might work, including reading feedback and doing a mini demo ourselves. And at the same time, we are reviewing our existing software requirements from the first time we selected a tool.

Our intent is to review any tools we consider against those original requirements. Ideally we really need to reprioritize them, as our needs may have changed, but for the most part we can select a new tool with MUCH less work than the first time. My estimate is it’ll take a tenth of the time this round.

The only downsides to this: our old requirements aren’t in a tool (since we didn’t have a tool when we did the first analysis), and I don’t know how easily we can get them into one. But the bigger issue is that there is no organization to them at all. They are about 150 features, not organized by any other useful models (and that’s because we didn’t use models as consistently then!). So, while we do have our software requirements to reuse, we probably should do a pass on improving them too. But even at that, it’s still much less time committed.


Monday, February 15, 2010

Software Requirements Specification Template

We’ve just updated our suggested business requirements document template, though it can be used as a template for any type of requirements specification. You can find it here on our Resources page. One thing we strongly suggest is that you create and update requirements within a requirements management tool and then output them as needed to a document such as this. We use Borland Caliber and have found that it is relatively easy to export from Caliber into this format. This is a great way to deliver your requirements to the team, but if you can avoid it, do not use the Word document as your source: you will likely step on each other’s changes eventually, and it makes traceability very hard!


Friday, February 12, 2010

6 Basic Techniques for Successful Software Requirements Elicitation with Remote International Customers

In this global environment, there is rarely a project we work on that doesn’t have some set of customer users in a remote location, inevitably overseas. While we are typically eliciting requirements in English, we are always faced with ESL customers. So here are a few tips I shared with my team this week as we prepped for such a session with users in China.
  1. Communicate their value. Help them understand how important they are to the success of your project. After all, you couldn't possibly understand their business needs as well as they do – with differences in business practices, culture, and preferences.
  2. Talk slow. This should be obvious, but it always happens that we talk much too fast, so slow it down. Be very thankful they are willing to talk back to you in English! So whatever you need to do to remind yourself frequently through the discussion to talk slow, do it. Ideas I have include a post-it on the corner of your screen or a co-worker sitting beside you to remind you frequently.
  3. Eyes in the room. Have someone in the room, if at all possible, to facilitate the discussion. If you cannot hear or clearly understand a comment (this often happens with large rooms of people and a speaker phone), that person can sit beside the phone and repeat it. Have a communication tool such as Instant Messenger open on a computer that isn’t projecting, so that person can communicate with you about body language – to tell you if you are going too fast or if people in the room look lost.
  4. Visual is important. I like to prepare PowerPoint slides with small bits of information on each slide pertaining to what we are talking about. So perhaps a screen shot or mock-up with a few features listed on a slide. Diagrams and visual models are also extremely helpful.
  5. Send materials ahead. As important as visual materials are, they are ten times more useful if you send them ahead. I know I have a better time reading foreign languages than I do hearing them, and this is the case for most. So if you send the materials ahead, they will have a chance to read them and more likely follow along with you during the discussion.
  6. Open questions, not closed. Depending on the culture, some people are hesitant to ask questions or tell you something negative. So be sure to ask open ended questions. Instead of “are there any questions?” you would ask “what questions do you have?” – because we know they have some. Then call on people by name to ask what questions they have. Ask their leader what questions he/she has. If they say they are ok with what you are presenting for requirements, then you really should ask “What are you concerned about?” or “What have we not captured that you need to have?” or “What are you not ok with?” instead of “Is this ok?”.


Wednesday, February 10, 2010

4 Tips to Not Destroy Your Project if you Must Use Spreadsheets for Software Requirements

A year ago, I was working on a project where we created a list of features in Excel to help development understand at a high-level what would be in the new system we were building so that they could provide quick and dirty estimates. Out of these estimates, a solution design for the project was chosen and the team headed down the path of a full blown software development lifecycle on these features. Before too long, we recognized the nightmare that Excel was going to cause us, so we uploaded the features from Excel into Borland’s Caliber requirements management tool. The great thing was we could continue to export to Excel pretty quickly for delivery to our customer, since they didn’t have the tool and they were used to that format.

Now, also of note, the team decided to use something a bit like Scrum for this project, so we just treated the features list as a “backlog” of sorts. As sets of features were prioritized for an upcoming sprint, the business analysis team did its elicitation and documented requirements in the form of models from RML™. Throughout this process, we wrote user stories instead of formal use cases, used other visual models as appropriate, and instead of traditional “shall statements”, we wrote “confirmation statements” to describe the functional requirements and business rules. They were used to describe in detail what the system actually needed to do. Oh, and by the way, these were all put in Caliber and exported into Word docs as needed for review and development.

Let’s jump ahead 6 months. Our team was actually off this project for about 3 months, and when we came back to it, we found what I would label “a requirements mess”. I have analyzed this a million times to understand what happened during that transition period that went so wrong as to land the team in this situation. And while I have ideas, the point of this post really is that it was just a mess.

Unfortunately the team had stopped using Caliber completely, so all requirements changes and new additions were done in Excel and Word. As one would expect, they had lost chunks of work by accidentally overwriting newer versions with older versions. There used to be traceability between features and detailed system requirements, but that was completely lost.

What was most alarming and damaging was that the development team believed that the Excel list (which represented just a list of features with one-line descriptions) was the full set of requirements. They weren’t even looking at the user stories, models, and confirmation statements to develop the system! As one might expect, the system as developed didn’t really meet the requirements.

And so we headed down a path of narrowing the gap between what was developed and what was required, as well as completing analysis on areas not yet developed.

Over the next few months it became completely apparent how much people love to use Excel. I mean LOVE. Sure enough, one year later, the Excel file still lives on as the source of what the scope is on this project. And while they are now using the detailed requirements documents to develop, the Excel list is still the master of all information. And did I mention there are now about 60 columns in it? And 3 versions of the document that have different information in different columns? I had the exciting task of trying to merge the 3 back into 1, but while I was doing that, someone was editing one of those versions, so now we still have 2 versions. Alas, we are trying to get approval for a few licenses of Caliber so we can move this spreadsheet back into a management tool.

But the moral of the story is… well, Excel sucks for managing requirements, but if you absolutely must use it:
1. Do more than just use Excel – put your requirements details in other formats as well.
2. Be cautious about what information belongs in it and what does not.
3. Be careful about what you communicate that it actually represents to the project team.
4. And please use a tool behind that Excel file to house your data.


Wednesday, January 27, 2010

Managing the Software Requirements Process for “Green Screen” Replacement Applications – Part 1

I have been on teams that created requirements for replacing old “Green Screen” mainframe applications and have also observed the work of my colleagues on similar projects. The observations here are based on our experiences with these efforts and offer suggestions to anyone who is about to embark on a similar project.

This post is split into two parts. Part 1 identifies issues that make these projects challenging and different from other requirements projects. Part 2 offers specific suggestions on how to tackle these projects and describes practices that will result in a greater likelihood of success.

What makes Green Screen replacement projects so challenging?

It is not a lack of proper tools or models that makes these requirements difficult to capture and define. The complexities are rooted in the following:
a. Existing organizational processes
b. Resistance to change due to fear and uncertainty
c. Disruption to current operations
d. Potential job losses
e. Lack of clear understanding of the impact on indirect users of the existing system
f. Poor vendor communications to the rank and file users of the new system
g. Lack of proper management communications on the value of the proposed change
The combination of these factors makes the act of eliciting good requirements difficult and extremely challenging. Each of these areas is discussed in greater detail below.


Existing Organizational Processes

Green Screen applications are typically twenty or thirty years old, though it is not unusual to find applications that have been in use even longer. Over this time, extremely elaborate organizational processes will have been created, evolved, perfected and institutionalized in the company. Changing these longstanding processes to support a new application can be extremely difficult. While it can be argued that any kind of application change is likely to confront process change, four things make Green Screen processes unique.

a. Longstanding
The processes built around Green Screen applications will be as old as the applications themselves. Change is inherently difficult. Changing something that has been around for a quarter century or more is infinitely more difficult.


b. Stable
For the most part, these processes will have undergone little substantive change for many years. The most recent changes would have been tweaks to make something more efficient, plug a few holes, fix weaknesses in the existing workflow, or respond to some minor change in the business environment. These processes are not perfect, but they keep the show going day in and day out.


c. Fully Integrated with the Application
The existing processes and the application will have a seamless linkage. No matter how convoluted and byzantine the processes might be, they are totally synchronized with the application. Over time, the organization will have developed detailed processes that reinforce the strengths of the application and compensate for the weaknesses. In a perverse way, changing these processes is tinkering with perfection or something that looks awfully close to it when it comes to marrying an application with supporting processes.


d. Fully Understood

The combination of longevity, stability and application integration make the processes extremely well understood within the company. For the most part, all the different departments and individuals within these departments know what they need to do, when they need to do it and how to do it. Replacing all this with new processes that no one fully understands is an extremely challenging proposition.


Resistance to Change Due To Fear and Uncertainty

This is a common characteristic of many projects that seek to introduce a new application or a new way of doing things. In the case of Green Screen applications, this natural resistance is hardened for the following reasons.

a. Fear of Failure on a Massive Scale

Green Screen applications are usually “mission critical.” They typically are extremely important to the day to day running of the business. Any failure during the switchover will have an immediate and tangible financial impact on the business that will be reflected in quarterly earnings. Depending on the reach of the application, the impact could be companywide with potentially disastrous consequences. This reality adds an extra edge to the element of fear associated with the change.


b. Uncertainty Due to a Major Systemic Change

Almost all Green Screen replacements involve changes in multiple departments or functional areas of the company. It is not uncommon for the entire operation of a company to be impacted by a Green Screen change. Whenever multiple departments are involved, the element of uncertainty associated with the change gets ratcheted up. These efforts are almost always accompanied by an enormous amount of chatter, rumors and conflicting statements about who is or is not doing something. Every day will bring a new dynamic because some department or other is unhappy, not going to support the rollout fully, or flat out not going to make a change. This leads to an enormous amount of confusion, which brings with it a great amount of uncertainty.


c. Let the Other Guy Go First

In such an environment where fear and uncertainty are pervasive, the smart play is to do nothing and wait for the next guy, department or functional group to make the first move. So, you will meet with a lot of covert and passive resistance to change that needs to be managed.


d. Incomplete Plans

The sheer complexity and scope of Green Screen change efforts makes thorough planning of the changeover practically impossible. It is not possible to have fully fleshed out responses to all the legitimate concerns that users will have around the change. This gives the impression of an effort that is not carefully thought through. It is natural for users to focus on the unknowns and areas of weakness in the planning and extrapolate that to the entire effort. These perceptions feed into the underlying fear and add to the uncertainty barrier to be overcome.


Disruption to Current Operations

Most Green Screen systems are transactional systems on which the daily business of the company is conducted. When the switchover takes place, there will be a transition period during which some amount of productivity is lost as people get used to the new system. This is true even for successful implementations. It takes the users anywhere from a few weeks to a couple of quarters to truly understand the new features and appreciate the benefits. This perceived (and real) loss of productivity, even in a best case scenario, causes users to instinctively resist, or at the very least postpone the day of reckoning by throwing up roadblocks to the entire process, starting with requirements. This resistance to the requirements effort is manifested in the following ways.

a. Why is My Department Being Penalized?
This complaint will almost certainly come from departments that perform data entry. In a vast majority of situations, Green Screen data entry is faster than the mouse-driven data entry that usually replaces it. This will lead to a drop in productivity in the short run for the department performing data entry. But it is not a zero sum game: these increases in the time to perform some tasks are almost always offset by significant benefits in other areas of the application, like the ability to cross-sell or up-sell customers. But it usually takes some time for these benefits to be realized, while the increases in time and the resultant productivity losses are felt immediately. It is these short term losses that functional area managers will focus on, and their arguments will be framed as their departments being victimized so that other departments can do their jobs better.

b. My Numbers Will Look Bad
Organizations that have extensive productivity measurement systems in place that track time very closely will react very strongly to anything that causes a short term hit in productivity or the time it takes to perform certain tasks. This is a very real problem that the requirements engineer will face. There will be outright hostility and lack of cooperation as a result of this that needs to be managed and eventually overcome if any progress is to be made.


Potential Job Losses

The anticipation of the loss of jobs causes both direct and indirect opposition to the requirements efforts of the replacement application. There are two main areas where jobs typically become redundant when Green Screen systems are replaced.

a. Data Entry Operators
Green Screen systems typically have highly specialized data entry personnel. The lack of graphical interfaces results in reliance on arcane Function Key combinations to perform simple and complex data entry tasks. Most companies have teams of employees who have mastered these tasks and are a valuable part of the daily operations. When the new application is introduced, many, if not all, of these employees will no longer be needed by the company and are likely to be let go.
These employees are often treated as expendable by their companies but they are a treasure trove of information for a requirements engineer. They understand the underlying application better than anyone else in the organization. Simply put, they can tell you what the application actually does as opposed to what someone thinks it does or should be doing. This information is critical since Green Screen applications typically have very poor or no documentation of their functionality.
They know who they perform data entry for in addition to the obvious users. These are the hidden users of the system who need to be accounted for and are often overlooked with potentially dangerous consequences down the road.
Last and most importantly, they can tell you “why” some things are done. This is critically important to understand what exactly a system does and needs to be replicated by the replacement. Simply playing with an application and going through it from end to end is not sufficient to understand what the system is doing. This is especially true if you are not familiar with the intricacies of the application.
In a nutshell, these users are very critical to understanding what the existing system does, why it does certain things and who are the users of the system. But faced with the prospect of almost certainly losing their jobs if the new effort is successful, they have no incentive to cooperate. Not having the information in their heads can have very serious consequences for the requirements created to support the changeover.


b. Employees Involved in Manual Processing of Information

A generalization of Green Screen systems is that they are excellent at transaction processing but very poor at data analysis. Over the years, organizations have addressed this gap by developing complex data analysis solutions based on third-party applications. But even with these extensive investments, there are always gaps in the data needed to make routine business decisions. These gaps are invariably filled with data extracts to spreadsheets that are analyzed by specialists. The data aggregation into spreadsheets is done by employees who perform time consuming tasks to import data from multiple sources into a single spreadsheet that is then forwarded on to the specialist for interpretation and decision making.
This aggregation and analysis of data outside the main system is usually referred to as a “manual” process by most companies. These processes are extremely difficult to understand or infer from a walkthrough of the application. Even assuming that we can guess at a missing process, without in-depth knowledge of all the systems it is extremely difficult to know where the data for the analysis comes from, and how it is aggregated and cleaned up before being provided for analysis and decision making.
This information is in the heads of users who perform these tasks. Here again, the reality that they could all be out of a job if the replacement effort is successful leads to resistance and lack of cooperation with the requirements effort.


Not Understanding the Impact on All Users

The universe of users impacted by a system change is not limited to users who directly interact with the system. This is true of all systems and a common mistake made in many requirements efforts. However, the sheer scale and scope of most Green Screen replacements compounds this problem.

For example, an application that replaces a Green Screen Order Entry system impacts far more than just the Sales Team who are the direct users of the application. Accounting, Finance, Customer Service, Post Order Processing, Marketing and Product Management will all be impacted either directly or indirectly by the change. Each of these departments interacts with the system in one of four ways:
a. Directly – Customer Service is likely to use the system to enter Orders to process returns and repairs.
b. Data Providers – Product Management will provide data that is used by the Order Entry system.
c. Data Consumers – Post Order Processing will use data created by the Order Entry system for their operations.
d. Data Providers AND Consumers – Accounting, Marketing and Finance will typically provide data and inputs that are used by the system and also consume data generated by the system.

Not knowing who all the impacted users are and how they interface with the system is a critical mistake that can potentially torpedo a requirements effort.


Poor Vendor Communications

While some organizations do develop new applications from the ground up to replace Green Screen applications, most companies use off-the-shelf software that is customized for the specific needs of their business. I have yet to see one instance where the features and benefits of the new software have been clearly communicated to the user community.

The vendors are heavily engaged with senior management and a select group of users during the sales process. Once the sale is made, the technical staff come in and start implementing the solution. I have not seen a coherent strategy from any vendor to actively evangelize their product and its benefits to users AFTER the sale is made and BEFORE the software goes live. This leaves a huge vacuum of knowledge and expectations in the user community. It is in this vacuum that fear and uncertainty take root and blossom into full-blown chaos.

Specifically, these are the key areas of deficiency that lead to problems.

a. The Built in Processes Supported by the Product and Recommended by the Vendor
All vendors of enterprise-class software claim to have "best practices and processes" built into their software. But I have yet to see any documentation, presentation, training or coherent communication that explains clearly and simply to users what these processes are and how they should be adopted. If such materials and resources exist, they sure fooled me; they were so stealthy that I missed them altogether!

As requirements professionals, we are constantly questioned by users as to what these processes and practices are. Having to confess total ignorance hurts us in two ways. First, it hurts our credibility with users, who are bemused that we do not seem to know anything about the system we are the de facto representatives of. Second, users are reluctant to spend a lot of time defining processes and procedures that may all be overridden by the processes adopted with the new software implementation.


b. The Features and Benefits of the New System

The lack of information about the processes to be adopted with the new system holds true for its features as well. Most of the time, users are given a dog and pony show where someone does a quick fly-through of the application. These demonstrations are at a very high level and of little use to a user community that has much more specific questions in mind. I have seen two things happen as a result, and both are bad.

First, users assume that the software has certain capabilities and features that it may or may not have. These assumptions are seldom articulated until we are well past requirements gathering and moving towards implementation. This makes for a whole bunch of nasty surprises.
Second, users lose the forest for the trees. I have seen quite a few demonstrations very quickly get into the weeds on some arcane point raised by a strident user or user group. Before you know it, the demonstration has degenerated into an agonizing exercise in specificity that only the demonstrator and a small handful of users care about. Worst of all, huge chunks of precious demonstration time are lost, and what remains becomes rushed and general in the extreme, wasting everyone's time.

c. The Inevitability Fallacy
Vendors assume that since the sale is made, it is inevitable that the system will be implemented and adopted by the user community. This belief guides their interactions with the user community. Unfortunately, most vendors do not realize that just because someone signed a contract to purchase the software, there is no guarantee that the rest of the organization will simply go along with the decision. They usually come to this frightening realization many months into the implementation effort, when they are not getting the traction they expected. The requirements effort is a good predictor of potential issues down the road. A general rule of thumb: if the requirements team is competent but unable to generate good requirements, it usually means that the users are not buying whatever the vendors are trying to sell them.


Lack of Proper Management Communications to Users on the Value of the Proposed Change

There is a lack of clear, concise communication from Management and Project Sponsors on what the business value of the proposed change is, how and over what time frame that value will be realized, how it will impact the business in the short and long run, and who will be impacted by the change.

What I have seen in practice are conflicting statements of value (both real and imaginary), extremely general statements of benefits ("we need to come into the 21st century"), vague notions about the reason for the change ("it is about time we did something different around here"), extremely aggressive and unrealistic timelines (a 3-month go-live where 12 months is more reasonable) and little to no information on the things users care about most: how will this affect me, my job, my department and my coworkers?

Presented with this vacuum, users fill in the blanks with their own mix of fear, rumors, disbelief, derision and outright or latent hostility. For a requirements engineer, this translates into many hours and days of wasted effort: getting people to meetings they do not want to attend and, once they are there, answering fundamental questions about the project (why, how much, when, so what) BEFORE you can even begin to elicit one useful requirement from the group. I have often found myself in the uncomfortable position of not knowing the answers to their questions or, worse yet, knowing the answers and wondering whether I should be the one giving them to the users.

Great, Now What?
As you read this, you might feel like you are in an echo chamber. Anyone who has practiced our craft long enough is all too familiar with everything I have described here. It has gotten to the point where we are resigned to dealing with these kinds of situations and simply "accept" them as the reality we are confronted with.

But it need not and should not be this way. Change is inherent in every business endeavor. We just need to do a better job of managing it. There are ways in which we can manage the process of change with common sense, honest and open communication and dignity. This does not mean that the end result will be satisfactory for all or that there will be no pain involved. The people we work with are far smarter than we give them credit for. They may not like change but are more likely to help everyone get there if we all know what we are striving for.

In my next post on this topic, I will discuss specific steps that can be taken to overcome the hurdles identified in this post and answer the question posed above: "Great, Now What?"


Monday, January 25, 2010

A Very Nice Software Requirements Blog – Please Pay Them a Visit

As someone who is part of the family of requirements professionals, it is always exciting to find a new source of writings on the subject that is near and dear to our heart. I recently discovered the Blueprint Software blog that can be found here.

It is clear from reading the posts on their site that they care passionately about the subject matter. The articles are well written, informative and useful both to practitioners of our craft and to consumers of our "dog food" :-). If you are into macabre humor, check out the post on the resetting "smart bomb". So long as you are not at the business end of one of these contraptions, it is really funny and drives home the point (literally) of good, clean and, above all, complete software requirements!

Last but not least, if you have never seen the classic Dilbert cartoon on software requirements, they have a copy there. That alone is worth the price of admission, in this case your time and interest. Here is wishing them luck and hoping they keep churning out great posts. At the end of the day, we are all better off with more great content. Enjoy.


Tuesday, January 12, 2010

Using a Group to Prioritize Software Requirements for Complex Multifunctional Projects

Prioritizing and identifying requirements that get developed in a release cycle can be a tricky proposition. It is one of the most important things we do as Product Managers. It is also one of the most challenging.

Most organizations use some means of categorizing requirements. Two common schemes I have seen are Must Have / Important / Nice to Have, and High / Medium / Low. There are far better ways of prioritizing requirements, but the purpose of this post is to deal with another level of complexity altogether: how do we decide which functional unit or Department's features get built when dealing with a large-scale development project that spans multiple groups and functional areas within a company?

I worked on a very large and complex implementation of factory systems software where we had to confront and overcome this very problem. The software being developed would essentially control all the manufacturing activities of the factory. As such, the application had functionality for all the different aspects of manufacturing - product movement between processing machines, product storage in the factory, maintenance of equipment, sampling of output, manufacturing processes, manufacturing instructions for each step in the process and so on.

There were well established Departments that handled each of these different aspects of the manufacturing process. For example, there were Departments for Maintenance, Quality Control, Materials, Inventory, Production and so on. Each Department had their own specific set of requirements - functionality they needed out of the application to enable them to perform their specific and specialized tasks.

When deciding which features to develop for a given release, we encountered the following problems:

1. A prioritized list of features (across all Departments) was not very useful since we ended up with too many critical features that made prioritization for a release extremely difficult and in many cases meaningless.
2. Critical dependencies that existed between Departments were missed. For example, developing some sampling functionality was quite useless unless certain manufacturing instructions and material movement capabilities were already in place.
3. The sum of individual parts did not quite add up to a desired total of functionality. This was largely because critical dependencies between Departments were missed.
4. Individual Departments graded the success or failure of development efforts based on the "quantity" of features they got in a release and not on how the manufacturing process as a whole fared.
5. The political wrangling, infighting and shenanigans got totally out of control as each Department tried to get more features into each release, regardless of whether it made sense to the overall operations or not.
6. The overall development process was perceived by all the users as political, arbitrary and lacking in transparency.

We solved the problem by using a Group methodology to decide the features that got included in a release. The way the teams were constituted and the manner in which they functioned are detailed below.

The Team

1. Every Department for whom functionality was being developed was included in the Core Team.
2. Each Department was asked to provide three members to participate in the Core Team: one Manager and two technical members who represented the Department. These members were authorized to vote and speak on behalf of their Departments. Collectively, these individuals were referred to as the Core Team.
3. Attendance at the Core Team meetings was not restricted to the Core Team members. Any number of attendees from each Department were permitted to attend. The only restrictions were as follows:
A. Only the Core Team members were allowed to speak on behalf of their respective Departments.
B. Any amount of consultation was permitted among each Department's representatives and attendees, so long as it did not disrupt the proceedings.
4. Senior Managers, Vice Presidents and other senior executives were explicitly excluded from the Core Team. The output of the Core Team was later presented to them for final review and approval. They were welcome to attend any of the Core Team meetings but could not be one of the three approved participants of the meetings who were authorized to represent their Department.

The Ground Rules

1. Each team had one vote in determining priority of requirements.
2. Each team's vote had the same weighting as every other team.
3. Each team was required to vote on the prioritization of all requirements, including those that belonged to their own team (a sketch of how such an equal-weight tally might work follows this list).
4. The decisions of the Core Team were binding on all Departments. The sole exception was a Core Team proposal being rejected by Senior Management and a change in priorities ordered. (Incidentally, this never happened.)
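
For the mechanically minded, here is a minimal sketch of how such an equal-weight tally might be computed. The Department names, requirement IDs and scores are all hypothetical; the point is simply that each Department casts one vote on every requirement, including its own, and no vote weighs more than any other.

```python
# Hypothetical sketch of the equal-weight voting tally described above.
from collections import defaultdict

# One vote per Department on every requirement (scores are made up;
# assume a 1-5 priority score agreed on by each Department's members).
votes = {
    "Maintenance": {"REQ-101": 5, "REQ-102": 2, "REQ-103": 4},
    "Quality":     {"REQ-101": 4, "REQ-102": 3, "REQ-103": 5},
    "Production":  {"REQ-101": 5, "REQ-102": 1, "REQ-103": 3},
}

totals = defaultdict(int)
for dept_votes in votes.values():       # each Department: one vote
    for req, score in dept_votes.items():
        totals[req] += score            # equal weighting: a plain sum

# Highest total wins; ties go back to the Core Team for discussion.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```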

How It Worked

With the Core Team in place, we prioritized requirements for a release in the following manner.

We first prioritized Departments for a release and only then prioritized the specific requirements that would be implemented. This was a key change from the way we had functioned in the past. In prior releases, all Departments were in theory given equal weighting and the requirements prioritized across the board. In practice, the number of features each Department got was what determined its priority in a release. We decided to take the politics and guesswork out of this by making Departmental priority the first explicit decision the Core Team made for each release.

The prioritizing of Departments was not arbitrary. The first pass at prioritizing Departments for a release was done by the System Architect, who took inputs from Senior Management to ensure the list was aligned with the Business Goals and Objectives. For example, if Management had defined "Reduction of Scrapped Materials" as a key initiative going forward, Sampling would move to the top of the featured Departments for the next release.

The Core Team was then provided the list and the rationale behind it, and given a chance to vote on it. Each Department was given 15 minutes to make a case for why it should stay where it was slotted, move up or move down. (Yes, believe it or not, once the process was locked in, Departments actually were willing to move down and not shy about saying so!) After the presentations were done, the Team voted on each position on the list and decided which Department would be slotted in where. There were a total of 8 Departments, and if you made it into the top 5 you were guaranteed a meaningful number of features in a release. The bottom 3 invariably got features too, but theirs were lighter and typically done to ensure that no dependencies with other Departments were missed.

Once we had decided on the Departmental priorities, each Department provided its own prioritized list of features. We usually restricted these to about 5 to 7 per Department for the first pass and iterated until we felt we had no more development cycles left in the release. In presenting its list, each Department had to justify why each requested feature was important and how it aligned with management priorities. Votes were taken immediately, and superfluous features were eliminated quite efficiently. Executed this way, the process typically produced a list of candidate features for a release within 2 or 3 meetings. In all these meetings, a few representatives from Development were always on hand to advise the team on the effort and degree of difficulty of implementing particular features, so that the final list of required features was realistic.
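
To summarize the two passes just described, here is a minimal sketch of the selection logic. It is a deliberate simplification: a greedy fill against a capacity estimate, whereas the real process involved presentations, votes and iteration. All Department names, features, effort figures and the capacity number are made up for illustration.

```python
# Hypothetical sketch of the two-pass release selection described above.

release_capacity = 100  # development cycles available (per Development)

# Pass 1 output: Departments in the order the Core Team voted them.
dept_order = ["Production", "Sampling", "Materials", "Maintenance"]

# Pass 2 input: each Department's own prioritized (feature, effort) list.
features = {
    "Production":  [("Track lot moves", 20), ("Hold/release", 15)],
    "Sampling":    [("Sample plans", 25), ("Scrap reporting", 10)],
    "Materials":   [("Stock locations", 20)],
    "Maintenance": [("PM schedules", 30)],
}

release, used = [], 0
for dept in dept_order:                     # honor Departmental priority
    for feature, effort in features[dept]:  # then feature priority
        if used + effort <= release_capacity:
            release.append((dept, feature))
            used += effort

print(release)
print(f"Capacity used: {used}/{release_capacity} cycles")
```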

This prioritized list was provided to Development for a final sanity check on the time, manpower and other resources needed to execute within a release cycle. Based on their feedback, some additional fine-tuning was done by adding or removing features, and the final list was generated. This final list was voted on by the Core Team and submitted to management for final approval.

Once we instituted this method, we saw the following benefits.

1. Features were implemented that made sense to the whole and not just individual parts of the overall application. These features were in alignment with management objectives and priorities.
2. A significant reduction in the number of missed dependencies across Departments.
3. A dramatic improvement in the satisfaction with the overall process by which features were prioritized and implemented for a release.
4. Development deliveries, quality and schedules improved. The feature set was frozen and did not change unless some unexpected business or technical development dictated a change in features and schedules. Such changes were for the most part minimal and, when they did occur, were always accompanied by an adjustment to the schedule.
5. Better quality product since every Department was better able to plan the time and availability of their key resources to provide the necessary support to the requirements and development process.
6. Higher quality requirements since there was much sharper focus on what was needed and going to be delivered.

The above methodology can be replicated easily and successfully in complex development environments where key stakeholders span different functional areas in the company.


Friday, December 18, 2009

The Seilevel 2009 Software Requirements Holiday Medley

It's that time of year we all look so forward to, when we get to wish our colleagues around the requirements world a bit of SeiCheer with our holiday medley of songs. And worry not, if the 2009 collection isn't enough for you, you can go back in history and read our 2008 songs, 2007 songs, and our original 2006 songs!

Without further ado, sing along with us....

We wish you a Merry Release
We wish you a Merry Release
We wish you a Merry Release
We wish you a Merry Release and we hope it is near.
Requirements we bring to you and the team
Requirements for Release and we hope it is near.

Oh, bring us a new prototype
Oh, bring us a new prototype
Oh, bring us a new prototype and we'll love it, no fear

We won't go until we see it
We won't go until we see it
We won't go until we see it, so send the link here

We wish you a Merry Release
We wish you a Merry Release
We wish you a Merry Release and we hope it is near!

Oh Project SME
Oh Project SME! O Project SME!
Thy needs are so unchanging
O Project SME! O Project SME!
Thy needs are so unchanging
Not only at start are they clear,
But also when 'tis launch is near.
O Project SME! O Project SME!
Thy needs are so unchanging!

O Project SME! O Project SME!
Much time thou can'st give me
O Project SME! O Project SME!
Much time thou can'st give me
How often has the Project SME
Afforded us the scope for free!
O Project SME! O Project SME!
Much time thou can'st give me.

Rockin' Around the Requirements
Rocking around the Requirements
at the discovery workshop
Feature lists hung where you can see
Ev'ry executive tries to stop

You will get a validated feeling
When you hear voices saying
"Let's be jolly; Deck the walls with flows, oh golly!"

Rocking around the Requirements
Have a happy launch day
Everyone's drawing merrily
In a new best practice way

Rocking around the Requirements
Let the user stories sing
Later we'll write some data flows
and we'll do some modeling

You will get a validated feeling
When you hear voices saying
"Let's be jolly; Deck the walls with flows, oh golly"

Rocking around the Requirements
Have a happy launch day
Everyone's drawing merrily
In a new best practiced way

Rudolph the Brown-Nosing BA
Rudolph the brown-nosing BA
had a huge need to know.
And those who ever met him,
hoped they'd just let him go

All of the other BAs
used to scowl and call him names.
They never let poor Rudolph
join in any BA games.

Then one foggy release eve
VP came to say:
"Rudolph you are so very bright,
won't you guide my launch tonight?"

Then all the BAs loved him
as they shouted out with glee,
Rudolph the brown-nosing BA,
make our sponsors so happy!


Tuesday, November 17, 2009

First call for papers for RE'10. Can you hear the kangaroos calling us?

Pack your bags, folks: we are heading to Australia in 2010! I'm excited to post the first call for papers (CFP) for IEEE's RE'10, to be held in Sydney, Australia next September. I'll copy the key components of the CFP here, but visit the link for more details. I'm particularly excited because we'll be working closely with the conference organizers to ensure a very strong industry track, with many enhancements and improvements over past years' conferences, to help attract a great set of practitioners from the Product Management and Business Analyst communities as well!


CALL FOR PAPERS AND PROPOSALS

18th IEEE International

Requirements Engineering Conference

REQUIREMENTS ENGINEERING IN A MULTI-FACETED WORLD

September 27 - October 1, 2010

University of Technology, Sydney

http://www.re10.org

Important Dates

Technical and industrial paper abstracts due: February 12th, 2010
Full papers due: February 19th, 2010
Tutorial & workshop proposals due: March 15th, 2010

Some Context

Software systems in today’s multi-faceted world are as diverse as the people who use them. While some are built according to rigorous government regulations, others must be delivered quickly to meet time-to-market deadlines or must be responsive to changing business needs. From a requirements engineering perspective, there is certainly no ‘one-size-fits-all’ solution.

RE’10 will explore techniques and methods for eliciting, analyzing, specifying, and managing requirements across diverse development teams where stakeholders often come from entirely different cultural, linguistic, geographical, and educational backgrounds; and across a broad spectrum of software projects that encompass both formal and informal development techniques and represent both small and very large scale projects.

Do you need help?

If you have never previously published at a major international conference, then you may be eligible for our mentoring program.

Please check the RE’10 website for further details.


Monday, November 02, 2009

Resource with Tips for Virtual Teams

I am a huge fan of Thiagi's work on games to use in training. He has made many games publicly available for use in your own training environments. In the past I've done some writing (here) about how we've adapted his games to Requirements Engineering training courses. But today I was browsing his site and found something I thought might be useful to others: he has posted a list of tips for virtual teams. There are just over 100 simple tips; if you skim the list, I'm sure you'll find a handful that can be applied immediately in your organization.

Does anyone have comments on which tips you will start using, or additional tips to add?

