
Thursday, January 26, 2006

Measuring product manager performance on internal system products

Last fall, Austin PMM had a discussion about how to measure product manager (PdM) performance. Much of the conversation centered on products developed for external customers and focused on sales or profit metrics. My experience, however, has typically been in product management for internal systems, where there are no clear-cut external metrics like sales or profit.

I feel that the best way to evaluate internal product development is to consider the major problems that often negatively impact IT projects. Many of these can be tied back to PdM performance, so measuring against them can directly motivate improvement around those issues.
Some of these issues include:

1. Scope creep due to poor project definition and/or control

2. Missed requirements due to poor analysis and/or documentation

3. Requirements change due to poor analysis and/or documentation

4. Low user adoption due to dissatisfaction with solution

5. Lack of ROI due to non-realized benefits


For each of these issues, some related metrics against which to measure PdM performance include:

1. Scope creep – Track the requirements that are added beyond the original project scope.

2. Missed requirements – Track the requirements that are added and still within the original project scope.

3. Requirements change – Track the requirements that are changed and still within the original project scope.

4. Low user adoption – Measure user adoption rates and/or satisfaction levels (the latter if users have no choice about adoption).

5. Lack of ROI – Require the PdM to analyze the costs and benefits of the system before the project starts. At some point after deployment, measure the actual benefit realized and track the variance between the planned and actual benefits.

For the first three metrics above, the team should agree on a baseline state of the requirements. After that point, an unbiased party could determine whether any added/changed requirements were actually missed, added beyond the original scope defined for the project, or changed after baseline. All of these metrics should be tracked through development and early deployment. These requirement issues should be tracked in an issue tracking system along with other defects.

For each of the metrics noted above, you should set a target allowance as a percent of total requirements. An example metric: “The total missed requirements shall be no more than 1% of the total requirements of the project.” You could also take the priority of requirements into consideration by agreeing on a weighting system up front in the project. If your organization has not tracked these metrics before, avoid setting arbitrary targets. If you set a goal of “less than 1% missed requirements” but your last project came in at 25%, you are most likely going to fail.
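To make the arithmetic concrete, here is a minimal Python sketch of how the first three metrics and the ROI variance might be computed. The priority weights, requirement IDs, and benefit figures are all hypothetical; the point is simply that each requirements metric reduces to a weighted percentage against the baselined total, plus a planned-versus-actual variance for ROI.

    # Illustrative sketch only; weights, IDs, and figures are hypothetical.
    WEIGHTS = {"high": 3, "medium": 2, "low": 1}

    def weighted_total(requirements):
        # Sum of priority weights over (id, priority) pairs.
        return sum(WEIGHTS[priority] for _, priority in requirements)

    def metric_pct(flagged, baseline):
        # Weighted percentage of flagged requirements against the baseline.
        return 100.0 * weighted_total(flagged) / weighted_total(baseline)

    baseline = [("REQ-1", "high"), ("REQ-2", "medium"), ("REQ-3", "low")]
    missed = [("REQ-4", "high")]  # in scope, but absent from the baseline
    creep = [("REQ-5", "low")]    # added beyond the original scope

    print(f"Missed requirements: {metric_pct(missed, baseline):.1f}%")
    print(f"Scope creep: {metric_pct(creep, baseline):.1f}%")

    # ROI variance: planned vs. realized benefit, measured after deployment.
    planned, realized = 500_000, 410_000
    print(f"Benefit shortfall: {100 * (1 - realized / planned):.0f}%")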

While the first three metrics above (scope creep, missed requirements, requirements change) motivate getting the right requirements the first time, the fourth metric (user adoption) specifically encourages the PdM to talk with end users when gathering requirements. The final metric (ROI) motivates the PdM to truly understand the benefit of the project before signing up for it. Another benefit of tracking ROI is that it encourages a positive relationship between the business and IT, since IT is where most of the project cost comes from.

These metrics all play nicely off each other. If you do the activities to ensure high user adoption, you minimize your risk of missed or changing requirements. However, you do run a higher risk of scope creep because the users will be excited that you are listening to their desires. This risk can, in turn, be mitigated by tracking against ROI as an incentive to keep scope in control.

None of these are perfect metrics; all require significant thought and customization before being applied to an organization. In particular, it is important to realize that PdMs need some level of control (or at least significant influence) over the factors that contribute to these results. For example, they need to be the final decision makers on the benefits, they need to be able to trust the cost estimate, they must be allowed to control scope, and they must be given access to end users. In the end, though, these metrics form a great starting point for a discussion around measuring the value that PdMs can bring to an internally focused development group, and they therefore allow organizations to determine a course for potentially increasing that value.


Tuesday, January 24, 2006

The impact of Agile development techniques on Product Management

Two weeks ago I attended the monthly meeting of the Austin Product Marketing and Management group. The session was titled Agile Software Development and its Impact on Product Management and was a panel discussion with three panelists and 30+ audience members. It was the second largest gathering I have seen in the year I’ve been attending Austin PMM meetings and it had a nice diversity of participants. Overall, I thought the session was a positive and informative experience, but my guess is that anyone who attended looking for clear-cut answers left disappointed. There were interesting explanations of Agile techniques and excellent anecdotes about their application at various companies, but I personally took away very little that I would consider definitively prescriptive in nature.

I am not typically a fan of panel discussions, but one of the things I really liked about this group was that they represented a nice cross-section of Product Management backgrounds. The panelists ran the gamut from a highly technical consultant with a background in software engineering and QA, to an extremely non-technical gentleman who made it absolutely clear that he was not an engineer in any way, shape, or form. Given the wide spectrum of backgrounds I see every day in people who work as Product Managers, I thought it was really effective to bring together participants with such varied experience for this session.

There were four key nuggets that I took away from the meeting and I will address each in turn. These weren't necessarily lessons but rather pieces of the conversation that I thought were most interesting.

Communication is key. Of course, all good Product Managers know that communication is the sine qua non of their success so this is one tenet of Agile that I think anyone would be hard pressed to find fault with. The panelists each stressed that constant communication with the development team is a key component of their approach to the role. The conversation ultimately turned to communication with customers and the general sense was that Product Managers need to be adept at instantly switching context between external communication (customers) and internal communication (developers).

Keep your eyes on the road. The statement was made by one panelist that a key aspect of applying Agile methods to Product Management for him is that it has forced him to rethink the way he looks at software requirements. Rather than trying to keep an eye on the future and seeking to understand how features will need to evolve over time, he seeks to keep things as simple as possible by focusing on what needs to be built today. He went on to apply a car-driving analogy that I believe he adapted from Kent Beck. His point was that if you are too busy fixating on any point too far in the distance you will always be crashing into the obstacles that spring up right in front of you. From a Product Management perspective, this means you need to put down the long-term product roadmap and worry instead about what your customers need right now. I found both this analogy and its underlying premise to be somewhat faulty, but this criticism is actually an excellent segue into the next topic.

It's all about balance. One of the aspects of the session that I found most interesting was the fact that it was rarely definitive. Multiple times there were emphatic statements made by the panelists that were later revisited and softened significantly after questioning by the audience. This is not a complaint, but rather is actually high praise for the panelists and their willingness to address the ambiguity of the real world and not dogmatically adhere to any sort of Agile party line. The roadmap example above is a good one since the panelist's original statement was eventually revised to recognize that there is value in understanding how current requirements will fit into future needs - "you have to strike a balance." That phrase, used repeatedly throughout the meeting, was probably the most valuable thing that anyone could take away from the session. Effective Product Management, regardless of development methodology employed, will always be about seeking the balance among a million different and often competing variables.

Agile is not for everyone. I know this final takeaway is likely to be controversial, but it is one that was repeated in one form or another several times during the session.
  • "Agile methodologies do not seem to work nearly as well for large project teams."
  • "They are less effective on projects with offshore resources."
  • "They don't easily handle the huge, distributed efforts so common in many corporate IT organizations."
I am very interested in seeing evidence that is contrary to the above statements, but hearing them made so clearly during this session was unfortunate since they played right into the thoughts I already had going into the meeting. I wholeheartedly agree that Agile methodologies have a ton of value and can really make a positive impact when they are applied appropriately, but they don't change the fact that Product Management success will always be dependent on skilled practitioners who are able to strike the most appropriate balance for a given organization and its current situation.

Wednesday, January 18, 2006

Requirements Model 2 - The Decision Tree


This is post number two in my continuing series of requirements model discussions. You can view my first post here. This time around, I'd like to discuss decision tree diagrams. These simple flowcharts can make your life much easier when writing a software requirements document.

A decision tree diagram really only has three parts - decision points (diamonds), end points (rectangles), and connections between these points (arrows).

These diagrams only really come in handy when you have a large group of nested 'if' statements in your requirements. If you ever find yourself writing something like this:

If option A is true, set option B to true.
If option A is false, see if option C is false.
If option C is false, check option D...

you might want to try showing it in a decision tree. Start by drawing a diamond for the first 'if' statement. Next, think about all of the possible outcomes of that statement. In the example above, option A can either be true or false.

If option A is true, you don't have to make any more decisions. You should draw an end point (rectangle) to indicate that you've reached the end of the decision pathway. If option A is false, however, you have to go through another layer of decisions before you can reach a resolution. This is represented by another decision point (diamond) connected to the first.

The number of connections coming out of a decision point is dependent on the type of decision being made. For 'true/false' decisions or 'yes/no' decisions, you will always have two connections for each decision point. For more complex decisions, you may end up with more.

The key to creating a useful decision tree diagram is to identify all of the decision points in the system and the number of connections coming out of each point. If you know these two key pieces of information, you can be very confident that you'll catch most of the requirements related to this decision tree.

How does knowing these things give you confidence that nothing was missed? If you put your diagram together with all of the decision points and connections for each, you can quickly see where the end points belong. Any connection that doesn't end in a diamond has to end in a rectangle. Now all you have to do is go back through, fill in the rectangles, and figure out the requirements that each diamond and rectangle represent.
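For readers who prefer code to pictures, here is a minimal Python sketch of the same completeness check; the node names and the dictionary representation are purely hypothetical. Once every decision point and its outgoing connections are written down, any connection that lands on an unknown node is flagged as a missing end point.

    # Illustrative sketch only; node names and structure are hypothetical.
    # Decision points (diamonds) map each possible outcome to the next node.
    DECISIONS = {
        "A true?": {"yes": "Set B to true", "no": "C false?"},
        "C false?": {"yes": "Check D", "no": "Done"},
    }
    # End points (rectangles) terminate a decision pathway.
    END_POINTS = {"Set B to true", "Check D", "Done"}

    def walk(node, path=()):
        # Enumerate every pathway; flag connections with no end point.
        path = path + (node,)
        if node in DECISIONS:
            for outcome, next_node in DECISIONS[node].items():
                walk(next_node, path + (outcome,))
        elif node in END_POINTS:
            print(" -> ".join(path))
        else:
            print("MISSING END POINT after: " + " -> ".join(path))

    walk("A true?")

Printing the full pathways has a side benefit that ties into the usability point below: the longer the printed path, the more decisions a user must wade through to reach that end point.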

The value provided by these diagrams stems from their structure. Without this structure, it is very difficult to look at a bunch of nested if statements and know whether or not you have captured every possible decision pathway through the system. The diagram lets you see this at a glance.

You can also identify usability problems in the system by looking at a decision tree diagram. If you have to go through too many decision points before you arrive at a commonly-used end point, you can bet your users will be frustrated with the system.

We've all used automated phone systems where you have to choose 14 options before talking to a person and you risk getting your call dropped with every keypress. These systems are perfect candidates for improvement via decision tree analysis. We used this technique on a call center application to help improve the customer experience. First, we documented the 'as-is' system with a decision tree. This quickly surfaced many dead ends and complex pathways that we were able to eliminate. By rearranging the decision and end points, the new system cut call times and measurably improved customer satisfaction. We were also able to ensure that no end point left callers without some way out.

Decision trees are quick to draw up and, when used on the right type of problem, can greatly simplify your software requirements gathering process.


Tuesday, January 17, 2006

When is a software requirements spec done?

The short answer is that a software requirements spec is done when you can convince the stakeholders to sign on the dotted line. The long answer is that a requirements specification is truly never done. It should be a living document that is updated throughout the project. As decisions are made or requirements are clarified, the Product Manager must own the process of managing changes and keeping the document as complete as possible.

The question of "doneness" often arises when a project is using a traditional waterfall model because everyone wants to know how to determine when the software requirements are complete enough that the project can move on to the next stage. On a recent project, our team members worked with a development team that would not let us move from the requirements stage into design until every last outstanding question was answered because their performance reviews were tied to their ability to hit the schedule that came out of the planning phase. As a result, by the time the requirements were officially complete, the development team was probably over 50% done with their development. Even then, they still refused to sign off on the requirements document and continued to block the project's formal exit from the requirements phase. We finally got signoff when there were literally zero identified requirements issues (I have never seen this before).

We prefer a model in which the phases of the development cycle overlap, so that it effectively becomes an iterative model. Once a critical mass of requirements is complete enough that the technical team can begin making design decisions (typically 20-30% of the way into the overall requirements effort), the design phase begins. Our experience shows that there will always be some design issues that impact the requirements, so it is better to work them out simultaneously than to try to perfect the requirements document in advance. However, by the time development starts, the requirements specification should be complete enough for signoff. There will almost always be outstanding issues, but none of them should be big enough to prevent a formal signoff.

The exceptional issues that are big enough to block signoff are typically ones that have far reaching impact across the application. Examples might be a security system that modifies permissions across the application or a shopping cart that is passed from a catalog module, to a checkout module, to an order status module. In both of these examples it would be important to understand exactly how those pieces should work to ensure that the system doesn't have to be completely redesigned due to a fundamental change.

Asking whether the requirements are done is actually focusing on the wrong question. The right question is whether the requirements are done enough: do you have enough information to begin the next stage, and enough understanding of the outstanding issues that still need to be addressed?

Tuesday, January 10, 2006

Making sure your spec is reviewed

Do you ever get the uneasy feeling that no one is reading your software requirements specification? The reality is that the development team is probably busy on the last release and that the business stakeholders are busy doing their jobs in the business. What is a lonely Product Manager to do? How do you get people to read the specification? There are a couple of easy ways to know whether or not your spec is being looked at and to ensure that it really gets read.

1) Determine the number of changes requested in each section of the software requirements.

You might assume that you are brilliant if there are no changes (and maybe you are!). I prefer to assume that if I don't see changes, then no one has read that section. With a large number of requirements, it can be difficult to get a good sense of which requirements may not have been reviewed. If you map out how many changes have actually been made to specific requirements, you get a very concrete measure of which sections are actively being read. A requirements management tool is very handy for this, but if you are using a Microsoft Word document, you can still use a diff utility (or the built-in compare feature) to see if there are requirements that have essentially never changed. We work on very large projects with multiple development teams divided into functional areas, and using this method you can tell which development teams are actively and regularly reviewing the document.
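If you want to automate this check, below is a rough sketch using Python's standard difflib module. The numbered-section convention and the sample "shall" statements are hypothetical; the takeaway is that a section whose change count sits stubbornly at zero has probably never been read.

    # Illustrative sketch; the section-header convention is hypothetical.
    import difflib
    import re

    def split_sections(lines):
        # Group spec lines under numbered headers like "2 Checkout".
        sections, current = {}, "Preamble"
        for line in lines:
            if re.match(r"\d+(\.\d+)*\s", line):
                current = line.strip()
            sections.setdefault(current, []).append(line)
        return sections

    def changes_per_section(old_lines, new_lines):
        # Count added/removed lines per section between two spec versions.
        old, new = split_sections(old_lines), split_sections(new_lines)
        counts = {}
        for name in old.keys() | new.keys():
            diff = difflib.unified_diff(old.get(name, []), new.get(name, []), lineterm="")
            counts[name] = sum(
                1 for d in diff
                if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))
            )
        return counts

    v1 = ["1 Login", "Users shall log in.", "2 Checkout", "Carts shall persist."]
    v2 = ["1 Login", "Users shall log in via SSO.", "2 Checkout", "Carts shall persist."]
    for name, n in sorted(changes_per_section(v1, v2).items()):
        print(f"{name}: {n} changed lines")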

2) Get written signatures.

When you ask a group of stakeholders whether anyone has any issues with the software requirements document, many times you will get silence. Silence is not approval. You could take the easy way out and move forward, but that won't help you later when the software doesn't meet the business needs. When you actually ask stakeholders to sign their names on paper, all of a sudden some of them need a few more days to do a final review. For some reason there is a psychological component to physically signing your name to a document; use this to your advantage.

3) Tailor the requirements for the audience.

There is nothing more painful than having to read a list of hundreds or thousands of shall statements. That simply is not how anyone thinks, not even us analysts. Use requirements models that your stakeholders can identify with to make it more likely that they will read and understand what they are getting. Models that the business can readily understand are click-action-response diagrams, use cases and flow diagrams.

4) Get verbal and written commitment from people before a requirements review.

There is a principle of influence which says that people try to maintain consistency. If you call people and get their commitment to review the software requirements specification before a review meeting, you will get a much higher compliance rate. It is a pain, but it really works. Even better is if you can get commitment for people to send their issues to you before the meeting. You can then specifically call people who haven't sent in their issues list. Remind them that eventually they will have to sign off in writing.

These are just a few of the techniques that we use to help ensure that the software requirements specification isn't just a paperweight gathering dust on a shelf.

Wednesday, January 04, 2006

Development of a new Requirements Analyst

My name is Eric, and I am a Requirements Analyst. Before November, the acronym “RA” held only connotations of dorm hall meetings and stale pizza, as I have only recently shed my student skin and joined employed society. I graduated from the University of Texas with a Bachelors in Communication Studies and a minor in Business. I entered this industry a tabula rasa, hired with an acceptance of the absence of any technical training on my resume and the expectation that I would learn and adapt quickly enough in an industry with no room for intellectual light-weights. This post is the first in what will be a running series on my nascent professional development.

My interest in requirements was first piqued while searching for a professional fusion of business and technology. I began my job search with all the bravado and naiveté one would expect from a recent graduate. I was unable to get excited about entry-level business jobs that didn't allow for pursuit of technical knowledge, yet I shied away from techie training programs that left my business skill set underdeveloped. I saw working as an RA as the right balance of business and technology to satisfy my competing interests. In addition, a primary tool in the RA toolbox is the ability to learn – to learn rapidly, thoroughly, and constantly. In other words, the same drive to acquire knowledge necessary for academic success is also a main ingredient for success as a Requirements Analyst.

I found this exciting, as I had done well as a student and escaped academia with my curiosity still intact. Replace paying tuition with receiving a paycheck, and the deal became almost suspiciously sweet. Certainly my learning on company dime would be focused on bodies of knowledge that advanced my projects or sharpened other industry-relevant skills, but this reveals another appealing difference between the world of a full-time student and the world of an RA. The metrics for success are no longer the clunky, abstract, and sometimes arbitrary letter grading system, but the more tangible barometers of project success (or failure) and customer (dis)satisfaction.

Upon arriving on my first day, I received a small mountain of literature, its components chosen to beef up my familiarity with four broad areas of knowledge. If they were freshman-level courses, they would have titles along the lines of:

1) Requirements 101
2) Essentials of Software Development
3) A seminar entitled “So You Want to be a Consultant”
4) A more project-specific elective, in my case “Computer Science for the Ambitious Amateur.”

The elective may seem generic and obvious, but given a different project or an RA with a different background, it could have easily been “Basics of Finance” or “UML for Dummies.” Required reading for the semester would include Software Requirements 2nd Edition by Karl Wiegers and Rapid Development by Steve McConnell.

I’ve now been on the job about two months, and consider myself to be in an enviable position. I enjoy what I do, and I’m steadily chipping away at my “New Guy” mentality and coming to see myself as an integrated part of the team. I hold no illusions about my rookie-hood, and “I have a lot to learn” is quite the understatement. However, we all have to start somewhere, and I am happy to be starting where I am. Tune in next time for Part 2!

Sunday, January 01, 2006

Everyone is talking about it

I am encouraged by the growing conversation about good requirements. For example, a recent issue of Manufacturing Business Technology had a good discussion of how quality requirements are necessary to save time and money in Best-in-class report proves out commonsense guide: minimize change (registration required).

Quoting Doug Putnam,
Change in [software] requirements is the thing that causes so many projects to go awry. It usually occurs in the last third of the project as pressure mounts to get it out the door. Teams compromise the original design and try to retrofit the changes back in, causing a lot of turbulence.
The article also states that best-in-class projects are almost 3.5 times faster to market and nearly 7.5 times cheaper. And if that isn't a good reason for managing a project correctly, I'm not sure what is.

At the same time that good requirements are becoming recognized as common sense, there are a number of online conversations about the best way to capture functional requirements. I disagree with much of the discussion and will save that for a different post, but it is great to see the debate raging on.

For a couple of samples of the online debate about how to capture requirements, start by reading the following posts and comments: The interface as spec (Signal vs Noise by 37signals) and CRUDdy use cases and Shakespeare (Tyner Blain).