Blog Entries

Last 5 Blog Posts

As part of my plan for the day, I pull together a list of the meetings that will happen today and estimate each one based on its scheduled length.

As I've watched myself work through these days, I've found that this approach grossly underestimates the impact of a meeting on my day, for a number of reasons:

  • Meetings typically require preparation. Unless the meeting is one that is on a cadence, meetings seem to require about the same amount of preparation as the actual length of the meeting. Even meetings on a regular cadence often need some level of preparation work (for example, Sprint Review requires that I put some words together to describe what happened during the Sprint).
  • Meetings typically have follow-up actions. Unless the meeting has been mainly about communication of ideas, there will be some actions to take and results to record as a consequence of the meeting. Even meetings on a regular cadence have these (for example, the weekly Penguins meeting has specific actions for me, and a requirement to record the results in some place.)

My plan for today therefore typically includes:

  • The actual meeting. The meeting itself is time-boxed. If you have taken the time to get others involved in the meeting, there must be some level of importance. The work is classified as (I)nvest so as you get to the end of the time-box you should ask “what value will we get out of extending this meeting?”
  • A “meeting preparation” task which serves both as a reminder that something is coming up, and as a placeholder to actually do something. The typical work is administrivia or low level research to get a feel for what you think about the issue (without necessarily forming hard conclusions) and so the work is classified (O)ptimize. Sometimes a lot of research is required to prepare for the meeting, in which case you might look at additional tasks that are probably classified as (I)nvest.
  • A “meeting followup” task to address actions and take care of any recording. Typically this is administrivia, and dealing with the actions usually means new user stories, so the work is classified (O)ptimize.

The impact is clearly larger than the meeting itself. For a 1 hour (~2 iterations) meeting, we typically have 1 hour (~2 iterations) of preparation work and 30 mins (~1 iteration) of administrative follow-up work. On a fully booked 6 hour (12 iteration) day you are looking at 5 iterations for any meeting in the day, or ~40% impact on the day. Huge! The impact is probably even more than that: the meeting often represents a context switch, and often (say 50% of the time on average) it is scheduled so that I do not get a complete iteration to do work (i.e. I am interrupted, or more likely I simply don't bother to start an iteration).
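To make the arithmetic concrete, here is a minimal sketch of the calculation above (the 1:1 preparation and 1:2 follow-up ratios are this post's rough observations, not measured constants):

```python
# A sketch of the meeting-impact arithmetic above; the ratios are assumptions from this post.
ITERATIONS_PER_DAY = 12   # a fully booked 6 hour day, ~30 minutes per iteration

def meeting_impact(meeting_iterations, prep_ratio=1.0, followup_ratio=0.5):
    """Total iterations consumed by the meeting, its preparation and its follow-up."""
    prep = meeting_iterations * prep_ratio          # preparation ~ length of the meeting
    followup = meeting_iterations * followup_ratio  # follow-up ~ half the meeting
    return meeting_iterations + prep + followup

total = meeting_impact(2)   # a 1 hour meeting is roughly 2 iterations
print(f"{total:g} iterations, {total / ITERATIONS_PER_DAY:.0%} of the day")
# -> 5 iterations, 42% of the day
```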

If this level of work is required (and I think it is, in order to make sure that meetings I call are effective), then I need to ask: if I call a lot of meetings each and every day, am I really being effective?

2016/06/20 18:51 · hpsamios · 0 Comments · 0 Linkbacks

I am not sure entirely how it happens. As part of training for Scrum Teams, we introduce the concept of Technical Debt so we can reduce the harm we are introducing into our code through shortcuts, and to start a discussion about investment required to bring down the technical debt associated with our fielded products. We talk about Ward Cunningham's original definition of Technical Debt. We talk about how to reduce creation of more Technical Debt through Definition of Done. We talk about ways to reduce Technical Debt, by characterizing it, and making it visible so it can be dealt with.

The problem is that the idea of “addressing Technical Debt” becomes synonymous with “fixing defects” over time. Or at least it does in shops I've worked in. The thinking is “if we are fixing defects we must be reducing our Technical Debt” and we feel good that we are doing this. Further, it leads to thinking that we can “take on” Technical Debt as a business decision and that this is OK because we can plan for defect fixes in the future, determine how much debt there is, and so on.

Let's be perfectly clear. Addressing Technical Debt is not the same as fixing defects.

Technical Debt is the poorly structured code (the 30,000 line function that grew over time), the code that people don't want to deal with (the module that no one or “only George” can touch without fear of breaking it), the code that obscures its intent, the code that is so rigid it cannot be changed, the code that is not covered by automated tests, the code that cannot be easily read. In other words it is the cruft we've built up in our code base because of shortcuts and poor decisions we have made in the past.

Addressing Technical Debt may help you in the future by reducing the number of defects. For example, if you re-factor code that is the source of a lot of support calls so that it is easier to work with, you may end up with fewer support calls on that area of code. But the reverse is not necessarily true. For example, by fixing the defect you may in fact introduce another coding short cut and so increase Technical Debt rather than reducing it. This is especially true if you take the view that you then want to manage new defects rather than simply getting rid of them while the context is fresh in your mind.

More importantly, if you make changes that improve your ability to change the code and reduce the number of defects you have, then you also have more time to work on items that actually produce value.

Creating Technical Debt should not be intentional. We shouldn't normally create a situation where we manage bugs we have just introduced. We should write the code so there are no bugs. In particular, if you leave the defect in the code that you've just worked on, then this just means you've done poor work. And calling it a more formal name like “Technical Debt” won't change the fact that you knowingly did poor work in an area. As Uncle Bob Martin says, "A mess is not a technical debt. A mess is just a mess."

How do we go about changing this perception? Perhaps the simplest approach is to get a group of people in a room and have them brainstorm what they think “Technical Debt” is. If your experience is like mine, you'll get a number of people saying something like “fixing defects.” This opens the door to a more nuanced discussion about defects, technical debt, and the approaches you have toward improving the quality of the product.

We also need to make sure we start dealing with the Technical Debt we have in our fielded products. We need an incremental approach to addressing existing technical debt, but one that also allows us to plan a bit better how we take this on. In the book The People's Scrum, Tobias Mayer offers a slightly different take on an incremental approach to paying off technical debt, based on some work with a team in New York City:

  • When a team member comes across bad code during a Sprint they don't fix it unless it is crucial for the feature they are actually working on. Instead they note the file, line numbers and features that the poor code touches, and track this as “Technical Debt”. (Note: the team did this with sticky notes, but I expect there are lots of ways of doing this.)
  • During Sprint Planning, as the team discusses potential stories to be worked on, team members quantify the known danger areas in the code using the information tracked as “Technical Debt.” The team then applies the policy of “never add more debt”: coding around existing, known problems adds debt, so taking a feature on in a Sprint requires that the related technical debt be addressed as well. This is discussed as part of the overall plan and the estimate for the user story is increased based on the known technical debt information.

If the product owner still wants the feature (it may have become very expensive in comparison to initial estimates), the team would add the Technical Debt items as (sub-)tasks on the user story.

2016/06/20 18:42 · hpsamios · 0 Comments · 0 Linkbacks

When an organization transforms to Scrum / Agile, one of the key things we change is the way we determine who does work. Previously we would have managers decide which people would do what work. If projects were large we’d build extensive systems to track the allocation of “full time equivalents (FTE)” to projects. In other words, we brought people (or in fact parts of people) to the work. With Scrum the unit of execution is not an FTE but a team (software = team). With Scrum the idea is that we want a team to work together for a period of time because, when they do this, they become more than the sum of their parts and produce more value. To allow this to happen we change the “resourcing” problem so that we bring work to the team instead of forming groups of people around work.

This is a big change for an organization that has grown up with traditional project management. It is a total change in the way management thinks about software development, and a difference for the people doing the work as well. Many times there is still a tendency for management to want to tinker with teams to deal with perceived issues they see as negatively impacting productivity. To management this makes sense: most managers can point to cases where, in the past, a change in personnel had the desired result. To help make the Scrum transition we establish a prohibition against making changes to teams so that people don’t revert to “old behavior” in the face of pressure. But is this really the best approach?

At first blush, the approach does make sense. How did we figure this out? Scrum Masters track data so they can have (data-backed) discussions with team members about what teams need to do to improve, especially for retrospectives. Part of what is tracked is the actual velocity of the team (how much work is done in a Sprint) and the number of team members that are on the team in that Sprint. Collecting this data allows us to understand the relationship between changes in team size and velocity (see Mike Cohn’s excellent blog on the approach we used).

Our first analysis against one set of velocity data showed that the overall impact of adding or subtracting a person from a team was a reduction in velocity of about 21% for the next three sprints on average. So at least this is a short-term confirmation that change has a negative impact on average. This is useful information for planning purposes. The message is “don’t assume that when you make a change to the team you’ll see an immediate improvement.” In some ways this is a validation of Brooks' Law.
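For anyone who wants to run a similar check against their own Scrum Master data, here is a hypothetical sketch of the calculation; this is not the actual script or data we used, and the record layout is invented for illustration:

```python
# Hypothetical sketch: average % change in velocity after a team-membership change,
# comparing the sprints just after the change with the sprints just before it.
def velocity_impact(sprints, window=3):
    """sprints: list of (sprint_number, team_size, velocity) for one team, in order."""
    changes = []
    for i in range(1, len(sprints)):
        if sprints[i][1] != sprints[i - 1][1]:                     # team size changed here
            before = [v for _, _, v in sprints[max(0, i - window):i]]
            after = [v for _, _, v in sprints[i:i + window]]
            changes.append((sum(after) / len(after)) / (sum(before) / len(before)) - 1)
    return sum(changes) / len(changes) if changes else None

# Made-up data: a person is added at sprint 4 and velocity dips for a few sprints.
data = [(1, 7, 40), (2, 7, 42), (3, 7, 41), (4, 8, 30), (5, 8, 33), (6, 8, 36)]
print(f"{velocity_impact(data):+.1%}")   # roughly -20% for this invented data
```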

However this is not the end of the discussion. The questions that result from this analysis are “does the velocity of the team recover?” and “what conditions allow this to happen?” If we assume that a team becomes a “norming” team after 6 sprints on average (this is accepted knowledge within the Scrum / Agile community), we can understand the impact of the changes we make to teams by looking at:

  • High-churn teams: never have consistent team membership for 6 sprints in a row during the life of the team
  • Low-churn teams: had at least one stretch of consistent team membership for 6 sprints in a row during the life of the team; roughly one or two changes a year
  • No-churn teams: consistent team membership for the entire life of the team

Before we run the analysis we need to make predictions:

  • No-churn teams should show the most improvement (we used “targeted value increase” or TVI as an indicator of this – see Scott Downey's videos on metrics), followed by low-churn teams, then high-churn teams.
  • High-churn teams will result in increased / decreased velocity based on increasing / decreasing number of team members. This is because teams have not jelled so the total ability is simply the sum of the parts.
  • Low-churn teams will result in decreased velocity for some part of the next 6 sprints no matter if you add or subtract team members (impact on team dynamics, return to velocity should be within 6 months).

Starting with prediction number 1, we find that our prediction does not pan out. Looking at the chart below we see that the improvement for low-churn teams was in fact higher than for no-churn teams, while, as expected, both showed more improvement than high-churn teams.

We need to dig a little deeper to understand what is happening here. Interviews with the management involved in these changes revealed that there are differences in what happens based on who is being added to or subtracted from the team, and that there is a place for judicious change:

  • Removing a toxic / ineffective member of the team. The rest of the team is seen to “break free.” We have seen examples of a particular person on the team being very negative about the work so that it brings down the rest of the team. We have seen examples where a team member is seen as ineffective by the other team members so the team feels that person is not pulling their weight. In both cases, removing that person allowed the team to improve.
  • Adding a specific skill set to a team, to overcome something which is impeding delivery. We have seen cases where we needed to overcome a problem with the initial formation of the team, in that the team was not able to support the kind of work it was being asked to do because of a lack of technical skills. Adding people with the appropriate skill set to the team allows the team to improve. Obvious, right? Not so obviously, we have also seen improvements from addressing the more non-technical skills. For example, replacing a Scrum Master who is “going through the motions” with a person who has a strong sense of Scrum practice and can inspire a team to more effective behaviors results in improved team performance.
  • Changing the dynamics of the team. The best example of this is changing a team that has remote people on it into teams that are co-located. In our case we had a number of teams that were 50 / 50 US and India based. They were set up that way because the India-based team members were typically new hires and so needed extensive mentoring to get started. Over time, the India-based parts of the teams were increasingly able to complete work without US assistance. By splitting the teams along geographical lines we improved the output of both halves, since there was a lot less communication overhead.
  • Fresh blood. Sometimes the addition of a team member results in an infusion of new ideas to the team, resulting in an overall pickup. Perhaps the ideas come from the other team the new team member worked on. Perhaps it's just the enthusiasm generated when a new person is brought in (“management must think we are doing important work as they gave us this new person”). Irrespective, there seems to be a positive result just from being new to the team.

Turning to the second prediction: if you add or subtract a person from a high-churn team, you basically see a similar increase or decrease in velocity going forward (since team members have not jelled, the total ability of the team is simply the sum of the parts). This prediction basically pans out. Subtracting a team member reduces the velocity for high-churn teams, while adding a team member increases it.

Note that I would ignore the results for the “-2”, “2” and “3” cases, as the number of teams that actually went through this level of change is 3 or fewer in each case (versus more than 20 for the “-1” and “1” cases).

The final prediction, that low-churn teams will show decreased velocity for the next 6 sprints, only partially holds. If you take away a person it will take the team about 5 sprints to recover. Interestingly, if you increase a low-churn team by 1, you will see an increase in velocity starting with the 3rd sprint. This seems to back up the idea that changes to low-churn teams are the result of judicious adjustment, and so in general the effect is positive.

One final point before the summary. If the new team member joins a team and doesn't understand the basic process of the team you can expect longer adaptation times. Although I have no data to back this up, one reason we are able to do “judicious changes” is that every single person in our organization has gone through 2-3 day Scrum training, which talks about both principles and practices and aims at getting teams started using Scrum the very next day. This means at one level it is easy for people to move around teams as they have a common language (based on Scrum and Agile), a common set of artifacts (backlogs, definition of done), a common set of ceremonies (Sprint Planning, Sprint Review, Sprint Retrospective) and common roles (Scrum Master, Product Owner). Every team has different work, different culture, and different processes, but this basic Scrum practice is constant and allows for easier assimilation.

In summary:

  • Making a judicious addition to the team will increase the velocity of the team on average; the effect will be seen pretty quickly, and the team has the potential to increase its velocity a lot (more than the sum of its parts), provided it happens no more than once every 6 sprints.
  • Subtracting a person from a team (again, judiciously) means the team will, on average, recover its velocity within 3 sprints.
  • High churn teams should be avoided.
  • We might want to look at changing things up on no-churn teams to bring in fresh ideas.

Most of the analysis here is done using averages on good-sized populations. As we worked these issues we found that while this might offer insight and guidelines for the general case, there are always exceptions to the rule. To determine what makes sense, look at the velocity / membership information for the teams you are working with and see whether there is an indication that something might need to be addressed. The next step is to work with the team to understand what is really happening. In other words, the data is just that – data – and no decisions should be made simply using the data alone.

My thanks to the following people for input into this:

  • Scrum Masters, for collecting the data to support the analysis
  • Bob Schatz for discussion on how “who gets changed” is an important consideration
  • Scott Duncan for discussion on how the predictions were expected to turn out
  • Wayne Morgan on specific people changes we have seen and the results from those changes.
2016/06/15 08:52 · hpsamios · 0 Comments · 1 Linkback

As a result of some “advanced training materials” I have been working on, I've been thinking recently about how Scrum is helping us and how we expect it to help us more. This has driven me to deeper thinking on the nature of software development. Over the past couple of Agile conferences I've come across references to the Cynefin framework. For those that have not heard about this approach, Cynefin offers a way to look at the nature of organizations and decision making, and to understand levels of complexity as we make decisions.

Cynefin divides problems into 5 domains:

  • 2 ordered domains, simple and complicated
  • 2 un-ordered domains, complex and chaotic
  • 1 domain called “disorder”, for when we don't know which domain we are in.

When you are trying to categorize (note: this is one use of the model) the problem space you are working in, you inspect the relationship between cause and effect in that problem space. If the relationship between cause and effect is straightforward and obvious to all, then your problem is in the simple domain. If the relationship between cause and effect is not obvious, but can be analyzed in advance, then you have a complicated problem. If the cause and effect can only be determined with the benefit of hindsight, then you are in the complex domain, while if there is no obvious relationship between cause and effect, you are in the chaotic domain. Pictorially this is usually represented as the familiar four-quadrant Cynefin diagram.

Sounds logical, right? But how does this help us understand development? It turns out that each of these domains has a different approach that helps us best deal with the context:

  • Simple Domain. The response in the simple context is to “sense - categorize - respond”. People assess the facts of the situation, categorize them, and then base their response on established practice. If something goes wrong, a person can usually identify the problem, categorize it and respond appropriately. This is the domain of the “best practice”.
  • Complicated Domain. The response in a complicated context is “sense - analyze - respond.” This approach often requires expertise - I can detect there is a problem with my car, but I have to take it to a mechanic who can analyze the cause and get the problem resolved. Because the complicated context calls for investigating several options and there may be more than one “correct” solution, identifying “good practice” (not “best practice”) is the best approach. This is the domain of the “expert”.
  • Complex Domain. The response in the complex context is “probe - sense - respond.” In a complicated context, at least one right answer exists. In a complex context there may be no known “correct” solution. Instead of attempting to impose a course of action, we must allow the path forward to reveal itself. This means we have to create environments where experiments (probes) allow patterns to “emerge.” It also means increasing levels of interaction and communication between stakeholders so that the changing understanding of the situation is worked on and communicated to all.
  • Chaotic Domain. The response in the chaotic domain is “act - sense - respond.” In the chaotic domain, our immediate job is not to discover patterns but to stop the pain. We must first act to establish order, then sense where stability is present and from where it is absent, and then respond by working to transform the situation from chaos to complexity, where the identification of emerging patterns can both help prevent future crises and discern new opportunities. This is the domain of rapid response - “novel practice.”

OK, great, but what now? If we look at the problem of software development, we can see we often have a complex problem. When things go wrong, it is often obvious with the benefit of hindsight what happened and why. As Steve McConnell says, “There is a limit to how well a project can go but no limit to how many problems can occur.” The problem is that we cannot always learn enough to say “in this situation we will fix it in the following way” as there is a different problem the next time around. This is not to say the whole process is complex, but significant parts of it are.

The fallacy of the past is that we have treated software development like it is a complicated problem. We'd analyze and hope to come up with a plan that made sense. We'd then be surprised, time after time, when it didn't work out. With the benefit of hindsight, no pun intended, we can now see why this did not work. We were often dealing with a problem that is in the complex domain.

For example, take something as basic and common as the estimation process. It is pretty clear that if we estimate a piece of work, and if we focus on improving our estimates over time, our estimates will often be good (see Our Estimates are Terrible!). But we also know that we will sometimes be wrong, and sometimes in a pretty dramatic fashion. In a software project, this causes problems because it means we are not going to be able to predict our overall plan. But is it really because we have a poor estimate, or more because of the nature of the process? If you analyze what happens you will often find that a problem with an estimate is the result of some unforeseen event (a change in requirements, a new understanding of the software architecture) and not the result of poor estimating. This is not to say we should not estimate. It only says that we should understand the limitations of the data we are providing and act accordingly.

We can now see why the Scrum approach helps. Tools such as iterations with their planning, reviews and retrospectives offer an approach which mimics the “probe - sense - respond” approach of the complex domain. The emphasis on collaboration and communication also helps us deal with the complex environment. It also helps us understand why we cannot just “best practice” the software development process and why even identifying good practices might not be ideal in all situations.

As we work to improve the results of our software development process, we need to remember that a lot of the issues we face are “complex”. This means we need to apply the “complex” tools to help us address the problem.

As a final note, I have talked about only one aspect of Cynefin and have simplified the thinking process for a particular application. There are many other aspects that can be brought to bear using this tool.

In addition I have leaned heavily on the work of others (e.g. Dave Snowden, who developed Cynefin, John Helm (Agile 2013 Conference), and Simon Bennett (Agile 2011 Conference)) to help me pull this together from the perspective of software development. Brilliance is a result of their thinking. Errors are mine.

2016/06/10 14:27 · hpsamios · 0 Comments · 3 Linkbacks

Or “aggressive product backlog triage”.

In Scrum we know that requirements are tracked in the product backlog (actually it tracks more than that, but this subset of product backlog items is certainly there). If we are not careful, this quickly leads to the idea that the product backlog is where we do what we traditionally called “requirements management”. We end up with the idea that “if we think it is a good idea it should be on the backlog”, which results in an ever increasing list of requirements. My view is that this is a problem because, as the list grows, we have an increasing number of backlog items that will never actually get implemented. But we still apply time to managing this list, working on improving how we track this list, and so on. As Peter Drucker says, “There is nothing so useless as doing more efficiently what should not be done at all.”

Think about some of the effort you will waste by managing a large list:

  • You will waste time every time you go back and re-visit this list of requirements, and this will happen at least once every planning session. It is the same as the difference between having 600 items in your email inbox and having 5. If you have 600 you will spend time working through and worrying about those 600 even if you don't think you do. (If you don't believe me, work to halve your inbox and see how much better it is.)
  • There is a high probability that something that “did not make the cut” this time will also “not make the cut” next time. The business has moved on and there are always high priority items coming in.
  • You will lose touch with all these requirements. Having a huge list means it's hard to keep everything in mind, and so more difficult to focus on what is important.
  • There is an overhead associated with large lists. Every time something needs to be done, you are now positioning it in a huge list, so it takes longer to do simple things. And if you have huge lists you need something to categorize items into meaningful groups so it's easier to think about them, and so there is overhead in setting up and maintaining a management system around all this.
  • There is no relationship between being “on the list” and “having a chance to actually get done”. We may feel better telling customers that “it is on the list”. It may be easier to communicate with stakeholders that it is “on the list”, but the reality is that only those at the top of the list have any chance of actually being worked. What is sad is that I suspect that people who hear “it's on the list” understand that it really means “you aren't going to get it”, but we still maintain the fantasy anyway.
  • In one way this list represents “work in progress”. What we know from queuing theory (Little's Law in particular) is that, for a given throughput, the time an item spends waiting is proportional to the size of the queue, so reducing the size of the queue gets items through it faster (there is a small illustration just after this list).
  • Clarity is lost for anyone looking at the list. Because there are special labels and meanings associated with fields as we try to manage all this data, anyone coming to the list without this special encoding in mind will form an incorrect or incomplete understanding of what the list means.
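To illustrate the queuing point, here is Little's Law applied to a backlog; the throughput and queue sizes below are made-up numbers for illustration only:

```python
# Little's Law: average items in the system = throughput x average time in the system,
# so average wait = queue size / throughput. Illustrative numbers only.
throughput = 10   # backlog items completed per sprint, on average
for queue_size in (600, 60):
    wait_sprints = queue_size / throughput
    print(f"{queue_size} items in the backlog -> ~{wait_sprints:.0f} sprints average wait for an item")
# 600 items -> ~60 sprints; 60 items -> ~6 sprints
```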

If you have items in the list that are not actually going to be worked, then you have spent effort that produces no value. How much effort you have wasted will depend on what you have done. If you just wrote a user story, the effort is low. If you've defined Conditions of Satisfaction, or had the team estimate the item, then there is more wasted effort. You won't be able to get rid of all this wasted effort (for example, you might need an estimate on something to decide whether an item is worth considering), but if you do this for each and every item in the backlog it adds up to a lot of non-value-producing effort. The question you have to ask yourself is whether this overhead of the big list is worth the result. My view is that in most cases it is not, and that a more aggressive approach to maintaining the product backlog makes sense.

How do you clean up the list you have? In the first instance you will have a natural reluctance to delete “a perfectly good record”, so perhaps the best idea is to set up an “attic”. The idea behind the attic is that it is a special classification of product backlog items where, in the normal course of work, you never see the stuff that's in the attic. You then go through your product backlog and decide which items need to be put up into the attic. You will also need to make a call as to how (or perhaps, disingenuously, whether) you will tell interested stakeholders about the disposal of their items.

The criteria for defining candidates for the attic will vary based on circumstances, but here is a set of starting ideas. Firstly, understand how much you will likely be able to do over a period of time, to set an upper limit on how much should be in the product backlog. For example, if you are working with a single team, use the velocity information over a time period of “the next couple of releases” to set an upper limit either in terms of number of stories or, more usefully, in terms of the number of points in the product backlog. So if your team velocity is 50 on 4 week sprints and you have a yearly release cycle, you could set the upper limit to “this release plus a little of the next release”: 50 (velocity) x 12 (sprints per year) x 1.5 (releases) = 900 points. Now you have a target size of product backlog based on “what might actually be delivered”. For multiple teams on a single product, apply the same thinking.
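The same ceiling calculation written out, using the example's numbers (they are illustrative, not a recommendation):

```python
# Backlog ceiling from the worked example above: velocity x sprints per year x release horizon.
velocity_per_sprint = 50    # points per 4 week sprint
sprints_per_year = 12
release_horizon = 1.5       # "this release plus a little of the next release"

ceiling = velocity_per_sprint * sprints_per_year * release_horizon
print(f"Product backlog ceiling: {ceiling:.0f} points")   # -> 900 points
```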

How do we get to 900 points? You can start by looking at some large groups of potential attic candidates by establishing criteria (a small filtering sketch follows the list):

  • Have a look at your product roadmap. Are there items in the backlog that do not relate to anything on the roadmap? If yes, then get rid of these.
  • Put the bottom half of your backlog in the attic. Trust me, even without looking at your backlog, unless you are already doing aggressive triage on the backlog, there is a bunch of stuff that you should never revisit. My assumption is that these are already toward the bottom since most people work actively on the top of the backlog. So put them in the attic.
  • Set an “age” criteria. For example, “if the item was created more than 2 years ago” or “last viewed more than 1.5 years ago”, then put it in the attic. The thinking here is that if it's been there for a long time and we haven't done anything about it, then it is unlikely this will change in the future.
  • Set a “completeness” criteria. For example, “must be in user story format, have conditions of satisfaction, have “investment allocation” filled in and have an estimate”. The thinking here is that if the backlog item is not a good item, then the team won't work on it anyway. This has the side effect of getting your important product backlog items “complete” as part of the process.
  • Set a “type” criteria. For example, “we will only work 20% of capacity on defects”, and use this information to weed out long lists of specific types. The thinking here is similar to setting the upper limit in the first place.
  • Set a “priority” or “value” criteria. For example, “we will not work any P4 / low priority items reported internally”. The thinking here is that if it is of little importance, we aren't going to do anything about it.
  • Or combinations of the above, or other criteria you have.
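If your backlog lives somewhere you can script against, criteria like these can be applied mechanically. Here is a hypothetical sketch; the field names, thresholds and the fixed “today” date are invented for illustration and would need to be adapted to your own tool:

```python
# Hypothetical attic filter combining a few of the criteria above.
from datetime import date, timedelta

def belongs_in_attic(item, today=date(2016, 6, 6), max_age=timedelta(days=730)):
    """True if a backlog item matches any of the example attic criteria."""
    too_old = today - item["created"] > max_age                               # "age" criteria
    incomplete = not (item.get("story") and item.get("conditions_of_satisfaction")
                      and item.get("estimate"))                               # "completeness" criteria
    low_priority = item.get("priority") == "P4"                               # "priority" / "value" criteria
    return too_old or incomplete or low_priority

backlog = [
    {"story": "As a user ...", "conditions_of_satisfaction": "...", "estimate": 5,
     "priority": "P1", "created": date(2016, 5, 1)},
    {"story": "As a user ...", "conditions_of_satisfaction": None, "estimate": None,
     "priority": "P4", "created": date(2013, 1, 15)},
]
attic = [item for item in backlog if belongs_in_attic(item)]
keep = [item for item in backlog if not belongs_in_attic(item)]
print(f"{len(attic)} item(s) to the attic, {len(keep)} kept")   # -> 1 item(s) to the attic, 1 kept
```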

The really brave among you will set up a few of these criteria and simply apply them. After all, if we put things into the attic, it's easy to get them back down from the attic if we need to. What you'll mostly find is that when something important “comes back” again (and trust me, important items will come back) and you go up into the attic to find the relevant record, the attic item will not be quite the same thing, and so you'll end up creating a new item that reflects current understanding.

After applying this kind of thinking you will probably find you still have too many items in the product backlog. The low hanging fruit is gone, so now you have to go through the items individually and decide on disposal:

  • Go through each of the items in your product backlog. Ask yourself how the item fits into the vision and goals you have established for the product (you do have release goals, right?). If it doesn't fit in, perhaps it is a candidate for the attic.
  • Rigorously order the product backlog. Many people explain to me that their product backlog is not ordered (or, to apply the old term, prioritized). While there are a number of reasons for this, if your product backlog is not ordered per what you think is important, it is still ordered in some way, and you are missing an important tool to get the best out of your team. If the order really were not important, then it would not matter if we drew a line where the cumulative points in the backlog sum to 900 and killed anything below that line. Most people would balk at this approach, which implies there really is an order that matters. Get your backlog ordered so you can make these kinds of calls.

Once you have cleaned up your backlog, you now need to set things up so that you don't re-create the problem you had. Again, your approach will vary, but the base approach will come from the work you have done so far. Now that you have an upper limit on the number of items in the product backlog you can establish rules for accepting new items. For example you could say “new items can only be added to the product backlog if an existing item of equivalent size is removed from the backlog.” This approach can be applied to individual product backlog items, but also to groups of items in the case where we have a new “epic” requirement that is more important than items previously on the list. The approach will force a certain degree of “readiness” criteria for new items coming in. For example, it is hard to make this call if you don't have estimates on new items. Once again, when something is removed you will need to determine how you communicate this to relevant stakeholders.
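Here is a minimal sketch of that “one in, one out” acceptance rule, assuming the points ceiling from the earlier example and that incoming items are already estimated; the function and numbers are illustrative:

```python
# Sketch of the acceptance rule: a new item only enters once enough points have been removed.
def points_to_remove(backlog_points, new_item_points, ceiling=900):
    """Points that must leave the backlog before the new item can be added under the ceiling."""
    return max(0, backlog_points + new_item_points - ceiling)

print(points_to_remove(860, 13))   # 0 -> the new item fits as-is
print(points_to_remove(895, 13))   # 8 -> remove 8 points' worth of existing items first
```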

How you do all this and who is involved will depend on circumstances. The on-going triage after you have done the clean-up work will need to happen regularly, otherwise you build up too many new items for consideration. When this happens I've seen teams simply ignore the build-up, leading straight back to the original problem, only now it will be harder to go through the clean-up again (“we tried this before and it didn't work …”). One approach is to do this kind of work during a product backlog grooming session, but this is not the only way of achieving the result.

Once you have gone through all this, what will you have?

  • A product backlog that actually has a chance of being delivered
  • A simpler backlog of important items that by comparison are easier to track mentally
  • A process for on-going triage aimed at keeping the product backlog healthy
  • A set of product backlog items that are ready for use by the team and that have had good discussion with the team
  • The potential for more realistic communication with stakeholders (it's up to you whether you use this potential)
  • Less wastage as we make calls early on what does not get done, and so do not apply further effort to it
  • Increased clarity as there is less need for special understanding of the product backlog

To me this seems like a worthwhile set of benefits over the traditional approach. And, after a period of time, you can delete all the items in the attic. This will happen when you realize that, in fact, no one has visited any of the items in the attic for a long time (just like the attic at home).

2016/06/06 18:19 · hpsamios · 0 Comments · 1 Linkback