How Do We Talk About Iteration (Sprint) Commitments That Have Not Been Met / Done?
At the beginning of the Iteration (Sprint) we make a commitment: the individuals and the team commit to producing an increment of work that meets the Iteration (Sprint) goals. Ideally, the team does what it commits to do and demonstrates this at the Iteration (Sprint) Review / Demo. After a few Iterations (Sprints), most teams develop a rhythm, understand what they can do, and meet the commitments they have made.
But the reality is that the commitment is not a guarantee. The team will discover things, and things will happen, even in as short a period as a sprint. For example, there could be an outside impact on the team (more production support than expected); the job could have been harder than expected (“oh my, look at this code - wtf?”); or, as the work was being done, it became clear that the requirements needed to change; and so on. In these cases, from the stakeholder's perspective, trust is broken: the team did not deliver what it set out to do. How do we re-establish trust?
During the Iteration (Sprint) Review / Demo, when the team is talking about the Iteration (Sprint) (e.g. with the Iteration (Sprint) Burn-down chart as a backdrop), the team should provide explicit data on the “not done” items so it can be seen to be accountable for the commitments it made. Start the conversation by stating the planned velocity of the team, contrasting it with the actual velocity achieved, and listing the stories that were not completed along with anything that was learned as a result.
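To make this concrete, here is a minimal sketch of the kind of “not done” summary described above. The story names, point values, and the `sprint_summary` helper are hypothetical, purely for illustration; the point is simply to show planned velocity contrasted with actual velocity, plus an explicit list of incomplete stories.

```python
def sprint_summary(stories):
    """stories: list of (name, points, done) tuples for one sprint.

    Returns planned velocity (all committed points), actual velocity
    (points for completed stories only), and the names of the stories
    that were not done.
    """
    planned = sum(points for _, points, _ in stories)
    actual = sum(points for _, points, done in stories if done)
    not_done = [name for name, _, done in stories if not done]
    return planned, actual, not_done


# Hypothetical sprint data: (story, committed points, completed?)
sprint = [
    ("Checkout flow", 5, True),
    ("Search filters", 3, True),
    ("Payment retries", 8, False),  # hit by unexpected production support
]

planned, actual, not_done = sprint_summary(sprint)
print(f"Planned velocity: {planned}")          # 16
print(f"Actual velocity:  {actual}")           # 8
print(f"Not done: {', '.join(not_done)}")      # Payment retries
```

Presenting the miss this plainly (16 committed, 8 delivered, one named story not done) is exactly the unambiguous accounting the review conversation calls for.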
Here is the key idea: we are focused on the delivery of value, and we want to be unambiguous about the status of the work we have done. Working software is the primary measure of progress. The assumption is that we worked hard to progress and complete the work, and we prove that by showing the Iteration (Sprint) Burn-down and other artifacts. We do not talk about how much effort we put into the work, as that just sounds like we are making excuses. Worse, it muddies the waters in the minds of our stakeholders as to whether something is really done or not. We talk about why the team was unable to meet the commitment and, if it makes sense, about lessons learned that we expect will allow us to handle similar situations in the future. Keep the discussion short and focused on improving how we deliver business value.
This is not a blame game where we are looking for a scapegoat. But it is also in sharp contrast to what is often heard at traditional status meetings. For example, we often hear “I think we did a great job … if you just look at the results from this perspective.” We hear long-winded excuses couched in techno-babble about why we weren't able to get the work done. Remember that
“'Technical success' is a euphemism for 'failure'.”
We often put the blame on other people or organizations where, while it might be true that another party was involved, it is also true that we could have worked the issue in a different way and met the commitment.
Sometimes we hope that people won't really see what this means, namely that we did not meet the commitment, and that they will forget they were after specific business value.
No matter the motives, this approach does not increase a stakeholder’s ability to trust the team the next time around. The stakeholder has a question in the back of her mind: “what's to stop this from happening again?” The best response is to be upfront about the miss and clear about what you have learned as a result.
Note: This note was written on the basis of Iteration (Sprint) Commitments, but the discussion could apply to any kind of commitment that we make. For example, in SAFe we talk about committing to PI Objectives, so the discussion can be applied to “how should we talk about PI Objective commitments that have not been met.”