How Can We Improve The Quality of Feedback at a Sprint Review?

I’ve had a number of discussions with teams recently where people are questioning the value of holding a Sprint Review. The feeling is that they are not getting good feedback from their internal or external stakeholders. Teams that raise this issue are often frustrated because their prior experience with Sprint Reviews had been excellent; today it feels like they are just going through the motions, and they find themselves in a bit of a rut.

Some of this is, of course, a self-fulfilling prophecy. If you are not excited about the work you have completed and the Sprint Review you are about to undertake, how can you expect your stakeholders to be excited? It turns out that many teams are, in fact, in a rut. Symptoms include:

  • The team uses the same agenda it had when it first started doing Sprint Reviews. They still talk in detail about the team members even though the membership has not changed since the last Sprint Review. This leads to a cargo cult mentality where people are just going through the motions because that is what they feel they have to do.
  • Management is invited, but increasingly does not show up. In the past there was excitement about seeing progress demonstrated; today that is considered normal, and reporting of progress is so open and transparent that management no longer feels the same need to participate.
  • When management does show up, the only thing that is heard is “good job.” Some managers are not sure what else they should be saying to a self-organized team. Sometimes management is there in body but not in mind (e.g. engaged in email on their phone), leaving a poor impression of the importance of the work or of their willingness to work through issues.

But much worse than this, we are also not getting any feedback from customers.

The purpose of the Sprint Review is to get feedback on the increment just delivered by the team. As with all feedback, the idea is to make it easy to give (“frictionless”), timely, and received in the appropriate context. Done well, the Sprint Review has all of these characteristics: we schedule a regular meeting immediately at the end of the Sprint, demonstrate only what has been done, and then talk through what we have seen.

Some things that will help:

  • During the next Sprint Review, make sure you are clear about what happened to the feedback from the previous Sprint Review. If you want your stakeholders to participate, then they need to know that you are listening. This does not mean that you have to do everything they say. The point of the collaborative nature of the feedback is that you discuss, not just agree. A valid response might be “we heard about this, think it’s a good idea, but think that we need to do these other things first - does that make sense to you?”
  • In terms of the agenda:
    • Don’t go through every single user story describing it in detail. Talk about the major goals or themes for the Sprint (perhaps listing the individual stories that correspond to each theme).
    • Don’t dwell overly much on raw statistics like “velocity”. Instead, mention whether this Sprint was “better”, “worse” or “about the same” compared to the historical velocity trend (see the sketch after this list for one way to frame that comparison).
    • Do get as quickly as possible to the things we really want to discuss with our stakeholders. If it takes more than five minutes to get to the demos, review what is happening and make sure it helps rather than hinders the purpose of the Sprint Review.
  • Don’t just ask for feedback; drive the conversation. For example you could say “As we were working this item we were thinking that this other feature is required - what do you think?” or “We came up with two approaches to this and settled on this one – which direction would you have taken?” While you may not change anything as a result of saying these things, you will encourage a different level of participation, which will offer up improved opportunities to learn. Other ways to drive the discussion include:
    • Discussion of the impact the features will have on administrators, implementers in the field, and other features under consideration
    • Discussion of items that affect development work in other teams
  • Use silence to create pressure. If you ask a question of your stakeholders, don’t just provide the answer to them. Shut up! And stay quiet longer than you feel comfortable staying quiet. It is surprising how often people will speak to fill a void, and also surprising what you will learn as a result.
  • Split the job of facilitation (Scrum Master) from collecting and discussing feedback (Product Owner). Some Scrum teams report that the Product Owner behaves more like a stakeholder than a team member. The Product Owner should drive the feedback conversation since it is a conversation that directly impacts the Product Backlog. In addition, customer relationships often run through the Product Owner, so a lack of participation by the Product Owner creates a strange dynamic.
  • Make sure the Product Owner summarizes what was discussed and the potential impact on the Product Backlog. This helps stakeholders feel that their input and involvement in the Sprint Review was useful.
  • Place the Sprint Review in the context of the overall release plan to provide context (“we are here on this trip”) and show progress (“we are getting closer to the end”). This also improves the perceived value of being involved in the Sprint Review.
  • Don’t be afraid to rotate stakeholders (especially customers) off the Sprint Review and bring in a fresh viewpoint (and fresh excitement). I understand this has political overtones, so be careful how you do it.
  • Some teams indicate that they are getting feedback but it is not related to the subject at hand (e.g. bug reports, demands for new features). If issues unrelated to the subject at hand arise in the review, the Scrum Master should facilitate – “I will set up a meeting to discuss …”
  • Occasionally ask your stakeholders for ideas on how to improve the Sprint Review. In other words, do a Retrospective of the Sprint Review. This could be done at the end of the Sprint Review or as a separate activity (e.g. a survey). Start the conversation with “We want to make sure that the Sprint Review offers a good return for the amount of time you have invested with us … what would you like to see us change about the Sprint Review to make it more useful to you?”
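
The velocity point above can be made concrete with a small, hypothetical helper. This is a minimal sketch only: the average-of-history comparison, the ±10% tolerance and the function name are illustrative assumptions, not a standard rule.

```typescript
// Classify the current Sprint's velocity against the historical trend.
// The rolling-average comparison and the ±10% tolerance are assumptions
// chosen purely for illustration.
function classifyVelocity(history: number[], current: number, tolerance = 0.1): string {
  if (history.length === 0) return "no history yet";
  const average = history.reduce((sum, v) => sum + v, 0) / history.length;
  if (current > average * (1 + tolerance)) return "better";
  if (current < average * (1 - tolerance)) return "worse";
  return "about the same";
}

// Example: previous Sprints completed 21, 18, 24 and 20 points; this Sprint 19.
console.log(classifyVelocity([21, 18, 24, 20], 19)); // "about the same"
```

However you calculate it, the point is to report the comparison (“about the same”), not the raw number.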

In most cases, the most important feedback is provided by customers of the product. Feedback from internal stakeholders can be about the functionality, but more importantly it is about making sure the team is addressing the technical and directional things it should be addressing and, if it isn’t, determining how the team can be helped to do the right thing. Internal stakeholders also help increase team engagement by being appreciative of the work the team is doing. This means that internal stakeholders (especially management) should be proactive about responding to what they are seeing. Feedback could be used to stress certain expectations, for example:

  • “How do you feel about the quality of the work you have done?”
  • “What is your level of automated test coverage at the moment?”
  • “How do you know?”

If you have nothing else to offer, at least be explicit about what you think was good (e.g. “After seeing the demo, I cannot come up with anything I would have changed or done differently. By the way, I really liked the way the user interface came together here.”)

The general principle of product feedback is to do whatever it takes to allow feedback on your product with minimal friction, within the shortest possible time frame, and in a way that ensures the feedback is received in context. The Sprint Review is just one opportunity to get this. Other ways to get increased product feedback include:

  • Not everyone can be involved in every Sprint Review, and there may be situations where you need another “review” session. For example, field or customer leadership might not be able to be involved in each and every Sprint Review, but we would still like their input. One idea is to set up a special event for this. We are seeing, for example, PSI (Potentially Shippable Increment) Reviews, where demos are pulled together that combine the work of multiple teams over multiple Sprints. The downside is that this feedback loop is longer than if the feedback were provided at the Sprint Review, and the people involved need to understand that our ability to change course will be slower. At least the feedback comes in before the product is released.
  • One that we probably don’t use enough is feedback sent directly from a user on one of our screens to the Product Owner. For example, Atlassian puts a “Feedback” button on all screens in their products, which sends information back to the Product Owner. The context is defined by the screen the user is on, and in the background the tool gathers up information about the configuration. The form is very simple (summary, description and optional attachments), so there is no friction for the user in providing the information. In particular, the form does not ask any “who” information – it is more important to get the feedback than to know the specific person it came from. Again, the product feedback cycle is longer than with a Sprint Review, but it still helps inform decisions about the product. A rough sketch of such a form is shown below.
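
To make the shape of such a feedback form concrete, here is a minimal sketch in TypeScript. The /api/feedback endpoint, the field names and the payload structure are assumptions for illustration; this is not Atlassian’s actual API.

```typescript
// Minimal sketch of an in-product feedback submission. The endpoint and field
// names are hypothetical; the point is the shape: a short summary, a free-form
// description, optional attachments, plus context gathered in the background,
// and deliberately no "who" fields.
interface FeedbackPayload {
  summary: string;                  // one-line summary typed by the user
  description: string;              // free-form detail typed by the user
  attachments?: File[];             // optional screenshots or files
  screen: string;                   // which screen the user was on (context)
  config: Record<string, string>;   // configuration gathered automatically
}

async function sendFeedback(payload: FeedbackPayload): Promise<void> {
  const form = new FormData();
  form.append("summary", payload.summary);
  form.append("description", payload.description);
  form.append("screen", payload.screen);
  form.append("config", JSON.stringify(payload.config));
  (payload.attachments ?? []).forEach((file) => form.append("attachments", file));
  await fetch("/api/feedback", { method: "POST", body: form });
}
```

Because the user only fills in the summary and description, the friction stays low, while the context (screen and configuration) travels with the feedback automatically.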