Retrospective talk

Last TEK day I did a presentation on retrospectives, aiming to generate discussion and a deeper understanding within my company and team of what retrospectives are supposed to be about.

I’ve had some good feedback since then from people who were initially resistant to retrospectives.  However, there is still lots of work to be done – as always in this space 🙂

Attached is the presentation I talked from, though it is mostly bullet points meant to spark ideas.  Perhaps someone else will find it useful.

Retrospectives: A focus on getting better


Retrospective Analytics – Sizing and Bugs

A Problem with Size?

In How do you really know enough I mentioned that one of my teams was gathering data around sizing.  This came out of a retrospective 4 sprints back where the team were a bit frustrated with their sizing.  They decided to track the information to see if there was anything to learn.  In our last retrospective we looked at that data.

Problem: The team felt that their relative sizing wasn’t accurate enough and wanted to take a closer look at it.

Background: We had just gone through a process of producing a bunch of documentation for a web service.  I suspect this, along with one or two other outliers, was a key reason for thinking sizing was an issue.  Basically the team were doing activities they weren’t used to and were underestimating the impact of generating the Sandcastle documentation – both because they were adopting a new technology and because of the back-and-forth communication that always seems to occur when getting a document to be good enough.  Fundamentally the Definition of Done (DoD) for the documentation wasn’t as clear as for coding a story and required a lot more communication overhead.

Solution: Gather data and analyse it.  One team member signed up to gather the data, and once the team felt they had enough we would analyse it.

Outcomes: We analysed the data in our last retrospective.  Looking across all the sizing for the last 4 sprints, about 70% of estimates were spot on, with the remainder either one size down or one size up.  There were no outliers beyond a one-size difference.  This was interesting, and the team could now see that they were actually doing a pretty good job.

So going back to the original problem – the team determined that the real root cause of the sizing frustration was the outliers: new technologies or new types of work, such as adopting a tool like Sandcastle, or doing work you’re less used to sizing, such as writing documentation.  This was also work we didn’t do much of in the sprints we were measuring.

End result: We’re pretty good at sizing and really shouldn’t be sweating it.  But we need to be aware of underestimating the effort needed for work we aren’t familiar with – and even then it only really matters if we also need to do date-based planning against the velocity/commitment that we hope to achieve.
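
As an aside, the tallying itself is trivial once the data exists.  Here is a minimal sketch of the kind of delta count we did, assuming stories are sized on an ordered scale and both the estimated and actual size were recorded per story (the scale, data and function names below are hypothetical, not our actual tracking sheet):

```python
from collections import Counter

# Hypothetical relative-size scale, ordered smallest to largest.
SIZES = ["XS", "S", "M", "L", "XL"]

def size_delta(estimated: str, actual: str) -> int:
    """Steps between actual and estimated size.
    0 = spot on, +1 = one size under-estimated, -1 = one size over-estimated."""
    return SIZES.index(actual) - SIZES.index(estimated)

def sizing_report(stories):
    """Print how sizing deltas are distributed across the tracked stories."""
    deltas = Counter(size_delta(est, act) for est, act in stories)
    total = sum(deltas.values())
    for delta in sorted(deltas):
        print(f"delta {delta:+d}: {deltas[delta]} stories "
              f"({100 * deltas[delta] / total:.0f}%)")

# Illustrative data: (estimated, actual) pairs gathered over a few sprints.
sizing_report([("S", "S"), ("M", "M"), ("M", "L"), ("L", "L"), ("S", "S"),
               ("M", "M"), ("L", "M"), ("M", "M"), ("S", "S"), ("M", "M")])
```

With data in that shape, a result like “about 70% spot on, the rest within one size” falls straight out of the delta distribution.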

Buggy or Not Buggy?

In this same retrospective we did a bug analysis to see if we could learn something from our bug count in terms of improvements.  We’ve done this twice already in the last year, and both times it was useful for generating ideas.  When we did the first one we were in a bit of a state and were trying to understand exactly why.  I was interested to see where this one would go.  Again, this was driven by the team wanting to take another look.

Analysis: We track all our internal bugs as red stickies on the board.  This makes them visible, and we don’t waste time dealing with bugs in Jira; instead we use the visible board that the entire development team can see.  To do a bug analysis we gather the issues from the last X sprints, create several buckets we think the bugs could fall into (technical debt, UI, functionality, DB, process, etc.) and add or remove buckets as the process continues.

We then do the equivalent of affinity estimation / magic estimation for bugs.  Each team member gets a selection of bugs and places them in the relevant buckets – or creates a new one.  We skip the usual silent placement, since some of the bugs are a little opaque in their wording (or handwriting) and you sometimes need to ask for a reminder about an issue.  After the first pass everyone sees where everyone else put the bugs and decides whether they agree – and so things move around.

Once equilibrium has been reached, we break the buckets into critical and non-critical and generate insights from what the data is telling us.
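
For what it’s worth, the final count is simple to script if you capture the stickies afterwards.  A rough sketch, assuming each bug has ended up with a bucket and a criticality flag (the bucket names and data here are made up for illustration):

```python
from collections import Counter

# Each bug after affinity grouping: (bucket, is_critical).
bugs = [("UI", False), ("UI", False), ("functionality", True),
        ("technical debt", False), ("UI", True), ("DB", False),
        ("process", False), ("functionality", False)]

critical = Counter(bucket for bucket, is_crit in bugs if is_crit)
non_critical = Counter(bucket for bucket, is_crit in bugs if not is_crit)

for bucket in sorted(set(critical) | set(non_critical)):
    print(f"{bucket}: {critical[bucket]} critical, "
          f"{non_critical[bucket]} non-critical")
```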

Outcomes: As it turns out, a lot of our issues are UI issues – but many of those are really change requests that we were still marking as bugs.  We resolved to stop doing that going forward, so that real bugs are clearly distinguishable from changes we’re happy to make.

We also determined that we weren’t learning anything new.  One outcome of a previous bug analysis was to do more developer testing to ensure the quality of a story.  This came up again and the team resolved to do more of it – but that is a less quantifiable action: test harder / test more.  Sure, we’ll try.

End result: We determined that the number of bugs coming up in a sprint was at an acceptable level.  If none were being found, that might itself be worrying – what would it mean?  The team were dealing with the bugs and things weren’t being left undone.

We also decided that the more interesting exercise would be a bug analysis of the bugs that make it into the wild after our next release.  Those are the bugs we did not find – rather than the ones we did – and it might be far more revealing to analyse those holes.  The things we have to look forward to 🙂

What? No Actions?

So the retrospective ended with the team feeling that the world was actually pretty good.  This was great.

After the retro the devs sat down and generated actions to get our unit testing in the current module working for the next sprint.  That has been an ongoing retro action that isn’t getting traction: some limitations take time to work around, and the team don’t seem to be solving them with baby steps – but more on that later.

So yes – still actions in this self-organising team 🙂

Scrum Gathering inspiration

I had a great Scrum Gathering this year.  In particular I’m really happy that we did lightning talks, as the two things that most sparked my learning were both lightning talks: Cara’s on setting achievable goals for retros and Carlo’s on reviews.

Setting achievable goals
Retros rock.  But sometimes they become stagnant.  Sometimes that is because the same stuff comes up all the time, but often it is because nothing is actually getting done.  Cara gave a really great demonstration of how to make retro actions SMART – specific, measurable, achievable, relevant and time-bound – and therefore meaningful and actionable, and even how to raise the impediments to getting an action done so that they are apparent up front.  Take a look at the video at http://vimeo.com/29257791.  If your retros aren’t getting results, really do consider giving this a try.  It is obvious – but most of us aren’t doing it.

Your sprint review sux
Reviews have troubled me in the past.  We’re better at them now, but for a long while they weren’t very effective, so I’m interested in practical ideas around reviews.  I sat in on a Scrum Clinic topic about making reviews more interesting and got a couple of ideas to try – but Carlo’s lightning talk on reviews offered yet another way to look at them.  I really like the concept of examining the product with your product owner, as a team, looking for ways to make improvements rather than just looking at what we did in the last sprint.  We’ve almost been doing this in our last couple of reviews and I’m looking forward to making it more and more relevant.  Thanks Carlo for the eye opener.  And of course the idea that maybe the CEO’s involvement in the review should be something else – like a demo every 2 or more sprints, separate from the reviews.  The sprint review should be a safe place for the team to discuss and agree on what to do next.  Check out the video at http://vimeo.com/29253187.

Other thoughts…
There are two other things that have been bugging me for a while – from opposite ends of the spectrum of agile challenges: what agile software development practices really are, and what really makes a company agile.  Unfortunately I only got tastes of these topics at the Gathering, and nothing really meaty to chew on.

I sat in on a couple of the software dev talks looking for insight and inspiration, and held a couple of discussions that gave me some ideas.  But for me this is one of the harder aspects of agility.  Scrum just organises the way you work.  Agile software development practices – XP, DDD, real OO design and architecture aimed at achieving agility in code – actually change the way you write code and even how you think about doing your job.  And developers are a lot pricklier about changing that than about the organisational style around it.

The other end of the spectrum is what makes a company agile – and what makes your company not agile.  Boris’s management by constraints struck a chord with me.  I sat in a couple of other talks but wasn’t struck by big new insights.  Since then I’ve been reading some of Esther Derby’s blog and some ideas are forming – both a real problem statement and a hypothesis on how to solve it.  I’ll write more on this another time as my mind ticks over it.

Overall this was the best Gathering in Cape Town that I’ve attended.  I’ve been to all three – and I’m not at all biased by the fact that I was involved in organising this one.  Really! 🙂  I got a lot out of it.  So thanks to all those who helped make it an awesome success.