Dealing with Bugs

The diagram below shows the basic flow I generally follow when deciding whether a bug should be fixed now or not.  It is pretty obvious – and far easier to draw than to write out.

Bug flowchart

There are of course some subtleties – the devil often being in those details.

Analysis – this is all about analysis.  Sometimes it is easy to do; sometimes someone needs to do some digging to work out the details.  That doesn’t change the decision flow – it just means you might need to do more work to get to the decision.

Bug definition – internal issues generally really are bugs, since the team usually communicates well about what counts as a feature or a change and how to handle those.  External issues could be changes, features or plain old bugs.  I assume that analysis is already done or becomes clear during the process.  No matter what type of issue it is, they are all subject to the same decisions.  It is often easier to put a feature or change straight onto the backlog for later planning.

The problems with not fixing bugs now:

  • One small bug not fixed now isn’t a problem. 100 is.  Don’t naively grow the debt by only looking at the current issue in isolation.
  • Not fixing 10 bugs may signal a more rapid decline, as it gets easier and easier to justify not fixing the next thing.
  • An issue you choose not to fix now may point to a bigger underlying problem.
  • Hanging on to issues that you choose not to fix immediately grows the backlog with low priority issues.  Generally these low priority issues never get prioritised because… they’re low priority!  Be wary of simply growing a long list of things to remember that you’ll always ignore.
  • Make sure that if you decide not to fix something, it doesn’t keep coming back.  This is even more pertinent if the team finds it and the PO decides to fix it later or never, but then someone else finds it nearer your release date – for instance in a hardening sprint – and now it needs fixing.  That shows a problem with the initial decision and will cause frustration for all.

Points for bugs – if I plan a bug for a future release I generally consider it a change and want it to be sized.  We’re sizing the effort for the release, so we might as well get as much data as possible.  If you’re fixing the bug before the release then at most size it to determine how much you can take into the sprint, but don’t take points for it: the bugs you create are part of your velocity, and the real velocity should drop because work that wasn’t properly done is being done again.  The team should acknowledge that.
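As a rough back-of-the-envelope illustration of why this matters (these numbers are invented, not from any of my teams), not pointing your own bugs makes the quality cost show up in the velocity itself:

```python
# Hypothetical sprint: the team committed to 30 points of planned work,
# but some of that capacity went into re-doing work that "was done".
committed_points = 30      # sized stories planned for the sprint
rework_effort = 6          # effort (in points) spent fixing our own bugs

# Bug fixes earn no points, so the reported velocity drops - which is
# exactly the signal we want: the cost of poor quality becomes visible.
reported_velocity = committed_points - rework_effort

print(f"Reported velocity: {reported_velocity} of {committed_points} points")
# Reported velocity: 24 of 30 points
```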

Learn from the feedback – as in any agile environment, look at what is happening and retrospect on why the issue happened and how we could avoid it in the future.  What documentation or details could have been shared with the client so that they could have known that what they were asking for isn’t a bug?  What did we miss in our own process that caused it to be a bug?  It is too easy to say it was some once-off thing, but many once-off things can share the same root cause – or the same solution.  Dig a little deeper if everyone simply thinks “well, that was due to this one deployment-specific issue that we’ve now gone through – it will never happen again”.  Because it probably will happen again – in some other form.


Retrospective Analytics – Sizing and Bugs

A Problem with Size?

In How do you really know enough I mentioned that one of my teams was gathering data around sizing.  This came out of a retrospective 4 sprints back where the team were a bit frustrated about their sizing.  They decided to track the information to determine if there was anything to learn.  In our last retrospective we looked at this data.

Problem: The team felt that their relative sizing wasn’t accurate enough and that they wanted to take a look at it.

Background: We had just gone through a process of doing a bunch of documentation for a web service.  I suspect this, along with one or two other outliers, was a key reason for thinking there was an issue.  Basically the team were doing activities they weren’t used to and were underestimating the impact of generating the Sandcastle documentation – both the new technology they were adopting and the back-and-forth communication that always seems to occur when getting a document to be good enough.  Fundamentally the DoD for the documentation wasn’t as clear as for coding a story and required a lot more communication overhead.

Solution: Gather data and analyse it.  One team member signed up to gather the data and when the team felt they had enough we would then analyse the data.

Outcomes: We analysed the data in our last retrospective.  The team looked at the data and determined that across all the sizing for the last 4 sprints about 70% was spot on, and the remainder was only one size down or one size up.  There were no outliers beyond a one-size difference.  This was interesting, and the team could now see that they were actually doing a pretty good job.
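For what it’s worth, a minimal sketch of that kind of tally – with made-up numbers, since the team gathered theirs by hand – could look something like this:

```python
from collections import Counter

# Hypothetical sizing records: (estimated size, size the team felt it
# really was once the work was done), on a relative Fibonacci-ish scale.
scale = [1, 2, 3, 5, 8, 13]
records = [(3, 3), (5, 5), (2, 3), (5, 5), (8, 5),
           (3, 3), (2, 2), (5, 8), (3, 3), (1, 1)]

def steps_off(estimated, actual):
    """How many positions apart the two sizes are on the scale."""
    return scale.index(actual) - scale.index(estimated)

deltas = Counter(steps_off(e, a) for e, a in records)
total = len(records)

for delta in sorted(deltas):
    label = {0: "spot on", -1: "one size over", 1: "one size under"}.get(
        delta, f"{delta} sizes off")
    print(f"{label}: {deltas[delta] / total:.0%}")
# one size over: 10%
# spot on: 70%
# one size under: 20%
```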

So going back to the original problem – the team determined that the root cause of the sizing frustration was more the outliers: new technologies or new types of work, such as adopting a tool like Sandcastle, or doing work you’re less used to sizing, such as writing documentation.  This was also work we didn’t do much of in the sprints we were measuring.

End result:  We’re pretty good at sizing and really shouldn’t be sweating it.  But we need to be aware of underestimating the effort for work we aren’t familiar with – and it only really matters if we also need to provide date-based planning on the velocity/commitment that we hope to achieve.

Buggy or Not Buggy?

In this same retrospective we did a bug analysis to see if we could learn something from our bug count in terms of improvements.  We’ve done this twice already in the last year.  The first two were useful for generating ideas.  When we did the first one we were in a bit of a state and were trying to understand exactly why.  I was interested to see where this one would go.  Again this was driven by the team wanting to take another look.

Analysis: We track all our internal bugs as red stickies on the board.  This makes them visible, and we don’t waste time dealing with bugs in Jira but rather on the visible board that the entire development team can see.  To do a bug analysis we gather the issues from the last X sprints, create several buckets that we think the bugs could fall into (technical debt, UI, functionality, DB, process, etc.) and add or remove buckets as needed while the process continues.

We then do the equivalent of affinity estimation / magic estimation for bugs.  Each team member gets a selection of bugs and places them in the relevant buckets – or creates a new one.  We don’t do the silent part, since some of the bugs are a little opaque in their wording (or handwriting), so sometimes you need to ask for reminders about the issues.  After the first pass everyone sees where everyone else put the bugs and decides whether they agree – and so things move around.

Once equilibrium has been found we then break the buckets into critical and non-critical and generate insights as to what the data is telling us.
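If you ever wanted to move that final tally off the wall, it is a few lines of code – a sketch with invented buckets and flags, not our actual data:

```python
from collections import Counter

# Hypothetical sticky notes after the affinity pass: each bug has been
# placed in a bucket and flagged critical or not by the team.
bugs = [
    {"bucket": "UI",             "critical": False},
    {"bucket": "UI",             "critical": False},
    {"bucket": "functionality",  "critical": True},
    {"bucket": "technical debt", "critical": False},
    {"bucket": "DB",             "critical": True},
    {"bucket": "process",        "critical": False},
    {"bucket": "UI",             "critical": False},
]

# Tally per (bucket, criticality) pair to see where the weight lies.
tally = Counter((bug["bucket"], bug["critical"]) for bug in bugs)

for (bucket, critical), count in tally.most_common():
    severity = "critical" if critical else "non-critical"
    print(f"{bucket:<15} {severity:<13} {count}")
```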

Outcomes: As it turns out we have quite a lot of UI issues – but many of them are really changes that we were still marking as bugs.  We resolved to stop doing that going forward, so that the distinction between real bugs and changes we’re happy to make is more obvious.

We also determined that we weren’t learning anything new.  One of the outcomes of a previous bug analysis was to do more developer testing to ensure the quality of the story.  This was brought up again and the team resolved to do more of it – but that is a less quantifiable action: test harder, test more.  Sure, we’ll try.

End result: We determined that the number of bugs coming up in a sprint was at an acceptable level.  If none were being found, that in itself might be worrying – what would it mean?  The team were dealing with them and things weren’t being left undone.

We also determined that the more interesting thing would be a bug analysis of the bugs that make it into the wild after our next release.  Those are bugs we did not find – rather than the ones we did – and it might be far more interesting to analyse those holes.  The things we have to look forward to 🙂

What? No Actions?

So the retrospective ended with the team feeling good that actually the world was pretty good.  This was great.

The devs then sat down after the retro and generated actions to try to get our unit testing in the current module working for the next sprint.  That has been an ongoing retro action that isn’t getting traction, due to some limitations that take time to work around, and the team don’t seem to be solving these with baby steps – but more on that later.

So yes – still actions in this self organising team 🙂