Circling in on XP practices

Almost a year and a half ago I started a journey with a team.  I wanted to embed myself in XP practices.  I wanted to learn how things looked so I could maybe one day be better at helping other teams adopt some of those practices. Below are some observations about my own learning and the team's adoption.

You may notice the use of "I wanted". The team didn't necessarily want any of this.  They were simply writing software, getting frustrated with the tests they were writing, but happy enough.

Some of the things that I’ve learnt

– TDD is hard

– TDD isn't adopted unless a 1:1 mentoring process happens on the real code being worked on.  Understanding is very important, but showing how to apply it on real code that is part of the sprint delivery is even more important.  Showing that someone else (who hopefully has an idea of what they are doing) also runs into the pitfalls of a legacy code base not designed for testability, and showing some patterns to work around those problems, all while coding for a real deliverable on the system, helps embed understanding and buy-in to trying.
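
One of those workaround patterns, sketched in Python with invented names (the real code base isn't shown here): introduce a seam so the logic can be tested without its real dependency.

```python
# Hypothetical legacy-style code: report logic hard-wired to a database
# call.  Moving the dependency behind a parameter (a "seam") lets a test
# supply a stub, so the logic runs without any real infrastructure.

def fetch_orders_from_db():
    # Stand-in for a real data-access call; unreachable in tests.
    raise RuntimeError("no database in tests")

def total_revenue(fetch_orders=fetch_orders_from_db):
    # The default keeps production behaviour; tests pass a stub instead.
    return sum(order["amount"] for order in fetch_orders())

def test_total_revenue_sums_amounts():
    stub = lambda: [{"amount": 10}, {"amount": 32}]
    assert total_revenue(fetch_orders=stub) == 42

test_total_revenue_sums_amounts()
```

The same idea goes by many names (dependency injection, extract-and-override); the point is that the test never touches the database.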

– TDD isn’t adopted unless at least one person champions it and the majority of the team are willing to be influenced and open to try.  One person championing it and no one else caring or trying will fail.  The more people who buy in and can help pair and show the process the higher the probability of success.

– Pairing while continuing to adopt TDD is very useful.  Your partner helps keep you honest.  You can discuss how to test what you’re trying to do.  You have a shared cognitive experience and learn together.

– Code coverage can help you understand whether someone is trying to do TDD.  If there is none, it is obvious that TDD isn't being done, and that can spark conversations. (Of course, exact coverage numbers aren't important; what matters is the difference between none, some and lots.  And anything more than none doesn't mean TDD is happening, just tests, but that is better than no tests!)

– Pairing is hard.

– Pairing with juniors who know the domain can be very useful.  Having two minds on the design and the requirements, to validate understanding, to think out loud and even model out loud, can be very valuable.  Pairing with someone completely new can, however, be frustrating for all sides.  Then it becomes pairing for learning, which is a different way of working again.

– It can be too easy as an architect to not pair as there is other non-sprint work to be researched, meetings to attend, code reviews to be done.  I personally need to pair more and find ways to not do “other” work.  Or to do it as a pair.

– Design is hard.

– Better designs will emerge from the need to test, because testing forces you to decouple.   More cohesive designs may emerge if you're careful, but that requires understanding what cohesion means and looks like.  Doing TDD doesn't lead to good design.  Being able to see the patterns in the code and to extract them into a good design leads to good design.  TDD gives you the time and freedom to do that.  But if your team can't see it, then TDD will only help you with your test coverage…

– Conversations about design will emerge and better architectures can be adopted that keep the code simpler.  Keeping the code simpler makes it easier for everyone.  The design remains simpler.  It is easier to not make a big mess as you’re watching for it getting complicated… if you’re designing.

– We don't do continuous integration.  Most people in the team don't care about a build on commit, and no one sees a burning need to deploy automatically. I haven't pushed it, as no pressing problems exist that would obviously be fixed by introducing CI.  I'd rather find places where the need can be more easily validated and leverage those.  That said, a recent deployment showed up the desire for automated deployment to get much faster turnaround on testing a new build.  Maybe that is a place to get people more interested, and once it exists the understanding and desire to keep it going may grow.
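
For what it's worth, the smallest step towards build-on-commit doesn't even need a CI server.  A sketch of a git post-commit hook; the build command is an assumption, substitute whatever builds and tests your system:

```shell
#!/bin/sh
# Sketch of a .git/hooks/post-commit hook: the simplest "build on commit"
# is running the build and tests after every commit and reporting failure.
# run_ci takes the build command as arguments; "make build test" below is
# a hypothetical stand-in for your real build.
run_ci() {
    if "$@" >/tmp/ci.log 2>&1; then
        echo "build OK"
    else
        echo "BUILD BROKEN: see /tmp/ci.log"
        return 1
    fi
}

run_ci true   # stand-in for e.g.: run_ci make build test
```

Crude, but it makes the feedback loop visible, which is usually what creates the appetite for real CI.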

How did we get here?

Over the last almost year and a half I've shared my views on pairing, TDD, refactoring and agility in code, and on principles like the simplest thing, fast feedback and emergent design.  Some of them the team have chosen to try.  Some ideas have met very negative reactions.  I haven't pushed any too hard.  I've led by example, doing it myself. I've put effort into explaining the reasoning and thinking behind the ideas.  In the process I've gained a deeper understanding of why I do things, and the ability to question the way I do things at all times, based on a desire to optimise for the agile values and principles, not to worship the agile practices.

As a result we've circled in from testing sort-of, after the fact, because we were told to; around to understanding how to test, what to test, and how to test first, though not necessarily testing first; to pairing, the practice most quickly adopted, and with me driving it less; to now having almost everyone TDDing.  Those who aren't doing that, or pairing, are starting to visibly fall behind in skills.

I do not think any of this would have been possible without someone showing and explaining from an embedded and trusted position in the team.  Outside coaching or training doesn't stick: I've talked to several people who've been trained in TDD and don't do it at all.  I tried for a couple of years to coach people into TDD as a ScrumMaster and PM, with many attempts and zero success.  Embedding and mentoring can succeed.  But it takes a long time.

Where to from here?

I don’t know how much further this team will manage to go into XP or any other agile practices.  But I do already know they have come further than I imagined a year ago.  And that is awesome.  Hopefully we’ll continue to circle in on good XP, better agile practices and possibly – more importantly – get some really good design going as well.  The scene is now set to allow that to happen.


The same things don’t always work

Over my years of getting to know Scrum and the agile way of working, I have experimented with a lot of things.  I have found things that didn’t work and I have found things that did.  I’ve kept the things that did work and tweaked them as needed.  They were good tools for me and informed my thinking around how I succeeded using Scrum.

Then I moved jobs.  I took my toolset with me.  And I tried to use my same logic and thinking.  And people heard my words and too often for my liking interpreted them to mean something completely different.  It was very educational and taught me very strongly that The Same Things Do Not Always Work.

Context is King.
If you've built up context around certain ways of working, then people know how you got there, because they were there with you.  They understand.  They emote.  But when you bring those ideas, fully formed, into another organisation, one that has strangely not lived inside your head for the last couple of years, people don't necessarily immediately understand or emote.  And this isn't their fault…

My failure was in not understanding how my tool set came to be: it held the tools I had chosen by applying the principles I understood.  To use the same tools at a new organisation, I had to first back away a little and bring out the principles again, to see whether those same tools would still uphold those principles in the new organisation.

A simple example: Ship when you’re ready
For more than a year I had been working with a team delivering software which was officially shipped on days that weren’t the sprint boundary.  The team were fine with this.  We always aimed to finish before the sprint that we shipped in.  If we could plan it on the boundary we would, but sometimes it didn’t work out that way – and it didn’t matter.  We were completely fine shipping when we were ready – instead of waiting for an arbitrary date boundary for the sprint end.  Everyone was good.  It worked well.  It felt obvious.

Obviously, if there is a large amount of work to do and you ask the team to commit, they can't commit to anything shorter than a sprint length.  But if we're done, we'll ship.  Why wait?

And then it didn’t work…
Fast forward to a new organisation.  We have some work to complete.  We have a ship date.  So we discuss it and, using the previous pattern from my tool set, I suggest: if we're ready we'll ship; if we're not, we won't.  Somehow this was interpreted down the line as: we are going to change the sprint length to one week and people will deliver by the deadline or else.

The simple "if we're ready we do it; if we're not, we don't" turned into an anxious change of sprint cadence and a rush to complete.  But that was the organisation's interpretation of what was, for me, a clear and obvious way of working.

Which made me think
When going into a new organisation – go back to basics.  Say no to all the broken rules – until you know which ones you can safely break without someone abusing the situation.

Another example: Velocity
For several years I had been using velocity and planning stories in a reasonably reliable fashion. The teams I had worked with weren't highly passionate about velocity, but they were focused on the work at hand, usually knew what the next sprint or two held, and were willing to push their capacity to try to achieve more points in a sustainable way.  The combination of measurement (to aid planning, and replanning every sprint) with knowing what you're doing in the short term helped ensure that the team was both productive and reasonably predictable.  This was great for building trust with stakeholders who had legitimate concerns about past delivery, and it also enabled us to go faster.
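
The mechanics being described reduce to simple arithmetic.  A sketch in Python, with invented numbers rather than data from either organisation:

```python
from math import ceil

# Velocity as used above: average story points completed over recent
# sprints, replanned every sprint.  All numbers here are made up.
completed_points = [21, 18, 24, 19]      # points done in the last four sprints
velocity = sum(completed_points) / len(completed_points)   # 20.5

remaining_backlog = 130                  # points still to deliver
sprints_needed = ceil(remaining_backlog / velocity)        # 7

print(f"velocity {velocity}, roughly {sprints_needed} sprints left")
```

The value isn't the number itself but that it can be recomputed every sprint, which is what made the replanning conversation with stakeholders concrete.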

And then it felt pointless…
Fast forward to a new organisation. No sizing. No sprints. So I enthusiastically said we should try a little Scrum.  So now we do a little Scrum.  But we don't use velocity or plan beyond the current sprint.  And it works.  And no one is actually worried.  And the stakeholders are okay with everything.  And everything is roses.  So why measure?  And why plan?  When you can be agile and make it up each sprint, because the work is still known in a reasonable fashion, and we're as successful as is required of us.

The tool set that I brought with me didn't result in the changes I anticipated, and in fact possibly adds little value right now. I suspect many reasons for that.

Which made me think
Scrum isn't just a framework.  Context remains King.  You can't just walk in and apply your learning from another context to a new one without understanding that context and working with the people in it.  That doesn't mean you can't ask questions and make suggestions, but do just that, rather than judging too early.   Agile is about principles; these lead you to the learning and the tool set.  Always go back to the principles and the spirit and validate, particularly when approaching a new team or new organisation with your existing experience.  Unless, of course, you have the remit to cause revolutionary change.  In which case, go wild!

And there is a point
My failures over the last year have fuelled much introspection and learning.  I've opened myself up to questioning my understanding of Scrum and agile.  And I've seen very clearly that no one size fits all.  I have found this a powerful learning experience.  Seeing what you know works not working any more deepens one's understanding of what it is that you're really doing.  I'm thankful for these new experiences, which have allowed me to grow a deeper understanding of what has worked by understanding why it hasn't worked as well.

I now hope I remember not to apply my tool set too soon in future, in the hope that I'll apply it more effectively with a deeper understanding of the actual context.  Or perhaps I'll find a more universal tool set to apply.

Exceptional Teams need Exceptional Practices

The November SUGSA event featured Austin Fagan talking about what makes an exceptional team.  A collection of exceptional people doesn’t necessarily make an exceptional team – and Austin posited that the usage of exceptional practices is what can turn a collection of individuals into an exceptional team.

Having experienced a team that was definitely more than the sum of its individuals, I can emote with the sentiment.  Equally, having worked with some exceptional individuals in the distant past, there wasn't a great deal of real teamwork there.  Collaboration and other solid development practices make a huge difference.

The evening included a sequence of "does your team do X" slides: sit down if no, stand up if yes.  I was glad that I could at least stay standing for many of the soft skills: empowered team, collaborating, working as a unit, etc.  But sadly, as always, the software development practices like pairing and TDD are lacking in adoption, in my personal experience.

The evening proceeded with a session where we broke into groups to discuss a topic.  We landed on pairing, looking at what stops you from starting to pair today, as well as how you keep pairing once you have started.

Pairing
Pairing is an interesting thing to me in terms of adoption, and it was refreshing to have the conversation with some dedicated agile people all coming to the same conclusion.  Our summary was something along the lines of "It could be awesome, we don't really know, so we should try it to find out".

The standard pros came up: continuous knowledge sharing, no dependency on a single person, continuous code review (or at least code discussion and justification, which should make for better code and sanity checking), and potentially more productivity (you can't avoid work by browsing the internet as easily in a pair).

The standard cons also came up: it wastes time, you need to synchronise office hours for pairs, some developers can only work alone, how do you know it works, and we're changing the way you work with no evidence that it really does work.

Evidence please…
Sadly, in the world of development practices there isn't a lot of public evidence for software practices such as pairing, just a lot of anecdote that it is the Agile Way.  I believe there is some evidence on code reviews, which lends some credit to pairing being beneficial, but not much that anyone cites emphatically to prove that pairing delivers better software: faster, more bug free, or better on some other quantitative measure, in comparison to not pairing, or pairing just some of the time.  Though I suspect that in the world of delivering business software not many are going to provide the data for the experiments.  I know there is some evidence for certain design practices, such as DDD, spreading the logic, and hence the complexity, across all the objects, thereby reducing the maximum complexity of any one object.

Pairing adoption is a tough one.  I never did much pairing in my time as a developer or architect.  I now wonder if I should go back and learn how, so that I can be more emphatic about it one way or another.  In fact, I've never experienced anyone really pairing, by the definition of doing it all day, one machine between two people, writing code (and tests).  Obviously developers sometimes pair to solve hard problems, or when there is a slog of work that is more effective with one person typing and one person following the documentation for implementation (read: a really big CRUD screen), or when a machine is acting up and there is nothing else to do. But I haven't worked with any developer who actively wanted to pair on a regular, continuous basis.  I've met many who vehemently dislike the idea.  And some who've tried it and discarded it.

Pairing Adoption
We dug into this a bit and all agreed the biggest issue around pairing is "What is pairing?": what are the pairing patterns, and how do you do it right?  I've recently read this InfoQ article, which talks about pairing as a pattern.  It is an interesting read if you're trying to think through the options, but it is by no means the answer.

Doing it wrong will be detrimental to your ability to adopt pairing, as it gives those who "know" it isn't going to work more ammunition to inspect and adapt away from pairing.  The key things seem to be training and experience: working with an experienced pair programmer to understand how it works.  That is a tough one, as I suspect it takes a reasonable amount of time immersed in the performance of pairing before the steps become automatic. Pairing seems to be one of those things, like TDD, that is possibly a performance art, one you can only learn properly by seeing it working properly, in order to reach those "Aha!" moments.

TDD
TDD on the other hand, or at least automated testing, is a lot easier to sell.  I've spent a bunch of time pondering pairing, TDD and other practices.  Pairing, as described above, I find harder to sell. TDD, or BDD, test first, even test last, I'm more than confident singing its praises and naming the high-level problems it solves.  I want to be able to release without doing a full manual regression test over the next month: how do I do this?  I'd like to be able to do a quality release every two weeks: how do I do this?

I worked on a system that we released to production every two weeks, with no testers and a full suite of automated tests; it was incredibly successful at deploying often.  At that organisation I mostly tested last, but the testing was always very valuable.  I would do it test first if I were to do it again, as I think it might be faster than test last, and the ability to test your design as you write it is appealing.  I did a little test first at the start of my time at my current organisation; it let me determine how I wanted to call the code, and that defined how I built the outer bits.  That was very valuable.  It didn't last, due to my lack of dedication and shifting roles, though the original tests I wrote were still being used to refactor that code two years later, saving the developers' bacon.
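
That "test first to shape how you call the code" idea can be sketched briefly.  In Python, with invented names (not the actual system): write the call you wish existed, then implement just enough to satisfy it.

```python
# Step 1: write the test first, pinning down the API you *want* to call.
# The function doesn't exist yet; the test is the design act.
def test_parse_config_reads_the_way_i_want_to_call_it():
    cfg = parse_config("timeout=30;retries=2")
    assert cfg == {"timeout": "30", "retries": "2"}

# Step 2: the simplest implementation that honours the API the test chose.
def parse_config(text):
    return dict(pair.split("=", 1) for pair in text.split(";"))

test_parse_config_reads_the_way_i_want_to_call_it()
```

The calling code got designed before the implementation, which is exactly the "outer bits" effect described above.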

Automated testing adoption
The key failure I've found in adopting automated testing, TDD or otherwise, is the many little gotchas when the system isn't as testable as you'd like.  In the organisation where I succeeded with this we were using Perl and could simply modify functions on the fly in tests.  It was awesome, if really dodgy from a purist computer-science point of view.  In .NET this is harder.   And if you don't plan for testing in your architecture up front, retrofitting it is a pain.
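
As an illustration of what "modifying functions on the fly" buys you, here is a rough Python analog using unittest.mock (the Perl and .NET specifics aren't shown, and all names are invented):

```python
from unittest import mock
import types

def real_exchange_rate(currency):
    # Stand-in for a call that would hit the network in production.
    raise RuntimeError("no network in tests")

# Hypothetical "production" module holding the dependency.
billing = types.SimpleNamespace(exchange_rate=real_exchange_rate)

def invoice_total(amount, currency):
    return amount * billing.exchange_rate(currency)

# The test swaps the function out for the duration of the with-block,
# then mock restores the original automatically.
with mock.patch.object(billing, "exchange_rate", return_value=2.0):
    assert invoice_total(21, "EUR") == 42.0
```

When the language or runtime makes this kind of substitution hard, the testability has to come from the architecture instead, which is precisely the retrofitting pain described above.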

Fundamentally, my experience with automated testing is that developers can be convinced of the value and get excited about getting it done and including it in every build.  But each time you hit another large wall in the way of solving a new type of problem in a sustainable, non-fragile way, the enthusiasm comes closer to falling away.  The frustration seeps in and the dedication seeps out.  That is sad.  And it is what makes TDD, and any type of automated testing, hard to sustain.

Are we doomed to fail at these practices?
I don’t think we’re doomed to fail at these practices.  But I do think that we need to think long and hard as to why we’re doing some of them so that we can continue to keep the enthusiasm and dedication up in doing them.

Fundamentally, the software developers I've worked with over the years, myself included, have a lot to learn about how to make these practices really work for them and the organisations they work for.  I'm hoping someday I'll have learnt some more and will be able to help others understand and justify to themselves why these are awesome practices.  I'm looking forward to that day, as then I should be working with exceptional practices in exceptional teams, hopefully also with really exceptional people.  And it will be beautiful.

Interesting Times – A Positive Story

In June 2010 I took over a project with a reasonable amount of trepidation.  There were a couple of reasons for my reservations, not least my search for what I wanted to be doing; taking this on would be a commitment I wasn't sure I wanted to make.  I made the commitment, and over a year later I'm incredibly happy with what we have achieved.

When I took over the project, several other project managers with far more experience had not managed to make a real success of it.  Technically it wasn't beautiful. After taking on a large client, performance issues had arisen which took several months to fully bed down; I took it over at the tail end of this.  And other issues had led to the client being at a low point in their trust of the team.  They did have great faith in certain individuals, but that was part of the problem.

The first thing we started doing was introducing Scrum.  This exposed a couple of problems:

  1. The team “knew how to do it all” already.  (They didn’t)
  2. Some individuals couldn’t break anything down beyond 3 days – and objected to questions around this.
  3. It was almost impossible to lock down scope for a week – let alone 2 weeks so sprints were difficult.  There was a lot of smoke but few real fires to put out.
  4. It wasn’t clear what exactly needed doing to deliver a quality release.
  5. We couldn’t deliver a quality release due to lingering quality issues from an earlier period.
  6. We had no PO.
  7. The client had experienced working with me on another Scrum project and initially assumed that it was me that was important – not the process.  They were impatient for success.

Our first big release after the performance issues were fixed was a disaster.  One month of work to "finish" the last parts of a module generated two months of work to get it really done.  This was largely because those who in theory knew what needed to be done didn't actually know, both in the team and at the client.

Jump forward 6 months. Some things changed.  Some people changed.  I took on the role of SM and PO in order to force the backlog generation and to get a handle on the chaos.  We made a quality release.  The team was much more enthusiastic and fired up.  And for the rest of the year we’ve gone from strength to strength.

How was this possible?

a. My management supported me.  I beat the PO drum over and over and over again.  We needed one.  I would do it in order to achieve a manageable backlog, but we needed one from the client.  In February, after that first quality release, we got our PO, and this has taken us from strength to strength and proven the value of having a known backlog to groom and work on.

b. We focused on what we knew we had to do and made sure the client knew what we had to do.  We embraced change, but made sure that the client understood change wasn’t free, dates or other scope could be affected.  We were completely transparent with this.

c. The new team became a real team and the team started to deliver greater than the sum of the individuals.

d. We removed the ambiguity.  Initially this was by going a bit more big design up front in order to force the client to stop randomly changing direction and blaming the lack of delivery on the team.  This is something we’re now looking at changing in order to remove some of the waste around this.

e. The team had quality issues that they worked hard at resolving – through bug analysis and retrospecting every time there was a problem.

Moving forward, we started unit testing properly.  We haven't made everything in the system unit-testable, but we've made a good start and keep opening up new things to try or fix.  And the team is positive.  They have tackled quality in deployments and come up with solutions to ensure we don't miss things.  They have been proactive and keen to get better.  The team takes pride in solving the issues raised in retrospectives.

We have been able to move to two-week sprints.  We have been able to manage support effectively and reduce it to a minimum.  We have a full-time PO, and we can plan reliable deliverables with known scope, given enough detail and time.  We have become solidly predictable, with high quality.

Fast forward to last week. I've just come out of a client visit. They are keen to keep pushing the envelope: to become better, faster, more agile, more productive.  They are keen to try more frequent deployments and to force the team to think about how to achieve that.  They want the flexibility to get software live faster, safely, if needed.  They are keen for the problems with deploying more often to surface so that we can confront them.  They trust that we want to do better, and they don't want to limit us from getting better.  And they know that if we get better we'll get faster, once we've learnt how to do some of these things, like being able to deploy every month.  That may mean needing a full test suite and investing in it.  It will be interesting to see how that trade-off comes to a head.

We're also looking at making ourselves more effective by embracing change more: not worrying as much about the specs and sign-off that were needed to rebuild trust, but starting to be more agile in generating requirements together as a team with the PO.  Of course, embracing more ambiguity means less rigid date-based plans, but deploying often will help us not need those at all.

This is all quite a step from where we started over a year ago.  We now have a client encouraging and supporting the team to look at how better to be agile – to solve some of their real world problems that they are seeing.  They trust that we can achieve more than we’ve already achieved.

There are interesting times ahead for this team.  It is a pity I won’t be around to see this next year.  But I’ll be watching from the side-lines cheering them on to be more agile and to push the envelope of their capabilities.