Circling in on XP practices

Almost a year and a half ago I started a journey with a team.  I wanted to embed myself in XP practices.  I wanted to learn how things looked so I could maybe one day be better at helping other teams adopt some of those practices. Below are some observations about my own learning and the team’s adoption.

You may notice the use of “I wanted”. The team didn’t necessarily want any of this.  They were simply writing software, getting frustrated with the tests they were writing.  But they were happy enough.

Some of the things that I’ve learnt

– TDD is hard

– TDD isn’t adopted unless a 1:1 mentoring process happens on the real code that is being worked on.  Understanding is very important.  But showing how to apply it on real code that is part of the sprint delivery is even more important.  Showing that someone else (who hopefully has an idea of what they are doing) also runs into pitfalls in how a legacy code base is designed for testability – and showing some patterns to work around those problems – all while coding for a real deliverable on the system, helps embed understanding and buy-in to trying.

– TDD isn’t adopted unless at least one person champions it and the majority of the team are willing to be influenced and open to try.  One person championing it and no one else caring or trying will fail.  The more people who buy in and can help pair and show the process the higher the probability of success.

– Pairing while continuing to adopt TDD is very useful.  Your partner helps keep you honest.  You can discuss how to test what you’re trying to do.  You have a shared cognitive experience and learn together.

– Code coverage can help you understand whether someone is trying to do TDD or not.  If there is none – it is obvious that TDD isn’t being done.  That can help to spark conversations. (Of course, the code coverage numbers themselves aren’t important – none, some and lots is all the granularity that matters.  And anything more than none doesn’t mean TDD is happening, just tests – but that is better than no tests!)

– Pairing is hard.

– Pairing with juniors who know the domain can be very useful.  Having two minds on the design and the requirements – to validate understanding, to think out loud and even to model out loud – can be very valuable.  Pairing with someone completely new can however be frustrating for all sides.  Then it becomes pairing for learning, which is a different way of working again.

– It can be too easy as an architect to not pair as there is other non-sprint work to be researched, meetings to attend, code reviews to be done.  I personally need to pair more and find ways to not do “other” work.  Or to do it as a pair.

– Design is hard.

– Better designs will emerge from the need to test, because testing forces you to decouple.  More cohesive designs may emerge if you’re careful.  But that requires an understanding of what that means and looks like.  Doing TDD doesn’t lead to good design.  Being able to see the patterns in the code and to extract them into a good design leads to good design.  TDD gives you the time and freedom to do that.  But if your team can’t see it then TDD will only help you with your test coverage…

– Conversations about design will emerge and better architectures can be adopted that keep the code simpler.  Keeping the code simpler makes it easier for everyone.  The design remains simpler.  It is easier to not make a big mess as you’re watching for it getting complicated… if you’re designing.

– We don’t do continuous integration.  Most people in the team don’t care about a build on commit.  No one sees a burning need to deploy automatically. I haven’t pushed it, as no pressing problems exist that would obviously be fixed by introducing CI.  I’d rather find places where the need can be more easily validated and leverage those.  That said – a recent deployment highlighted the desire for an automated deployment, to get a much faster turnaround on testing a new build.  So maybe there is a place to get people more interested, and once that is in place the understanding and desire to keep it going may grow.

How did we get here?

Over the last almost year and a half I’ve shared my views on pairing, TDD, refactoring, agility in code – principles like the simplest thing, fast feedback and emergent design.  Some of them the team has chosen to try.  Some of the ideas have drawn very negative reactions.  I haven’t pushed any too hard.  I’ve led by example, doing it myself. I’ve put effort into explaining the reasoning and thinking behind the ideas.  I’ve gained a deeper understanding of why I do things – and the ability to question the way I do things at all times, based on a desire to optimise for the agile values and principles, not to worship the agile practices.

As a result of this we’ve circled in from testing, sort of, after the fact, because we were told to.  Around to understanding how to test, what to test, and how to test first – but not necessarily testing first.  To pairing – the practice most quickly adopted – with me driving it less and less.  To now having almost everyone TDDing – and those that aren’t doing that or pairing are starting to visibly fall behind in skills.

I do not think any of this would have been possible without someone showing and explaining from an embedded and trusted position in the team.  Outside coaching or training doesn’t stick.  I’ve talked to several people who’ve been trained in TDD and don’t do it at all.  I tried for a couple of years to coach people into TDD as a ScrumMaster and PM, with many attempts and zero success.  Embedding and mentoring can succeed.  But it takes a long time.

Where to from here?

I don’t know how much further this team will manage to go into XP or any other agile practices.  But I do already know they have come further than I imagined a year ago.  And that is awesome.  Hopefully we’ll continue to circle in on good XP, better agile practices and possibly – more importantly – get some really good design going as well.  The scene is now set to allow that to happen.


A Code Retreat

This blog post has been long delayed in the writing.  At the end of June I attended a Code Retreat hosted by David Campey under the Cape Town Software Developer’s meetup.  It was a good experience.  Here are some much-delayed thoughts on my experience.

If you don’t know anything about Code Retreats – take a listen to Corey Haines.

The day was split into 5 x 45 minute sessions with a debrief / retro after each session.  We worked on Conway’s Game of Life and paired doing TDD in each session.  It was a diverse group with about 10-12 attendees and a range of experience.  One or two had been TDDing for years, but most probably hadn’t been TDDing all day long at work.  More than 50% were experienced devs with many years under their belt.  Everyone was keen to pair and TDD all day long.  The energy was really great.

The sessions

My first pair was with someone I’ve known since high school but had never written code with before.  It was the first time either of us had seen the problem, so we took some time to warm up.  We (I) wrote really crap code.

My second pair was with someone I vaguely know and who has been TDDing for a long time. The problem space wasn’t new to him so we dug into a side issue – which is the beauty of the problem.  There are so many places to go.  We looked at how to represent an infinite space and started writing tests around that.  This went pretty well.

My third pair was with someone I hadn’t met before the day.  This was interesting, as we looked at how to abstract the space that the rules were working against.  We hid it behind an interface that was then implemented by the 3D space.  In theory, however, this could have been replaced by a 2D space – or perhaps a network topology – in order to answer the questions the rules were asking.  We focused in on a couple of the rules.  And we spent a bunch of time making the implementation beautiful.  I suspect this was the best code I was involved in writing.
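The abstraction we played with might look something like this sketch (Python here purely for illustration – the names are my own invention, not what we actually wrote):

```python
from abc import ABC, abstractmethod


class Space(ABC):
    """The questions the rules need answered, independent of the world's shape."""

    @abstractmethod
    def live_neighbour_count(self, cell):
        ...


class Grid2D(Space):
    """One possible implementation: a set of live cell coordinates on a 2D grid."""

    def __init__(self, live_cells):
        self.live = set(live_cells)

    def live_neighbour_count(self, cell):
        x, y = cell
        # Count how many of the 8 surrounding cells are alive.
        return sum(
            (x + dx, y + dy) in self.live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )


# Three live cells surround (1, 1), so it has 3 live neighbours.
print(Grid2D([(0, 0), (0, 1), (1, 0)]).live_neighbour_count((1, 1)))  # → 3
```

A 3D space, or a network topology, could implement `Space` the same way – the rules only ever ask questions through the interface, which is the point we were exploring.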

After each session we had a debrief / retro.  That got some thoughts going, as well as surfacing some frustrations with what had happened in each session.  This dug into some interesting interplays around fulfilling the tests vs. implementing the real workings of the code.

After the third session we took lunch and then got back to the coding.  We all agreed upon 2 more sessions.  The first session after lunch was to be silent coding.

I paired with someone else I had never met before.  We were into silent coding so we chose to work on the most defined problem – how to represent the rules – first.  This was the furthest I had got into the rules.  We did some interesting stuff but got stuck on the 4th rule.  Because of what we had chosen, we managed to communicate relatively well through just the tests.

The final session I paired up with someone who had been on my team at a previous company.  We decided to aim for implementing the 4th rule.  And we were going to try to do it without loops.  That was the goal – but we got completely stuck trying to make the 4th rule work at all.  We bit off far more than we could chew with a single test and ended up writing code and attempting to debug it.  It was a great lesson: we went straight for the acceptance-level test and then attempted to write code to meet it, but we got lost in the complexities of the code – and had some weird subtle bug somewhere.  This showed us beautifully why you need to build up all the code in baby steps.  One step at a time to get to the bigger acceptance-level test.  We were starting to do that at the end.  It was the most frustrating session, as we really didn’t succeed in any of our goals.
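In hindsight, the baby steps might have looked something like this for a single cell (a minimal Python sketch, assuming the usual numbering where the 4th rule is reproduction – a dead cell with exactly three live neighbours comes alive):

```python
def next_state(alive, live_neighbours):
    """One cell's fate in the next generation."""
    if alive:
        return live_neighbours in (2, 3)   # survival
    return live_neighbours == 3            # reproduction – the "4th rule"


# Baby steps: each tiny assertion pins down one behaviour
# before any acceptance-level, whole-grid test is attempted.
assert next_state(alive=False, live_neighbours=3) is True   # reproduction
assert next_state(alive=False, live_neighbours=2) is False  # stays dead
assert next_state(alive=True, live_neighbours=1) is False   # under-population
assert next_state(alive=True, live_neighbours=2) is True    # survival
assert next_state(alive=True, live_neighbours=4) is False   # over-population
```

Each of those small steps stays green in seconds; the big single test we actually wrote gave us nothing to stand on when it went red.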


It took me until the second session to actually switch into code-writing mode.  This was interesting to me.  Perhaps next time I should do a kata before I start, in the hope that I’ll be thinking better by the time I begin.

I also realised that I generally take a little while to let something emerge before I start to clean up the code with refactoring steps.  We often didn’t get to that level of emergent code before the 45-minute timebox ended.  This makes me wonder if I should try to tighten up that cycle.  Or am I just slow to commit to the structure I am seeing – do I need more code before I can see the design, and is this problem simply not going to let me penetrate that in the 45 minutes provided?  That makes me wonder if other katas, with smaller problem spaces, are better designed for beautiful code designs to emerge.  But then again – that isn’t the point!  We want to be able to do something beautiful in 45 minutes.  Learn how! (I suspect.)

The final session definitely highlighted the risk of only writing acceptance-level tests vs. building the code up with the tests.  That was the biggest learning for me – though I already knew it, I now have an experience to draw on to illustrate it.

I didn’t learn a lot more about TDD than I already knew from the katas I’ve been doing and from trying it at work.  But that is fine, as it was truly great to sit with other like-minded people and practice.

Some things to try in the future – though possibly not in a code retreat format

It would be nice to find a problem where mocking or other techniques become relevant, so that we can learn from each other how to approach the design of a larger problem.  I’d like to see how others are designing and building their systems to achieve SOLID, or to effectively do DDD and good design and architecture – but I suspect that is far beyond the scope of a code retreat.  I still look forward to doing katas in different languages – hopefully I’ll get to do a code retreat with that as well, where I can learn some cool nifty stuff about JavaScript or the like which will turn my thinking around.  I look forward to it!

Something else I’ll try next time is to articulate how I’m feeling during the session.  This was a suggestion from the Design By Exploration session at CodeLab.  I suspect there could be some interesting learning to be had.

I look forward to the next code retreat – with great people and trying to learn more about what great code is.  Thanks to David for making these happen!

Lab: TDD, Pair Programming and Purposeful Practice

The organisation that I currently work for has two really nice practices.  Once a month there is an R&R day – a research and reading day – come to work and research something of your own choosing.  The other is TEK day – a mini internal conference focused on short presentations, with a lab in the afternoon.  This also happens once a month.  Both of these days involve no client/project work.  The agenda is driven completely by the employees.

At the last TEK day I ran a lab on TDD, Pair Programming and Purposeful Practice.  The goal was to give everyone an idea about TDD as well as one way to pair – ping-pong pairing – and to experience it doing some standard coding katas.

Here is the lab – TDD Pairing Practice Lab – if you’d like to read the details.  It gives a very brief overview of what TDD, pair programming and purposeful practice all are.  (Very brief… I did it the night before and I’m not sure it was sufficiently detailed, but there are several links and it gives you a start on the ideas so you can try them out.)

What did I learn?

I may have introduced too many things at once
I wanted to link ping-pong pairing and TDD together as it feels obvious and I’d been doing it with one of my team members.  Sadly he was on leave so couldn’t add his perspective at the session.  That said, possibly I should have focussed on TDD only first.  And then I could possibly have done another lab later in the year on pairing to add that in.  Doing too many things at once may have confused the process.

It is not just about the devs
Pairing gave non-devs the opportunity to pair with devs and get a better feel for, and understanding of, what TDD and unit testing are.  I think it was insightful and helpful to them.  It also highlighted that this could be possible in day-to-day work – exposing the non-devs to what you’re doing and how, and increasing the communication flow between team members.

Requirements are hard
I had a typo or two.  And while the goal of the simple FizzBuzz requirements was clear in my mind, many questions were raised about when things became a combined FizzBuzz instead of just a Buzz or a Fizz.
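For clarity, the rule in question is the standard one – a multiple of both 3 and 5 is “FizzBuzz”, not just a Fizz or a Buzz.  A minimal sketch (Python here, purely illustrative – the lab itself wasn’t tied to any language):

```python
def fizzbuzz(n):
    # The ambiguity that drew the questions: a multiple of both
    # 3 and 5 must be checked first, or it would fall out as "Fizz".
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


assert fizzbuzz(15) == "FizzBuzz"   # the combined case
assert fizzbuzz(9) == "Fizz"
assert fizzbuzz(10) == "Buzz"
assert fizzbuzz(7) == "7"
```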

Stress it’s about the TDD and code
Some groups spent a bit of time ensuring the console app worked.  The focus for them was meeting the criterion of printing out the numbers, when the goal really was the process flow of learning TDD.

FizzBuzz took longer than I expected
We took much of the 2 hours on FizzBuzz, which was interesting in itself.  I expected things to go faster, and hence had included the more interesting StringCalculator kata.

Question 6 wasn’t very meaningful for FizzBuzz
I threw in some food for thought with question 6 – how would you write it in a more procedural / object-oriented / functional style – but on such a simple problem it might not be so easy to see.  In the StringCalculator I think the questions might be more relevant.  And of course this had nothing to do with the core learning objectives of TDD and pairing.
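As an illustration of what question 6 was poking at – here is a hypothetical, more data-driven / functional take on the same FizzBuzz logic, replacing the usual if-chain with a rule table (Python, my own sketch):

```python
# The rules become data instead of control flow: first matching divisor wins.
RULES = [(15, "FizzBuzz"), (3, "Fizz"), (5, "Buzz")]


def fizzbuzz(n):
    # next() takes the first rule whose divisor matches; str(n) is the default.
    return next((word for divisor, word in RULES if n % divisor == 0), str(n))


assert [fizzbuzz(n) for n in (3, 5, 15, 7)] == ["Fizz", "Buzz", "FizzBuzz", "7"]
```

Even on a problem this small, the shift from branching to a lookup shows how the same behaviour can carry a different design.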

No one refactored (much)
It is red-green-refactor, but I didn’t see much refactoring towards beautiful code.  Perhaps I’m switched into a certain frame of mind at the moment, as I’m re-reading Clean Code again.  But I suspect I’ll need to do a lab on readability and on being DRY – as well as on when to refactor into new classes, etc.

Would anyone try it?
This drew very non-committal responses.  But I think everyone had fun and came away with better clarity on what it all meant.  Again – combining pairing with TDD and asking “would you give it a try?” met with a lack of enthusiastic response.  Maybe if it had just been one concept it would have made for a more interesting discussion.

One piece of feedback was that the ping-pong cycle was too short.  We did discuss a little that it most likely wouldn’t be as short in a complicated piece of code, and the examples we were using for FizzBuzz definitely weren’t complicated.  But there was some concern around that.

Open to the discussion
The best part of course is that I had an open forum to present and everyone enthusiastically gave it a try.  That is awesome.

What would I change?

Ask more probing questions in the debrief.  Generally plan the debrief more deeply in order to generate more insights from and for the individuals.
– Who refactored their code?
– Who thinks their code is the most beautiful they could achieve?
– Who changed their code while keeping the tests running and refactored in small steps?
– Was it useful pairing with someone?  To learn?  To understand the requirements?
– What did you learn?  What was really interesting? What was frustrating?

Do less, help more
– I spent a little time challenging one group’s implementation of FizzBuzz, which resulted in them refactoring.  I should have done more of that, to see who else would refactor instead of going from “it works” to “next problem”.

I’m sure there is a lot more – including a bunch of the stuff in the learnings above.  But overall I’m pleased to have taken one more baby step in moving my organisation towards at least understanding XP practices, while at the same time continuing to experiment and learn more deeply myself.

Exceptional Teams need Exceptional Practices

The November SUGSA event featured Austin Fagan talking about what makes an exceptional team.  A collection of exceptional people doesn’t necessarily make an exceptional team – and Austin posited that the usage of exceptional practices is what can turn a collection of individuals into an exceptional team.

Having experienced a team that is definitely more than the sum of its individuals – I can relate to the sentiment.  And equally, having worked with some exceptional individuals in the distant past – there wasn’t a great deal of real teamwork.  Collaboration and other solid development practices make a huge difference.

The evening included a sequence of “does your team do X” slides – sit down if no, stand up if yes.  I was glad that I could at least stay standing for many of the soft skills – empowered team, collaborating, working as a unit, etc.  But sadly, as always, software development practices like pairing and TDD are lacking in adoption, in my personal experience.

The evening proceeded with a session where we broke into groups and discussed a topic.  We landed on pairing, looking at what stops you from starting to pair today, as well as how you keep pairing once you have started.

Pairing is an interesting thing to me in terms of adoption but it was refreshing having the conversation with some people who are dedicated agile people all coming to the same conclusion.  Our summary was something along the lines of  “It could be awesome, we don’t really know, so we should try it to find out”.

The standard pros came up – continuous knowledge sharing, no dependencies on a single person, continuous code review (or at least code discussion/justification, which should make for better code and sanity checking), and potentially more productivity (you can’t avoid work by browsing the internet as easily in a pair).

The standard cons also came up – it wastes time, you need to synchronise office hours for pairs, some developers can only work alone, how do you know it works, we’re changing the way you work while having no evidence that it really does work.

Evidence please…
Sadly, in the world of development practices there isn’t a lot of evidence in the public domain for software practices such as pairing – just a lot of anecdote that it is the Agile Way.  I believe there is some evidence on code reviews – which lends some credence to pairing being beneficial – but not much that anyone can cite emphatically to prove that pairing delivers better software, faster, more bug-free, or by some other quantitative measure, in comparison to not pairing or pairing just some of the time.  Though I suspect that in the world of delivering business software not too many are going to provide the data for the experiments.  I know there is some evidence for certain design practices – such as DDD – spreading the logic, and hence the complexity, across all the objects, thereby reducing the maximum complexity of any one object.

Pairing adoption is a tough one.  I never did much pairing in my time as a developer or architect.  I now wonder if I should go back and learn how, so that I can be more emphatic about it one way or the other.  In fact, I’ve never experienced anyone really pairing – by the definition of doing it all day, one machine between two people, writing code (and tests).  Obviously developers pair sometimes in order to solve hard problems, or when there is a slog of work to get through that is more effective with one person typing and one person following the documentation for the implementation (read: really big CRUD screen), or when their machine is acting up and there is nothing else to do. But I haven’t worked with any developer who actively wanted to pair on a regular, continuous basis.  I’ve met many who vehemently dislike the idea.  And some who’ve tried it and discarded it.

Pairing Adoption
We dug into this a bit and all agreed the biggest issue around pairing is “What is pairing?” – what are the pairing patterns, and how do you do it right.  I’ve recently read this InfoQ article, which talks about pairing as a pattern.  It is an interesting read if you’re trying to think through the options – but it’s by no means the answer.

Doing it wrong will be detrimental to your ability to adopt pairing, as it gives those who “know” it isn’t going to work more ammunition to inspect and adapt away from pairing.  The key things seem to be training and experience – working with an experienced pair programmer to understand how it works.  That is a tough one, as I suspect it takes a reasonable amount of time immersed in the performance of pairing before the steps become automatic. Pairing seems to be one of those things, like TDD, that is possibly a performance art one can only learn properly by seeing it working properly, in order to reach those “Aha!” moments.

TDD on the other hand – or at least automated testing – is a lot easier to sell in terms of why you would do it.  I’ve spent a bunch of time pondering pairing, TDD and other practices.  Pairing as described above I find harder to sell. TDD – or BDD, test first, even test last – I’m more than confident to sing the praises of, and to describe the high-level problems it solves.  I want to be able to release without doing a full manual regression test over the next month – how do I do this?  I’d like to be able to do a quality release every 2 weeks – how do I do this?

I worked on a system that we released to production every 2 weeks, with no testers and a full suite of automated tests – it was incredibly successful at deploying often.  At that organisation I mostly did test last, but the testing was always very valuable.  I would do it test first if I were to do it again, as I think it might be faster than test last, and the capability to test your design as you write it is appealing.  I did a little bit of test first at the start of my time at my current organisation; it allowed me to determine how I wanted to call the code, and that defined how I built the outer bits.  That was very valuable.  It didn’t last, though, due to my lack of dedication and shifting roles – yet the original tests that I wrote were still being used to refactor that code 2 years later, saving the developers’ bacon.

Automated testing adoption
The key failure I’ve found in adopting automated testing – TDD or otherwise – has been the many little gotchas when the system isn’t as testable as you’d like.  In the organisation where I was successful with this we were using Perl and could simply modify functions on the fly in tests.  It was awesome, but really dodgy from a purist computer science point of view.  In .NET this is harder.   And if you don’t plan for testing in your architecture up front, retrofitting it is a pain.

Fundamentally, my experience with automated testing is that developers can be convinced of the value and can get excited about doing it and including it in every build.  But each time they hit another large wall – a new type of problem to solve in a sustainable, non-fragile way – the enthusiasm comes closer to falling away.  The frustration seeps in and the dedication seeps out.  That is sad.  And it is what makes TDD, and any type of automated testing, hard to sustain.

Are we doomed to fail at these practices?
I don’t think we’re doomed to fail at these practices.  But I do think that we need to think long and hard as to why we’re doing some of them so that we can continue to keep the enthusiasm and dedication up in doing them.

Fundamentally the software developers I’ve worked with over the years – myself included – have a lot to learn about how to make these practices really work for them and the organisations they work for.  I’m hoping someday I’ll have learnt some more and will then be able to help others to understand and justify to themselves why these are awesome practices.  I’m looking forward to that day – as then I should be working with exceptional practices in exceptional teams hopefully also with really exceptional people.  And it will be beautiful.