Principles, Patterns and Practices

Agile Principles, Patterns, and Practices in C# by Robert Martin, aka Uncle Bob, is another good, accessible read on how to design code.  This book works at a higher level of abstraction than Clean Code, and with that comes more good insight into ways to design code.

If you have heard about the SOLID principles and have not read this book (or the earlier Agile Software Development: Principles, Patterns, and Practices), you should.  You may gain some interesting insights into what Uncle Bob thinks these principles mean, which you may or may not have thought through yourself.  In the murky world of software development, where terms mean half a dozen different things to different people, going back to the person who originally wrote about them can often recover insights that have since been diluted or have diverged from the original thinking – for good or bad.

Emergent Design and TDD

The emergent design / TDD example is an inspirational sample of the art.  The bowling game kata is presented as a pairing session that results in far different – and potentially far simpler – code than attempting full OO analysis up front.  The bowling game does not get implemented with objects representing the classic nouns from bowling; instead, a much simpler design evolves that meets the requirements.  This is the best example I've seen of TDD working to simplify code and allowing the design to emerge instead of being dictated up front.  It is achieved by initially doing the simplest thing, and then refactoring freely underneath the API that is set up to meet the requirements.
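As a flavour of where the kata ends up, here is a minimal sketch in Python (the book's example is in C#, and these names are my own, not Uncle Bob's): no Frame, Pin or Player classes – just a tiny game object that scores a list of rolls.

```python
class BowlingGame:
    """Sketch of the kata's emergent design: no Frame/Pin/Player nouns."""

    def __init__(self):
        self.rolls = []

    def roll(self, pins):
        self.rolls.append(pins)

    def score(self):
        total, i = 0, 0
        for _ in range(10):                                # ten frames
            if self.rolls[i] == 10:                        # strike: next two rolls are bonus
                total += 10 + self.rolls[i + 1] + self.rolls[i + 2]
                i += 1
            elif self.rolls[i] + self.rolls[i + 1] == 10:  # spare: next roll is bonus
                total += 10 + self.rolls[i + 2]
                i += 2
            else:                                          # open frame
                total += self.rolls[i] + self.rolls[i + 1]
                i += 2
        return total
```

In the kata each rule arrives as a failing test (gutter game, all ones, one spare, one strike, perfect game), and the scoring loop above is refactored out underneath those tests.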


A chapter is dedicated to each part of SOLID.  Despite knowing a reasonable amount about SOLID from both Clean Code and other reading, I still picked up several new insights along the way.

In dependency inversion, the idea of the consumer owning the interface – not the implementer – was new to me.  It is interesting to experiment with this idea and see what it means for the code I work on.
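A sketch of consumer-owned abstractions in Python (hypothetical names; the book's examples are in C#): the high-level BillingService declares the PaymentGateway interface it needs, and low-level implementations depend inward on that interface rather than the other way around.

```python
from abc import ABC, abstractmethod

# The consumer (the billing module) owns this abstraction.
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class BillingService:
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway  # depends on the interface it owns, not a concrete class

    def bill(self, amount_cents: int) -> str:
        return "paid" if self.gateway.charge(amount_cents) else "declined"

# A low-level implementation elsewhere depends inward on the consumer's interface.
class FakeGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        return amount_cents <= 10_000  # toy rule for the sketch
```

The point of the inversion is that PaymentGateway lives with BillingService, so the consumer's needs – not some vendor's implementation – shape the abstraction.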

I enjoyed the fact that Uncle Bob does counsel caution and moderation in using SOLID and patterns.  For instance, when do you apply the Open/Closed principle?  A reasonable answer is: when you are making a change to the code – then close it against changes of the same kind in the future.  Don't proactively attempt to protect the code against every potential and fantasized change; most likely you'll miss the actual way it will need to change, while complicating the code.
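A hedged sketch of that advice (hypothetical names, not an example from the book): only once a second discount rule actually arrives is the total calculation closed against further changes of that kind, by extracting a small abstraction.

```python
from abc import ABC, abstractmethod

# Extracted only after the second discount kind showed up in real requirements.
class Discount(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(Discount):
    def apply(self, price: float) -> float:
        return price

class PercentageOff(Discount):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

# Now closed against new discount kinds: add a Discount subclass, don't edit this.
def checkout_total(prices, discount: Discount) -> float:
    return sum(discount.apply(p) for p in prices)
```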

When does something violate SRP?  When it needs to change for more than one reason.  But if it isn't changing, perhaps it doesn't matter right now that you can look at the code and see multiple potential responsibilities – as long as the code is concise enough and the design isn't getting in the way.  When it does change, extract the responsibility that is changing, using SRP to guide that decision.
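A small sketch of that kind of extraction (hypothetical names, not from the book): formatting and persistence start out tangled in one class, and persistence is pulled out only once it becomes a separate reason to change.

```python
# Before: a Report class that both formatted and saved itself (two reasons to
# change).  When the persistence requirement actually changed, it was extracted:
class Report:
    def __init__(self, lines):
        self.lines = lines

    def as_text(self) -> str:      # responsibility 1: formatting
        return "\n".join(self.lines)

class ReportWriter:                # responsibility 2: persistence, extracted on change
    def save(self, report: Report, path: str) -> None:
        with open(path, "w") as f:
            f.write(report.as_text())
```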

I can see some enthusiastic zealots reading this book and the SOLID chapters in particular and going out into the world with the One True Way to write code – with interfaces on everything and closed to all possible changes in the future and so on.  This isn’t what is being suggested, but I can see some who desire the Answer™ coming away from the book with that as the Answer.  Based on my reading – particularly on people commenting on Dependency Inversion overuse – this has already been done in spades.  That isn’t the idea.  The idea is to understand the principles so that they can be used effectively to help you build code that can change more easily in the future.

Coffee Maker example

The Coffee Maker example is an example of what a zealot may take away from this book.  The end result is an awesomely designed pluggable thing using all the principles and several patterns to the max.  It is a thing of beauty.  And it is something I have never experienced needing to implement, and would be massive overkill for most business applications that I've worked on.  The beautiful polymorphic design is often excessive in real-world business apps – and excessive excitement about pluggability that isn't a requirement often leads to convoluted, messy code bases.  That isn't something I think Uncle Bob is advocating, but it is something I have seen (and sometimes done…) while searching for "good design".

A later chapter in the book discusses how to slowly refactor towards a better design as requirements come in.  The example shows the process of refactoring towards a given pattern.  That is the key – and most important – takeaway from this book, and one that I suspect some may miss.  The coffee maker example should not launch itself onto the world in its final perfect polymorphic form.  The requirements should drive it there.  And the beautiful design that emerges based on those requirements is really interesting to analyze and appreciate – assuming the requirements needed it to emerge.

UML… still is boring to me

There are several chapters in the middle of the book about UML.  I know why they are there.  I appreciate it and thank Uncle Bob for the effort.  However, I had to drag myself through those chapters.  For some reason I find UML really boring, though I know I shouldn't.  I know I should use it, and use it right.  Maybe when I grow up more I'll get back to those chapters and use UML better.

However – the key take away is that design is not dead.  Drawing pictures as conversation pieces to convey design is useful.  And UML is a tool to help developers communicate more effectively.  Communication is very important.  So do it.  But don’t write too much of it… rather code.

Patterns, patterns, patterns!

The final chapters of the book are dedicated to different patterns – such as the bridge and adapter patterns.  There are several concrete examples of how these patterns can be used as well as an example of refactoring, based on incoming requirements, towards a specific pattern which ends up with a more SOLID design.
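As a flavour of the adapter pattern discussed there, here is a minimal Python sketch (hypothetical names, not the book's C# example): the application depends on the interface it wants, and an adapter maps that interface onto a third-party class we don't control.

```python
class VendorLogger:                 # stand-in for a third-party class we don't control
    def write_entry(self, severity: str, text: str) -> str:
        return f"[{severity}] {text}"

class AppLog:                       # the interface the application actually wants
    def info(self, message: str) -> str:
        raise NotImplementedError

class VendorLoggerAdapter(AppLog):  # maps our interface onto the vendor's
    def __init__(self, inner: VendorLogger):
        self.inner = inner

    def info(self, message: str) -> str:
        return self.inner.write_entry("INFO", message)
```

The design choice mirrors the book's point: the rest of the code depends on AppLog, so swapping vendors later only means writing another adapter.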

Again, the patterns shown are end goals, driven by requirements that make the design change.  As the design changes, it becomes obvious that one pattern or another may help to keep the code clean.  At that point, the code is refactored towards a pattern.  It may happen that later design changes make the pattern no longer useful.  At that time, refactor away.  The idea isn't to build the code out of blocks chosen to support a selection of hand-picked patterns.  That would be possible, but if the requirements haven't driven the need for them, the code may end up far less simple and effective than it could be.

Drive from the requirements

This was a great read for me.  It gave me new insights and deepened my understanding of some things.  I always enjoy that.  It also highlighted for me the continued need to drive from requirements and refactor towards useful patterns rather than trying to force them up front.


Bitten by Simple

I’ve recently done a piece of work around OAuth and integration with DotNetOpenAuth.  I wanted to build up the system doing the simplest thing at all times with the hopes of achieving the leanest, meanest code that I could for the implementation.  This worked well overall.

But… when building up a certain repository call to get a user – it was only being used by one controller (and still is) so I did the simplest thing and only built what that controller needed.  Life was good.

Then I changed my mind.  And I assumed that because my tests passed that my code was all working.  So I started using another user field that wasn’t being populated as it needed an additional DB call, and wasn’t needed initially.  And I got an unexpected bug.

But I do TDD.  My tests passed.  All should be awesome.  I have the green light.  I can deploy.

Wrong. Hmmm…

Doing the simplest thing, I violated the principle of least surprise.  I was surprised not to get a fully formed user from the user repository.  That was wrong.  Yes, the tests very clearly show me not testing the specific field, and adding the test showed it wasn't being set.  But the repository should always return a fully formed user object.  Yes, I should have remembered – and I've got a pretty good memory – but I still assumed it was all working.

So what should I have done?

a) Not violated the principle of least surprise!!
b) Actually checked what the tests did – they are the specification
c) Tested higher up at the controller level where possible
d) Introduced integration tests
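A sketch of what options b) and c) look like in practice (Python, with hypothetical names – the real code is around DotNetOpenAuth and isn't shown here): a test that pins down the full shape of the object the repository returns, so an unpopulated field surprises the test suite instead of a controller.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str    # the kind of field that was silently left unpopulated

class InMemoryUserRepository:
    """Hypothetical stand-in for the real user repository."""

    def __init__(self, rows):
        self.rows = rows

    def get_user(self, user_id: int) -> User:
        row = self.rows[user_id]
        # Populate every field, even ones no current caller reads yet.
        return User(id=user_id, name=row["name"], email=row["email"])
```

The matching test then asserts every field, not just the ones today's controller happens to read.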

The problem with integrating with DotNetOpenAuth is that the controllers call it and then it calls us.  And there is very little code in the controller that we control in order to make it all happen.  But there is some.  In this case the combination of the login page and the resource fetch (after the OAuth interaction) broke.  But setting up a controller with the correct state to hand DotNetOpenAuth what it needs would involve going very deep into understanding DotNetOpenAuth.  I don't actually want to test DotNetOpenAuth; I do want to test my code around it.  As a result the controllers are being kept as simple as possible, but there is no automated testing around them.  And now I've been provably bitten for not having those tests.

And maybe I should actually want to test the interaction with DotNetOpenAuth – at least at the basic level.  An automated simple happy flow integration test would be useful here.  I’ve already got a manual test, but of course I got lazy.

So in future I plan to:
– Not violate the principle of least surprise… it isn’t worth it even if it is less code / simpler.
– Dig deeper into the synergy between unit tests – testing the code I’m writing – and integration tests – testing the full stack integration particularly the parts that are harder to validate otherwise.

And of course, continue to write the simplest code I can and retrospect when I get bitten next.

A simple failure

Do the simplest thing.
I’ve also heard: Do the simplest thing that you can refactor later.

In TDD, you attempt to do the simplest thing in order to let the real design emerge and to avoid adding code that is not needed.  This tends to reduce the complexity of the code.  There is plenty of writing about this elsewhere.

More than a year ago I did a new piece of work.  The team completed it and I went back to look at the code and saw that I could simplify it in several ways.  One of those ways was to remove an unused parameter as it was always the same id for all the objects in the list returned.  We did the work and life was good.  I felt that we were achieving simple and changeable code.  I felt good.

A month or so later we needed to change the way we fetched these objects to now be fetched so that the single id was now actually varying.  A developer attempted this task for two days and we gave up doing the work as we had a simpler solution that the client was happy enough with.

I felt like the whole point of developing with tests and being safe was to make the code easier to change and here I failed.  Miserably.

So I retrospected…
Perhaps the removed complexity was important all along – as very rapidly it was needed.
Perhaps the code base isn’t as intuitive as it could be.
Perhaps the dev wasn’t good enough to see how to change the code easily.

All of these may be true.  In particular I observed two things:

  1. Misinterpreting the need: The usage we could see from the UI was simple – needing only the objects for one id at a time.  The common usage of related lists, however, was often with varying ids.  I had assumed that was complexity added unnecessarily, but in retrospect it was the actual requirement when analysing what the callers were really doing at a higher level.  The problem is that the callers aren't doing that – the code is highly convoluted and hides it – but they should be.  Since then I have also changed the way I look at building up things like this in order to keep them simple, and that approach would have led me to the correct need of loading with varying ids initially.
  2. Expectation to change: I knew that the code was likely to change in this direction, and I actively removed it in pursuit of simpler code.  Perhaps looking ahead a minuscule bit is valuable to support the open/closed principle in the short term.  If I had left the complexity, it would already have been able to deal with this change.  I'm not 100% comfortable with this thinking.  In Agile Principles, Patterns and Practices in C#, Uncle Bob suggests only making such a change when the code changes in that way.  Guided by that principle, it shouldn't have been there.  But in this case it was there already (it wasn't test driven initially… this part was inherited).  I do suspect misinterpreting the need is more the root cause, though this is worth pondering and experimenting with, given the tension it introduces.

So maybe it is useful to think of: Do the simplest thing that you can refactor later while keeping in mind how the current design is already telling you it needs to be open for extension?

But more likely, simple is hard, and you need to dig deeper to ensure you have the depth of the real need the code is fulfilling.

Circling in on XP practices

Almost a year and a half ago I started a journey with a team.  I wanted to embed myself in XP practices.  I wanted to learn how things looked so I could maybe one day be better at helping other teams adopt some of those practices.  Below are some observations about my own learning and the team's adoption.

You may notice the use of "I wanted".  The team didn't necessarily want any of this.  They were simply writing software, getting frustrated with the tests they were writing – but happy enough.

Some of the things that I’ve learnt

– TDD is hard

– TDD isn't adopted unless a 1:1 mentoring process happens on the real code being worked on.  Understanding is very important, but showing how to apply it on real code that is part of the sprint delivery is even more important.  Showing that someone else (who hopefully has an idea of what they are doing) also runs into pitfalls in how a legacy code base is designed for testability – and showing some patterns to work around those problems – all while coding for a real deliverable, helps embed understanding and buy-in to trying.

– TDD isn’t adopted unless at least one person champions it and the majority of the team are willing to be influenced and open to try.  One person championing it and no one else caring or trying will fail.  The more people who buy in and can help pair and show the process the higher the probability of success.

– Pairing while continuing to adopt TDD is very useful.  Your partner helps keep you honest.  You can discuss how to test what you’re trying to do.  You have a shared cognitive experience and learn together.

– Code coverage can help you understand if someone is trying to do TDD or not.  If there is none – it is obvious that TDD isn't being done.  That can help to spark conversations.  (But of course, precise code coverage numbers aren't important – the distinction between none, some and lots is.  And anything more than none doesn't mean TDD is happening, just tests – but that is better than no tests!)

– Pairing is hard.

– Pairing with juniors who know the domain can be very useful.  Having two thoughts on the design and the requirements in order to validate understanding, to think out loud and even model out loud can be very valuable.  Pairing with someone completely new can however be frustrating for all sides.  Then it becomes pairing for learning, which is a different way of working again.

– It can be too easy as an architect to not pair as there is other non-sprint work to be researched, meetings to attend, code reviews to be done.  I personally need to pair more and find ways to not do “other” work.  Or to do it as a pair.

– Design is hard.

– Better designs will emerge by the need to test due to the need to decouple.   More cohesive designs may emerge if you’re careful.  But that requires understanding of what that means and looks like.  Doing TDD doesn’t lead to good design.  Being able to see the patterns in the code and to extract them into a good design leads to good design.  TDD gives you the time and freedom to do that.  But if your team can’t see it then TDD will only help you with your test coverage…

– Conversations about design will emerge and better architectures can be adopted that keep the code simpler.  Keeping the code simpler makes it easier for everyone.  The design remains simpler.  It is easier to not make a big mess as you’re watching for it getting complicated… if you’re designing.

– We don't do continuous integration.  Most people in the team don't care about a build on commit.  No one sees a burning need to deploy automatically.  I haven't pushed it, as no pressing problems exist that would obviously be fixed by introducing CI.  I'd rather find places where the need can be more easily validated and leverage those.  That said, a recent deployment showed up the desire for automated deployment and much faster turnaround on testing a new build – so maybe there is a place to get people more interested, and perhaps once that is in place, the understanding and desire to keep it going will grow.

How did we get here?

Over the last year and a half I've shared my views on pairing, TDD, refactoring, and agility in code – principles like the simplest thing, fast feedback and emergent design.  Some of them the team has chosen to try.  Some have reacted very negatively to some of the ideas.  I haven't pushed any too hard.  I've led by example, doing it myself.  I've put effort into explaining the reasoning and thinking behind the ideas.  I've gained a deeper understanding of why I do things, and the ability to question the way I do things at all times, based on a desire to optimise for the agile values and principles, not worship the agile practices.

As a result of this we've circled in: from testing sort of, after the fact, because we were told to; to understanding how to test, what to test, and how to test first – but not necessarily testing first; to pairing – the practice most quickly adopted – with me driving it less; to now having almost everyone TDDing.  Those who aren't doing that, or pairing, are starting to visibly fall behind in skills.

I do not think any of this would have been possible without someone showing and explaining from an embedded and trusted state in the team.  Outside coaching or training doesn’t stick.  I’ve talked to several people who’ve trained TDD and don’t do it at all.  I’ve tried for a couple of years to coach people into TDD as a ScrumMaster and PM with many attempts and zero success.  Embedding and mentoring can succeed.  But it takes a long time.

Where to from here?

I don’t know how much further this team will manage to go into XP or any other agile practices.  But I do already know they have come further than I imagined a year ago.  And that is awesome.  Hopefully we’ll continue to circle in on good XP, better agile practices and possibly – more importantly – get some really good design going as well.  The scene is now set to allow that to happen.

What Scrum has taught me about how to write good code

Every developer aspires to write good code.  Nowadays everyone is talking about clean code.  If you ask any developer most likely they will tell you that their code is good and/or clean code.

What makes a piece of code good?  Is it code that passes all the tests?  If it passes the tests, must it be correct?  Is code that meets the end user requirements good?  The story is done, so we can do the next one – surely that is good?  Is code that is layered good?  If all the methods are less than 10 lines and all classes less than 100 lines, is it good?

The biggest problem with "good code" is that judging it is subjective.  And you only learn how crap it was after you've lived with it for a year or two… then it becomes a pain if you didn't write "good" code… and most likely the person who wrote it isn't on the project any more.  Maybe they aren't even in the company any more.  But they probably were very opinionated about what good code was when it was being written.

Small pieces
Scrum taught me to value small pieces.  Small pieces lead you to composition of objects – breaking the system down into smaller objects that combine to become a greater whole.  Each part is very easily understandable.  Each part is very easily testable.  Each part is really very small.

This realisation can come from forcing yourself to focus on testability – which forces composition – which forces you to notice that you have lots of smaller things which are much simpler and far easier to understand.  And hence they are easier to maintain.  And the complexity of any single thing goes down.  Cohesion and coupling become clear.  Separation of concerns becomes a primary concern and it becomes easier to see.

Small things are good.  Small things optimise for simplicity and understandability.

Many pieces
This does come with the problem that you now have lots of little bits.  Now you need to understand how they fit together.  This could be conceptually intimidating if you try to hold it all in your head at once.  Concerns arise about whether it will all integrate correctly.

But you no longer need to hold it all in your head.  Smaller units of code combine into a single cohesive thing that has a well-defined interface and now you worry less about all the little things – you just worry about how you use the outside interface.  This is very similar to a backlog which can at times represent lots of potential stories but the ones further away are stored as epics.  The detail is only needed when you need to look more closely – and then you unpack them – just in time.

Trusted pieces
You land up with small things doing very clearly defined things that are easy to understand.  And you trust that they work as the tests specify how they should work and they still pass.  The more your system holds together like this along with keeping to small object graphs – the less the concern of integration becomes as your tests tell you how the code should work.  And you gain more trust in your system.

You do need to ensure that the implementations of the same interfaces behave similarly.  If you’re mocking an interface for testing things there could be a misrepresentation between the mock and reality.  But that is a code implementation / design problem that we already have.  Failing at that and having things unexpectedly coupled in odd ways that aren’t represented by tests / mocks is possible.  But you should understand both sides of the interface that you’re using in order to change the code around it.  That isn’t a new idea.

Knowing what you intend to do
Which leads me to knowing what you intend to do!  Doing TDD challenges you to really decide what you’re doing before you start.  It challenges you to be really in control of your code base – even the stuff you didn’t write where it influences you.  And that is a great thing.  Being in control of your code base means that the integration problem won’t happen as you really understand how it fits together on both sides of the interface before you modify it.

Small graphs = small messes
Developing using TDD and breaking things small also leads you to small graphs.  Small graphs help you to potentially make lots of small messes instead of a few large graphs and one very large, interconnected ball of mud.  Small messes can be individually fixed in a contained way.  Large messes are far more difficult to fix.

You will never know it all
Scrum and agile ideas embrace the fact that you won’t know enough to start with.  TDD enables safely not knowing enough.  It allows you to learn and refactor the system in safe small steps as your knowledge grows.  Refactoring saves us from having to have that perfect good code up front.  Refactoring allows that code to change and become completely different good code – more closely fitting to the actual current purpose – over the years.  Refactoring is what keeps the code base fluid and alive and closer to the reality of what the requirements are actually right now – not how they have been hacked on from the design created from the little that was known two years ago.  But refactoring can’t be safely achieved without tests around the code to be refactored.

Value Driven Development
When doing Scrum you move from Shu to Ha when you really understand the values and principles of Scrum.  When you understand them and really get them you can experiment with things based on those values and principles instead of blindly following what the Scrum guide says.  But if you still don’t actually understand you will probably get burnt (and blame it on Scrum).

This is similarly useful when writing code.  Understand the values you are trying to live by and optimise for them.  Reflect when you fail to achieve them in order to get better.  Write code with your eyes wide open so that you always know why you are doing what you are doing.

Optimise for something
I suspect many OO developers start out optimising for encapsulation and hiding behaviour.  Only optimising for that can lead to tightly coupled, large graphs and a ball of mud.

Scrum suggests deploying working software every 2 weeks.  How can you do that? Perhaps automation is required to give us the feedback as to the state of the code so that it can go live.  So optimise for testability that can be automated.

If I optimise for testability or reducing the risk of change it may lead to a less tightly coupled system and smaller messes.  I might discover things about composing objects and the SOLID principles.

If I optimise for knowing my software is working 100% or a fast feedback loop on changes that I make to my code then I might want to use TDD to enable that.

If I optimise for maintainability it might lead to more readable code and smaller functions and classes.  That might lead again to a less tightly coupled system and smaller messes.  I also will probably want to optimise for testability so I can get faster feedback about the impact of changes that I’m making in code that I’m unfamiliar with.

If I optimise for changeable or fluid software I may want to make things smaller and less coupled, and I may want to do TDD so I can get feedback on the effect of my changes as I make them.

If I optimise for the simplest thing that works, I may learn a lot about how code could be designed.

If I challenge myself as a developer as to what I’m optimising on – and know that this is a good thing – then I can consciously experiment and learn how to write the best code I can for the value that I am trying to optimise for.

Where does this come from?
This all might be obvious to many of you.  But to me, thinking hard about how to do better code, I’ve had interesting conversations and thoughts driven by a more agile mind-set over the last couple of years.  Several of those ideas were more clearly crystallised when attending an agile developer course with Aslam Khan and KRS late last year and experiments before and since.

I think “good” for me is becoming more defined – in a different way – from say 5 years ago.  I wrote fine code.  I could solve anything given enough time and relevant hacks if needed.  But eventually projects would get messy.  And frustrating.  And change would be unsafe.  And it would become harder to change.

A couple of years ago I stopped coding for my job.  I led an agile transition implementing Scrum.  I learnt a lot about Scrum.  And so I’ve relearnt how I should code with an agile mind-set first and foremost.  Now I’m practicing to actually understand how to implement those ideas.  And it is awesome.

I feel far safer now.  I feel more in control of the code base.  I feel safer to change.

Scrum taught me the wisdom to break things up small and to understand what you’re really doing.  XP translates that to the code level and gets you in control of your code base.

All of this leads to a far clearer understanding of why any line of code is there – where it is and why it should or should not be somewhere else.  The code is cleaner.  Hopefully it is good code.  Hopefully it is clean code.  But most of all – hopefully it will be easy to change to be cleaner or better as the design changes.  And I’ll reflect and learn why if I fail – so I can actively do better code next time.

Hopefully I’ll also learn a lot from all the great coders out there on how they too are doing better code.

A Code Retreat

This blog post has been long delayed in writing.  At the end of June I attended a Code Retreat hosted by David Campey under the Cape Town Software Developer’s meetup.  It was a good experience.  Here are some much delayed thoughts on my experience.

If you don't know anything about Code Retreats – take a listen to Corey Haines, who popularised the format.

The day was split into 5 x 45 minute sessions with a debrief / retro after each session.  We worked on Conway’s Game of Life and paired doing TDD in each session.  It was a diverse group with about 10-12 attendees and a range of experience.  One or two had been TDDing for years but most people probably haven’t been TDDing all day long at work.  More than 50% were experienced devs with many years under their belt.  Everyone was keen to Pair and TDD all day long.  The energy was really great.
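For anyone who hasn't worked the kata: the Game of Life's four rules boil down to one next-state decision per cell, which can be sketched in a few lines of Python (my own sketch, not any pair's actual code from the day):

```python
def next_state(alive: bool, live_neighbours: int) -> bool:
    """One generation step for a single cell under Conway's four rules."""
    if alive:
        # Rules 1-3: dies of underpopulation (<2) or overpopulation (>3), else survives.
        return live_neighbours in (2, 3)
    # Rule 4: a dead cell with exactly three live neighbours becomes alive.
    return live_neighbours == 3
```

Most of the day's design questions were about everything around this function: how to represent the space, find neighbours, and apply the step to every cell.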

The sessions

My first pair was with someone I’ve known since high school but I’ve never written code with before.  It was the first time either of us had seen the problem so we took some time to warm up.  We (I) wrote really crap code.

My second pair was with someone I vaguely know and who has been TDDing for a long time. The problem space wasn’t new to him so we dug into a side issue – which is the beauty of the problem.  There are so many places to go.  We looked at how to represent an infinite space and started writing tests around that.  This went pretty well.

My third pair was with someone I hadn't met before the day.  This was interesting, as we looked at how to abstract the space that the rules were working against.  We hid the space behind an interface, which was then implemented by the 3d space.  In theory this could have been replaced by a 2d space – or perhaps a network topology – in order to implement the answers to the questions the rules were asking.  We focused in on a couple of the rules, and spent a bunch of time making the implementation beautiful.  I suspect this was the best code I was involved in writing all day.

After each session we had a debrief / retro.  That got some thoughts going, as well as airing some frustrations with what had happened in each session.  It dug into some interesting interplays between fulfilling the tests and implementing the real workings of the code.

After the third session we took lunch and then got back to the coding.  We all agreed upon 2 more sessions.  The first session after lunch was to be silent coding.

I paired with someone else I had never met before.  We were into silent coding so we chose to work on the most defined problem – how to represent the rules – first.  This was the furthest I had got into the rules.  We did some interesting stuff but got stuck on the 4th rule.  Because of what we had chosen, we managed to communicate relatively well through just the tests.

The final session I paired up with someone who had been on my team at a previous company.  We decided to aim for implementing the 4th rule.  And we were going to try and do it without loops.  That was the goal – but we got completely stuck on trying to make the 4th rule vaguely work at all.  We bit off far more than we could chew with a single test and were writing code and attempting to debug it.  It was a great lesson as we went for the acceptance level test and then attempted to write code to meet the acceptance level test but we got lost in the complexities of the code – and had some weird subtle bug somewhere.  This showed us beautifully why you need to build up all the code in baby steps.  One step at a time to get to the bigger acceptance level test.  We were starting to do that at the end.  It was the most frustrating session as we really didn’t succeed in any of our goals.


It took me until the second session to actually switch into a code-writing mode.  This was interesting to me.  Perhaps next time I should do a kata before I start, in the hope that I’ll be thinking better by the time I begin.

I also realised that I generally take a little while to let something emerge before I start to clean up the code with refactoring steps.  We often didn’t get to that level of emergent code before the 45-minute timebox ended.  This makes me wonder if I should try to tighten up that cycle.  Or am I just slow to commit to the structure I am seeing, needing more code before the design becomes visible – and is this problem simply too big to penetrate in the 45 minutes provided?  That makes me wonder whether other katas, with smaller problem spaces, are better suited to letting beautiful code designs emerge.  But then again – that isn’t the point!  We want to be able to do something beautiful in 45 minutes.  Learn how! (I suspect.)

The final session definitely highlighted the risk of relying on acceptance-level tests alone versus building the code up with tests.  That was the biggest learning for me – though I already knew it, I now have an experience to draw on to illustrate it.

I didn’t learn a lot more about TDD than I already knew from the katas I’ve been doing and from trying it at work.  But that is fine, as it was truly great to sit with other like-minded people and practice.

Some things to try in the future – though possibly not in a code retreat format:

It would be nice to find a problem where mocking or other techniques become relevant, so that we can learn from each other how to approach the design of a larger problem.  I’d like to see how others design and build their systems to achieve SOLID, or to do DDD and good design and architecture effectively – but I suspect that is far beyond the scope of a code retreat.  I still look forward to doing katas in different languages – hopefully I’ll get to do a code retreat with that as well, where I can learn some nifty stuff about JavaScript or the like that will turn my thinking around.  I look forward to it!

Something else I’ll try next time is to articulate how I’m feeling during the session.  This was a suggestion from the Design By Exploration session at CodeLab.  I suspect there could be some interesting learning to be had.

I look forward to the next code retreat – with great people and trying to learn more about what great code is.  Thanks to David for making these happen!

Lab: TDD, Pair Programming and Purposeful Practice

The organisation that I currently work for has two really nice practices.  Once a month there is R&R day – research and reading day – where you come to work and research something of your own choosing.  The other, also monthly, is TEK day – a mini internal conference focused on short presentations, with a lab in the afternoon.  Both days involve no client/project work; the agenda is driven completely by the employees.

At the last TEK day I ran a lab on TDD, Pair Programming and Purposeful Practice.  The goal was to give everyone an idea about TDD as well as one way to pair – ping-pong pairing – and to experience it doing some standard coding katas.

Here is the lab – TDD Pairing Practice Lab – if you’d like to read the details.  It gives a very brief overview of what TDD, pair programming and purposeful practice are.  (Very brief… I put it together the night before and I’m not sure it is sufficiently detailed, but there are several links and it gives you a start on the ideas so you can try them out.)

What did I learn?

I may have introduced too many things at once
I wanted to link ping-pong pairing and TDD together as the combination feels obvious and I’d been doing it with one of my team members.  Sadly he was on leave and couldn’t add his perspective at the session.  That said, possibly I should have focussed on TDD alone first, and then run another lab later in the year to add pairing in.  Introducing too many things at once may have confused the process.

It is not just about the devs
Pairing gave non-devs the opportunity to pair with devs and get a better feel for and understanding of what TDD and unit testing are.  I think it was insightful and helpful to them.  It also highlighted that this could be possible in day-to-day work – exposing non-devs to what you’re doing and how, and increasing the communication flow between team members.

Requirements are hard
I had a typo or two.  And while the goal of the simple FizzBuzz requirements was clear in my mind, many questions were raised around when things become a combined FizzBuzz instead of just a Fizz or a Buzz.
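For reference, the behaviour those questions were circling around – the classic FizzBuzz rule, sketched here in Python with a hypothetical function name – can be pinned down in a few lines:

```python
def fizzbuzz(n):
    # Multiples of both 3 and 5 (i.e. of 15) combine into "FizzBuzz",
    # rather than being just a "Fizz" or just a "Buzz".
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Writing the combined case down as an example (15 → "FizzBuzz") would have answered most of the questions up front.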

Stress it’s about the TDD and code
Some groups spent a fair bit of time ensuring the console app worked.  For them the focus was meeting the criterion of printing out the numbers, when the goal was really the process flow of learning TDD.

FizzBuzz took longer than I expected
We took much of the 2 hours on FizzBuzz, which was interesting in itself.  I expected things to go faster, and had therefore included the more interesting StringCalculator kata.

Question 6 wasn’t very meaningful for FizzBuzz
I threw some food for thought into question 6 – how would you write it in a more procedural / object-oriented / functional style – but on such a simple problem the differences might not be easy to see.  With the StringCalculator I think the question would be more relevant.  And of course this had nothing to do with the core learning objectives of TDD and pairing.
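As one possible illustration of the kind of answer question 6 is after – a functional-leaning restyling of FizzBuzz, with hypothetical names, deriving the word from a table of rules instead of branching per case:

```python
# Each rule pairs a divisor with its word; matches concatenate in order,
# so 15 picks up both "Fizz" and "Buzz".
RULES = [(3, "Fizz"), (5, "Buzz")]

def fizzbuzz(n):
    word = "".join(label for divisor, label in RULES if n % divisor == 0)
    # Fall back to the number itself when no rule matches.
    return word or str(n)
```

The table makes extending the rules (say, 7 → "Bang") a data change rather than a code change – the sort of difference in shape the question was hoping to surface.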

No one refactored (much)
It is red-green-refactor, but I didn’t see much refactoring towards beautiful code.  Perhaps I’m in a particular frame of mind at the moment as I’m re-reading Clean Code.  But I suspect I’ll need to do a lab on readability and attempting to be DRY, as well as on when to refactor to new classes, etc.

Would anyone try it?
This was very non-committal.  But I think everyone had fun and came away with better clarity on what it all meant.  Again – combining pairing with TDD and asking “would you give it a try?” met with a lack of enthusiastic response.  Maybe if it had just been one concept the discussion would have been more interesting.

One piece of feedback was that the ping-pong cycle was too short.  We did discuss that the cycle would most likely not be as short in a complicated piece of code, and the examples we were using for FizzBuzz definitely weren’t complicated.  But there was some concern around that.

Open to the discussion
The best part, of course, is that I had an open forum to present and everyone enthusiastically gave it a try.  That is awesome.

What would I change?

Ask more probing questions in the debrief.  Generally plan the debrief more deeply in order to generate more insights from and for the individuals.
–          Who refactored their code?
–          Who thinks their code is the most beautiful they could achieve?
–          Who changed their code while keeping the tests running and refactored in small steps?
–          Was it useful pairing with someone?  To learn?  To understand the requirements?
–          What did you learn?  What was really interesting? What was frustrating?

Do less, help more
–          I spent a little time challenging one group’s implementation of FizzBuzz, which resulted in them refactoring.  I should have done more of that to see who else would refactor, instead of going from “it works” to “next problem”.

I’m sure there is a lot more – including a bunch of the stuff in the learnings above.  But overall I’m pleased with having taken one more baby step in moving my organisation towards at least understanding XP practices, while at the same time continuing to experiment and learn more deeply myself.