A design discussion, some options, and unidirectional models

When writing about discoverability in my last two posts, I came across the code below, which violates that principle.  It uses meta programming in an attempt to avoid repetition and be more DRY.

def base_value_for(key)
    case info.send("#{key}_base")
    when 'value1'
        value1
    when 'value2'
        value2
    else
        raise ArgumentError, "Base keys must be value1 or value2"
    end
end

The caller is then able to look up the three base values by passing a key into the base_value_for method.  The methods one_base, two_base and three_base are known to exist on the info object that receives the send.  The UI has ensured they are each set to either 'value1' or 'value2'.  The current object has a method returning the value for value1 and one for value2.  So this code is essentially allowing the initial base value for a calculation to be configured.

The caller would then call one of base_value_for('one') OR base_value_for('two') OR base_value_for('three').  Calling with any other key that didn't result in a <key>_base method on the info object would blow up.
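A quick sketch of the call sites (hypothetical, following the example above):

base_value_for('one')    # reads info.one_base and returns value1 or value2
base_value_for('three')  # reads info.three_base
base_value_for('four')   # NoMethodError - info has no four_base method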

What is wrong with this?  Take a read here.  TL;DR – if you're in the info class and search the code base for 'one_base', no search hit will tell you that this meta programmed code calls it.  So you may assume it isn't being called and refactor it away.  The principle of discoverability is violated.

This code was written to avoid writing:

def base_for_one_base
    case info.one_base
    when 'value1'
        value1
    when 'value2'
        value2
    else
        raise ArgumentError, "Base keys must be value1 or value2"
    end
end

and repeat for two and three.

An alternative to the above non-DRY implementation is:

def base_for_one_base
    base_for(info.one_base)
end

private

def base_for(info_base)
    case info_base
    when 'value1'
        value1
    when 'value2'
        value2
    else
        raise ArgumentError, "Base keys must be value1 or value2"
    end
end

Now the difference between base_for_one_base and the version for two and three is just a single line.

def base_for_two_base
    base_for(info.two_base)
end

def base_for_three_base
    base_for(info.three_base)
end

This is probably DRY enough.  We could probably stop here.

An alternate design

The info model exists to hold the knowledge about the configuration.  So an alternative design could be to push the decision logic into the info model.

The problem with doing this is that the method is also using the value1 and value2 methods on the parent model.

One solution is to Just Use It.

# info class
belongs_to :parent

def base_for_one_base
    base_for(one_base)
end

private

def base_for(info_base)
    case info_base
    when 'value1'
        parent.value1
    when 'value2'
        parent.value2
    else
        raise ArgumentError, "Base keys must be value1 or value2"
    end
end

This sets up a cyclic dependency.  The Parent model has an info model.  The Info belongs to the parent model.  The info can be loaded and accessed by its parent directly.  The parent can be loaded and accessed by the info model directly.  But what if code were written that accidentally caused calls to recurse between the two models?
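In a minimal sketch, the two-way wiring looks like this (assuming Active Record style associations):

class Parent < ActiveRecord::Base
  has_one :info       # the parent reaches down into its info
end

class Info < ActiveRecord::Base
  belongs_to :parent  # the info reaches back up into its parent
end

Each side can call into the other – parent.info.parent.info… – which is what makes accidental recursion between the two possible.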

On this small scale, it might be okay.  On a larger scale it might be much harder to reason about across several models.  And multiple models may be doing this in a highly interconnected way – and you then may have one large ball of interconnected models – or mud.

Unidirectional models as a design constraint

What would the code look like if we didn't accept cyclic dependencies?  If all interactions with a model were always from a parent to a child?  That could make the code simpler in the long run, as only half the number of linkages would exist – one for the single direction vs. one for each direction.

The code in this example would then look like:

# parent class
def base_for_one_base
    info.base_for_one_base(value1, value2)
end

# info class
def base_for_one_base(value1, value2)
    base_for(one_base, value1, value2)
end

private

def base_for(info_base, value1, value2)
    case info_base
    when 'value1'
        value1
    when 'value2'
        value2
    else
        raise ArgumentError, "Base keys must be value1 or value2"
    end
end

The negative here is that we need to pass in the data from the parent that the child needs.  Perhaps that data shouldn't live on the parent, but rather move to the child.  That might be a better design, and our code is telling us that, but it was something we didn't want to move yet.  However, that knowledge is not exposed to the consumer.  Getting the base value remains possible only via the parent.
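From the consumer's point of view nothing has changed (a sketch):

parent.base_for_one_base  # the consumer still asks the parent;
                          # the hand-off to info is an internal detail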

Simplicity

I favour one-way connections over bi-directional ones as they are simpler to reason about.  They result in cleaner, contained messes that are easier to understand (and potentially replace).

This contrasts with the effort required to resist the keen desire for bi-directional connections that Rails encourages and makes so easy.  SQL data integrity also pulls towards bidirectional connections.

It might mean jumping through some hoops.  But then the design is speaking, and the constraint is possibly telling me that the design of the data might need to change.

I favour smaller, contained messes to make reasoning and understanding of the code easier.  One way to achieve that is through unidirectional models.


Discoverability – a design choice

I recently blogged on discoverability being a naming choice.  I talked about how the choice of name may make changing it later easier or harder.  What would happen if we started using the qualities of dynamic Ruby to do some meta programming – how would that influence a future developer's capability to discover how the code works?  How does this break the model of "Find in Files" discussed in that post?

Let’s start by making it worse

Imagine an Active Record model, Tour, with a column full_price_for_tour in the DB.  In a vanilla Rails project, searching for full_price_for_tour in the codebase may result in no hits at all.  Equally, when looking at a caller that calls full_price_for_tour on an instance of Tour, we will not find any reference to full_price_for_tour in the class file for Tour.  For new Rails developers this can be very confusing.

The programming model that the Active Record implementation gives us is potentially a useful one – dynamically determining the methods from the database and creating them on the object.  But it is harming discovery of how the code works.
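A minimal sketch of that behaviour (assuming a tours table with that column):

  tour = Tour.first
  tour.full_price_for_tour  # works - Active Record defined it from the schema,
                            # so there is no def full_price_for_tour to search for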

So how do we help developers discover where the code is?

In a Rails codebase the annotate gem comes to the rescue.  It annotates the model classes based on what is in the DB for the matching table.  This allows a developer to discover the list of dynamically created methods that they can call on the model object – and hence what data the model object exposes.  This is a Good Thing.

Searching for full_price_for_tour will now have a match in the Tour class file – as a comment in the annotations.  The developer now knows this method is a column in the DB, as the annotations are allowing that discovery.
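The annotation looks roughly like this (abridged - the exact output depends on the schema):

  # == Schema Information
  #
  # Table name: tours
  #
  #  id                  :integer          not null, primary key
  #  full_price_for_tour :decimal
  #
  class Tour < ActiveRecord::Base
  end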

And then someone gets clever

The Active Record implementation leverages the dynamic qualities of Ruby to do something useful for developers.  But dynamic meta programming is not always beneficial.  There are always trade-offs in software design.

Some production ruby code I saw recently implemented something along the lines of:

  def method_missing(method_sym, *arguments, &block)
    # answer any message by looking it up in the wrapped hash
    @hash_obj[method_sym]
  end

This code was written to provide helpers that access a hash's properties via method calls.  It was a convenience method.  And there were some other helper methods defined in this class to work out some standard answers to questions that the hash provided.  On the face of it, this looks like a clever use of Ruby as a dynamic language.
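In use it looks something like this (the class name and keys are hypothetical, and the hash is assumed to have symbol keys):

  payload = PostedPayload.new(parsed_json)
  payload.customer_name  # returns the hash's :customer_name entry via method_missing
  payload.total          # returns the hash's :total entry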

But how does another developer discover all the methods that this class responds to?  We need to find the hash definition to discover that.  In this case, the hash was from some JSON in an HTTP POST.  The simple question of what we can expect this object to answer to was not codified in the code at all, leaving all developers on the team very unsure what the correct method names were.

Going back to the refactoring example.  Assuming this was the Tour class and we were calling full_price_for_tour on this object – how would we find out what the implementation was?  First we'd fail to discover one with a "Find in Files" type search.  Then we would have to start spending time working out why there wasn't one, and what the magic was that made it work.  As a developer this is time wasted.  Even worse when the question is "what is the full interface?".

Another clever thing

Some ruby code I can easily imagine is:

  def method_missing(method_sym, *arguments, &block)
    method_as_string = method_sym.to_s
    split_methods = method_as_string.split('_')
    obj_to_call = split_methods.shift          # e.g. 'fullprice'
    method_to_call = split_methods.join('_')   # e.g. 'for_tour'
    obj = public_send(obj_to_call)
    obj.public_send(method_to_call)
  end

Assuming this code is in the Tour class, we can now call fullprice_for_tour* on Tour instances.  This will then get the fullprice object inside this instance and call the for_tour method on it.

tour.fullprice_for_tour would be the same as tour.fullprice.for_tour.

* I’ve changed the method name to fullprice in order to make the code example simpler.

This kind of code is clever.  But it stymies discoverability again.  When I search for the method fullprice_for_tour I will be unable to find any definition of it anywhere.  I now need to investigate the Tour class file to determine that there is a method_missing handler, and work out that we are actually calling fullprice on that class and for_tour on the FullPrice class.  Only then can I find the code.

The simple model of searching for the implementation is broken by this coding style.  When nothing comes up, searching becomes an iterative process.  Which takes longer.

And then there are Rails delegates

In Rails you can add to the Tour class

  delegate :for_tour, to: :full_price

which enables

  tour.for_tour

to be the same as tour.full_price.for_tour

You can even add prefixes

  delegate :for_tour, to: :full_price, prefix: :full_price

which now enables

  tour.full_price_for_tour

to call tour.full_price.for_tour

This saves a developer from writing

  def full_price_for_tour
    full_price.for_tour
  end

in the Tour class.

We save writing a method definition.  But discoverability is hurt – particularly when the prefix is used.  We now have to do multiple different types of searches to discover where full_price_for_tour is defined.  And we need to remember to do that.  And, as shown above, there could be multiple different ways in which the method could be defined dynamically.

A Hypothesis

The cost of a single discovery should be at least N times lower than the cost of writing the code, where N is the total number of times the code is to be viewed and understood up until it is deleted.

I would hypothesise that the benefit of not having to write a trivial method definition makes discovery of the method take at least twice as long.  In general my first guess will be wrong.  I have to guess at least once more – looking for delegates.  But then again, it might be defined in yet another dynamic way, so I might need to keep on guessing.

The design choice of coding this way results in a codebase that on average takes longer to discover things in.  Which means over time, software will take longer and longer to be delivered.  As compared to the constant cost at the time of typing a little more.

The constant cost of typing the method definition occurs once – when the developer writes it.  The cost of discovery occurs every time a developer needs to understand where the code is defined or from where it is called.
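To make that concrete, a rough back-of-the-envelope calculation with made-up numbers:

  typing_saved   = 30   # seconds saved once by not writing the trivial def
  reads          = 20   # times the method is looked up before it is deleted
  extra_per_read = 55   # extra seconds each reader spends chasing the magic

  reads * extra_per_read - typing_saved
  # => 20 * 55 - 30 = 1070 seconds lost overall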

So is dynamic meta programming ever justified?

For the majority of developers, the core type of work is business applications that mostly do CRUD operations.  Use cases are driven by actual requirements.  Actual requirements are concretely defined.  They should have concrete tests that define them.  Using dynamic meta programming is almost never required.

Sometimes the code is doing the same thing behind those concrete interfaces.  The code may want to be refactored to take advantage of dynamic techniques to reduce duplication and expose a new level of abstraction.  This can be valuable when things are in fact the same.  If the abstraction makes the system easier to change, this is good.  But these changes should be done beneath the concrete definitions of what the system does.  The system is a set of concrete interfaces and use cases that have concrete tests.  That is what allows us to refactor to a more abstract design below the covers.  As the underlying code becomes more abstract, the external interface and the tests calling the interface remain specific.  The abstraction should not be the exposed interface of your average business application.  The abstraction should not make it harder to discover how the system works.

Observations

Many developers value locally optimising the time that they save writing code, while ignoring the amount of time they cause someone else to waste when attempting to work out the implementation at a later date.  Most code is written once and read many times.  On your average business application, optimising for discoverability and understanding is more useful than optimising for the speed at which you can take on the next story.  Optimising for speed to the next story now will result in slowing down later, as time is spent discovering how to change the code in order to implement the new story.

I value discoverability.  Having worked on many large code bases, I know finding stuff needs to be as easy as possible.  I understand others may value terseness more.  Design is always a trade-off.  Understanding what is being traded off is important.  I don't consider using meta programming to reduce the lines of code that I need to write more important than being able to discover and understand that code quickly and reliably later.

If your team uses code like the Rails delegate everywhere, then everyone already knows that all searches to discover a method's usage or implementation should take that into account.  Everyone will be doing it, and perhaps that is fine – despite the increased complexity of the search.  What matters here is consistency and providing an element of least surprise.

If a codebase sometimes uses magic – method_missing, delegates, etc – and sometimes does not, then it becomes more of a guessing game when to search for them and when not to.  That is a real cost to maintaining a codebase.

If I haven’t found the code in my search – is that because it isn’t there or is it because it is using some other magical way in order to be declared?

Don’t use dynamic meta programming unless it is really useful.  Most things are cleaner, clearer and easier to change without meta programming.

If you’re breaking the paradigm, use something else to mitigate the loss.  In the case of Active Record, using the annotate gem to help discoverability mitigates the dynamic implementation that makes discovery of the methods harder.

Think!

Think about discoverability.  Think about the cost of making discoverability harder.  Is there something that can be done to mitigate the cost of this design choice?

All design choices are choices.  Weigh up the pros and cons for yourself and with your team.  Discoverability is just one facet of a good code design, but all too often it isn’t even a factor in the thought process.

Discoverability – A naming choice

When reading or refactoring code, it is important to be able to easily find all callers of a method or to go to the implementation of a method.  In static languages, IDEs make this reasonably easy.  However, if reflection is being used, they may still fail us.  It is simply easier to parse a static language to know the types involved and find the right ones to tell us about.  In dynamic languages that is much harder.

In a dynamic language, a key way to find all references to an object’s method call is “Find in Files”.  This means what we choose to name things may make it harder or easier to change later.  It may also make it harder to discover who is calling the method – or even that the method exists.

A unique name

In order to refactor a uniquely named method on a class

  • search for the method name as a string
  • rename

As we know it is unique, this will work.  In fact, you might be able to run a simple find and replace in files instead of looking at each result individually.

This scenario however is unlikely.  At least it is unlikely that we emphatically know that a given method name is unique.

A more likely scenario

In order to refactor a descriptively named method such as full_price_for_tour on a Tour class

  • search for the method name as a string
  • in each search result – check all references to the method name to see if they are in fact using a Tour object
  • if this is a Tour object call, rename the method call to the new name.

This is more work as we need to look at each result.  Hopefully with a descriptively named method the number of usages will not be too high.  Even if the number of usages is high, hopefully all usages of the name will in fact be on the Tour class.

However, we do need to look at each result, as this process is potentially error prone.  There could be other method definitions using the same name that we need to NOT rename.  Hopefully there are tests that will tell us if we fail.  And hopefully, thanks to the descriptiveness of the method name, the number of callers to change isn't too high and each change is clear.

Sometimes the results are less simple

Now imagine repeating the above exercise, but where the name of the method to refactor is name.  Suddenly we may have a huge number of hits, with many classes exposing a name method for their instances.  Now the ratio of search result hits that are to be updated is no longer almost 100%.  The probability of error is much higher – the greater the number of hits, the more choices that need to be made.

An IDE may help

Immediately the IDE lovers will point out that using an IDE is the solution.  And yes, it could help.  But IDEs for dynamic languages are generally slow and CPU/memory intensive, as the problem is a hard one to solve.  And they won't always be correct.  So you will still need to employ strategies using a human mind.

Naming things more explicitly can help

A more useful model – even if you're using an IDE – is to name things descriptively, without being silly.  Names like tour_name and operator_name instead of name may help someone discover where / how a method is being used more easily.
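A hypothetical illustration of the difference in search friction:

  tour.name       # 'name' also matches User#name, Hotel#name, params[:name], ...
  tour.tour_name  # 'tour_name' matches little beyond Tour and its callers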

Designing code to only expose a given interface can help

Building cohesive units of code that only interact through a well defined interface makes changing things behind the interface a lot easier.  However, it still doesn't stop developers reaching in behind the curtain and using internals that they should not.  So you will still need to check.  Hopefully code that breaks the design like this will be caught before it gets merged into the mainline, but you never truly know without looking.

Reducing scope where possible can help

Knowing the scope of access of the thing you need to change can make changing it easier, as it reduces the area you need to look in.  For example, if something is a private method, then we know that as long as all usages in this class are updated, we are completely free to change it.  Unless someone has used send to call it from somewhere else…  Or we are mixing in a module that uses the private method from there…  Both of which I'd like to think no one would be silly enough to do.
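For illustration, a contrived sketch of that leak (the names are made up):

  class Tour
    private

    def internal_discount
      0.1
    end
  end

  Tour.new.internal_discount         # NoMethodError - private method
  Tour.new.send(:internal_discount)  # => 0.1 - send bypasses privacy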

Testing can help

Obviously, having a comprehensive test suite that calls all implemented permutations of the usage of the method will help to validate a change.  It will hopefully help us discover when we have missed updating a caller.  Or when we've accidentally changed a caller that shouldn't be changed.  However, if there are name clashes with the new name, it is plausible that the suite won't give us the feedback that we expect – so it isn't a silver bullet if you aren't naming things well.

Think! 

Think about naming.  Think about discoverability.  Is there something that will make changing this easier in the future?

Think about the cost of making discoverability harder.  Be aware of the implications of a naming choice.  Is there something that can be done to make it easier to safely refactor away from this choice later?

Can we make things worse?  Yes – discoverability is a design choice that we can make easier or harder.

Discovery – a HTML and JavaScript example

I value ease of understanding in code. I find it helps me to develop more maintainable software that I am confident to change. One of the things that I attempt to optimise is how easy it is to discover how the code works. I ran into an example of this is in a code review recently that I thought was worth sharing. Below is a modified example of what I was reviewing.

The example

Assume the following HTML.

  
  <textarea name='area1' id='area1' class='area-class'></textarea>
  <textarea name='area2' id='area2' class='area-class'></textarea>
  <textarea name='area3' id='area3' class='area-class'></textarea>
  

Along with the following JavaScript in a .js file that is loaded with the HTML page.

  
  function update_counter() {
    // some code to update the element
  }

  $("#area1").change( update_counter );
  $("#area2").change( update_counter );
  $("#area3").change( update_counter );
  

How does the next developer know that there is a counter being hooked up? How does the next developer know how it is being hooked up? How easily could a developer look at the HTML and successfully add a new textarea with the same capabilities?

Based on the above HTML alone, there is no clue that there is a JavaScript hook into it. There may be a JavaScript include in the HTML page that will lead us to the file that is doing the work, but that isn’t something that will be looked at unless there is a reason to look and the HTML isn’t giving a reason to look.

Knowing which file to look at could be even less obvious if you’re using something like the Rails asset pipeline that precompiles and bundles files together as there is unlikely to be a single include for the .js file.

If we knew we were looking for some JavaScript, another discoverability mechanism would be to search for any instance of the textarea’s id or class being used in JavaScript. This could be a little painful.

A simple fix

In this case, a simple fix is to move the initialiser that hooks update_counter to the specific DOM elements onto the HTML page itself. This highlights to a developer which DOM elements are being bound to JavaScript, in the same file where the elements are defined. This provides a breadcrumb for the developer to follow in order to discover how things are hooked up.

When I have used Knockout.js in the past, I have had the binding action run on the HTML page to bind the HTML to the relevant JavaScript model, to help allow discovery of what JavaScript code to look for.

What if it isn’t that simple

What if the code that is being run to do the binding is a lot more complex?

Another way to do this could be with data- attributes.

  
  <textarea data-counter name='area1' id='area1' class='area-class'></textarea>
  <textarea data-counter name='area2' id='area2' class='area-class'></textarea>
  <textarea data-counter name='area3' id='area3' class='area-class'></textarea>
  

and the JavaScript binding becomes

  
  $("[data-counter]").change( update_counter );
  

This code now allows the developer to look at the markup and ask the question “What does the data-counter attribute do?” This will lead them to the fact that there is JavaScript binding to the element.

Even if the developer does not take any notice of the data-counter attribute, copying a row and updating the id to a unique area id and name will still work the same as all the other textareas without the developer needing to think.

The term “Falling into the Pit of Success” is sometimes used to describe situations where the thing developers do by default is in fact the correct decision. This is an example of that.

The additional benefit is that the data-counter could be used on multiple pages across the site and it will work the same.

The actual instance

In this case, the counter is visually obvious, so there are some clues for the developer to try to look for the JavaScript counter and how it ties in. However, the actual instance that I was code reviewing was saving something to local storage per textarea, and then putting it back, in order to implement a feature. This was even more opaque, as it was less obvious that it was happening, or why.

Value discoverability

Always think about how the next person will discover how the code you are writing now fits together. It might even be you in 6 months’ time. How quickly can it be worked out? Is it explicit and obvious and easy to do the right thing? How will they discover what you did?

The faster it is to discover how the code works, the less time will be wasted trying to find out how it works and the more effective you and your team can be.

The same things don’t always work

Over my years of getting to know Scrum and the agile way of working, I have experimented with a lot of things.  I have found things that didn’t work and I have found things that did.  I’ve kept the things that did work and tweaked them as needed.  They were good tools for me and informed my thinking around how I succeeded using Scrum.

Then I moved jobs.  I took my toolset with me.  And I tried to use my same logic and thinking.  And people heard my words and too often for my liking interpreted them to mean something completely different.  It was very educational and taught me very strongly that The Same Things Do Not Always Work.

Context is King.
If you’ve built up context around certain ways of working then people know how you got there as they were there with you.  They understand.  They emote.  And when you bring those ideas fully formed into another organisation that have strangely not lived in your head for the last couple of years, they don’t necessarily immediately understand or emote.  And this isn’t their fault…

My failure was in not understanding that my tool set held the tools I had decided upon by applying the principles that I understood.  In order to use the same tools at a new organisation, I had to first back away a little and bring out the principles again, to see if those same tools would still uphold those principles in this new organisation.

A simple example: Ship when you’re ready
For more than a year I had been working with a team delivering software which was officially shipped on days that weren’t the sprint boundary.  The team were fine with this.  We always aimed to finish before the sprint that we shipped in.  If we could plan it on the boundary we would, but sometimes it didn’t work out that way – and it didn’t matter.  We were completely fine shipping when we were ready – instead of waiting for an arbitrary date boundary for the sprint end.  Everyone was good.  It worked well.  It felt obvious.

Obviously if there was a large amount of work to do and you asked the team to commit, they can’t commit to earlier than a sprint length.  But – if we’re done, we’ll ship – why wait?

And then it didn’t work…
Fast forward to a new organisation.  We have some work to complete.  We have a ship date.  So we discuss, and using the previous pattern from my tool set I suggest – if we’re ready we’ll ship.  If we’re not, we won’t.  Somehow this was interpreted down the line as: we are going to change the sprint length to 1 week and people will deliver by the deadline or else.

The simple “if we’re ready we do it, if we’re not, we don’t” turned into an angsty, sprint-cadence-changing rush to complete.  But that was the organisation’s interpretation of what was, for me, a clear and obvious way of working.

Which made me think
When going into a new organisation – go back to basics.  Say no to all the broken rules – until you know which ones you can safely break without someone abusing the situation.

Another example: Velocity
For several years I had been using velocity and planning stories in a reasonably reliable fashion.  The teams I had worked with weren’t highly passionate about velocity, but were focused on the work at hand, usually knew what the next sprint or two held, and were willing to push their capacity to try to achieve more points in a sustainable way.  The combination of measurement (to aid planning – and replanning every sprint) with knowing what you’re doing for the short term helped ensure that the team was both productive and reasonably predictable.  This was great for building trust with stakeholders who had legitimate concerns about delivery in the past, and it also enabled us to go faster.

And then it felt pointless…
Fast forward to a new organisation. No sizing. No sprints. So I enthusiastically said we should try a little Scrum.  So now we do a little Scrum.  But we don’t use the velocity or plan beyond the current sprint.  And it works.  And no one actually is worried.  And the stakeholders are okay with everything.  And everything is roses.  So why measure?  And why plan?  When you can be agile and make it up each sprint – because the work is still known in a reasonable fashion.  And we’re as successful as is required of us.

The tool set that I brought with me didn’t result in the changes that I anticipated, and in fact possibly adds little value right now.  I suspect many reasons for that.

Which made me think
Scrum isn’t just a framework.  Context remains King.  You can’t just walk in and apply your learning from another context to the new one without understanding the context and working with the people who are in it.  That doesn’t mean you can’t ask questions and make suggestions – but do just that – rather than judging too early.   Agile is about principles – these lead you to the learning and the tool set.  Always go back to the principles and the spirit and validate – particularly when approaching a new team or new organisation with your existing experience.  Unless, of course, you have the remit to cause revolutionary change.  In which case, go wild!

And there is a point
My failures over the last year have fuelled much introspection and learning.  I’ve opened myself up to questioning my own understanding of Scrum and agile.  And I’ve seen very clearly how no one size fits all.  I have found this a powerful learning experience.  Seeing what you know does work not working any more deepens one’s understanding of what it is that you’re really doing.  I’m thankful for these new experiences that have allowed me to grow a deeper understanding of what has worked by understanding why it hasn’t worked as well.

I now hope I keep remembering not to apply my tool set too soon in future, in the hope that I’ll apply it more effectively with a deeper understanding of the actual context.  Or perhaps I’ll find a more universal tool set to apply.

How do you really know enough?

Every now and again we discuss failure to reach a sprint goal – or just how our sizing is going.  Currently a team that I’m working with is tracking whether their sizes were higher or lower than they expected so that they can gather data to learn more and possibly understand how better to size stories.  When these types of discussions come up, inevitably the level of detail specified in the story is held up as a problem.  There is too little.  It hasn’t been thought through and specified enough.  We spend too much time talking about details that we discover when we’re in the implementation of the story.

Every now and again – less frequently, I think – I get the opposite discussion, where developers push back on too much detail because a solution is being presented instead of a problem to solve, and the developers lose sight of the business intent.

I find this a continuous balancing act.  It is a balance that I haven’t perfected between my team and my PO yet.

We recently had the former issue again – “too little information”.  In previous retrospectives we’ve started tracking our sizing.  I’ve also encouraged the teams that if their understanding of an issue changes, we either need to re-estimate – if it isn’t in the sprint yet – or, if the scope has grown on a story in the sprint, then a new story in the next sprint may be needed.  This isn’t often taken up though.

In the most recent retrospective we grouped a bunch of things under the title “understanding”.  The team acknowledged that maybe they weren’t spending enough time in SP1 & 2 to fully understand the work, as the work seemed in theory obvious.  But when digging into the work, requirements emerged, understanding grew, and subtleties that were there all along were fully realised.  This is all too often blamed on the story.  The spec isn’t detailed enough.  The PO didn’t write it down.  I spent too much time in the last sprint asking questions and discussing things.  This is inefficient…

I must be a little fair – our PO is in Europe, so the turnaround time on questions can be longer than with a co-located PO.  This is a challenge.  And sometimes he also goes on leave.  But more on that in a later blog post.

This all made me think of when we were in early Scrum adoption.  I sent an article around that talked about story definition.  There was much amusement in the team around the article’s variation of “a story is a placeholder for a future conversation”, and much ridicule that it was just an excuse to write bad stories and not to spec anything.  We’ve come a long way along the Scrum path since then.  But I still get frustrated every now and again when, if a perfect spec isn’t supplied and someone needs to work it out – either from how it works now, or from discussions with the PO – this is branded bad requirement specification, and someone should supposedly have spent vastly more time working on this earlier so that a conversation didn’t need to happen now.  Alternatively, someone should have documented how it worked in great detail in previous years (though it’d inevitably be out of date by now…).  Thankfully this doesn’t happen too often these days.

So what did we do for the most recent situation?  The team acknowledged that we need to actually use SP1 & 2 a bit more effectively.  We’ve started incorporating three questions for each story that we will be actively asking instead of passively assuming.  These are:
1. What is complicated about this story?
2. What are we going to forget about this story?
3. How is this story going to be tested?

Yes, some of these should be asked anyway – but making them explicit, with the team’s acknowledgement of the problem, will hopefully make them more likely to be asked in more detail.  We’re hoping this will generate additional conversation that will help increase awareness of what the work is and the subtleties that we aren’t engaging with.  We shall see.

Do others out there also encounter the “this story isn’t defined enough” complaint?  (After it was sized and taken into a sprint.)  What are you doing to counteract this argument?  Is it all about using SP1 & 2 more effectively?  And what tips and tricks do you use to enable the conversation to be more effective?

Hopefully one day I’ll find out how best to write a story that would make Goldilocks happy – not too much, not too little, but just right.  And hopefully one day I’ll work with developers who won’t think that the story being a placeholder for a conversation is such a farcical idea.  Though I must admit – it would probably be better received now.