I recently blogged on discoverability being a naming choice. I talked about how the choice of name may make changing it later easier or harder. What would happen if we started using the qualities of dynamic Ruby to do some meta programming – how would that influence a future developer’s capability to discover how the code works? How does this break the model of “Find in Files” discussed in that post?
Let’s start by making it worse
Imagine an Active Record model, Tour, with a column full_price_for_tour in the DB. In a vanilla Rails project, searching for full_price_for_tour in the codebase may return no hits at all. Equally, when looking at a caller that invokes full_price_for_tour on an instance of Tour, we will not find any reference to full_price_for_tour in the class file for Tour. For new Rails developers this can be very confusing.
The programming model that the Active Record implementation gives us is potentially a useful one – dynamically determining the methods from the database schema and creating them on the object. But it harms discovery of how the code works.
So how do we help developers discover where the code is?
In a Rails codebase the annotate gem comes to the rescue. It annotates the model classes based on what is in the DB for the matching table. This allows a developer to discover the list of dynamically created methods that they can call on the model object – and hence what data the model object does expose. This is a Good Thing.
Searching for full_price_for_tour will now have a match in the Tour class file – as a comment in the annotations. The developer knows this method is backed by a column in the DB, because the annotations allow that discovery.
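The annotations look roughly like this (the table layout follows the annotate gem’s output; the column names other than full_price_for_tour, and the types shown, are illustrative):

```ruby
# == Schema Information
#
# Table name: tours
#
#  id                  :bigint           not null, primary key
#  full_price_for_tour :decimal(8, 2)
#  created_at          :datetime         not null
#  updated_at          :datetime         not null
#
class Tour < ApplicationRecord
end
```

A plain-text search for full_price_for_tour now lands on this comment, even though the method itself is still generated at runtime.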
And then someone gets clever
The Active Record implementation leverages the dynamic qualities of Ruby to do something useful for developers. But not all dynamic meta programming is beneficial. There are always trade-offs in software design.
Some production Ruby code I saw recently implemented something along these lines:
def method_missing(method_sym, *arguments, &block)
  @hash_obj[method_sym]
end
This code was written to provide helpers to access a hash’s properties with method calls. It was a convenience. There were also some other helper methods defined in this class to work out some standard answers to questions that the hash could provide. On the face of it, this looks like a clever use of Ruby as a dynamic language.
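A minimal runnable sketch of the pattern, for context. The class name JsonResponse and the hash contents are hypothetical stand-ins, and I have added a respond_to_missing? – good practice whenever method_missing is used, though it still does nothing for “Find in Files”:

```ruby
# Hypothetical wrapper around a parsed JSON hash.
class JsonResponse
  def initialize(hash)
    @hash_obj = hash
  end

  # Any unknown method name is looked up as a key in the wrapped hash.
  def method_missing(method_sym, *arguments, &block)
    @hash_obj[method_sym]
  end

  # Keeps respond_to? honest about the dynamically handled methods.
  def respond_to_missing?(method_sym, include_private = false)
    @hash_obj.key?(method_sym) || super
  end
end

response = JsonResponse.new(full_price_for_tour: 100)
response.full_price_for_tour # => 100
```

The call reads like an ordinary method call, which is exactly why a reader has no hint that no such method is defined anywhere.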
But how does another developer discover all the methods that this class responds to? We need to find the hash definition to discover that. In this case, the hash came from some JSON in an HTTP POST. The simple question of what we can expect this object to respond to was not codified in the code at all, leaving all the developers on the team unsure what the correct method names were.
Going back to the refactoring example: assuming this was the Tour class and we were calling full_price_for_tour on an instance – how would we find the implementation? First we’d fail to discover one with a “Find in Files” type search. Then we would have to start spending time working out why there wasn’t one, and what the magic was that made it work. As a developer this is time wasted. It is even worse when the question is “what is the full interface?”.
Another clever thing
Some Ruby code I can easily imagine is:
def method_missing(method_sym, *arguments, &block)
  method_as_string = method_sym.to_s
  split_methods = method_as_string.split('_')
  obj_to_call = split_methods.shift
  method_to_call = split_methods.join('_')
  obj = public_send(obj_to_call)
  obj.public_send(method_to_call)
end
Assuming this code is in the Tour class, we can now call fullprice_for_tour* on an instance of Tour. This will get the fullprice object inside the instance and call the for_tour method on it.
tour.fullprice_for_tour would be the same as tour.fullprice.for_tour.
* I’ve changed the method name to fullprice in order to make the code example simpler.
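To make the mechanics concrete, here is a runnable sketch of the two classes involved. FullPrice, its for_tour method and the value it returns are all hypothetical stand-ins:

```ruby
class FullPrice
  def for_tour
    100 # hypothetical price
  end
end

class Tour
  def fullprice
    FullPrice.new
  end

  # Splits the missing method name on '_': the first segment names an
  # object on self, the remaining segments name a method on that object.
  def method_missing(method_sym, *arguments, &block)
    split_methods = method_sym.to_s.split('_')
    obj = public_send(split_methods.shift)
    obj.public_send(split_methods.join('_'))
  end
end

Tour.new.fullprice_for_tour # => 100, same as Tour.new.fullprice.for_tour
```

Note that nothing in either class mentions fullprice_for_tour – the name only ever exists at the call sites.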
This kind of code is clever. But it stymies discoverability again. When I search for the method fullprice_for_tour I will be unable to find a definition of it anywhere. I need to inspect the Tour class file to determine that there is a method_missing handler, and work out that we are actually calling fullprice on that class and for_tour on the FullPrice class. Only then can I find the code.
The simple model of searching for the implementation is broken by this coding style. When nothing comes up, searching becomes an iterative process. Which takes longer.
And then there are Rails delegates
In Rails you can add to the Tour class
delegate :for_tour, to: :full_price
so that calling tour.for_tour is the same as calling tour.full_price.for_tour.
You can even add prefixes
delegate :for_tour, to: :full_price, prefix: :full_price
which now enables us to call tour.full_price_for_tour.
This saves a developer from writing
def full_price_for_tour
  full_price.for_tour
end
in the Tour class.
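What the prefixed delegate effectively generates can be sketched in plain Ruby, with no Rails required (the FullPrice class and the value it returns are hypothetical):

```ruby
class FullPrice
  def for_tour
    100 # hypothetical price
  end
end

class Tour
  def full_price
    FullPrice.new
  end

  # Roughly what `delegate :for_tour, to: :full_price, prefix: :full_price`
  # generates at class-definition time. Because the method is generated,
  # the literal text `def full_price_for_tour` never appears in the source
  # - which is exactly what breaks "Find in Files".
  def full_price_for_tour(*args, &block)
    full_price.for_tour(*args, &block)
  end
end

Tour.new.full_price_for_tour # => 100
```

The generated method is perfectly ordinary at runtime; it is only at reading time that it is invisible.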
We save writing a method definition. But discoverability is hurt – particularly when the prefix is used. We now have to run multiple different types of searches to discover where full_price_for_tour is defined. And we need to remember to do that. And as we have seen, there are multiple different ways in which the method could be defined dynamically.
For the saving to break even, the cost of a single discovery needs to be at least N times lower than the cost of writing the code – where N is the total number of times the code will be viewed and understood before it is deleted.
I would hypothesise that the benefit of not having to write a trivial method definition makes discovering that method take at least twice as long. In general my first guess will be wrong; I have to guess at least once more – looking for delegates. But then again, it might be defined in some other dynamic way, so I might need to keep on guessing.
This design choice results in a codebase that, on average, takes longer to discover things in. Which means that over time, software takes longer and longer to deliver – as compared to the constant, one-off cost of typing a little more.
The constant cost of typing the method definition occurs once – when the developer writes it. The cost of discovery occurs every time a developer needs to understand where the code is defined or from where it is called.
So is dynamic meta programming ever justified?
For the majority of developers, the core type of work is business applications that mostly do CRUD operations. Use cases are driven by actual requirements. Actual requirements are concretely defined. They should have concrete tests that define them. Using dynamic meta programming is almost never required.
Sometimes the code is doing the same thing behind those concrete interfaces. The code may want to be refactored to take advantage of dynamic techniques to reduce duplication and expose a new level of abstraction. This can be valuable when things are in fact the same. If the abstraction makes the system easier to change, this is good. But these changes should be done beneath the concrete definitions of what the system does. The system is a set of concrete interfaces and use cases that have concrete tests. That is what allows us to refactor to a more abstract design below the covers. As the underlying code becomes more abstract, the external interface and the tests calling the interface remain specific. The abstraction should not be the exposed interface of your average business application. The abstraction should not make it harder to discover how the system works.
Many developers value locally optimising the time they save writing code, while ignoring the amount of time they cause someone else to waste when working out the implementation later. Most code is written once and read many times. On your average business application, optimising for discoverability and understanding is more useful than optimising for the speed at which you can take on the next story. Optimising for speed to the next story now results in slowing down later, as time is spent discovering how to change the code in order to implement the new story.
I value discoverability. Having worked on many large codebases, I know that finding stuff needs to be as easy as possible. I understand that others may value terseness more. Design is always a trade-off, and understanding what is being traded off is important. I don’t consider using meta programming to reduce the lines of code I need to write more important than being able to discover and understand that code quickly and reliably later.
If your team uses code like the Rails delegate everywhere, then everyone already knows that every search for a method’s usage or implementation should take that into account. Everyone will be doing it, and perhaps that is fine – despite the added complexity of the search. What matters here is consistency and the principle of least surprise.
If a codebase sometimes uses magic – method_missing, delegates, etc – and sometimes does not, then it becomes more of a guessing game when to search for them and when not to. That is a real cost to maintaining a codebase.
If I haven’t found the code in my search – is that because it isn’t there or is it because it is using some other magical way in order to be declared?
Don’t use dynamic meta programming unless it is really useful. Most things are cleaner, clearer and easier to change without meta programming.
If you’re breaking the paradigm, use something else to mitigate the loss. In the case of Active Record, using the annotate gem to help discoverability mitigates the dynamic implementation that makes discovery of the methods harder.
Think about discoverability. Think about the cost of making discoverability harder. Is there something that can be done to mitigate the cost of this design choice?
All design choices are choices. Weigh up the pros and cons for yourself and with your team. Discoverability is just one facet of a good code design, but all too often it isn’t even a factor in the thought process.