Good code is a great thing. It’s like money in the bank. It gets results. Developers like to work with it. Good code is easy to add features to. Good code is easy to understand. Everything’s better when the code is good.
While working with junior developers, I’ve found myself trying to explain what we are striving for when coding.
In honor of Joel Spolsky’s ultra-simple hiring criteria, I have narrowed my view of good code down to two basic criteria that form an umbrella for most good coding practices.
Good code is:
- Code that gets the job done.
- Code that is maintainable.
Getting the philosophy down to two criteria makes it portable. These two criteria easily fit in your head while you work. They easily come to mind when you are reviewing what you’ve written.
The first criterion, “gets the job done”, is essential. You need to keep working the code until the job is getting done.
The second criterion, “is maintainable”, is a mixed bag of properties that affect how hard or expensive it is to keep the code running and adapt it for future needs.
When you look at these two criteria together, they are opposing forces. Criterion one, “gets the job done”, is an additive force. Concentrating on getting the job done, we add code unchecked. Criterion two, “is maintainable”, is largely subtractive. While concentrating here, we push for simplicity, agility, and reducing the mental load required to work with the code.
Code that gets the job done
If the code is a math function, the answer should be right. If we want the answer quickly, then the code should be fast. This is pretty straightforward. Get the job done.
Automated Acceptance Tests
In practice we need to prove that the job is done. One way to do this is to have a tester or product owner look at the result and give a thumbs up. However, looking closely at results may be interesting once, but this validation will need to be done again and again with each change and deploy. Repetitive tasks like this are expensive, error prone, and steal the joy out of making software.
Yet, validation is part of getting the job done.
So, I believe that automated tests are an integral part of good code. A good code base performs its own validation.
Many people hold this same view and we hear it all the time. Yet it’s a lesson I see being learned again and again. And it’s a point I’ll hit in more depth in another post.
The testing scope that is most important to focus on is the acceptance criteria scope. If the acceptance criteria are well defined, then we don’t care about anything else. There are exceptions to this, like when you can reduce overall test complexity by testing sub-components, but in general, testing outside of the acceptance criteria is dancing that doesn’t necessarily need to be done, so why do it? If your tests prove the acceptance criteria are met, then there is nothing left for testers to do. This is great, because it allows your testers to move into a quality assurance role. They become consultants for your developers, predicting edge cases and common failure modes to include in the acceptance criteria and tests. This is good for requirement/story quality as well as the code.
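To make this concrete, here is a minimal sketch of acceptance-criteria-scoped tests in Python. The function `calculate_shipping` and its acceptance criteria are hypothetical, invented purely for illustration:

```python
# Hypothetical acceptance criteria: orders over $50 ship free;
# all other orders pay a flat $5 shipping fee.

def calculate_shipping(order_total):
    """Return the shipping cost for an order."""
    return 0 if order_total > 50 else 5

# Each test maps directly to one acceptance criterion, nothing more.

def test_orders_over_50_ship_free():
    assert calculate_shipping(50.01) == 0

def test_orders_of_50_or_less_pay_flat_rate():
    assert calculate_shipping(50) == 5
    assert calculate_shipping(0) == 5
```

Run under a test runner such as pytest, these tests re-validate the acceptance criteria on every change, so no tester has to re-check them by hand.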
Code that is maintainable
Like it or not, our code is almost never fire and forget (unless you’re a consultant, but that’s for another post).
We will be back in the code to fix bugs.
We will be back in the code to understand what the code will do in new case X.
We will be back in the code to add features.
Maintainable code is organized and reads well.
It’s also easy to extend and manipulate without breaking it.
These are very good things for productivity and our sanity.
However, these are also things that get sacrificed as we add to a code base. Due to time constraints we cut some non-essential corners. As we address edge cases our code model starts to strain and we introduce hacks.
Maintainability is something that must be fostered over time. And life gets bad if you don’t foster it. Hopefully we are reflecting on and tracking the pain caused by maintainability issues. This pain is the driving factor for improving maintainability, and pain is often poorly measured. There is a very real cost incurred by maintainability issues. While they aren’t on the business’s product backlog, the business will be paying for them. It’s in everyone’s interest to get them addressed, so either make the business case for getting maintenance-related development onto your backlog or build maintenance improvements into development estimates.
The alternative is a dark future (another post).
Following are some of the factors that determine maintainability.
Automated Acceptance Tests
Automated testing is a must for maintainability. At Facebook they say, “Move fast and break things”. That’s awesome if you can tell when things break. Automated tests are your safety net.
And with a safety net you can do dangerous things fast. If the coverage is good you can refactor critical parts of code with confidence. You can deploy with confidence.
The key is automation. Anything that a tester needs to do to feel comfortable about a deploy, put it in automation. If any testing is left as a manual process, this raises the bar for a deploy. More difficult deploys mean there is pressure to deploy less often. Deploying less often means deploys will have more accumulated changes and therefore more risk. Over the life of a product these factors compound. Over the life of the code, a manual testing process costs many, many times what it costs to do once.
While push button automation of tests is good, Continuous Build and Continuous Deployment are where you want to be. Issues surface earlier. They lower the manual steps in bringing a product to deployment.
These benefits are reflections of the benefits of traditional automated testing. Continuous Build and Continuous Deployment lower the cost of code ownership and they allow a project to scale in complexity without growing the development and operations team headcount as fast.
“Everything should be as simple as it can be, but not simpler”
Paraphrased from Einstein
Simplicity is the main subtractive force in the maintainability camp. “Get the job done … and nothing more.” Remove what isn’t needed. If there is a simpler way to get a job done, it’s probably better.
Simplicity pays off big. Simple code is easier to understand, has fewer lines of source to contain bugs, and is easier to test. There are many temptations while coding to make things more complex, and they are rarely worth it.
We commonly drop simplicity for extensibility, to use a cool pattern, for optimization, or in preparation for a future capability. This loss of simplicity should only be paid if there is a payoff now.
For some reason we always want to slip code for future functionality into a current task. However, code for the future, or code for optimization before it’s needed, is usually toxic. Unused functionality is usually not vetted as well, so it’s dangerous. Sometimes we never need the functionality. Sometimes the target has moved, and we did work that we didn’t need and that now takes work to remove. Even if the functionality you coded ahead of time is eventually needed, it would have been just as easy to add it later as it was now. Plus, we know more about the problem in the future, so our early implementation may need redoing anyway.
Making code DRY (don’t repeat yourself) is a common target for simplification. Identifying duplicate code and creating a single version that is used in place of the duplication is great for maintainability. If you want to change the code, you do it in one place. Any time related logic is separated, the threat of breaking something goes up.
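As an illustration, here is a sketch in Python of duplicated logic and its DRY refactoring. The discount rule and function names are invented for the example:

```python
# Before: the same discount rule is repeated in two places,
# so a rule change must be made (and remembered) twice.

def invoice_total_duplicated(prices):
    total = sum(prices)
    return total * 0.9 if total > 100 else total  # bulk discount

def quote_total_duplicated(prices):
    total = sum(prices)
    return total * 0.9 if total > 100 else total  # same rule, copied

# After: one function owns the rule; both callers use it.

def apply_bulk_discount(total):
    """Apply the 10% bulk discount to totals over 100."""
    return total * 0.9 if total > 100 else total

def invoice_total(prices):
    return apply_bulk_discount(sum(prices))

def quote_total(prices):
    return apply_bulk_discount(sum(prices))
```

Now a change to the discount rule happens in exactly one place, and invoices and quotes can never drift apart.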
At the end of the day, code should be simple in your head. It’s not about lines of code or any other standard. If the section of code you are working on is easy to put in your brain and think about, then life is good.
Most of what follows helps code fit in your brain.
Architecture
Architecture is the arrangement of software components, the chosen technologies, and the flow of a given system or program.
Choice of architecture affects maintainability because it determines how easy the application is to understand. It determines how much work it takes to implement a feature and how the code will be organized.
Choice of architecture also affects whether performance issues will be looming when you scale, and how easy the issues are to fix.
Finally, choice of architecture affects ease of testing. Some architectures are built with testing in mind, so tests can easily integrate and validate behavior.
Separation of Concerns
Separating concerns allows you to focus on one cohesive understandable part of the application at a time. A single part of code will only do related tasks toward a single goal at a constant level of detail. The lower level details are hidden. The higher level context is hidden. The code part doesn’t do unrelated things.
Separation of concerns is essential to keeping code understandable and therefore maintainable. It decouples the parts of your code. It keeps one module’s complexity or hacks or magic from bringing down any other part of the application. Without it, complexity grows exponentially and we end up with an untouchable code base that no one fully understands.
Good object-oriented design is great and separates concerns well. However, sometimes just chopping a long piece of logic into subroutines gets the job done in a way that’s easiest to understand.
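A minimal sketch of the subroutine approach (the report example is hypothetical): a procedure chopped into parts, each working at a single level of detail with the lower-level details hidden:

```python
# One concern per function: each works at a single level of detail.

def load_records(raw_lines):
    """Parse raw 'name,amount' lines into (name, amount) pairs."""
    return [(name, float(amount))
            for name, amount in (line.split(",") for line in raw_lines)]

def summarize(records):
    """Total the amounts; the parsing details are hidden below us."""
    return sum(amount for _, amount in records)

def build_report(raw_lines):
    """The top level reads like a table of contents."""
    records = load_records(raw_lines)
    total = summarize(records)
    return f"{len(records)} records, total {total:.2f}"
```

Reading `build_report`, you never need to hold the parsing or summing details in your head at the same time.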
Naming and Good Metaphor
The smoothness and robustness of well named code has a magnifying effect on all other coding tasks. And we all have someone on our team who is a terrible namer.
Naming is a strangely difficult part of software development. You’ll be coding smoothly along and then hit a wall trying to name a simple temp variable, or picking a class name, or even worse an application name. Everyone suffers over naming, and it’s a great source of “arguments you wish you never had” between developers.
Code with good naming tells a story. Its bones tell you what they do without explanation. For example, it’s much easier to work with a variable named “unfilteredResult” than “temp8”. Encountering “temp8” requires you to look back at the code to figure out what is assigned to it. It’s easy to misinterpret what “temp8” even represents. You might assign some result to “temp8” that needed to be filtered. If you blink, you might miss the filtering during initial coding, and the odds are much stronger that you’ll miss it in the future when doing maintenance or adding functionality. With a variable named “unfilteredResult”, it is far more apparent if we forget to filter it before using it. When a variable, function, class, or application is named after what it is or what it does, your logic sings out from the code.
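A small sketch of the contrast (the scores and filtering rule are invented for illustration):

```python
# Poorly named: nothing warns you that filtering is still required.
def top_scores_unclear(scores):
    temp8 = sorted(scores, reverse=True)
    return temp8[:3]  # invalid entries slip straight through

# Well named: the name itself flags the remaining work.
def top_scores(scores):
    unfiltered_result = sorted(scores, reverse=True)
    valid_result = [s for s in unfiltered_result if s >= 0]
    return valid_result[:3]
```

In the second version, passing `unfiltered_result` anywhere downstream would read as obviously wrong, which is exactly the kind of story good names tell.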
Good metaphor supports code like naming but at a higher level. For instance look at the Observer pattern. When taking on this metaphor, the story is set. This is observing that. That is doing some action. The actions must have some information about them. There must be a way to control who is observing whom. When the metaphor is chosen many parts of the system are implied as well as the interactions between them. And since the metaphor comes from the natural world, it fits in the brain so much better. The code is much easier to understand and working with it becomes intuitive.
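Here is a minimal Python sketch of the Observer metaphor. The class names are generic and not taken from any particular library:

```python
class Subject:
    """'That': the thing being observed; it does actions."""

    def __init__(self):
        self._observers = []

    def attach(self, observer):
        """Control who is observing whom."""
        self._observers.append(observer)

    def detach(self, observer):
        self._observers.remove(observer)

    def notify(self, event):
        # Tell everyone observing us what just happened.
        for observer in self._observers:
            observer.update(event)


class Logger:
    """'This': one observer; it reacts to the subject's actions."""

    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)


# The metaphor implies these parts and their interactions:
subject = Subject()
logger = Logger()
subject.attach(logger)
subject.notify("door opened")
```

Once you know the metaphor, you can predict the whole shape of this code before reading a line of it.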
Some useful metaphors are very abstract, but the bread and butter of metaphor in code comes when you model your code after the real world problem that the code is built to solve.
If you are writing code to control a real world elevator, you have code objects for the elevators, the floors, the buttons, etc. The things you can do in code to the code objects mimics real life objects and actions.
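Sketched in Python (a toy model for illustration, not real controller code):

```python
class Elevator:
    """Code object mirroring the real-world elevator."""

    def __init__(self, floors):
        self.floors = floors
        self.current_floor = 0
        self.doors_open = False

    def move_to(self, floor):
        # Mimic the real actions: close doors, travel, open doors.
        if floor not in self.floors:
            raise ValueError(f"no floor {floor}")
        self.doors_open = False
        self.current_floor = floor
        self.doors_open = True


class Button:
    """Pressing a floor button summons the elevator, as in real life."""

    def __init__(self, elevator, floor):
        self.elevator = elevator
        self.floor = floor

    def press(self):
        self.elevator.move_to(self.floor)


elevator = Elevator(floors=[0, 1, 2, 3])
Button(elevator, 2).press()
```

Because the objects and actions mirror the physical system, anyone who has ridden an elevator already understands the code’s structure.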
Standards and Frameworks
Standards, patterns, and common frameworks are great, because they come with built-in support. There is free documentation. There’s a community. Developers may have previous experience with a standard, so the standard is already in their head and they can work more fluidly.
Some standards are terrible, but usually if a standard has become popular, it is because it has good code properties. It’s well designed and people enjoy working with it.
Finally, the code needs to flow smoothly in your brain.
A good piece of code is a pleasure to read. It’s no more complex than it needs to be anywhere you look. The parts are organized logically. With concerns separated, you don’t have to hold multiple scopes and contexts in your head at once. It uses names and metaphors that are intuitive and maybe familiar. It says what it does right in the code. The code is so clear that there is little need for documentation (why repeat yourself?).
Readability comes from the high-level concerns right down to the little details like whitespace, indentation, and line breaks. It comes down to the organization of private and public fields and methods. It comes down to whether you declare variables ahead of time or inline. It comes down to method length and whether to return a method result inside a conditional. For each of these topics there are popular conventions and good reasons to break them. In my mind what matters most is the readability. If you break a convention and your team can read it better, then go for it.
All code starts off rough. We get a piece of code to first work logically, but it then needs to be massaged to meet standards. This massaging is known as refactoring and it is a vital skill for making code clear and maintainable. Most of the refactoring actions are related to readability.
Conversely, code that is not very readable should be suspect, even if produced by a guru. Code with poor readability was not a loved child. Poor readability hides bugs and promotes bugs. It shows poor craftsmanship.
But at a high level, the “gets the job done and is maintainable” check for code is a great starting point for discussion in code reviews, before deploys, etc.
I want to know what you think.
Email me your thoughts at firstname.lastname@example.org