I first heard this phrase from Eric Elliott in one of our 1:1s at Po.et. It was coined by Kent Beck, creator of Extreme Programming and one of the seventeen software developers who signed the Agile Manifesto.
It's a very simple rule of thumb that will take you very far in software engineering.
"For each desired change, make the change easy (warning: this may be hard), then make the easy change."
I had been applying this approach to software development for a decade before hearing this phrase, yet when I did, something clicked inside me. Never before had I found words that described this technique so succinctly.
In practice, it boils down to cleanly separating code changes, refactoring first and implementing afterwards. If applied correctly, in some cases it can lead to delivering a change with a single line of code, or even a configuration change.
This method is at the very heart of Agile Software Development.
Back in 2001, when the manifesto was signed, git hadn't been invented yet (initial release 2005), AWS didn't exist (launched 2006), GitHub wasn't even an idea (founded 2008) and dial-up still pretty much dominated the world. You can guess continuous delivery and daily code reviews weren't even a dream back then; these ideas seemed pretty extreme at the time.
Fast forward almost two decades and all of those things have become standard in the industry, to the point we can implement and deliver the two steps of the method all the way from our computers to production in a matter of hours.
Today you can sign up for a GitHub account to host your source code and streamline code reviews; a Heroku account to run your application, automatically deploying merges to master for your API; a Netlify account for your frontend, which will even give you deploy previews; and a CircleCI account to run your linter and automated tests for every PR, even preventing them from being merged if a test fails. All of that for free: sign-up takes a matter of minutes, and wiring it all together takes a few hours to a couple of days.
Now, more than ever, this method is a pillar of successful software.
But what does it look like in Real Life Programming™? Let's dig in!
A Real Case
Here's an actual working example of a big-ish refactor I undertook for Po.et.
The task: migrate all cases of encryption and decryption in our public API away from Vault.
The first step was to do some research and decide what to replace Vault with. This part had its challenges, but it could be tackled independently, initially building a small PoC in isolation, separate from the API, and later adding the results to the actual codebase, as two pure functions in a helpers/crypto helper file.
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto'

const algorithm = 'id-aes256-GCM'
const ivLengthInBytes = 12 // GCM's recommended IV length is 96 bits = 12 bytes

// Encrypts text with AES-256-GCM, bundling ciphertext, auth tag and IV into one hex string
export const encrypt = (text: string, key: string): string => {
  const iv = randomBytes(ivLengthInBytes)
  const cipher = createCipheriv(algorithm, Buffer.from(key, 'hex'), iv)
  const ciphertext = cipher.update(text, 'utf8', 'hex') + cipher.final('hex')
  const authTag = cipher.getAuthTag().toString('hex')
  return ciphertext + '|' + authTag + '|' + iv.toString('hex')
}

// Decrypts a string produced by encrypt, verifying the auth tag in the process
export const decrypt = (ciphertextWithAuthTagAndIv: string, key: string): string => {
  const [ciphertext, authTagHex, ivHex] = ciphertextWithAuthTagAndIv.split('|')
  const decipher = createDecipheriv(algorithm, Buffer.from(key, 'hex'), Buffer.from(ivHex, 'hex'))
  decipher.setAuthTag(Buffer.from(authTagHex, 'hex'))
  return decipher.update(ciphertext, 'hex', 'utf8') + decipher.final('utf8')
}
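For illustration, a round trip with these helpers looks like this (generating a throwaway key here; in the real application the key would come from configuration):

import { randomBytes } from 'crypto'
import { encrypt, decrypt } from './helpers/crypto'

// AES-256 needs a 32-byte key; the helpers expect it hex-encoded
const key = randomBytes(32).toString('hex')

const secret = encrypt('super secret private key', key)
console.log(secret)               // '<ciphertext>|<auth tag>|<iv>', all hex
console.log(decrypt(secret, key)) // 'super secret private key'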
So far so good: the complexity is completely isolated, no risk of breaking anything. But the new code isn't being used yet!
The next step was to find out where we'd need to replace Vault code with helpers/crypto code.
When I started working on this task I knew we kept all the code responsible for interacting with Vault in a single file and, god bless TypeScript and WebStorm (or any modern IDE), I could just Ctrl+Click on the Vault.encrypt and Vault.decrypt function declarations to find all references to them.
What did I find? Encryption and decryption functions were called in ~10 different places. That's not that bad, but looking at the code I started feeling demotivated: so many places to refactor, plus some entanglement with business rules, made it all a bit overwhelming.
I started making those ~10 code changes in my mind, as if walking 10 different roads at the same time. Each had a possible turn here, a branch there. The change should be relatively simple, I reasoned, yet I felt uneasy. What if I made a mistake in one of them, would I need to walk every path again? What if I was forgetting something? The combinations were too many to predict, and I didn't want to wind up having to tell a user "Hey, we did an oopsie and lost your private key. Our bad!".
So I asked myself: what can I do to make this easier to reason about? One question led to the next:
- Can this be refactored without changing any actual functionality, so everything stays the same but we wind up with code that's simpler to migrate?
- Why is encryption/decryption being called in so many places?
- Why are we encrypting in the first place?
To increase security.
- How are we increasing security with encryption?
By not leaving the plaintext private key in the database, nor sending unencrypted plaintext to the database in transit.
- So we only care about securing the database in transit and storage? Does this make sense?
Doesn't matter. Changing that would be a feature, not a refactor. It's out of scope right now. It would require its own research and implementation, and would probably benefit from a simplification of code anyway.
Alright, so it's a database matter then.
- Do we already have a component that's completely responsible for database interactions?
Yup, the AccountDao.
Cool, so maybe encryption/decryption of account fields should be a responsibility of the Dao, acting as a sort of middleware between the database and the business logic.
- Is there a similar thing going on already with the AccountDao?
Yes, it's responsible for pseudo-serializing and pseudo-deserializing objects to/from the database: it transforms the Id field, which is a UUID, between the standard UUID string and the binary format used for storage in the database (binary being more performant in both speed and storage space).
So there's a precedent of a similar functionality being implemented in the AccountDao.
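To make that precedent concrete, here's a simplified sketch of what the Dao-level translation could look like once encryption joins the UUID conversion at the database boundary. All names here are illustrative, not the actual Po.et code:

import { encrypt, decrypt } from './helpers/crypto'

// Hypothetical: in the real codebase the key comes from the app's configuration
const encryptionKey = process.env.ENCRYPTION_KEY!

interface Account {
  readonly id: string         // standard UUID string, as the business logic sees it
  readonly privateKey: string // plaintext inside the application
}

// UUIDs are stored as binary, which is more performant in speed and space
const uuidToBinary = (uuid: string): Buffer => Buffer.from(uuid.replace(/-/g, ''), 'hex')
const binaryToUuid = (binary: Buffer): string =>
  binary.toString('hex').replace(/^(.{8})(.{4})(.{4})(.{4})(.{12})$/, '$1-$2-$3-$4-$5')

// Pseudo-serialization at the database boundary: after the refactor, this is
// the only place in the codebase that encrypts or decrypts account fields
const toDocument = (account: Account) => ({
  id: uuidToBinary(account.id),
  privateKey: encrypt(account.privateKey, encryptionKey),
})

const toAccount = (document: { id: Buffer, privateKey: string }): Account => ({
  id: binaryToUuid(document.id),
  privateKey: decrypt(document.privateKey, encryptionKey),
})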
That line of questioning is basically a Root Cause Analysis for the problem "This feature feels overly complicated", which led to discovering the cause: it had been implemented with the wrong architecture in the first place, so changing it was harder than it should have been.
The Solution
Make the change easy by refactoring first, so encryption/decryption is only called from one place instead of ~10, then make the easy change, modifying a single line of code.
Making The Change Easy
Once you have found what was making the change hard and made a plan, it's time to start working on the refactor.
This half of the method can be implemented and deployed smoothly and stress-free, since it's basically moving code around, not changing the behaviour of the application whatsoever. If you have a good set of tests that assert the output of the application for many different inputs, the tests passing should be a good enough indicator that the refactor is mergeable and deployable.
If the test suite doesn't give you enough confidence to merge the refactor then it's not accomplishing its goal. In this case it's best to start by adding tests or improving the existing ones before delving into the refactor.
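For the crypto helpers above, for instance, a cheap round-trip check buys a lot of that confidence. A minimal sketch, using Node's built-in assert (the actual suite can use whatever test framework is already in place):

import * as assert from 'assert'
import { randomBytes } from 'crypto'
import { encrypt, decrypt } from './helpers/crypto'

const key = randomBytes(32).toString('hex')

// Round trip: anything we encrypt must decrypt back to the original
for (const text of ['', 'hello', 'a-private-key-looking-string', '♥ unicode ♥'])
  assert.strictEqual(decrypt(encrypt(text, key), key), text)

// Tampering with the auth tag must fail loudly, courtesy of GCM
const [ciphertext, authTag, iv] = encrypt('secret', key).split('|')
const flippedTag = (authTag[0] === '0' ? '1' : '0') + authTag.slice(1)
assert.throws(() => decrypt([ciphertext, flippedTag, iv].join('|'), key))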
It's also important to focus the refactor on what's needed to make the change easy, to keep the feedback loop as short as possible. Potential changes to the architecture that would be improvements but don't directly impact the change at hand should ideally be planned and implemented separately.
Making The Easy Change
If the previous step was done correctly, by now the desired change should feel obvious and intuitive, both to implement and to review.
In our real life case the actual change is made in a single line of code:
const encryptApiTokens = async (tokens: ReadonlyArray<Token>): Promise<ReadonlyArray<Token>> =>
  // Promise.all(tokens.map(tokenObjectToToken).map(Vault.encrypt, Vault)).then(tokensToTokenObjects) // old line
  tokens.map(tokenObjectToToken).map(encryptWithKey).map(tokenToTokenObject)
Basically, .map(Vault.encrypt, Vault) changes to .map(encryptWithKey).
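encryptWithKey itself isn't shown in the diff, but it's presumably little more than the new pure helper with the configured key already bound; something along these lines (a hypothetical sketch, not the actual code):

import { encrypt } from './helpers/crypto'

// Hypothetical: the key is loaded once from the app's configuration
const encryptionKey = process.env.ENCRYPTION_KEY!

// Bind the key so call sites only need to pass the plaintext
const encryptWithKey = (text: string): string => encrypt(text, encryptionKey)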
The implementation is so simple that we can focus on whether everything's ready for this change, rather than whether this change is correctly implemented.
Return on Investment
It took about 1 hour to go over this thought process and come up with the right architecture and refactor plan.
Actually implementing the refactor took probably another hour or two, and was pretty straightforward thanks to having already thought out a plan and being confident in it, and to a good battery of unit and integration tests.
It may feel tempting to avoid this upfront investment of time and effort, especially if we're feeling pressured to deliver something quickly by a boss, the business in general or just our paranoid selves. We may feel the need to ask for permission to make such an investment, since intuition may make us feel that we're taking a higher risk or delaying the roadmap on a whim, when in reality the exact opposite is true.
Developers do not have to justify testing and refactoring to management, because those disciplines increase efficiency and productivity. Indeed, it is the lack of such disciplines that ought to require justification. I doubt any could truly be found.
I know, from the many times in the past when I didn't follow this approach and jumped straight into implementing a feature or "big" refactor, that the other road would have taken much more time and mental energy, and would have left me with greater uncertainty about possible bugs or code paths I may not have thought of.
Code review also benefits greatly, since the task can be broken down into smaller steps and delivered as single-purpose PRs that are easier to reason about.
For example, in our case, instead of tackling the whole migration in a single Pull Request, we first open one that moves the responsibility of database field encryption to the Dao, and then another that actually replaces the encryption method, to the point that the actual change is implemented in one commit that changes a single line of code.
Conclusion
In this real-life case what we actually discovered is that there was an underlying issue with the architecture of the application. This issue only rose to the surface when we had to deal with related code, which led us to question the whole design.
This is not a special case, though. It's completely normal. It is neither a good nor a bad thing, it just is.
In real-life work, there are dozens of forces pulling and pushing us in different directions all the time. Sometimes there are deadlines. Sometimes, meetings. Sometimes we can't give our undivided attention to a single task. Software is built by humans. Software companies are built by humans. We make mistakes.
And all software is alive, in constant flux. No matter how much we plan, half of the scope discovery unavoidably happens during implementation.
Perfect software only exists in laboratory conditions and is never used in production.
The ‘Make the Change Easy, then Make the Easy Change’ approach allows us both to improve how we tackle challenges and to evolve the architecture of the whole application, and this is why it is such a good tool to have in your toolbelt.