Lean Backend Miniaturization
Instead of inflating infrastructure complexity, make things smaller and easy to reason about
· 8 minute read
When it comes to backend development, the typical playbook these days looks like this:
- while bootstrapping, settle for a technology where you can hire as cheaply as possible
- then use whatever technology your team is already comfortable with
- if you need to handle more load, throw more servers at the problem
- with more and more servers, maintaining that zoo gets more complex
- start hiring for some "DevOps" people to keep things more sane
- use some "Infrastructure-as-code" tools for provisioning and deployment
- multiply efforts once the inevitable rewrites occur and stuff has to be kept running in parallel with load switches, blue/green deployments, ...
- actual feature deployment velocity plummets towards zero, while infrastructure costs go up and up and more people are needed just to keep things running
- discussions about infrastructure provider switches for cost reduction arise, further amplifying the point above
Sound familiar? When this story unfolds, everyone involved gets frustrated and feels helpless, because this seems to be just the way it is. But actually, there is a better way.
The initial point of failure is choosing technology for easy/cheap hiring. Let me tell you a secret: developers can learn new technologies from scratch in mere weeks. You want to build upon that for years. When a better technology choice allows you to go further faster, you are trading a few weeks of initial learning curve for months or years of leeway before the scaling pains hit you.
Even better: using a more advanced technology, you can tap into a pool of highly skilled developers who love what they do. And motivation and satisfaction among your existing developers also rise when they are allowed to learn new and better things. All of this adds up to significantly faster and better product development!
Now to the more technical aspects.
How to Backend
I highly recommend reading and understanding the Blub Paradox. Yes, you can technically build anything with anything. Why not build a neural network using PHP/MySQL? It is possible, after all! The point is that there are far better tools for any specific job, yet most people remain completely ignorant of this or underestimate how much of an impact technology choice has on a business.
But what actually is a "right" tool? Whatever your developers are currently comfortable with? Let's avoid that fallacy now and focus on typical "backend software".
Backend code is typical data-crunching code. Fetch data from somewhere, push data somewhere else, persist/load it, process it. It requires computing power, storage and network. The amount of business logic is often surprisingly small here; most actual working hours go into tuning, optimization and scaling. If you picked PHP or Ruby to build a backend service, you will very quickly end up implementing various caching techniques, load balancing between multiple instances of the same service, and complex deployment workflows.
Imagine your backend technology were so fast and simple that you never needed to think about caching or spawning multiple instances. Deployment as simple as copying a file. No thought given to dependency management on the server. Confidence that things will work as intended, without carefully watching traffic and re-testing everything after each deployment. Sounds too good to be true? Actually, this stuff exists.
What now follows is the result of many years of experience, always having the luxurious position to test promises/results on production right away.
Enter Rust, a modern systems programming language. Emotional topics like syntax or cargo culting aside: it is the perfect tool for all things backend! Why?
- it is statically compiled, with support for static linking and cross-compilation. You can compile something, get a single program file as output, copy it onto a pristine server and just run it (`./my-service`). Deployment complexity is nearly non-existent.
- it is extremely fast and efficient, like C/C++. Handling hundreds of thousands of requests on a single dirt cheap VPS server is quite normal.
- it has a powerful static type system that guarantees freedom from data races and prevents whole classes of bugs at compile time. Even the majority of typical unit tests can be skipped altogether. This is a major productivity boon that pays for itself within days or weeks!
- since Rust has no runtime (such as a garbage collector), the code is highly portable. It can be embedded in other tech stacks or even compiled to WASM to power your frontend code.
- bonus point: since there is no garbage collector involved, you get none of the unexpected latency spikes that show up in Java or Go.
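To make the "single file, just run it" point concrete, here is a minimal sketch of a backend service using nothing but Rust's standard library. The routes and the port choice are hypothetical; a real service would use a proper HTTP crate, but the deployment story stays the same: one `cargo build --release`, one binary, one copy. To keep the sketch self-contained, it binds an OS-assigned port and acts as its own client once.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Pure routing logic: map a request path to a status line and body.
fn route(path: &str) -> (&'static str, String) {
    match path {
        "/" => ("200 OK", "hello".to_string()),
        "/health" => ("200 OK", "ok".to_string()),
        _ => ("404 Not Found", "not found".to_string()),
    }
}

// Read one request, write one response, close the connection.
fn handle(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf).unwrap_or(0);
    let req = String::from_utf8_lossy(&buf[..n]);
    // Request line looks like: "GET /health HTTP/1.1"
    let path = req.split_whitespace().nth(1).unwrap_or("/");
    let (status, body) = route(path);
    let resp = format!(
        "HTTP/1.1 {}\r\nContent-Length: {}\r\n\r\n{}",
        status,
        body.len(),
        body
    );
    let _ = stream.write_all(resp.as_bytes());
}

fn main() {
    // Port 0 lets the OS pick a free port, so the sketch runs anywhere.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind");
    let addr = listener.local_addr().expect("addr");
    thread::spawn(move || {
        for conn in listener.incoming().flatten() {
            handle(conn);
        }
    });

    // Exercise the service once as our own client.
    let mut client = TcpStream::connect(addr).expect("connect");
    client
        .write_all(b"GET /health HTTP/1.1\r\n\r\n")
        .expect("write");
    let mut resp = String::new();
    client.read_to_string(&mut resp).expect("read");
    println!("{}", resp.lines().last().unwrap_or(""));
}
```

No framework, no dependency manifest beyond `Cargo.toml`, and the compiled output is a single file you can `scp` to a pristine server.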
There are many reasons why companies like Microsoft, Mozilla or Dropbox are doubling down on it after all.
Assuming you are not at the scale of Google or Facebook, where once-in-a-million race conditions occur multiple times a day, chances are you will never exceed a hundred thousand API calls per second, ever. This is actually quite liberating: you can rely on a single cheap backend server to power your business and shortcut the need to maintain increasingly complex infrastructure and all the hassle involved.
In most cases, you probably also do not have a serious problem if your service is unavailable for a fraction of a second during a deployment (copy the new executable, stop the running service, start the new one, in one go).
Most cases of technical complexity are eliminated right away. Your backend should only be as complex as your actual business is.
But the story gets even better than that! Business complexity can be reduced when your developers leverage a powerful static type system like the one Rust has. Forced to express things both very precisely and very abstractly at the same time, patterns will be revealed that show where processes are inefficient, carry human bugs or unhandled edge cases: things you took for granted until then.
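One common way this plays out is encoding a business process directly in the type system, so that invalid transitions simply do not compile. The order workflow below is a hypothetical sketch: each state exposes only the transitions the business actually allows, so "ship an unpaid order" is rejected by the compiler rather than caught (or missed) by a unit test.

```rust
// Zero-sized marker types for each state of the workflow.
struct New;
struct Paid;
struct Shipped;

// An order is parameterized by its current state.
struct Order<State> {
    id: u64,
    state: State,
}

impl Order<New> {
    fn create(id: u64) -> Self {
        Order { id, state: New }
    }
    // Only a new order can be paid.
    fn pay(self) -> Order<Paid> {
        Order { id: self.id, state: Paid }
    }
}

impl Order<Paid> {
    // Only a paid order can be shipped.
    fn ship(self) -> Order<Shipped> {
        Order { id: self.id, state: Shipped }
    }
}

fn main() {
    let order = Order::create(42).pay().ship();
    println!("order {} shipped", order.id);

    // Order::create(7).ship();
    // ^ does not compile: `ship` does not exist on Order<New>,
    //   so the invalid business transition cannot even be written.
}
```

Writing the states down this explicitly is exactly the moment where undocumented edge cases ("what happens to a paid but cancelled order?") surface and force a conversation with the business side.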
When these events occur, you begin to reap benefits beyond cost savings, performance and productivity. Like a semi-automatic system, your business processes themselves will improve and expand as your motivated developers keep refining things. These evolutionary improvements will probably never happen with big, complex, brittle software stacks where everyone is afraid of making changes.
Many (but not all) benefits of Rust can be collected with other technologies, too.
For example, Go has a significantly shorter learning curve (a matter of hours to get productive), has the same trivial deployment story as Rust, and is at least faster than common scripting languages.
But Go does not have a powerful static type system, so many kinds of bugs can and will occur, forcing you to spend much more developer time on writing tests or on deploying more carefully. And with a garbage collector in the mix, embedding Go in other software stacks is much harder, if possible at all, on top of having to watch memory usage and tune things.
And while the code will be easy to read (and probably will stay this way), the sheer quantity of it will explode, increasing maintenance costs through the backdoor.
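The type-system difference can be illustrated from the Rust side: where a missing map entry in Go quietly yields a zero value, Rust's `Option` forces every caller to state explicitly what "missing" means. The plan names and prices below are made up for illustration.

```rust
use std::collections::HashMap;

// A lookup that can fail returns Option<u32>; the compiler refuses to
// let us use the result until the None case is handled explicitly.
fn plan_price_cents(plans: &HashMap<&str, u32>, name: &str) -> u32 {
    match plans.get(name) {
        Some(&cents) => cents,
        // An explicit business decision, not an accidental zero value:
        None => 0,
    }
}

fn main() {
    let mut plans = HashMap::new();
    plans.insert("basic", 900);
    plans.insert("pro", 2900);

    println!("{}", plan_price_cents(&plans, "pro"));
    println!("{}", plan_price_cents(&plans, "gold"));
}
```

The unit test that would guard against an unchecked nil/zero value in other languages is simply unnecessary here; forgetting the `None` arm is a compile error, not a production incident.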
While I do love other things like Elixir or Ruby, I would never again recommend any of them for serious business backends. They are awesome for quickly hacking together interactive stuff and seeing first results, but if you plan to maintain and scale things for years to come, while keeping productivity high despite constantly increasing business complexity, shy away from them.
Exception: if your business is somehow more valuable with a higher developer headcount, then by all means: go for it!
Second Order Effects
Okay, you settled on Rust to build your business backend(s). What can you expect?
- a small team of developers will produce value at a predictable speed (instead of drama about why things slow down, get more expensive, or need more headcount)
- you will outpace your competitors within months, as they start to slow down noticeably
- your energy footprint (think: environment) will be as minimal as possible
- your dev department will transform into a business driver (instead of a cost center) the further it gets
- independence from cloud vendors: statically linked binaries do not require anything special, host them where you like, and moving is trivial.
So instead of thinking "bigger", try thinking "smaller". Your competitors will not understand your unfair advantages. Stay LEAN!