Article · April 18, 2019 · 5 min read
Serverless is an architectural pattern that leverages maximally managed services as components of a system and stitches them together with small bits of business logic code that also runs in a managed service. The most typical service for implementing the business logic is generally called Function as a Service (FaaS), but there are many ways to implement this custom compute resource.
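To make the FaaS idea concrete, here is a minimal sketch of a function-shaped unit of business logic. It is modeled on the Python handler signature that platforms such as AWS Lambda use; the event payload and the greeting logic are invented for illustration.

```python
import json

def handler(event, context=None):
    # The platform delivers the triggering event as a dict and runs this
    # small piece of business logic inside a managed runtime. "event" and
    # "context" follow the AWS Lambda Python convention; other FaaS
    # platforms have an equivalent shape.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoked locally the same way the platform would invoke it:
result = handler({"name": "serverless"})
```

The whole deployable unit is this one function; everything around it (HTTP routing, scaling, the runtime itself) is the managed service's job.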
It’s been nearly a decade since Marc Andreessen first offered his prescient analysis that “software is eating the world”. It still is, and the pace seems to be accelerating. My assertion is that serverless software specifically will eat the world. Why do I think serverless is the winning way to implement systems in the future? There are several reasons, and they are what this post is about. The explanation here is greatly simplified, but it captures the basic concept.
Reason 1: Maximising the use of managed services means using proven components
Using managed services means you can trust that large chunks of your system have already been production-proven, under workloads that are most likely much harder to handle than yours. I propose you let someone else optimize the management of the standardized parts of your system.
Reason 2: Serverless systems are resilient
A fun anecdote about serverless resilience: when a partner of ours was bringing a serverless service that I co-wrote under the same support process as the rest of their systems, he said: “It’s hard to write a support manual for this system, since there aren’t really any components that can fail. There isn’t anything to reboot or reset.”
A common complaint about serverless systems is that their failure modes tend to be more complex. The way I see it, serverless systems produce the same number of complex failure scenarios as any other kind of system, but those scenarios make up a bigger proportion of all failures because the stupid failures have been eliminated. I propose you stop paying to manage the stupid types of failures.
Reason 3: FaaS eliminates whole classes of errors
Functions are short-lived, on the order of a few seconds and at most a few minutes. It is therefore much harder to write a memory leak, a deadlock, or a livelock that would cause a fatal error in a FaaS service. A memory leak would have to be severe enough to be triggered in production, yet sneaky enough to go unnoticed during development. Furthermore, smaller functions have less need for multithreading, since you can just as well run several copies of the function in parallel.
Even if you implement your compute in some other serverless, non-FaaS manner, the way these platforms are built mandates that you be resilient against your components being shut down unexpectedly. The best way to make sure that resilience is there is to keep shutting your components down regularly yourself. I propose you stop getting bitten by memory leaks.
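As a sketch of what that resilience looks like in practice, the handler below is written to be idempotent, so an unexpected shutdown followed by a redelivery of the same event does no harm. The in-memory `processed` set stands in for a durable store, and the event shape and names are hypothetical.

```python
processed = set()        # stand-in for a durable key-value store
balance = {"total": 0}   # stand-in for durable application state

def handle_payment(event):
    # If the runtime killed us after applying the side effect but before
    # acknowledging the event, the platform redelivers it. Checking the
    # event id first makes the retry harmless.
    if event["id"] in processed:
        return "duplicate-ignored"
    balance["total"] += event["amount"]
    processed.add(event["id"])
    return "applied"

handle_payment({"id": "evt-1", "amount": 10})
handle_payment({"id": "evt-1", "amount": 10})  # redelivery after a crash
```

Because the second delivery is a no-op, the component can be shut down at any moment without corrupting state, which is exactly the property the platform forces you to have.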
Reason 4: Serverless architectures are scalable
FaaS and the other managed services used in serverless architectures mostly offer practically infinite scalability and, perhaps more importantly, scale down to zero at the other end. This means you can try out new services with minimal risk: if a service fails to take off, you note that you have learned something and move on to the next one, which will hopefully have more success.
I propose you stop tying your system performance to static pieces of infrastructure and let it live and evolve however the users need it to evolve.
Reason 5: A FaaS function is the right size for a deployment unit
A FaaS function, when done correctly, is small enough that any change, even a complete rewrite, can be achieved by one team in one development cycle - typically one to a few weeks.
The knowledge about the component and its surroundings maps nicely to one team. That team can be responsible for the function’s functionality from cradle to grave.
Small deployment units minimize risks and make it possible to keep moving the whole system forward one small piece at a time. They also enable short lead times for taking new functionality to production. I propose you pay down your technical debt constantly, in small installments.
Reason 6: Event-driven components are composable
Serverless architectures heavily encourage an event-driven model for the whole system. The events define natural interfaces, so components are easy to combine in different ways, or even to productize into neat little packages, while still remaining loosely coupled. Events are also a natural fit for modeling real-world processes, which helps communication and design.
I propose you let your teams design systems out of easily composable components.
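A minimal sketch of that composability, with an in-memory bus standing in for a managed event service such as a queue or topic (the event name and payload shape are invented): components only agree on events, so a new subscriber can be added without touching the publisher.

```python
from collections import defaultdict

# Stand-in for a managed pub/sub service: event name -> list of handlers.
subscribers = defaultdict(list)

def subscribe(event_name, fn):
    subscribers[event_name].append(fn)

def publish(event_name, payload):
    # Components never call each other directly; they only agree on
    # event names and payload shapes, staying loosely coupled.
    for fn in subscribers[event_name]:
        fn(payload)

shipped, invoiced = [], []
subscribe("order.placed", lambda e: shipped.append(e["order_id"]))
subscribe("order.placed", lambda e: invoiced.append(e["order_id"]))

# Adding the invoicing component required no change to shipping:
publish("order.placed", {"order_id": "A-1"})
```

In a real serverless system the bus would be a managed service and each subscriber its own FaaS function, but the composition property is the same.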
Bonus: Four things serverless is not
Serverless architectures do not reduce the need for appropriate levels of testing. They do, however, enable cheaper test environments to be spun up as needed.
Serverless will not reduce the need for documentation. You will be able to implement a lot more functionality much faster, so you will have to think carefully about how these functionalities are discovered and consumed efficiently.
Serverless will not reduce the need for careful dependency management. Again, faster development speed will hurt you if you are not careful about how your components depend on each other and on external systems or libraries.
Serverless will not implement monitoring for you for free. There are a lot of great tools out there for better service transparency, and more and better tools are sure to follow. But serverless systems are more distributed, and getting a holistic view of them will require some effort.
So that’s it then
Serverless software will eat the world. There are still some hurdles, like tooling and monitoring, and some workloads that are not a good fit right now. But the same has been true of every new technology that has ever come out.