The journey from bare metal to virtual machines to containers to serverless is one where the business is less and less responsible for setting up and maintaining the environment. Its staff can spend more time writing code and business logic and less time pulling the levers that make it possible.
For the end user, the underlying infrastructure is abstracted away entirely. You're not responsible for a bare metal server or a virtual machine.
The question on your lips might be how serverless computing differs from containers – the cloud technology popularised by Docker that packages an application's code, libraries and dependencies into discrete units which can run on any Linux infrastructure.
At the technical level, there are lots of differences. But from a user perspective, the main difference is one of scaling.
Containers simplify the deployment process – but scaling them up and down is still your problem: you (or an orchestrator you configure, such as Kubernetes) have to decide how many copies to run. Serverless, by contrast, scales automatically with demand.
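In practice, that means a serverless deployment is often little more than a handler function – the platform decides how many copies of it to run. Here's a minimal sketch, assuming an AWS Lambda-style Python runtime and the common API-gateway-proxy event shape (the greeting logic is just a placeholder):

```python
import json

def handler(event, context):
    """Entry point the platform calls once per request.

    No server loop, no thread pool, no scaling config: if 10,000 requests
    arrive at once, the provider spins up as many instances of this
    function as it needs, then tears them down again afterwards.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```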
It's a bit like being a painter and hiring a studio. With serverless computing, you hire the room, the brushes, the easel and everything else you need – and you know that the studio manager will take care of wiring, rising damp and any other problems.
If studio rental worked like serverless computing, it would vanish in a puff of smoke the minute you left the room. Resources come to life – or "spin up", to use the lingo – only when required.
This means you're never paying for anything you don't use. With containers running on rented cloud infrastructure, by contrast, the underlying compute is billed whether it's busy or idle – more like paying the studio's rent for the whole month.
Advantages of serverless computing
Serverless computing is cost-effective. Because it scales with demand – and down to zero when idle – you're only charged for the resources you actually use, unlike earlier iterations of cloud computing where you pay for idle time too.
It's an easy and quick way to write and deploy apps, which means you can get your creations to the marketplace at a satisfying pace. You can pour all your energy into developing apps while the provider maintains the underlying infrastructure.
This user-friendliness is also connected to its polyglot nature. It's the Switzerland of cloud computing: the major providers support a wide range of language runtimes, so you can write your functions in the language you prefer.
Finally, serverless provides high availability. The provider's engineers keep the underlying infrastructure running behind the scenes, so your functions stay available without you managing servers or failover.
But like everything in life, there's another side to the coin.
Disadvantages of serverless computing
A big risk of serverless is vendor lock-in. Functions are typically written against one provider's services, triggers and event formats, so you don't enjoy the flexibility of a multi-cloud solution – and moving elsewhere can mean significant rework.
Serverless is built for short, bursty workloads. If you're deploying a long-running or consistently busy app, you may find that the per-invocation bill dwarfs that of a VM or dedicated server.
Latency can be an issue, especially on a "cold start" – the first invocation of a function, or the first after a period of inactivity, when the platform has to spin up a fresh instance before your code runs.
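One common way to soften the blow is to do expensive set-up at module level rather than inside the handler, so only the cold invocation pays for it and warm ones reuse the result. The sketch below assumes a Python runtime; the database "client" and the DB_URL variable are stand-ins for any costly dependency:

```python
import os
import time

# Module-level set-up runs once per instance, during the cold start;
# warm invocations skip straight to the handler. The "client" here is a
# stand-in for a real SDK client, connection pool or loaded ML model.
_started = time.time()
_db_client = {"url": os.environ.get("DB_URL", "sqlite:///:memory:")}  # hypothetical config

def handler(event, context):
    # Warm invocations reuse _db_client instead of rebuilding it each time.
    return {
        "instance_age_seconds": round(time.time() - _started, 3),
        "db": _db_client["url"],
    }
```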
Serverless instances are perpetually self-renewing – little digital phoenixes that spin up as entirely fresh copies each time. Because you can't rely on local state surviving between invocations, or attach a debugger to a long-lived process, debugging can be a delicate operation.
Finally, serverless functions time out: every platform puts a hard cap on how long a single invocation can run. Executing code can be a race against the clock. This isn't always a problem, but it can be if the app you're developing is especially involved.
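If you're on an AWS Lambda-style platform, one defensive pattern is to check how much time is left as you work through a batch and stop cleanly before the platform cuts you off. A minimal sketch, assuming the Lambda Python runtime (the work items and the ten-second safety margin are made up):

```python
SAFETY_MARGIN_MS = 10_000  # stop with ~10 seconds to spare (arbitrary choice)

def do_work(item):
    # Placeholder for the real per-item task.
    return item

def handler(event, context):
    """Process a batch, but stop before the platform's timeout kills us."""
    items = event.get("items", [])
    processed = 0
    for item in items:
        # get_remaining_time_in_millis() is provided by the Lambda Python
        # runtime's context object; other platforms expose similar deadlines.
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break  # hand the leftovers back to the caller (or re-queue them)
        do_work(item)
        processed += 1
    return {"processed": processed, "remaining": len(items) - processed}
```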
Who's it for?
Serverless computing can be used in a variety of use cases:
- App and website backends
- Asynchronous processing – behind-the-scenes tasks that don't interrupt your application's main flow
- Building RESTful APIs
- Continuous delivery (CD) and continuous integration (CI)
- IoT data processing
- Security checks
- Trigger-based tasks (e.g. a user signs up on your website, which triggers a database update, or a security sensor triggers a push notification – see the sketch after this list)
- Video/image manipulation
- Writing polyglot (multi-language) apps
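To make the trigger-based pattern concrete, here's a minimal sketch of the sign-up example above – an event triggering a database update. It assumes an AWS Lambda-style Python handler; the event fields are hypothetical, and the in-memory "table" stands in for a real database client:

```python
from datetime import datetime, timezone

# Stand-in for a real database (e.g. a DynamoDB table or SQL client),
# kept in memory so the sketch stays self-contained.
USERS_TABLE = {}

def handler(event, context):
    """Fires whenever the platform delivers a hypothetical 'user signed up' event."""
    user = event.get("detail", {})  # hypothetical event shape
    email = user.get("email")
    if not email:
        return {"status": "ignored", "reason": "no email in event"}

    USERS_TABLE[email] = {
        "name": user.get("name", ""),
        "signed_up_at": datetime.now(timezone.utc).isoformat(),
    }
    return {"status": "stored", "user": email}
```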
Why did Amazon ditch serverless?
Amazon made a splash in the tech world when its Prime Video team announced, in May 2023, that it was ditching the serverless microservices architecture behind its audio/video quality monitoring service and moving back to a "monolith". The switch reportedly cut the service's infrastructure costs by 90%.