The Coming of Age of Serverless Computing
Serverless – it’s a terrible name, not least because every computing program requires a server – there simply isn’t a way to run one without! But the much less catchy ‘cloud-based execution of discrete units of logic, with resources dynamically allocated by the cloud provider under a pay-as-you-consume commercial model’ is a bit of a mouthful – so serverless will just have to do.
Serverless computing could be seen as the culmination of a couple of industry trends. Firstly, the reduction in the size and scope of application modules – service-oriented architectures have gone micro, but now have the ability to go nano and be reduced to the execution of a single function. Secondly, the evolution of Platform-as-a-Service offerings – if we can spin up a new database or create file storage without having to worry about the underlying infrastructure, why not be able to execute a code snippet on demand?
As you’d expect, each major cloud vendor has its own offering; Microsoft has Azure Functions, AWS has Lambda and Google has Cloud Functions – even Kubernetes has a mechanism called Kubeless that uses K8s resources to run functions on demand.
What’s the use?
To understand its usefulness, it is best to start with a practical example. Say we run a number of software and hardware monitoring solutions, like Datadog or New Relic, which regularly generate log entries about our systems’ health. We want to collect logging data from many sources into a central repository, so that we have a unified set of alerts and dashboards for our complete IT estate. The immediate difficulty is that every system generates log entries in its own format, while the kind of centralised monitoring we want requires all logs to follow a similar pattern. We need to build several small translators from proprietary formats to our common central format – in fact, as many translators as we have sources of logging data.
Each translator has a simple input and output, and runs a self-contained unit of logic with a very narrow scope for a short period of time. And we know it must run on demand for every log line. Having to build, deploy and monitor a complete microservice to wrap around each of these string translation functions feels like overkill. Instead, we could write our translation routines in a common programming language (e.g. C#, Java, Node.js, Python, Go) and deploy them to our cloud provider to run on demand.
All it takes is uploading the body of the function, choosing the types of events that will trigger its execution, and configuring parameters like maximum resource consumption or execution time; the cloud provider will take care of the rest. It will detect the start conditions or events, spin up the environment, run our function passing the right input, gather the output and send it wherever we configured, and finally shut down leaving no trace. At the end of the month, we get a bill that reflects the number of executions and the overall running time of our log format translators.
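A minimal sketch of what one such translator might look like, written in Python in the style of an AWS Lambda handler. The vendor’s log line layout, the field names of the common format and the event shape are all hypothetical, chosen purely for illustration:

```python
import json

def translate(raw_line):
    """Translate one log line from a hypothetical vendor format.

    Assumes the source emits pipe-separated lines like:
    '2024-01-15 12:00:00 | WARN | disk usage at 91%'
    """
    timestamp, level, message = [part.strip() for part in raw_line.split("|", 2)]
    return {
        "ts": timestamp,            # common-format field names are assumptions
        "severity": level.lower(),
        "msg": message,
        "source": "vendor-x",       # hypothetical source identifier
    }

def handler(event, context):
    """Entry point in the style of an AWS Lambda Python handler.

    'event' is assumed to carry a batch of raw log lines.
    """
    translated = [translate(line) for line in event.get("lines", [])]
    return {"records": [json.dumps(record) for record in translated]}
```

Everything outside the two functions – triggering, scaling, passing the event in and shipping the result on – is the cloud provider’s problem, which is exactly the appeal.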
As the example illustrates, this serverless execution of a Function-as-a-Service can be very useful in certain cases. It is ideal for self-contained, stateless processes that parallelise perfectly and run on demand relatively infrequently. For example:
• Email notifications when a client creates an account or changes a password
• Image resizing or thumbnail creation on demand
• Pre-processing of event streams or logs before analysis
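The third scenario – pre-processing event streams before analysis – fits the model because each event can be handled independently of every other. A sketch of such a stateless pre-processor, where the event shape and field names are assumptions for illustration:

```python
def preprocess(events):
    """Drop malformed events and normalise field names before analysis.

    Each event is assumed to be a dict; the input and output field
    names here are hypothetical.
    """
    cleaned = []
    for event in events:
        if "timestamp" not in event or "type" not in event:
            continue  # drop events the downstream analysis cannot use
        cleaned.append({
            "ts": event["timestamp"],
            "event_type": event["type"].lower(),
            "payload": event.get("data", {}),
        })
    return cleaned
```

Because the function holds no state between invocations, the provider is free to run any number of copies in parallel across the incoming stream.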
In each of these scenarios, the serverless computing model has a number of core advantages over traditional application deployment models.
1. Pricing. We are billed only for the actual number of times our function executes, and for its overall running time. If the function does not run at all, we are not charged for the idle listening time.
2. Scalability. It takes the same work on our part whether our functions execute one thousand times or one million times a week. The cloud provider takes care of all the complicated detail of managing infrastructure and resources to make it scale.
3. Quick deployments, with faster time to market. To make our function run we just have to upload it and configure a few parameters. This allows developers to focus on the application, not the infrastructure.
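The pricing model in point 1 is easy to reason about: the bill is driven by request count and by compute time actually consumed. A sketch of the arithmetic, stressing that every rate below is an illustrative placeholder rather than any provider’s actual price list (real pricing varies by provider and typically includes a free tier):

```python
def monthly_cost(invocations, avg_duration_s,
                 memory_gb=0.125,
                 price_per_million_requests=0.20,   # illustrative rate only
                 price_per_gb_second=0.0000167):    # illustrative rate only
    """Estimate a monthly serverless bill.

    Billing is modelled as a per-request charge plus a charge per
    GB-second of memory-time consumed while the function runs.
    """
    request_cost = invocations / 1_000_000 * price_per_million_requests
    compute_cost = invocations * avg_duration_s * memory_gb * price_per_gb_second
    return request_cost + compute_cost
```

The key property is that zero invocations cost zero: `monthly_cost(0, 1.0)` returns `0.0`, which is the “no charge for idle listening time” advantage in a single line.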
However, nothing is perfect, and we must also understand some of the pitfalls of a serverless computing environment:
1. Vendor lock-in. As serverless users, we ask a cloud provider to run our functions with no regard for how that happens. Interoperable standards are largely absent, and the maturity and range of supported programming languages vary between providers.
2. Debugging and troubleshooting. We are running logic on someone else’s machine, inside an environment whose lifecycle we do not control. That means we have to debug using log ‘breadcrumbs’ or complicated remote debugging mechanisms.
3. The length of functions. Functions need to run to completion within a limited time or the cloud provider will terminate them, and cloud providers put a cost premium on long-running functions.
4. Functionality has to be relatively simple. Functions should have no dependencies on external APIs that may introduce expensive latency, or on third-party libraries that may not be available in the runtimes where they execute.
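Pitfall 3 can be designed around. A common pattern is to check how much execution time remains and hand unfinished work back rather than be killed mid-batch. The sketch below uses `context.get_remaining_time_in_millis()`, which is the real AWS Lambda context API (other providers expose similar hooks); the 5-second safety margin and the per-item work are arbitrary choices for illustration:

```python
def handler(event, context):
    """Process a batch of items, but stop before the platform timeout.

    'event["items"]' and the upper-casing 'work' are placeholders;
    the point is the time check before each unit of work.
    """
    done = []
    for item in event["items"]:
        if context.get_remaining_time_in_millis() < 5_000:
            break  # return the remainder rather than be terminated mid-work
        done.append(item.upper())  # placeholder for real per-item work
    return {"processed": done, "remaining": event["items"][len(done):]}
```

The caller (or a re-triggered invocation) can then resubmit `remaining`, keeping each individual execution comfortably inside the provider’s limit.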
In summary, the question is not whether to use serverless computing at all, but how to choose the right applications for it while remaining mindful of its inevitable pitfalls.
For more information about serverless computing contact Jorge Garcia de Bustos at Godel Technologies https://www.linkedin.com/in/jgbustos/