May 20, 2019

The evolution continues: serverless computing is changing application development for good.

Trends can be tricky. As soon as you understand the concept behind the current buzzword, everyone has already moved on to the next big thing. In times of transition, such as the one currently under way in the IT industry, it’s easy to lose sight of the big picture—and your sense of what’s truly important. One example of a new development keeping everyone on their toes is serverless computing. To keep you in the loop, we’re here to explain what it is.


The latest paradigm in software development, event-driven programming, is currently a hot topic in IT, fuelled by the increasing acceptance of agile working methods. But will the hype translate into a true game changer? One thing is certain: technology has never been closer to actual client behaviour, enabling rapid responses to new market requirements. And the potential for savings has rarely been more enticing, not only for developers but also for the clients who will be using their solutions further down the road.

 

Serverless computing is revolutionising software development.

Where does the greatest savings potential lie? And what challenges come with it? Looking back over the past four or five years will give you an idea. For a long time, there was no alternative to on-premise servers. Costly individual servers were purchased for various types of task, and they had to be configured and maintained. By the dozen, they populated expensively cooled server rooms and data centres, where they did their job more or less reliably. The related procurement and operating costs, not to mention the administrative work, were enormous.

 

With the advent of cloud services, the traditional roles these servers played, such as file servers and network infrastructure, were gradually outsourced to external service providers. Infrastructure-as-a-Service (IaaS) was born. This service-based approach has since expanded consistently, giving rise to Platform-as-a-Service (PaaS) and Backend-as-a-Service (BaaS), and dramatically streamlining data centres in the process. It has also revolutionised the way applications are developed today. Lightweight virtualisation technologies such as containerisation make it easy to copy and reuse applications. Docker containers, for example, which are typically run on PaaS platforms, hold the entire application along with its dependencies, so you can move it from machine to machine without having to worry about operating systems or hardware. As a result, developers have less administrative work.

 

Serverless computing, also known as Function-as-a-Service (FaaS), is the next step forward, made possible through the combination of agile development tools and cloud flexibility.

 

Two ways to tackle the same problem.

Both PaaS-based virtualisation (such as Docker containers) and the virtualisation of individual functions (FaaS/serverless computing) solve the same problem: slashing infrastructure overhead and thereby lowering energy consumption and other costs. Individual application components become more scalable, and it becomes easier to reuse already functional code from other projects. This not only decreases redundancy, it also facilitates the development of future projects, as existing code can be integrated with few or no changes at all.

 

Similar, yet markedly different.

As similar as they are, the two technologies differ fundamentally in certain respects, namely in when to use them, how they work and how they are billed. PaaS solutions still require one instance to run round the clock in order to provide services. In a serverless/FaaS environment, however, instances of a function are triggered by an event. Once the function has been executed, the instance is shut down. This, of course, influences how the service is billed. FaaS users pay only for the time that individual functions actually spend using the server’s compute resources, which contrasts with the platform-driven “always-on” approach of PaaS solutions.
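
To make the difference in billing models tangible, here is a minimal sketch in Python with purely hypothetical prices and workload figures; real provider pricing differs, is more granular and usually includes a free tier:

```python
# Illustrative comparison: always-on PaaS instance vs. pay-per-execution FaaS.
# All prices and workload figures below are invented for the sake of the example.

HOURS_PER_MONTH = 730

# Hypothetical PaaS: one container instance running around the clock.
paas_price_per_hour = 0.05                  # assumed price
paas_monthly_cost = paas_price_per_hour * HOURS_PER_MONTH

# Hypothetical FaaS: billed per invocation plus per 100 ms of execution time.
invocations_per_month = 200_000
avg_execution_ms = 300
price_per_invocation = 0.0000002            # assumed price
price_per_100ms = 0.0000017                 # assumed price

faas_monthly_cost = invocations_per_month * (
    price_per_invocation + (avg_execution_ms / 100) * price_per_100ms
)

print(f"PaaS (always-on):   {paas_monthly_cost:.2f} per month")
print(f"FaaS (pay per use): {faas_monthly_cost:.2f} per month")
```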

 

Another difference lies in how each technology is actually used. PaaS solutions allow users to create virtual containers using Docker, for example. These containers hold everything that an application needs to run, including all dependencies; only the operating system itself is excluded. The application is saved as a Docker image, which you can simply copy to any other server, install and run there. This process takes just seconds, so you can deploy new versions incredibly quickly. In theory, one image can start any number of containers, also called microservices. Scalability is limited only by the resources of the rented server.
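
As a rough sketch of that workflow, the Docker SDK for Python can build an image once and then start several containers (microservices) from it; the image tag and container names below are purely illustrative, and in practice the same steps are often run with the docker CLI instead:

```python
# Sketch: build one image, then start several containers from it.
# Assumes Docker is running locally and a Dockerfile sits in the current directory;
# the tag "myapp:1.0" is an invented example.
import docker

client = docker.from_env()

# The image packages the application together with all of its dependencies.
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# One image can back any number of running containers (microservices).
containers = [
    client.containers.run("myapp:1.0", detach=True, name=f"myapp-{i}")
    for i in range(3)
]

for container in containers:
    print(container.name, container.status)
```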

 

By contrast, developers using FaaS don’t have to build an image first and upload it to the server. Functions are very simple, small constructs that can theoretically be used directly after the upload. This dramatically reduces the time needed for deployment to just a few milliseconds, compared with the seconds required by Docker, for example. And the programmer doesn’t have to worry about dependencies, as their management is taken care of entirely by the host. If the same function is accessed multiple times, the platform simply starts as many instances of that function as are needed. Similar to PaaS, FaaS theoretically allows an unlimited number of instances for a single function. If running these instances requires additional resources, the host adds them dynamically or moves the function to another, more powerful server. In this case, too, deployment takes just milliseconds and ideally the user doesn’t even notice. As a result, the application can be scaled virtually infinitely. What’s more, it is also highly reliable: if the original server goes down, a function can simply be moved to another server, making the application accessible once again.
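
How small such a function can be is easiest to see in code. The following is a complete, if trivial, Python handler in the form AWS Lambda expects; the “name” field of the event is an invented example:

```python
# A complete serverless function: there is no image to build; the file is
# uploaded (or pasted into the provider's console) as-is.
def lambda_handler(event, context):
    # "name" is an illustrative field of the incoming event, not a fixed API.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```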

 

The biggest difference between a function and a microservice created using Docker, for example, is the maximum execution time and server capacity available to individual functions. Because PaaS requires an always-on server instance, applications can theoretically run forever, or at least for a long time. FaaS, on the other hand, is billed according to the function’s execution time, making it less suitable for long-running operations. Each provider sets its own maximum execution-time limits for individual functions, usually between five and 15 minutes. The ideal FaaS function is therefore executed in just a few milliseconds and accessed only very rarely. System resources other than time are also strictly limited for each function. Often, only a few gigabytes of RAM or a few hundred megabytes of hard drive space are available for FaaS. If an application requires resources beyond this, PaaS or IaaS solutions are a better option.
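
Providers expose these limits as per-function settings. As a sketch, assuming an existing (and entirely hypothetical) AWS Lambda function, its timeout and memory could be adjusted with boto3; the hard upper bounds themselves are fixed by the provider:

```python
# Sketch: adjusting per-function limits with boto3, the AWS SDK for Python.
# "resize-image" is a hypothetical function name; the values must stay within
# the provider's caps (AWS Lambda, for instance, aborts executions after 15 minutes).
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="resize-image",  # hypothetical function
    Timeout=60,                   # seconds; longer-running invocations are cut off
    MemorySize=512,               # MB of RAM available to each instance
)
```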

 

How does serverless computing (FaaS) work?

Serverless computing connects seamlessly with existing cloud technologies. Its hosts simply integrate technology that developers using PaaS still have to implement manually. FaaS and PaaS are therefore quite similar, unsurprisingly. What we haven’t looked at yet, however, is the event-driven concept behind serverless computing. In event-driven programming, the application is based on a rather simple programming model: if a certain event happens, carry out a certain instruction. This is comparable to e-mail rules, which check whether an incoming e-mail meets certain conditions (e.g. “If the subject contains the word ‘information’”) and, if so, execute a function in response (e.g. “Move the e-mail to the ‘Information’ folder”).

 

Functions in Apache OpenWhisk, for example, consist of three components: triggers, actions and rules.

  • A trigger is a type of event which causes the function to respond (“The subject contains the word ‘information’”)
  • An action is an event handler that contains the functional logic and whose code is executed in response to the event (“Move the e-mail to the ‘Information’ folder”)
  • A rule links the trigger and action (“If the trigger is true, then carry out the action”)

This simple if-then construct already constitutes a function. Depending on the platform, developers can create it via a REST API, on the command line or in a web-based function designer. Various programming languages are available to do this (JavaScript, Java, Python and PHP are some common ones), allowing developers to stay within their familiar environment without forgoing any advantages. In addition, all of the platforms provide further tools to support packaging, catalogue services and popular container-provision services.
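
Translated into Apache OpenWhisk terms, the e-mail example could look roughly like the following sketch; the action, trigger and rule names are invented, and the wsk commands are shown only to illustrate how the three components are wired together:

```python
# move_mail.py - a minimal OpenWhisk action written in Python.
# OpenWhisk calls main() with the event's parameters and expects a dictionary back.
#
# Illustrative wiring with the wsk CLI (all names are made up):
#   wsk action  create moveMail move_mail.py
#   wsk trigger create mailReceived
#   wsk rule    create sortMail mailReceived moveMail
def main(args):
    subject = args.get("subject", "")
    folder = "Information" if "information" in subject.lower() else "Inbox"
    return {"subject": subject, "folder": folder}
```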

 

The many benefits—and serious drawbacks—of serverless computing.

One of the benefits of serverless computing is how it is billed, namely on the basis of time actually used, since there is no need for an always-on server. This not only significantly reduces the costs of usage, but also those previously incurred by administrative tasks and procurement. In addition, many hosts let you get started for free, allowing you to quickly and easily create small applications that require only infrequent access. The rather gentle learning curve, coupled with an extremely active community, makes it easy to get a feel for this method. Moreover, event-driven development is very close to actual user behaviour, as users trigger the function with an action. Developers can implement new user events quickly and respond promptly to any changes in user behaviour.

 

One particularity of serverless software development is the so-called cold start. Before a function can be executed, it requires a container that, much like a Docker container, holds all dependencies and static files. If no such container exists, for example because the function hasn’t been accessed for a while, the platform creates a new one. This process can take anywhere from a few milliseconds to several seconds, depending on the programming language used, the function’s code and the size of the dependencies. This matters to developers because it can result in irksome delays for the software’s users. However, if the function has been accessed recently, additional instances can be started at any time, practically instantly.
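
A cold start is easy to make visible by timing consecutive calls to an HTTP-triggered function, as in this small sketch; the endpoint URL is hypothetical, and actual latencies depend on the platform, the runtime and the size of the dependencies:

```python
# Sketch: the first call typically includes the cold start (container creation),
# while subsequent calls hit an already warm instance.
import time
import requests

URL = "https://faas.example.com/api/hello"  # hypothetical HTTP-triggered function

for i in range(3):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"call {i + 1}: {elapsed_ms:.0f} ms")
```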

 

Certain aspects of serverless computing make it unsuitable for some use cases. If developers expect long-running tasks that require additional resources, they should go with a different solution altogether. Another issue is that existing standard software and older proprietary developments are often time-consuming to port, if they can be ported at all. Breaking a monolithic application down into its individual components and organising them into microservices and functions before putting everything back together again is complicated and tedious. It’s usually more expedient to simply reprogramme it using the new technology.

 

Pros and cons of serverless computing at a glance.

  

Pros:

  • Usage-based billing optimises costs
  • No specific infrastructure is required
  • Easy to move to other environments
  • Can be scaled virtually infinitely
  • Focuses on business logic
  • Ultra-reliable and fault-tolerant
  • Event-driven
  • Speeds up time to market

Cons:

  • Vendor lock-in prevents you from switching providers (e.g. going from Azure Functions to AWS Lambda)
  • Some tools take longer to learn how to use
  • Unsuitable for long-running tasks and frequently used applications
  • Debugging code can be trickier as there are no monitoring options

 

 

The bottom line.

Serverless computing, also known as FaaS or event-driven computing, describes a nascent technology whose heyday is only just beginning. Despite its name, it doesn’t eliminate servers altogether: “serverless” refers to the fact that developers and companies no longer have to deal with operating, maintaining or managing servers, as this is done by the cloud provider. Nor is it a major new breakthrough, but rather the result of systematic developments that have advanced existing cloud technologies into nearly all areas of IT. What is truly new is that instantly available cloud resources are now linked with the agile development of distributed applications.

 

A broad range of companies, including Amazon (AWS Lambda), Google (Google Cloud Functions), Microsoft (Azure Functions) and IBM (IBM Cloud Functions), already dominate the market and are setting the tone going forward. These serverless offerings give users a high degree of billing granularity, detailed insight and control, and affordable options for executing any and all application functions as needed.

Jens Käsbauer
Communications and Content Specialist