Since AWS launched AWS Lambda in 2014, we at Cloudway have been using serverless components to build architectures. Since 2015, we have followed a serverless-unless or serverless-first approach and have delivered many applications built around serverless architectural components. In recent years, however, the hype has increased and with it the definition of serverless has broadened, making it less clear. In this blog post, I will walk through the different perspectives and definitions to arrive at our Cloudway definition and explain why we have chosen it as our basis.
To better understand the definition of serverless, we first need to follow the path many companies took in their cloud adoption. Many companies started with Infrastructure as a Service (IaaS), Container as a Service (CaaS) or Platform as a Service (PaaS). Some have also experimented with Functions as a Service (FaaS) or have built entire application architectures using these serverless functions.
To explain the difference between these paradigms, we start from the picture below.
For Infrastructure as a Service, you need to manage all the operating system dependencies, your runtime and your application. You also need to set up the network between the virtual machines (VMs) and the load balancing across the cloud provider's data centers to create a highly available system.
Container as a Service does not require you to manage all OS dependencies of the VM. Instead, the cloud provider provides an OS environment in which you can run your containers. These containers contain all the dependencies needed for the runtime and the application.
For Platform as a Service, the runtime is provided by the cloud provider. They install all the OS dependencies needed for the delivered version of the runtime, and you can focus on building the application that runs on the chosen runtime (such a runtime can be IIS for C#, Tomcat for Java or Express for Node.JS).
For Function as a Service, the included runtime is not a web application server, but the runtime needed to execute code in the programming language of your choice (Java, Node.JS, C# ...). The difference with PaaS is that you do not have to configure the scalability yourself. Instead, the cloud provider handles the scaling and ensures that the application is available as soon as a particular event occurs. An event can be a CRON schedule, an HTTP request, an object written to storage, etc. This also means that the application scales to zero when not in use and that the billing of your application is based on the number of milliseconds your customers are actually interacting with it.
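To make this concrete, below is a minimal sketch of what such a function could look like, assuming an AWS Lambda written in TypeScript and triggered by an HTTP request via API Gateway; the handler name and greeting logic are purely illustrative.

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Hypothetical greeting function: the cloud provider starts it when an HTTP
// request arrives, scales it with the number of requests and scales it back
// to zero when nobody is calling it.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

Note that there is no web server, no scaling policy and no instance selection in this code: all of that is the cloud provider's job.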
Since AWS is the main influence in the serverless space, we will use it as the next step in explaining the definition of serverless. AWS sees serverless as a spectrum that ranges from more operational effort to less operational effort. Therefore, we can say that AWS Lambda (FaaS) is more serverless than Elastic Beanstalk (PaaS), which in turn is more serverless than EC2 (IaaS).
This is because much of the operational effort for AWS Lambda is done by AWS. They take care of the underlying scaling, networking and messaging, and they provide easy integration with other services such as CloudWatch for logging and monitoring. As developers, all we have to do is provide our function code and define when this code has to be executed.
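As an illustration of how little is left to define, here is a minimal infrastructure sketch, assuming the AWS CDK (v2) in TypeScript; the stack name, asset path and five-minute schedule are assumptions made for the example, not a prescription.

```typescript
import { Duration, Stack, StackProps } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";
import { Construct } from "constructs";

export class GreetingStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Provide the function code; AWS takes care of scaling, networking,
    // logging and monitoring.
    const fn = new lambda.Function(this, "GreetingFunction", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "greeting.handler", // hypothetical file/export name
      code: lambda.Code.fromAsset("dist"),
    });

    // Define when the code has to be executed: here an example CRON-style
    // schedule that triggers the function every five minutes.
    new events.Rule(this, "EveryFiveMinutes", {
      schedule: events.Schedule.rate(Duration.minutes(5)),
      targets: [new targets.LambdaFunction(fn)],
    });
  }
}
```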
The same application can also be written using AWS Elastic Beanstalk, but then we are still responsible for configuring the scalability of the application, choosing the right instance types and sizes, the network layer, and so on. This is also what we see in the previous picture.
Most of the time when you hear people talking about serverless, they are talking about compute such as AWS Lambda, Azure Functions or Google Cloud Functions. However, this operational spectrum also exists in the database, storage, messaging and analytics parts of an IT architecture. In the improved visual above, you can see that at the database level, for example, you also have a choice of how serverless you want to go and thus how much operational effort you take on.
In recent years, Kubernetes/OpenShift and Docker have developed into a (technical) solution for business problems that call for custom-developed software. Although this is a great technical stack, as with all solutions, it has its challenges.
The Kubernetes community is also committed to serverless solutions, providing ways to build Functions as a Service based on container images.
From a developer's perspective, this Kubernetes-based FaaS can be seen as just as serverless as AWS Lambda. From the perspective of a company or a CTO, however, this solution requires more operational effort than the AWS Lambda solution. Therefore, in addition to the spectrum extending beyond compute alone, the definition of serverless also depends on the perspective from which we look at that spectrum.
A final point to note is that the large public cloud providers now also offer serverless container solutions. If we look at the operational effort, these come very close to the FaaS solutions, the only difference being that you run a container instead of a piece of code/function.
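As one concrete example, here is a minimal sketch of such a serverless container on AWS Fargate, again assuming the CDK (v2) in TypeScript; the service name and the sample container image are illustrative.

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecsPatterns from "aws-cdk-lib/aws-ecs-patterns";
import { Construct } from "constructs";

export class ContainerApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // We only supply a container image; the provider provisions and scales
    // the underlying compute for us.
    new ecsPatterns.ApplicationLoadBalancedFargateService(this, "Api", {
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
      },
      publicLoadBalancer: true,
    });
  }
}
```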
At Cloudway we try to help our customers with more than just writing code based on their functional requirements. We go the extra mile to proactively think about our customers' business and the problems they are trying to solve. That is why we look at the serverless spectrum from a business perspective.
Because of the above reasoning, and because serverless is more than just compute, we at Cloudway see serverless as an approach to designing a solution architecture. We base this solution architecture on the state of the business, the business case we are trying to solve, and how quickly the solution should be built.
With these three parameters in mind, we choose both short-term and long-term goals on the spectrum. Short-term and long-term objectives may differ based on the business case and how time-critical it is. If we need a solution within a few months and it is not yet certain how the new business case will be used (with prototyping and experimenting in mind), then we can initially opt for an easy-to-set-up serverless solution that can be rehosted on a container-based solution in a second phase.
This also means that we do not see containers versus serverless as a choice you have to make. Companies will eventually use the benefits of both technical solutions to build their application landscape.
Serverless often gets labelled as yet another technical solution for a business need. However, some business people, from project managers to the CTO, see all the business value that this approach can create, strengthening their advantage over their competitors.
Therefore, based on customer stories such as Philips', we will provide insight into the benefits we have seen in opting for a serverless approach. We will feature this in our next blog posts, so follow us on LinkedIn to make sure you don't miss any benefits!