Serverless and containers are often viewed as competing development technologies. But when integrated, they can be a powerful combination.

An enterprise cloud architect (let’s call him Jack, not his real name) from a large and well-known finance company reached out to us about a dilemma he was having.

Jack was responsible for his company’s cloud architecture strategy and was weighing input from various teams in his organization. The company wanted to find the right balance between:

  • Minimizing its dependence on a single cloud service provider (and hence minimizing disruption of applications if a migration to another CSP were necessary)
  • Increasing speed of application development
  • Getting as much out of a cloud service provider as possible without being locked in

As you can see, tradeoffs certainly need to be made here.

Jack and his team were comparing containers and serverless architectures to see how each of these technologies could help his company.

When Jack reached out to us, we started thinking about the pros and cons of each of these technologies, how they might impact what his company was looking for, and what options were the best fit.

Here’s what we came up with.

 


Serverless

We’ve written extensively about serverless in a previous blog post, so we’ll just briefly review here what serverless is, along with its pros and cons.

What is serverless?

Serverless is a development approach that replaces long-running virtual machines with compute power that comes into existence on demand and disappears immediately after use.

Despite the name, there certainly are servers involved in running your application. It’s just that your cloud service provider, whether it’s AWS, Azure, or Google Cloud Platform, manages these servers, and they’re not always running.

Rather, you configure events, such as API requests or file uploads, that trigger your serverless function to execute. And when that action is complete, the server goes idle until another action is requested, and you are not billed for the idle time.
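On AWS Lambda, for example, an event-driven function is just a handler that receives the triggering event. Here’s a minimal sketch (the function and field names are illustrative, not from any real deployment):

```python
import json

def handler(event, context):
    # Invoked only when the configured event fires, e.g. an API Gateway
    # request; compute spins up, runs this, then goes idle again.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You’re billed only for the milliseconds this handler actually runs, not for any of the idle time in between invocations.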

Pros of serverless

The first pro is that you only pay for the time when the server is executing the action. As mentioned earlier, the server only runs when an event triggers it, so you’ll only pay for the short periods when the server is active. Save some bucks!

Additionally, serverless allows your app to be elastic. It can automatically scale up to accommodate many concurrent users and scale back down when traffic subsides. This characteristic increases the performance of your app while saving you money.

You’ll also spend much less time managing servers. You don’t need to worry about provisioning infrastructure, managing capacity, or avoiding downtime – your cloud service provider does all that.

As such, serverless architectures help you reduce development time and get your products to market faster. If you don’t need to focus on your infrastructure, you can spend your time on building the best product possible. This is one of the key benefits that Jack and his team saw in serverless.

Serverless also fits really well with microservices, where developers build smaller, more loosely-coupled parts of the software whole. Serverless alleviates the need for these developers to spin up their own server instances and allows them to build these microservices much more quickly.

And because serverless functions don’t need a long-term hosting location, you don’t need to assign them to specific cloud servers and aren’t limited to particular availability zones. This essentially makes them highly available.

Cons of serverless

Because servers sit cold until they’re invoked, there is some startup latency (a “cold start”) involved in executing tasks. Thus, serverless may not be an ideal solution for applications where speed is paramount, such as e-commerce and search sites.

Vendor lock-in is a concern as well. If you use the serverless offering of your cloud service provider (e.g. AWS Lambda, Azure Functions, or Google Cloud Functions) and you decide to migrate to another CSP, you’ll likely have to make major changes to your code base. And this can take a lot of time and money. This was a major drawback in the eyes of Jack and his team, as vendor lock-in was a primary concern.

To deal with this, you can use frameworks such as the Serverless Framework or Spring Cloud Function, which abstract serverless away from the CSP and allow you to jump from one provider to another more easily. But you’ll certainly lose some functionality along the way.
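To give a rough idea of how such a framework abstracts the provider, a Serverless Framework `serverless.yml` might look something like this (the service and handler names are hypothetical); switching CSPs then becomes largely a matter of changing the provider section and its plugins rather than rewriting your functions:

```yaml
service: hello-service

provider:
  name: aws          # swap for another provider via the framework's plugins
  runtime: python3.9

functions:
  hello:
    handler: handler.hello   # module.function to invoke
    events:
      - http:                # HTTP request triggers the function
          path: hello
          method: get
```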

Due to the event-based nature of serverless, it’s not the best choice for long-running apps. Online games and apps that perform analysis of very large datasets won’t be a good fit for a serverless architecture, as serverless functions have time limits (typically five minutes) before they are terminated.

Another con of serverless is that you don’t have much control over the server. Usually you can select the amount of memory your function gets, but then your CSP assigns a small amount of disk storage and decides the rest of the specifications for you. This can be a hindrance if you need something like a GPU to process large image or video files.

Finally, complex apps can be hard to build using a serverless architecture. You’ll have to perform a lot of coordination and manage dependencies between all of the serverless functions, which can be a tough job for large, complicated applications.

As you can see, serverless has its benefits and drawbacks. Let’s now see how containers stack up.

Containers

What is a container?

According to Docker, a container is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings.

Containers solve the problem of running software reliably when it’s moved from one computing environment to another by essentially isolating the software from its environment. For instance, containers allow you to move an application from development to staging and from staging to production, and have it run reliably regardless of the differences between those environments.
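As a concrete illustration, a minimal Dockerfile for a hypothetical Python app bundles the code together with its runtime and dependencies, so the same image runs identically in every environment:

```dockerfile
# Base image pins the runtime so it's the same in dev, staging, and prod
FROM python:3.9-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code itself
COPY . .
CMD ["python", "app.py"]
```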

 


Container architecture – image courtesy of Google Cloud

 

Pros of containers

The first benefit of containers is their portability. The main draw of a container is that you can combine the application and all of its dependencies into a neat little package and run it anywhere. This provides an unprecedented level of flexibility and portability, and allows you to stay cloud vendor-agnostic.

The next pro, especially compared to serverless architectures, is that you have full control over your application.

You are the master of your domain. You can lord over individual containers, the entire container ecosystem, and the servers they run on. You can manage all the resources, set all of the policies, oversee security, and determine how your application is deployed and behaves. You can debug and monitor as you please. This isn’t the case for serverless; rather, all of that is managed by your CSP.

Container-based applications can also be as large and complex as you need them to be, as there are no memory or time limitations like there are with serverless.

Cons of containers

The first con is that containers take much more work to set up and manage.

Every time you make a change to your codebase, you’ll need to package the container and ensure all of the containers communicate with each other properly before deploying into production. You’ll also need to keep containers’ operating systems up to date with frequent security fixes and other patches. And you have to figure out which containers go on which servers. All of this can slow down your development process.

Because containers need a long-running hosting location, they are more expensive to run than serverless functions. With serverless, you only pay when servers execute your function. With containers, you have to pay for server usage even if they’re sitting idle.

Containers face some scaling issues as well.

The first issue is monitoring. As an application grows, more and more containers are added. These containers are highly distributed and constantly changing, which makes monitoring a nightmare.

It’s also difficult for data and storage to scale with the increasing number of containers.

First, containers don’t persist data by default, so data is often lost when containers are rescheduled or destroyed. Next, given the distributed nature of containers, it’s difficult to move data between different locations or cloud service providers. Storage also doesn’t scale well with apps, which leads to unpredictable performance issues.

When should you use each?

Containers are best used for complex, long-running applications where you need a high level of control over your environment and you have the resources to set up and maintain the application.

Containers are also very useful in migrating monolithic legacy applications to the cloud. You can split up these apps into containerized microservices and orchestrate them using a tool like Kubernetes or Docker Swarm.
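For instance, once a piece of the monolith (say, a payments service) has been containerized, a short Kubernetes Deployment manifest is enough to run and scale it (all names and the image URL here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3               # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.0   # your container image
          ports:
            - containerPort: 8080
```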

Containers are ideal for an app like a large e-commerce website. A site like this contains (pun intended) many parts, such as product listings, payment processing, inventory management, and many more. You can use containers to package each of these services without having to worry about time limits or memory issues.

Serverless is best applied for apps that need to be ready to perform tasks but don’t always need to be running. For instance, serverless is a great choice for an Internet of Things (IoT) application that detects the presence of water to identify a leak in a water storage facility. The app doesn’t have to run all the time, but it needs to be ready to act in the case of a leak.

Serverless is ideal when development speed and cost minimization are paramount and you don’t want to manage scaling concerns.

Can serverless and containers work together?

No doubt!

Serverless and containers have strengths that can complement the other’s weaknesses, and using the two technologies together can be highly beneficial.

You can build a large, complex application with a container-based microservices architecture. But the application can hand off some back-end tasks, such as data transfer, file backups, and alert triggering, to serverless functions. This would save money and could increase the application’s performance.

AWS Fargate is a tool that is sort of a hybrid between containers and serverless that can help alleviate some of the issues that each of these technologies presents.

Fargate is a compute engine for Amazon Elastic Container Service (ECS) and Elastic Container Service for Kubernetes (EKS) that lets you run containers without having to manage servers or clusters. You don’t have to provision, configure, or scale virtual servers to run containers. Thus, Fargate combines the portability of containers with the elasticity and ease of use of serverless.

 


How Fargate works – image courtesy of AWS

 

One of Jack’s primary concerns with serverless is the lack of portability. At its core, Fargate is a container service (with serverless attributes), so portability isn’t a concern. Fargate also works with Kubernetes via EKS and thus is very portable to other cloud platforms such as Azure (Azure Kubernetes Service) or Google Cloud Platform (Google Kubernetes Engine).

Additional concerns with serverless include the five-minute maximum runtime and memory limitations. In order to get around this, Fargate can be used to create what our VP of Software Development Dan Rusk calls a “Fat Lambda”.

To create a Fat Lambda, you:

  • Wrap your code in a container
  • Deploy that container to Elastic Container Registry (ECR)
  • Run that container in a task in Fargate (which can either be scheduled with CloudWatch or triggered by a Lambda)
  • Eventually convert this to run in an ECS EC2 cluster, if necessary
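The “triggered by a Lambda” step above can be sketched as a small function that starts the Fargate task via boto3. The cluster and task-definition names here are hypothetical, and the client is injectable so the sketch can be exercised without an AWS account:

```python
CLUSTER = "fat-lambda-cluster"   # hypothetical ECS cluster name
TASK_DEF = "fat-lambda-task:1"   # hypothetical task definition

def handler(event, context, ecs=None):
    # In the real Lambda this uses a boto3 ECS client; tests can inject a stub.
    if ecs is None:
        import boto3  # available in the AWS Lambda runtime
        ecs = boto3.client("ecs")
    # Kick off the containerized "Fat Lambda" task on Fargate
    response = ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=TASK_DEF,
        launchType="FARGATE",
        count=1,
    )
    return {"startedTasks": [t["taskArn"] for t in response.get("tasks", [])]}
```

Note that a real `run_task` call for Fargate also needs a `networkConfiguration` with your subnets; it’s omitted here to keep the sketch short.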

When a task gets too big for Lambda, you can trigger the Fat Lambda to run. It’s a pretty cool workaround. Check out Dan’s Fat Lambda presentation from our Columbia AWS Meetup for more detail.

 

 

An issue with containers is the aforementioned scalability problem. Fargate lets you scale without managing servers, freeing developers to focus on building the product.

Other tools are also facilitating the serverless + container combination. IronFunctions works with Containership to create a sort of “serverless container orchestration.” The Fn Project packages your serverless functions in containers that can run on any platform that supports Docker.

There’s a lot of activity around combining serverless and containers because there are so many benefits that can come out of this powerful marriage.

Back to Jack

After a bit of back-and-forth discussion and consultation, Jack got back to us on his decision. He and his team concluded that serverless and containers don’t compete with each other but rather can be used to complement each other.

Also, Jack admitted that the debates were sometimes more political than technical. Some team members were just fans of one technology over the other.

Jack’s team has started implementing Docker using Docker Swarm as their orchestration engine, with plans to move over to Kubernetes in the near future.

He plans to continue researching ways to incorporate serverless into his company’s applications over the next few months.

Conclusion

Containers and serverless are two increasingly popular development technologies that seem to be constantly compared to and competing with each other.

This was certainly the case with our friend Jack, who needed to come up with a cloud strategy for his enterprise.

Jack learned that serverless and containers aren’t mutually exclusive and can be integrated so that each one’s strengths offset the other’s weaknesses.

Are you debating whether containers or serverless is right for your applications? Have you considered integrating both of them? We’d love to hear your thoughts in the comments.