When it comes to managing containers and the cluster infrastructure they run on, what’s the right tool for you?
Containers have rapidly increased in popularity by making it easy to develop, promote and deploy code consistently across different environments. Containers are an abstraction at the application layer, wrapping up your code with necessary libraries, dependencies, and environment settings into a single executable package.
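To make that abstraction concrete, a container image is usually defined declaratively. The Dockerfile below is a minimal, hypothetical sketch for a Node.js service; the base image, file names and port are illustrative placeholders, not a recommendation:

```dockerfile
# Hypothetical Dockerfile for a small Node.js service.
# Base image, file names and port are placeholders.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production

# Copy the application code and declare its runtime settings
COPY . .
ENV NODE_ENV=production
EXPOSE 8080

CMD ["node", "server.js"]
```

Built once with `docker build`, the resulting image carries the code, its dependencies and its environment settings everywhere it runs.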
Containers are intended to simplify the deployment of code, but managing thousands of them is no simple task. Creating highly available deployments, scaling up and down according to load, checking container health and replacing unhealthy containers with new ones, exposing ports and load balancing all require another tool. This is where container orchestration comes in. Containers and microservices go hand in hand, and the number of individual services running in a typical microservice environment far exceeds the number of monoliths running in a traditional one. With this added complexity, container orchestration is a must for any realistic deployment at scale.
Aside from orchestration issues, another issue remains to be solved – where and how can containers be run? Additional tools are needed to run a cluster and manage cluster infrastructure. Fortunately, we have a few choices to fill this need.
Docker has become the de facto standard for creating containers. For orchestration and cluster management, ECS, Docker Swarm and Kubernetes are three popular choices, each with its own pros and cons.
1. AWS Elastic Container Service (ECS)
One solution is to offload the work of cluster management to AWS through Amazon’s Elastic Container Service (ECS). ECS is a good fit for organizations that are already familiar with Amazon Web Services. A cluster can be configured and deployed with just a few clicks, backed either by EC2 instances you manage yourself or by AWS Fargate, a serverless compute engine that manages the underlying infrastructure for you.
Pros: Terminology and underlying compute resources will be familiar to existing AWS users. Fast and easy to get started, easily scaled up and down to meet demand. Integrates well with other AWS services. One of the simplest ways to deploy highly available containers at scale for production workloads.
Cons: Proprietary solution. Vendor lock-in: containers are easily moved to other platforms, but configuration is specific to ECS. No access to cluster nodes in Fargate makes troubleshooting difficult. Not customizable and doesn’t work well for non-standard deployments.
Bottom Line: Fast and easy to use, especially for existing AWS users. Great option for small teams who don’t want to maintain their own cluster. But vendor lock-in and the inability to customize or extend the solution may be an issue for larger enterprises.
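As a sketch of how ECS describes a workload: containers are grouped into task definitions, registered with the service and then run on the cluster. The JSON below is a minimal, hypothetical Fargate task definition fragment; the family name, image URI, CPU/memory values and port are assumptions, not a complete production definition:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

A definition like this would be registered with `aws ecs register-task-definition` and run as a service, with ECS handling placement, health checks and replacement of failed tasks.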
2. Docker Swarm
For those who are just getting started with Docker, Swarm mode is a quick, easy solution to many of the problems introduced by containers. Swarm extends the standard Docker command line utility with additional commands for managing clusters and nodes, for scaling services and for rolling updates. Service discovery, load balancing and more are all handled by the platform.
Pros: Great starting point for those who are new to Docker or who have used Docker Compose previously. No additional software to install: Swarm mode is built into Docker. Simple and straightforward, great for smaller organizations or those who are just getting started with containers.
Cons: A relative newcomer, Swarm lacks the advanced features and functionality of Kubernetes, such as built-in logging and monitoring tools. Likewise, overall adoption lags behind Kubernetes and proprietary offerings like ECS.
Bottom Line: Swarm is a good choice when starting out: it’s quick and easy to use and is built into Docker, requiring no additional software. But you may find yourself quickly outgrowing its capabilities.
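To illustrate what Swarm mode looks like in practice, the Compose file below is a hypothetical example (the service name, image and replica count are placeholders) that could be deployed to a Swarm cluster:

```yaml
# docker-compose.yml – hypothetical service for Swarm mode
version: "3.8"
services:
  web:
    image: myorg/web-app:latest   # placeholder image
    ports:
      - "80:8080"                 # published via Swarm's routing mesh
    deploy:
      replicas: 3                 # Swarm keeps three copies running
      update_config:
        parallelism: 1            # rolling updates, one task at a time
      restart_policy:
        condition: on-failure     # replace failed containers automatically
```

Running `docker stack deploy -c docker-compose.yml web` hands the file to Swarm, which then handles scheduling, load balancing and rolling updates across the cluster.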
3. Kubernetes
For advanced users, Kubernetes offers the most robust toolset for managing both clusters and the workloads that run on them. One of the most popular open source projects on GitHub and backed by Google, Microsoft and others, Kubernetes is the most widely used solution for deploying containers in production. The platform is well-documented and extensible, allowing organizations to customize it to fit their needs. Although it is fairly complex to set up, many managed offerings are available, including EKS from AWS, GKE from Google Cloud, AKS from Azure, PKS from Pivotal and now even Docker’s own hosted Kubernetes service.
Pros: Most popular and widely adopted tool in the space for large enterprise deployments. Backed by a large open source community and big tech companies. Flexible and extensible to work in any environment.
Cons: Complex to learn and difficult to set up, configure and maintain. Not compatible with Docker Swarm and Compose CLI commands or manifests.
Bottom Line: For true enterprise-level cluster and container management, nothing beats Kubernetes. Although complex, ultimately that complexity translates into additional features that prove extremely valuable as your containerized workload begins to scale. As cloud vendors race to simplify things with managed k8s offerings, it will only get easier to deploy and maintain a Kubernetes cluster.
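As a sketch of the declarative model that underpins Kubernetes, the manifest below is a minimal, hypothetical Deployment and Service; the names, image and replica count are placeholders:

```yaml
# deployment.yaml – hypothetical Deployment plus Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                       # desired number of pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: myorg/web-app:latest   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, the cluster continuously reconciles the running state against this specification, restarting or rescheduling pods as needed.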
Code-Level Monitoring with OverOps
Whether you use ECS, Docker Swarm or Kubernetes, orchestration and deployment is just the beginning in terms of challenges associated with containerized applications. With so many moving parts, it can be difficult to understand when something goes wrong, let alone where and why it went wrong.
Traditional monitoring tools are far from perfect even for traditional monolithic architectures, but when it comes to containerized applications their coverage gaps are much harder to overcome. The main challenge in monitoring containerized applications is in understanding the flow of a transaction as it passes through multiple containers to get to the real root cause of an issue.
Logs have always been dependent on developer foresight and are notoriously shallow when it comes to troubleshooting application issues. With microservices, logs are written and stored across multiple services, making it even harder to follow the trail of breadcrumbs. APM tools, likewise, provide significant insight into resource consumption and transaction flow through the system, but can’t reveal the individual line of code where an error occurred or the state of variables at the time of the error.
OverOps is able to provide deep, code-level insights into your containerized applications including the full variable state at the time of an error. Our highly scalable, microservices friendly architecture is easily deployed in your ECS, Swarm or Kubernetes cluster.
For all the problems they solve, containers introduce new challenges that must be addressed in order for them to be used for real production deployments. As organizations continue to adopt containers, the need for tooling becomes more important than ever. Whether you’re offloading work to AWS, keeping it simple with Docker Swarm or going all-in with Kubernetes, code-level monitoring is critical to quickly identify and resolve issues.