Do you remember how, 10 or 15 years ago, everyone was into virtualisation? VMware's share price went through the roof, and every company was migrating bare-metal servers to VMware or other virtualisation vendors.
Well, IT has a history of cycles in which similar technologies, with a few tweaks here and there, re-emerge from the ashes, usually in a better, cheaper and faster form.
That is what is happening with containers and their flagship technologies, Docker and Kubernetes. Containers are like Virtualisation 2.0, and everyone is talking about them now. But what is a container, then?
Containers became a core feature of the Linux kernel some time ago, but they were still hard to use. Docker launched with the promise of making containers easy to use, and developers quickly latched onto that idea. At the core of container technology are control groups (cgroups) and namespaces.
Additionally, Docker uses union file systems for added benefits to the container development process. Control groups (cgroups) work by allowing the host to share, and also limit, the resources each process or container can consume. This is important for both resource utilisation and security, as it prevents denial-of-service attacks on the host's hardware resources. Several containers can share CPU and memory while staying within the predefined constraints.
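In practice you rarely touch cgroups directly; the container runtime configures them for you. As an illustrative sketch (the flags are from the Docker CLI, and `alpine` is just an example image; this requires a running Docker daemon), resource limits can be set when starting a container:

```shell
# Cap this container at 512 MB of RAM and half a CPU core.
# Docker translates these flags into cgroup settings on the host,
# so the process inside cannot exceed the limits.
docker run --rm --memory=512m --cpus=0.5 alpine echo "running within limits"
```

If the process inside tries to allocate more memory than the cap, the kernel's cgroup enforcement steps in rather than the whole host suffering.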
Namespaces offer another form of isolation, at the level of processes. A process can see only the process IDs that belong to its own namespace; processes in other namespaces are not visible from inside a container. Similarly, a network namespace isolates access to the network interfaces and configuration, which allows the separation of network interfaces, routes, and firewall rules.
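You can see namespaces in action on any Linux host through `/proc`: every process carries a link for each namespace it belongs to, and two processes in the same namespace share the same identifier (a minimal sketch, assuming a Linux system with `/proc` mounted):

```shell
# Each link names a namespace and its inode number; processes that
# print the same inode for a given namespace share that namespace.
readlink /proc/$$/ns/pid    # e.g. pid:[4026531836]
readlink /proc/$$/ns/net    # e.g. net:[4026531840]
```

A containerised process would show different inode numbers here than the host's processes, which is exactly the isolation described above.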
Using containers, developers are now able to have truly portable deployments. Containers that are deployed on a developer's laptop are easily deployed on an in-house staging server, and from there easily transferred to the production server running in the cloud. This is because Docker builds container images from build files that specify parent layers. One advantage of this is that it becomes very easy to ensure that OS, package, and application versions are the same across development, staging, and production environments.
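For example, a minimal build file (a Dockerfile) pins the parent layer and the dependencies on top of it, so every environment builds the same image; the image, file, and command names below are purely illustrative:

```dockerfile
# Parent layer: a specific, pinned base image
FROM python:3.12-slim

# Each instruction adds a new layer on top of the parent
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependency versions pinned in the file
COPY . .
CMD ["python", "main.py"]
</imports>
```

Because the laptop, the staging server, and the production host all build from the same file, they end up with byte-for-byte identical layers.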
Because all the dependencies are packaged into the image layers, the same host server can run multiple containers with a variety of OS or package versions. Further, we can have various languages and frameworks on the same host server without the typical dependency clashes we would get in a Virtual Machine (VM) with a single operating system.
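As an illustration (hypothetical container names, example images, and a Docker daemon assumed), two different language runtimes can serve side by side on one host without interfering:

```shell
# Two isolated Python versions on the same host, each in its own
# container with its own filesystem and dependencies.
docker run -d --name app-py311 python:3.11-slim python -m http.server 8001
docker run -d --name app-py312 python:3.12-slim python -m http.server 8002
```

On a single VM you would have to make both versions coexist in one OS; here each container simply carries its own.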
Compared to Virtual Machines, containers are much faster and run in user space: you can have a container up and running in a matter of seconds, compared to the minutes it can take a Virtual Machine to boot. They are also much smaller: we are talking about MBs, while Virtual Machine images usually run into the GBs.
The well-defined isolation and the layered filesystem also make containers ideal for running systems with a very small footprint and domain-specific purposes. A streamlined deployment and release process means we can deploy quickly and often; many companies have reduced their deployment time from weeks or months to days, and in some cases hours. This development life cycle lends itself extremely well to small, targeted teams working on small chunks of a larger application.
Why the name container?
Do you know the containers you see on the big transoceanic ships, the ones used to move goods cheaply from China? OK, think about those containers for a moment. The modern shipping industry only works as well as it does because we have standardised on a small set of shipping container sizes.
Before the advent of this standard, shipping anything in bulk was a complicated, laborious process. Imagine what a hassle it would be to move an open pallet of devices such as laptops from a ship onto a truck, for example. Instead of ships that specialise in carrying electronics, car parts, or whatever else from Asia, we can just put everything into containers and know that those will fit on every container ship.
The promise behind software containers is essentially the same. Instead of shipping around a full operating system and your software (and maybe the software that your software depends on), you simply pack your code and its dependencies into a container that can then run anywhere — and because they are usually pretty small, you can pack lots of containers onto a single computer.
Containers simply make it easier for developers to know that their software will run, no matter where it is deployed.