There’s been keen interest in container technology lately, which is curious because containers have in fact been around for years. Overshadowed by virtualization for the last decade, containers are making a comeback with a run-anywhere, any-platform promise. Does that sound like the promise of virtualization? Perhaps, but there are some key differentiators, and they matter a lot. Before we get to those, let’s look back and review the evolution.
Years ago, when an application needed to be deployed, new hardware was ordered, and in weeks or often months the application would be installed on the new server and housed in the data centre. It was a lengthy, laborious and expensive proposition. Putting aside the time delays, once live, servers require power, cooling and people to manage them. Sometimes, a lot of people. Over time, servers became increasingly powerful, and the one-app-to-one-server ratio meant that the majority of the time the server sat idle, waiting for a compute request. Not the best use of resources.
Virtualization delivered the ability to run multiple virtual servers on a single piece of hardware, thereby increasing utilization. Using a hypervisor, organizations could virtualize the hardware and slice it up into multiple, isolated operating systems (OSes), each running within its own virtual machine (VM). This changed everything. Data centres could consolidate their hardware footprint or increase compute density simply by virtualizing hardware. Physical servers would be converted to virtual (P2V’d), and applications would continue to run without users ever knowing the difference. Fewer servers to purchase and manage, significantly lower power and cooling costs, and a seamless user experience. Can it get any better than this? Yes it can.
With containers, instead of virtualizing the underlying hardware, only a single OS is shared. Containers sit on top of a physical server and a host OS (typically Linux or Windows). Each container shares that host OS kernel and, usually, its binaries and libraries. The shared components are read-only, and sharing OS resources such as libraries significantly reduces the need to duplicate operating system code in every container. What does that mean? It means that a physical server can run multiple isolated workloads within a single operating system installation.
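To make the layering concrete, here is a minimal, illustrative Dockerfile (the base image version and application file name are assumptions for the sketch, not a prescribed setup). Every container started from this image shares the host’s kernel at run time, and the base layer is stored once, read-only, and reused by every container and image built on it:

```dockerfile
# Illustrative sketch: a tiny image layered on a shared, read-only base.
FROM alpine:3.19            # base layer, stored once and shared read-only
COPY app.sh /app.sh         # thin application layer specific to this image
CMD ["/bin/sh", "/app.sh"]  # no guest kernel: the host kernel is shared
```

Note what is absent: there is no operating system kernel in the image at all, which is why container images stay small while a VM image must carry a full guest OS.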
One OS to patch and maintain instead of many minimizes system admin time and reduces the attack surface. Containers are lightweight, typically only megabytes in size, and take just seconds to start. By comparison, VMs take minutes to boot and each is typically measured in gigabytes. You can run far more containers on a physical server than you could VMs.
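The density claim can be made concrete with a back-of-envelope calculation. All of the figures below (host RAM, per-VM guest OS overhead, per-container runtime overhead) are illustrative assumptions for the sketch, not benchmarks; the point is only that removing the per-instance guest OS changes the arithmetic dramatically.

```python
# Back-of-envelope density comparison. Every figure here is an
# assumption chosen for illustration, not a measured benchmark.
SERVER_RAM_MB = 256 * 1024       # a host with 256 GB of RAM
APP_RAM_MB = 512                 # RAM the application itself needs

VM_OS_OVERHEAD_MB = 2 * 1024     # a full guest OS per VM (~2 GB, assumed)
CONTAINER_OVERHEAD_MB = 50       # per-container runtime overhead (assumed)

# Each VM pays for the app plus a whole guest OS; each container
# pays for the app plus a sliver of shared-runtime overhead.
vms = SERVER_RAM_MB // (APP_RAM_MB + VM_OS_OVERHEAD_MB)
containers = SERVER_RAM_MB // (APP_RAM_MB + CONTAINER_OVERHEAD_MB)

print(f"VMs per host: {vms}, containers per host: {containers}")
```

Even with generous assumptions for the VM, the same hardware hosts several times as many containers, purely because the guest OS cost disappears.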
Another enormous benefit of containers is portability. Cloud-first strategies and corporate initiatives have been driving public cloud adoption in recent years, and companies are now struggling with soaring costs and vendor lock-in. Because a container packages an application together with its dependencies, it can easily be moved into or out of any environment, on-premises or cloud.
Are you running many instances of the same operating system? Then containers are likely a great fit for your organization and they just might save you significant time and money over VMs.