Containers and the “End” of Server Virtualization
Attention, managers of virtual machine infrastructure: containers are coming. For some, they are already here. It is not the end, but it is a new chapter. Be warned and be ready.
That’s the thing with infrastructure in a rapidly changing technology environment. Just when you think you’ve got a new thing nailed down and normalized in production, along comes a new, new thing and everybody is saying the formerly new thing is over.
Such is the case with processor virtualization, the sort of thing enabled by a hypervisor such as VMware vSphere or Microsoft Hyper-V. The way you got efficiency and resiliency for the growing sprawl of commodity servers in the data center was to virtualize those servers (as virtual machines, or VMs) and consolidate them on fewer physical hosts.
We are coming to the end of that movement. Most organizations with large numbers of x86 (Windows and Linux) servers are today majority virtualized. Many tell Info-Tech they are 90% or more virtual.
But now comes this thing called the application container and the container host platform (the best known being Docker). Wrapping an application in a container is said to be more efficient and lightweight than wrapping it in a VM. Further, to host a bunch of applications in containers on a server, you don’t even need a hypervisor.
Containers Are Virtualization
So is virtualization over? Far from it. Virtualization is just beginning. The mega trends we’ve seen in IT infrastructure over the past decade or more are continuing. These big trends include:
- Consolidation and Convergence. Distributed processing on Windows/Linux servers led to sprawl. Consolidation and convergence reverse physical device sprawl, bringing it all together in ever-tighter clusters of high-capacity processing and storage.
- Standardization and Commodification. The foundational layer of the consolidated and converged infrastructure is standardized grids or clusters of commodity hardware. The more hands-off and wire-once this grid, the better.
- Abstraction (Software Defined). With unchanging hardware underneath, all the management and configuration action happens not in the hardware but in software. A hypervisor, for example, is an abstraction layer that lets you treat a single physical machine as if it were a bunch of separate machines (VMs), each with its own operating system and applications installed.
A container is just another form of abstraction. Where a hypervisor divides up, or partitions, a single physical machine into multiple virtual machines, containers partition a single operating system into multiple instances. The abstraction just sits at a different layer:
- With a hypervisor, each virtual machine has an operating system that thinks it exclusively owns (rather than shares) a computer.
- With a container, each containerized application thinks it has exclusive ownership of an operating system, although multiple containers can be hosted on a single OS. The quick test below makes the difference concrete.
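Here is a minimal illustration, assuming a Linux host with Docker already installed (the alpine image is just a convenient small example):

```sh
# On the host: report the running kernel version.
uname -r

# In a container: the same kernel answers, because the container
# shares the host's kernel rather than booting its own.
docker run --rm alpine uname -r

# A VM on the same host boots its own kernel, so the same command
# inside a guest can report a different version entirely.
```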
OS abstraction has been around as long as machine partitioning into VMs. In the past, VMs had an advantage over containers in that they were more portable: as each VM had a complete OS installed, it could be copied from host to host to host. This changed with the advent of container platforms like Docker.
Docker extended the idea of a container to the concept of a “shipping container for code” that promised frictionless deployment and optimum portability. Now you can package up just the OS services that the application depends on and move the packaged container to another computer running the same operating system and the Docker platform.
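To make that concrete, here is a hedged sketch of the packaging workflow for a hypothetical Python web app (the file names, image tag, port, and registry address are illustrative, not prescriptive):

```sh
# Describe the package: a slim base image plus only the files and
# dependencies the app needs, with no full guest operating system.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
EOF

# Build the image and run it locally to test.
docker build -t myapp:1.0 .
docker run --rm -p 8000:8000 myapp:1.0

# The same image then moves, unchanged, to any other Linux host
# running Docker: the "shipping container" in action.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```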
Better Demarcation of Accountabilities
Proponents of containers over VMs will point out that containers are lighter weight than VMs, as they do not contain a full operating system, only the bits necessary to make the application run on a given OS. This also means that infrastructure management and development can function with less operational overlap.
In an ideal world, infrastructure operations would focus on the availability, capacity, and performance of a homogeneous platform. Developers would focus on building and configuring the application. The process would be frictionless: there would be no need to establish requirements and approvals for a server (even a virtual one). When the application is ready, it is simply moved to the appropriate host.
When an application is “wrapped” in a virtual machine, that machine has all the maintenance requirements (such as configuration and patching) of a physical machine. Overlap in the accountability for the maintenance of that VM is a source of friction (and possibly contention) in operations.
Long Live the VM!
That highly efficient virtual server infrastructure you have been building and tending this past decade is far from obsolete. For all the hype, containers remain an emerging technology choice, and it is not an either/or decision.
In that ideal world, the infrastructure would be a homogeneous grid of commodity servers. This is what cloud infrastructures look like. The real world of the corporate data center is more heterogeneous, and there VMs have moved from the next big thing to the legacy investment.
VMs are also better at heterogeneity, where multiple OS versions must be hosted; containers are better suited to a single server type and OS. The current investment in virtualization also includes mature management and governance tool sets for the infrastructure. A 2015 survey by StackEngine (later acquired by Oracle) found that 49% of respondents listed security and the maturity of operational tools as their chief concerns with containers.
To protect and leverage the current investment in virtualization while exploring the potential of containers, the near-term strategy is to host your emerging container infrastructure on virtual machines. Hosting a container on a VM may at first seem redundant and wasteful of resources, but it is the best way to take advantage of containers while ensuring enterprise-level security, reliability, availability, and scalability.
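As a minimal sketch of that near-term pattern, assuming an Ubuntu VM provisioned through your existing hypervisor tooling, turning it into a container host takes only a few steps:

```sh
# Inside the freshly provisioned Ubuntu VM: install the distribution's
# Docker engine package and start it now and on every boot.
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker

# Smoke test. The VM is now a container host, and it still inherits
# the hypervisor layer's snapshots, backups, and security controls.
sudo docker run --rm hello-world
```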
Recommendations
- Get started with containers. Set as a strategic goal the creation of a container-ready infrastructure that meets both the requirements of developers and application managers and the availability, recoverability, and security requirements of the enterprise.
- Start with hosting containers on VMs. In the short term, the best solution is likely to be hosting containers on container-ready VMs running Linux and a container engine like Docker. This may not be optimal for performance, but it will be optimal for securing the underlying infrastructure and assuring its availability.
- Look to less hypervisor dependence in the future. Longer term, enterprises should pilot running containers on bare metal to become familiar with the emerging tools for managing containers. The future is likely a hybrid of virtualized infrastructure and bare-metal container infrastructure.
Bottom Line
Abstraction, in the form of virtualization and software-defined infrastructure, isn’t going anywhere. But one form of abstraction, the server hypervisor, has peaked in terms of market penetration and mainstream adoption. Future infrastructures will be 100% software defined, but that doesn’t mean 100% of servers will need hypervisors. Your container strategy should focus on a hybrid future that bridges from legacy to new-style virtualization.