VMs – The New Infrastructure Anachronism

The virtualization of resources has been fundamental at every stage of the development of computing. Today, application developers don't think about disk capacity, memory limits, or network bandwidth. And they depend on resource virtualization for application isolation, privacy and security. But as our appetite for computing has continued to grow, mapping virtualization onto the point-in-time realities imposed by Moore's Law and by human capacity to provision and manage resources at scale has required the constant addition of new abstractions.

Ten years ago in enterprise IT the key constraint was human: it was simpler to provision a single application per (relatively cheap) x86 server and to scale out than to deal with the administrative challenges of multi-application servers; moreover, applications had a pesky habit of being OS-version dependent. Meanwhile, Moore's Law did its job, delivering vastly more capacity per device than the OS and a single application needed. The result: lots of servers with low utilization. The smart folk at VMware, followed by others (Xen, Microsoft Hyper-V, KVM), re-discovered hardware virtualization, using a hypervisor to permit a single server to host multiple Virtual Machines, each of which encapsulates and isolates an OS instance and its application(s) and enables them to execute unchanged against a virtualized hardware abstraction.

Fast-forward: in today's enterprise datacenters, server (and storage and network) virtualization has delivered far more than utilization gains. Computing resources can be dynamically delivered to application instances based on business needs; relocation of running VMs delivers high availability; and IT can quickly respond to demands for new capacity. Most importantly, hypervisor-based virtualization laid the foundation for Infrastructure as a Service (IaaS) cloud computing – a transformation that eliminates the human labor of physical resource provisioning and enables consumption-based resource pricing and VM-based server multi-tenancy.

But the abstraction has broken once again, and this time VMs are part of the problem. Moore's Law never sleeps (why buy a server, or a router or switch?). Fast adoption of IaaS led to the DevOps movement, while mobile-fueled consumer SaaS (e.g. Netflix, Facebook), big data, and the rise of Platform as a Service (PaaS) clouds that hide the very concept of an OS from the app developer increasingly make VMs an anachronism. Applications adapt to infrastructure failures, and DevOps and PaaS frameworks can auto-scale application capacity across multiple servers – even in different data centers. The (human) notion of running a VM on a server is irrelevant. Moreover, public clouds increasingly thrive on OS homogeneity (e.g. Ubuntu in AWS, or Windows Server in Azure), and having many copies of the OS on a single server (one in each VM) wastes memory, bloats storage and clogs the network. Using a VM just to achieve isolation, or to permit secure multi-tenancy, is wasteful. Applications can be better secured if the OS against which they run is minimized in size and optimized for the cloud – and only one copy of the OS is needed.

Crucially, the hardware-isolation technologies incorporated into CPUs to support server virtualization can now be used to deliver hardware-enforced multi-tenancy for applications, without the bulk of a full VM per application, using technologies such as micro-virtualization. Today, IaaS clouds rent you memory based on the size of your VM. But how many copies of the OS do we really need on a server? Right: one. The use of micro-virtualization in a cloud context will provide an extraordinarily lightweight capability for hardware multi-tenancy of applications – in micro-VMs.
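The memory argument can be made concrete with a back-of-the-envelope sketch. All figures below (OS footprint, per-app working set, app count) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: one guest OS per VM vs. one shared OS with
# per-application micro-VMs. All numbers are illustrative assumptions.

GUEST_OS_MB = 512    # assumed resident footprint of a full guest OS
APP_MB = 64          # assumed per-application working set
APPS = 40            # applications hosted on one server

# Traditional VMs: every application drags its own OS copy along.
full_vms_mb = APPS * (GUEST_OS_MB + APP_MB)

# Micro-VMs: a single bare-metal OS, shared by every application.
micro_vms_mb = GUEST_OS_MB + APPS * APP_MB

print(full_vms_mb, micro_vms_mb)  # 23040 3072 -> ~7.5x less memory
```

Even with these conservative assumptions the per-VM OS copies dominate, and the saving grows with the number of applications packed onto each server.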

Instead of booting a VM instance on a virtual server, micro-virtualization in the cloud will permit an application to just run, against a single bare-metal OS instance – in milliseconds – while benefiting from the hardware isolation offered by virtualization features in the server chipset. In the Linux world, ZeroVM, built on NaCl, offers a minimized bare-metal Linux that aims to provide secure isolation. One could deliver a similar Windows capability fashioned on Server Core.

Instead of creating and managing VMs through their life-cycles (create the VM, patch the OS, install the app, boot, snapshot, suspend, resume, clone…), it will be easier to dynamically provision application instances into cloud-hosted micro-VMs using lightweight application containers such as Docker. Why wait for a VM to boot when an app can instantly launch and run? Docker application containers can also be moved on the fly – just as VMs before them – but there's no need to lug around a whole OS with the app. Instead, the application container can be moved to a new micro-VM and the old one destroyed. In the Azure world, expect an evolution of today's application virtualization technologies, starting with an ability to move applications from Windows to Azure (FD). The recently announced .NET Foundation could play a key role in future.
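To illustrate why a container doesn't lug an OS around, here is a minimal, hypothetical Dockerfile; the paths and names are invented for illustration. The image carries only userspace files for the app and its dependencies – the kernel comes from the single host OS, so there is nothing to boot:

```dockerfile
# Hypothetical container image: userspace only -- no kernel, no bootloader.
FROM ubuntu:14.04     # base userspace layer, shared across images
COPY app /opt/app     # the application binary (illustrative path)
CMD ["/opt/app/run"]  # process starts in milliseconds; no OS to boot
```

Running the image starts the application process directly; contrast that with a VM, where a guest OS must boot before the app can even launch.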

In both IaaS and PaaS clouds it is key to be able to efficiently run, relocate and automate application instances – for example, to permit big-data queries to execute, mutually isolated, on the nodes that manage the data. Hardware isolation for application containers – using micro-virtualization – will do the rest.

There's another good reason to use micro-virtualization in the cloud: density. For Bromium's use case, micro-virtualization delivers about a 100x density improvement compared with traditional fat VMs. On my Windows PC, hardware-isolated applications in micro-VMs all share the Windows 7 OS, but execute copy-on-write. I regularly have ~150 micro-VMs in under 4GB of memory. My guess is that if we were to re-price today's IaaS cloud offerings at a penny on the dollar while continuing to deliver hardware-enforced multi-tenancy, enable apps to just "run", and get IT out of OS and VM lifecycle management, adoption of cloud wouldn't be an option; it would be an imperative.
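The copy-on-write arithmetic behind that density claim can be sketched as follows. The footprints below are assumptions chosen for illustration; the ~100x figure is Bromium's, not derived here:

```python
# Copy-on-write density sketch. Each micro-VM shares the host OS pages
# read-only and privately keeps only the pages it modifies, so its
# footprint is a small fraction of a full VM's. Numbers are assumptions.

TOTAL_BUDGET_MB = 4096   # ~4 GB, as in the Windows PC example above
MICRO_VMS = 150          # concurrently running micro-VMs
FULL_VM_MB = 2048        # assumed footprint of a traditional fat VM

per_micro_vm_mb = TOTAL_BUDGET_MB / MICRO_VMS   # private pages per micro-VM
density_gain = FULL_VM_MB / per_micro_vm_mb

print(round(per_micro_vm_mb, 1), round(density_gain))  # 27.3 75
```

Under these assumptions each micro-VM privately holds only a few tens of megabytes – the rest is shared with the host OS – which is what makes packing ~150 of them into a laptop's memory plausible.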

When we look back on enterprise IT infrastructure in 10 years, VMs will still play an important role – in private/hosted-private clouds where humans manage traditional enterprise IT applications that are tied to legacy OS dependencies. For as long as legacy infrastructure applications remain, VMs will remain a key infrastructure abstraction. But real clouds, both public and private, will provide granular, agile, app-centric hardware-enforced multi-tenancy with vastly superior resource utilization and availability – without the explicit need to expose the concept of a "virtual machine".

Next post: VMs are an anachronism in end-user computing too!
