Crossing the Hellespont


We are engaged in a grand quest to discover the holy grail of the desktop. We've learned key lessons from the Trojan War: my desktop must support the apps I use today and in the future, seamlessly manage dynamic notions of trust, protect my data at rest and at runtime, and tirelessly defend me and those who trust me, even when I make mistakes or enter a zone of unfathomable trust.

It's time to leave Troy and head across the Hellespont toward Byzantium – a task of daunting complexity for any army. Complexity is a key contributor to insecurity. Every enterprise has a large number of PCs, each of which at any time has different apps, configurations, patches and so on. Only one of them needs to be compromised for the entire organization to fail. Achilles had a fatal vulnerability: his heel. Others in the Greek army had ankle guards – why didn't he? If all PCs have the same golden image, with the latest patches, surely we can be secure?
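The "only one needs to be compromised" point is just compound probability. A minimal sketch – the per-machine risk and fleet sizes below are illustrative assumptions, not measurements:

```python
# Probability that at least one of N independently managed PCs is
# compromised, given a per-machine compromise probability p.
# Both p and the fleet sizes below are illustrative assumptions.

def fleet_compromise_probability(n_machines: int, p_per_machine: float) -> float:
    """P(at least one compromise) = 1 - (1 - p)^N."""
    return 1.0 - (1.0 - p_per_machine) ** n_machines

for n in (10, 1_000, 100_000):
    print(f"{n:>7} PCs -> {fleet_compromise_probability(n, 0.0001):.4f}")
```

Even a 0.01% per-machine risk makes at least one compromise a near-certainty across an enterprise-scale fleet, which is why uniformity alone cannot save us.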

Anyone who has tried to build a trustworthy system, like XenClient, knows the importance of a tiny TCB (trusted computing base) – fewer lines of code are better (a key reason for the continued success of Xen). But a secure hypervisor isn't going to help users who rely on Windows and its millions of applications. So let's get one thing clear: rewriting the OS, expecting users to adopt a new OS or learn a new virtual user experience (such as tabbing between a corporate and personal VM), or making IT learn complex new technologies, management practices and tools is a non-starter. New means "complex", which leads to mistakes … and compromise.

Today, the job of continually improving the security of the desktop falls on the courageous shoulders of the desktop management team. They and their management tools must deal with the enormous complexity of user mobility; the diversity of clients, OS versions and their configurations; requirements for application compatibility; diverse software distribution, acquisition and licensing schemes; and a large number of enterprise-specific configurations. The task is made harder still by the frequency with which critical patches need to be distributed.

A CISO who runs 160,000 PCs told me it takes about a month to roll out a simple patch to all users. Typical success rates are under 90%, because systems may be mobile, offline, or on insecure networks. Complex patches that affect apps require every PC to be re-imaged – a herculean task. No matter how proactive the team is, there are always PCs that have not yet been patched (and are therefore vulnerable), and there are always users whose systems have to be re-imaged to get rid of malware. An organization of this size can spend over $1M per year – just to keep up.
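A back-of-the-envelope check of those figures – the fleet size and success rate come from the paragraph above; treating 90% as the optimistic case is my assumption:

```python
# Residual exposure after a month-long patch rollout, using the CISO's
# figures quoted above: 160,000 PCs and an "under 90%" success rate
# (90% taken here as the optimistic case).

fleet_size = 160_000
patch_success_rate = 0.90

unpatched = round(fleet_size * (1 - patch_success_rate))
print(f"PCs still unpatched after rollout: {unpatched}")
```

Even in the best case, roughly 16,000 machines remain vulnerable at any given moment – a standing invitation to an attacker.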

The complexity of patch management has been seized upon by proponents of VDI (both hosted and client-based), who claim the following: courtesy of centralized golden image creation, block-synced image delivery to every endpoint, and the consistent virtual hardware offered by the hypervisor, all virtual desktops can boot from the (same) latest golden image. The specific needs of each user can be met (the vendors claim) by layering the golden image with user-specific personalization. There's only one image to patch for all users, who boot "a new PC every day". BigFix (now IBM Tivoli Endpoint Manager), VMware View, Citrix XenDesktop, Moka5 and Virtual Computer are vendors in this category. With its acquisition of VCI, Citrix can now offer the same capabilities for XenClient. VMware, with its acquisition of Wanova, will also attempt to deliver these benefits to natively running systems.
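The layering claim can be caricatured as dictionary merging – a toy sketch only; the layer names and contents are invented for illustration, and no vendor's product works at this level of abstraction:

```python
# Toy model of "golden image + personalization layers": one base image,
# with departmental and user layers applied at boot time.
# All layer contents here are invented for illustration.

def compose_desktop(golden_image: dict, *layers: dict) -> dict:
    """Later layers override earlier ones, mimicking boot-time assembly."""
    desktop = dict(golden_image)
    for layer in layers:
        desktop.update(layer)
    return desktop

golden = {"os": "Windows 7 SP1", "patch_level": "2012-06"}
dept = {"cad_app": "v9"}              # departmental apps layer
user = {"wallpaper": "argo.jpg"}      # user personalization layer

# Patch once in the base; every desktop composed afterwards picks it up.
golden["patch_level"] = "2012-07"
desktop = compose_desktop(golden, dept, user)
print(desktop["patch_level"])
```

The appeal is obvious: one patch to the base propagates to every composed desktop. The hard part, as the next paragraph argues, is that real Windows state does not decompose into such tidy, independent layers.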

Unfortunately there's a fly in the ointment: desktops need to be personalized, and they are highly stateful. Building the desktop on the fly at boot time from the golden image – combining native, virtualized and user-installed apps with personal, corporate, departmental and user-specific profiles and data – remains a significant technical challenge. Windows was not designed to be ripped apart in this way. Moreover, more layers of virtualization mean more layers of management complexity, which is the last thing desktop IT needs. Is the improvement in image management worth the additional complexity? Is the mythical golden image that runs everywhere yet is arbitrarily personalizable even attainable? Jason and his band of Argonauts certainly believed in the golden fleece, but it took them a decade to find it. Clearly some CIOs believe so, but it is fair to say that adoption is in its earliest stages. This is the center of the debate, and Brian's new book dissects the technologies and their limitations in great detail. In my view the two technologies with the greatest potential to simplify PC-CLM are app virtualization and user virtualization, but even these are in their earliest stages of adoption.

Let’s assume for the sake of argument that the vendors in the “golden image management” category all succeed and deliver fantastic solutions (at zero marginal cost).  Are we substantially better off?

No. Even if every one of those promises is kept, better image management fails to make the desktop any more secure, and while process improvements are always useful, they don't help us achieve our goal of trustworthy computing.

In summary: the desktop virtualization vendors overstate the technical abilities of their solutions and overpromise the benefits of better image management. All impose substantial additional management complexity on IT, and mandate a virtualized user experience that leaves a lot to be desired. Finally, their promises of greater security are groundless. Caveat emptor.
