Yesterday we announced the integration between Bromium LAVA and the Palo Alto Networks security platform. This is a perfect example of how Client SDN can transform the enterprise network into an agile, responsive, secure environment – and it is as profoundly important as server-side SDN. The enterprise will own only part of its cloud, but the vast majority of any enterprise network will remain end-user facing.
Thus far I have described how the Client SDN – a client-side analog of the cloud-side SDN that runs the network services used by micro-VMs on a micro-virtualized end point – dramatically enhances both protection and privacy by ensuring that every hardware-isolated task is entirely independent of all other tasks, and the desktop itself, in terms of its access to network(s) and sites. Untrusted sites/docs/apps cannot gain access to the Intranet or to SaaS sites of value. Tasks accessing high value sites can only communicate with those high value sites (and, if desired, a clique of their trusted partners) but have no access to the Internet at large, or to the Intranet. Intranet applications can be restricted to only ever have access to the Intranet – preventing data leakage. And no site/doc or detachable/mountable storage need ever be trusted.
In this post I want to show how Client SDN enables the whole enterprise network to become agile – automatically re-configuring the fabric in real-time to block C&C servers and interdict malware in response to new, targeted attacks – for which traditional signatures may never arrive.
Micro-virtualization is protection-centric. It makes an end point vastly more secure by relying on the CPU to do the hard job of isolation. This, in turn, transforms the enterprise’s ability to respond: because the system is protected, Micro-VM Introspection permits the Microvisor to wait until malware actually attacks the hardware-isolated task. This eliminates false alarms. Moreover, the Microvisor maintains a hidden per-task “black box recorder” that captures the entire kill chain – every DNS query and every IP flow between the task and 3rd party sites/servers – as well as the malware payload itself. Every micro-virtualized end point therefore becomes a sensor that can deliver precise, real-time forensic detail for each attack that executes in a hardware-isolated micro-VM.
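The “black box recorder” can be pictured as a simple append-only event log kept per micro-VM. Everything below – the class names, fields and event kinds – is invented purely for illustration; the Microvisor’s actual recorder format is not public.

```python
# Illustrative sketch of a per-task "black box recorder": an append-only
# event log per micro-VM capturing DNS queries and IP flows.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class FlowEvent:
    kind: str    # "dns" or "flow"
    detail: str  # query name, or "->dst:port"
    ts: float    # timestamp


@dataclass
class TaskRecorder:
    task_id: str
    events: List[FlowEvent] = field(default_factory=list)

    def record_dns(self, name: str, ts: float) -> None:
        self.events.append(FlowEvent("dns", name, ts))

    def record_flow(self, dst: str, port: int, ts: float) -> None:
        self.events.append(FlowEvent("flow", f"->{dst}:{port}", ts))

    def kill_chain(self) -> List[FlowEvent]:
        """Chronological view of everything the isolated task did."""
        return sorted(self.events, key=lambda e: e.ts)
```

The point of the sketch is simply that because every event belongs to exactly one hardware-isolated task, the resulting log is attack-specific by construction – there is no cross-task noise to filter out.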
Precise, real-time alerts from each attacked end point can be immediately delivered to the SOC. Crucially, by adopting an open format such as STIX, these alerts can be immediately used to re-configure the network infrastructure to respond to an attack: C&C IP addresses can be immediately blocked at the firewall. In the presence of a next-gen network protection infrastructure, the malware fingerprint and origin site (IP or URL) can be used to prevent other users from falling prey to the same attack. My point here is that it is possible to entirely automate the enterprise network’s response to malware that is directed at it – even malware uniquely fashioned for it, for which signatures may never be available.
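As a rough illustration of that automation, here is a sketch that extracts blockable indicators from an endpoint alert. The JSON layout is a deliberately simplified stand-in for a real STIX bundle, and the resulting (action, value) pairs would feed whatever rule-push API the firewall actually exposes – both are assumptions, not a real integration.

```python
# Sketch: turning an endpoint alert into firewall block rules.
# The alert format is a simplified stand-in for a STIX indicator
# bundle; the (action, value) rule tuples are placeholders for a
# real firewall API call.
import json


def extract_block_rules(alert_json: str):
    """Pull C&C IPs and malicious URLs out of a (simplified) alert."""
    alert = json.loads(alert_json)
    rules = []
    for ioc in alert.get("indicators", []):
        if ioc["type"] == "ipv4-addr":
            rules.append(("deny-ip", ioc["value"]))
        elif ioc["type"] == "url":
            rules.append(("deny-url", ioc["value"]))
    return rules


if __name__ == "__main__":
    sample = json.dumps({
        "source": "endpoint-sensor-042",   # hypothetical sensor ID
        "indicators": [
            {"type": "ipv4-addr", "value": "203.0.113.7"},         # C&C server
            {"type": "url", "value": "http://example.test/dropper"},
        ],
    })
    for action, value in extract_block_rules(sample):
        print(action, value)
```

Because the alert carries machine-readable indicators rather than free-text forensics, the translation to perimeter rules needs no human in the loop – which is the whole argument.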
The future of the enterprise network is agile – as agile for end users as it is for the data center. Client SDN is a crucial building block for a defensible enterprise network. It keeps bad stuff outside the network, and when an endpoint is attacked it contributes, in real time, precise detail that allows smart network infrastructure elements to dynamically react, to protect the enterprise as a whole.
In my previous post I drew an analogy between the requirement for hardware-enforced multi-tenancy (for compute, storage and networking) in the cloud, and the need for granular, hardware enforced multi-tenancy on the client to enforce least-privilege, ensure privacy, and to guarantee end-to-end isolation and security between the client-executed component of an application (eg: each client-rendered page of salesforce.com) and the SaaS back-end of Salesforce itself. The Microvisor hardware-isolates each task (site, app, doc) in a micro-VM, and the Client SDN virtualizes and isolates all network services for each micro-VM.
In this post I want to dig more deeply into the functional components of the Client SDN, with the goal of highlighting a fundamental difference between micro-virtualization and any other isolation technology such as a sandbox (sometimes misleadingly called a “virtual container”):
The Client SDN isolates, virtualizes and controls all network services for each micro-VM. By contrast, in a sandboxed application environment (eg: Chrome) there can be no Client SDN, because the network stack runs in the OS kernel – over which the application sandbox has limited or no control.
Why is this important? Imagine I plug my PC into the physical LAN in the enterprise data center, and browse to facebook.com – first using a micro-virtualized PC, and second, using a PC with only a sandboxed browser.
- Micro-virtualized: The browser tab for Facebook will be instantly and invisibly hardware isolated in a micro-VM, and by the rules of least-privilege it will be granted access to only a single file – the cookie for facebook.com. It has no access to any device hardware (NICs, disks, webcam, USB, detachable storage etc) or any other files. What of its network stack? Least privilege demands that (the browser tab in the micro-VM for) facebook.com
- Never be allowed to find or query the enterprise DNS, or access any Intranet sites,
- Never be allowed to resolve or access any high-value enterprise cloud sites, such as salesforce.com or aws.amazon.com
- Never be able to resolve or access my (the user’s) high value sites – such as my bank
- Never be able to find or communicate with any other application or micro-VM on my PC, any devices on my LAN (including printers), or any other enterprise application or infrastructure service or component – for example the proxy, routers, switches or security widgets, networked file-shares etc.
- Have its entire run-time discarded the moment I browse to a different site or close the tab – discarding all network state.
These networking requirements of least privilege mean that the browser tab for facebook.com effectively needs to run “outside” the enterprise network – logically in the DMZ – even though it is in fact on my PC in the data center. The Client SDN has the job of ensuring that this logical isolation is implemented in practice. If a bad actor compromises the micro-VM, he cannot access the enterprise network or any high value SaaS sites. All that is available is the untrusted Internet (and the only file that could be stolen is the cookie for facebook.com).
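Those least-privilege rules amount to a default-deny policy keyed on the task’s trust class. A minimal sketch, assuming a made-up three-class model – the class and zone names are illustrative, not Bromium’s actual policy language:

```python
# Hypothetical per-micro-VM least-privilege network policy.
# Trust classes and network zones are invented for illustration.
POLICY = {
    "untrusted-internet": {"allow": ["internet"]},   # eg: facebook.com tab
    "high-value-saas":    {"allow": ["saas"]},       # eg: salesforce.com tab
    "intranet-app":       {"allow": ["intranet"]},   # eg: internal web app
}


def may_connect(task_class: str, dest_zone: str) -> bool:
    """True only if the micro-VM's class explicitly allows the zone."""
    rules = POLICY.get(task_class, {"allow": []})
    return dest_zone in rules["allow"]   # everything else: default deny
```

For example, `may_connect("untrusted-internet", "intranet")` returns False – the Facebook tab simply cannot see the corporate network, which is the “logically in the DMZ” behavior described above.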
- Sandboxed: The DNS query from my PC for www.facebook.com is resolved by the corporate DNS, and the HTTP query runs over the corporate LAN, via the proxy, to the Internet. All’s well until a bad actor shows up, compromising the browser, and perhaps even escaping the sandbox.
- If malware compromises the browser, but is contained within the sandbox, it can still gain control over other browser-hosted applications – tabs logged on to salesforce.com, to my bank, or an Intranet application. It can see keystrokes and steal credentials as I log into sites, and access any web service on the Intranet – including dropping malware in a Sharepoint site. It can steal cookies for all sites, enabling the bad actor to impersonate me on any site. It has access to any browser-cached DNS entries – potentially for valuable sites. Finally, it could probably steal information from the clipboard, and deliver malicious content to other enterprise services such as (web accessible) printers.
- If malware escapes from the sandbox (via an OS or sandbox flaw) it can arbitrarily query the enterprise DNS, discover networked devices on the LAN, and access printers, file shares and any other networked infrastructure services, as well as any applications on the Intranet or the Internet. It can use enterprise network resources to its heart’s content because they are offered by the OS abstractions.
The Client SDN isolates and virtualizes all network services for each micro-VM. The example above highlights its value for micro-VMs that isolate untrusted Internet sites, such as Facebook. For sites that I value, its role is to enforce privacy, isolation and security end-to-end from the client to the cloud.
In my next post, I’ll show how the Client SDN isolates, secures and protects network services for high value tasks: Intranet sites, enterprise SaaS sites, or user-valued SaaS applications.
You may have guessed that I’m laying the foundation for what Bromium will show at the RSA Conference. If you’d like to meet Bromium at RSAC, please drop me a note.
You’re surely familiar with the infrastructure revolution promised by Software Defined Networking. Of course, anyone who’s built a public cloud has been doing SDN since day one, because the network – like the virtualized server infrastructure – must enforce isolation, security and privacy per tenant or hosted app. In the enterprise, VMware’s private cloud SDN aspirations (AKA: automate your CCIEs) face a longer journey, but the traditional big-iron vendors are in trouble: Cisco’s Insieme – an oddly hardware-based SDN play – looks great, as long as you run an all-Cisco network.
SDN is a consequence of Moore’s law, virtualization and a trend toward DevOps that together mandate that the network be “programmable”:
- The legacy app-per-VM model needs the network (including the last hop vSwitch) to satisfy per-VM networking requirements (ACLs, bandwidth needs etc) wherever an instance happens to be dynamically (re-)located.
- Web apps need the cloud fabric to dynamically adjust as application tiers scale and adapt to failures.
SDN extends virtualization to embrace and enforce the requirement for multi-tenancy in the (cloud) data center, dynamically delivering app/instance-specific network isolation & resources. This, together with the cost advantage of running the network stack on standard compute nodes with lots of I/O, is up-ending the business of legacy custom-silicon network vendors. Software and commodity hardware are indeed eating the world.
You’re thinking: “Got it … what’s new?” Read on: courtesy of micro-virtualization, Client SDNs will turn the other half of the enterprise network on its head too.
The future of computing is about clouds & clients. The enterprise will have a private cloud. It will also consume SaaS and IaaS services from various providers. Many users will be mobile – needing application access from mobile devices and laptops/PCs, from untrusted non-enterprise networks, and perhaps even from untrusted BYO devices. It is unavoidable that the enterprise will control only a small fraction of its network connectivity. Client SDNs will transform the networks that serve end users in much the same way that cloud SDNs will transform the future of datacenters.
In the days of client-server computing PCs were directly attached to the enterprise network. To “protect” them IT deployed all sorts of widgets – the Proxy, Firewall, IDS, IPS, and now network sandboxes. Each promised some new ability to block attacks. But attackers are agile, and enterprises are not. Today this legacy is a bit like a Maginot Line that can be easily bypassed in unexpected ways. These systems implement a mass of inflexible policy and configuration goop with as many holes punched through them as there are apps that users can’t do without.
Today’s enterprise “perimeter” is a myth. It’s indefensible and cannot be re-configured fast enough to detect/block attackers on a time scale that is relevant to the rate of attack evolution. Moreover, it is becoming less relevant because many of today’s users are mobile, off-net and want direct client-cloud access – at the very least for personal use.
But the ultimate problem is not the enterprise network – it’s the end point: Every end point runs hundreds, even thousands, of applications. Sure IT approves enterprise apps, but each Internet site, each file you download and each attachment you open is a different app – a different trust domain with specific privacy, isolation & security needs. Client devices are inherently multi-tenant: salesforce.com and a supply chain app co-exist in my browser. They are different apps, with mutually exclusive needs for access to data, compute and networking on the device. What’s needed is granular isolation and privacy on a per-app basis, for data, compute, memory and networking – on each device.
Inflexible, indefensible, single tenant networks fail in today’s cloud-client world as surely as does the notion that a single VM instance could enforce multi-tenancy between different customers / tenants of the cloud. Instead, we rely on the separation enforced by VMs and SDNs. We need to do the same on clients: If I can fool the user in one context and break an OS abstraction, I gain access to everything on the device.
Micro-virtualization on a client device offers the device equivalent of cloud multi-tenancy – enforced by hardware. It offers each application (eg: browser tab, or document) a defensible micro-perimeter by hardware-isolating its execution and by enforcing least-privilege access to the device, data and networks. When a micro-VM communicates with its cloud-hosted back-end service, it requires app-specific, granular, secure isolation. The client equivalent of a cloud SDN is granular, agile, app-specific network isolation per-micro-VM, at the heart of the client itself. This is the heart of the Client-SDN. In part 2: how the Client SDN will help the Maginot Line of legacy network bumps-in-the-wire to become agile and responsive to new attacks.
If you’re married you understand the need for compromise to build a successful relationship. But in this case I’m talking about something different – a marriage forged around the very idea of compromise – the kind of compromise that has shaken consumer and investor confidence in Target. The glittering marriage between FireEye and Mandiant is a pairing of two vendors with a common failing: Neither can protect customers from targeted malware. Instead, customers have to hope for the best, and when things go pear shaped, hire expensive experts to clean up after a successful compromise.
The good news is that there is a better way forward. We at Bromium know that it is possible to protect end points by design, that there is no need for a patient zero, that we can defeat attacks, eliminate remediation and deliver accurate forensic information, in real time, automatically and without spending a fortune.
Before I go any further, I want to state up-front that I have enormous respect for the team at Mandiant. I have read Richard Bejtlich’s superb book, and the APT1 report is testimony to the incredible investigative capabilities of the Mandiant team. Many Bromium customers have relied on Mandiant to get them back on their feet after an attack and there can be no doubt that they are in every respect a world-class outfit. FireEye delivers useful forensic intelligence, but its technology has fundamental limitations.
Serious infosec pundits have written thoughtful analyses of the acquisition and Mandiant’s billion dollar valuation; moreover the press is gushing with enthusiasm, and Wall Street is in love with the match (here is a counter-point). But every piece I have read fails to recognize that while the new FireEye has a powerful product and services portfolio it doesn’t solve the real problem: It cannot prevent a determined attacker from successfully compromising the enterprise, but it has a powerful story for how it can get you back on your feet, and stop the attackers next time around. I wonder if that’s good enough for Target?
Let’s dig into the portfolio of each company a bit, to illustrate my point:
- FireEye (FEYE) delivers a network appliance (the FireEye Threat Prevention Platform) that uses virtual machine images running on a hypervisor to detect and report on malware entering the enterprise network:
- “The core of the FireEye platform is the patented MVX engine, which provides dynamic, signature-less, and virtualized analysis of advanced cyber attacks. The MVX engine can be deployed across attack vectors and detonates suspicious files, Web pages, and email attachments within instrumented virtual machine environments to confirm a cyber attack. After confirming an attack, the MVX engine also dynamically generates threat intelligence about the indicators of compromise … in a standards-based format, which enables the intelligence to be correlated and shared …”
- Mandiant is primarily services based, selling consultants at rates as high as $500/hour to help enterprises investigate and remediate breaches and develop IR and SOC practices. In addition Mandiant has a relatively new product portfolio (competitors: Crowdstrike, CarbonBlack, Cylance) that relies on end point agents to discover and report Indicators of Compromise (IOCs) to a centralized management system.
- Mandiant for Security Operations: Uses IOCs to inform SOC teams about compromised end points: “… provides the complete picture required to find and scope attacks as they are unfolding. It searches for advanced attackers using Mandiant’s proprietary intelligence and also generates new Indicators from alerts triggered by network security solutions, log management solutions and SIEMs. These auto-generated Indicators analyze impacted endpoints, quickly find other devices affected by the incident and allow you to isolate and contain the compromised devices.”
- Mandiant for Intelligent Response (MIR): “..is an appliance-based solution that scales your experienced incident responders and forensics specialists to investigate thousands of endpoints and scope the impact of an incident. Are you compromised? How did the attacker get in? What systems are involved? Mandiant for Intelligent Response lets you answer these questions.”
- Mandiant Managed Defense is an appliance based system that continually reports on security status, and
- Mandiant Intelligence Center is a subscription based service that provides threat intelligence.
The acquisition makes a lot of sense to both companies:
- Revenue Growth: Mandiant is the industry’s premier brand in Incident Response, and it brings substantial revenue (about $100M / year) to newly public FireEye at a point when Wall Street will value revenue growth more than it will worry about the potential for weaker gross margins due to Mandiant’s historical dependence on services revenue.
- It addresses a FireEye product limitation by providing instrumentation, detection and response to end point attacks that elude detection by the FireEye network appliance. The Mandiant product enables FireEye to extend its visibility – to help to identify compromised end points.
- Mandiant also brings to FireEye an ability to quickly scale a tiered services business around the combined product portfolio, in synergy with its primarily direct-sales based business.
So what’s the problem?
- Neither FireEye nor its acquired Mandiant products prevent compromise of the end point. The FireEye appliance informs the SOC about attacks that it detects entering the enterprise. The Mandiant products inform the SOC about compromised end points, and assist with IR. But neither stops the attack. Many FireEye appliances that I have seen are configured to run legacy, unpatched end-point software, and report tons of false positives – a VM is successfully compromised, but the actual end points were not vulnerable to an attack because they were already patched (so I think of FireEye as selling a false sense of good security practice).
- Lots of malware that I see nowadays is FireEye-aware – it specifically waits for end-user input before conducting its attack, to make sure that it is running on a real end point. The Mandiant products don’t block attacks on the end point. The image below is an example LAVA trace of FireEye-aware malware:
- To identify an attack, both the FireEye and Mandiant products rely on detection (and therefore some patient zero from which a signature can be created) to determine whether traffic entering the enterprise is malicious, or an end point has been compromised.
- If an attack is identified, there is no automatic remediation. Fortunately the Mandiant consultants will be available to clean up the mess and get you going again, but that requires expensive, skilled humans.
- If an end point is attacked and the attack is identified, neither FireEye nor Mandiant can automatically block the attack enterprise-wide. More humans are needed to turn the IOC into rules for the firewall, IDS or IPS, or even AV.
- From a sales perspective, the services-centric approach of Mandiant makes sense to the direct sales model of FireEye. But the company has poor appeal to the channel, and the services business will compete directly with services-centric VARs.
An Alternative: Protect-first, and deliver accurate Threat Intelligence – on a Budget
We at Bromium believe that there is no need for patient zero, that end points can protect themselves by design without third party signatures or IOCs, and automatically remediate themselves when attacked. We know that protected end points can deliver detailed, accurate forensic insights that would take a human expert days or weeks, in real-time. We also know how to turn these insights into automatic responses that block attacks enterprise wide. So the FireEye + Mandiant approach appears to be the polar opposite of the Bromium approach. They focus on expensive IR and remediation assuming compromise. Bromium takes a no compromise approach to security, and automates IR:
- Protect first, and protect always. The solution is not dependent on network-based detection or on IOC detection on the endpoint. It protects the end point by design, and because of that resiliency, spares the customer from spending a lot of money on expensive remediation & Incident Response.
- Automated forensics, not humans at $500/hr: Because there is no need for an indicator of compromise (indeed, no compromise and no patient zero), LAVA can rely on the resilient protection architecture of vSentry to automatically provide unrivalled, detailed insight and forensic analysis of the attack, without expensive human-centric processes. Only by ensuring that attacks execute in an isolated environment on a vSentry-protected end point can the process of threat intelligence gathering and sharing be properly automated – eliminating compromise and remediation, and saving time and money on analysis.
- Real-time insights, not post-hoc panic: vSentry micro-VMs not only “protect first” but also collectively create an enterprise-wide sensor network that generates real-time threat intelligence that is enterprise- and user- specific, giving real-time insights to actual attacks that have been defeated, rather than false positives or successful compromises.
- No false positives: By relying on robust protection, it is possible to wait until a hardware-isolated attack actually compromises the software on an end point (as opposed to whatever software happens to be on a sacrificial VM in the network) – without risk. With proof of an actual attack, it is possible to eliminate the inevitable false positives that result from the FireEye approach – reducing the workload of the SOC team.
- Automated, enterprise-wide protection: When an attacker strikes, LAVA delivers accurate, complete forensic insights in real-time, in the open STIX/MAEC format, allowing automated enterprise-wide protection – blocking the attack at the perimeter, and updating signature based systems automatically, for example using System Center workflows, or integrations with leading vendors such as ForeScout.
Net, net, I think the bloom will come off this rose in the medium term, though I also think that the new FireEye is a powerful force to be reckoned with in the security ecosystem.
Yesterday, retail giant Target disclosed that approximately 40 million of its customers could be impacted by a breach. The stolen data is reported to include customer names, credit and debit card numbers, card expiration dates and the three-digit security codes located on the backs of cards. This seems like a huge breach involving a large number of users; is it the worst breach ever? No. But bad timing indeed for holiday shoppers. Historically, the holiday season is feasting time for scamsters and attackers, because the odds of exploiting unsuspecting buyers are much better.
To confirm that this is indeed bad news, today stellar investigative reporter Brian Krebs reported that the stolen credit cards are already on the underground market. This clearly puts users in a quandary.
So this is Target’s fault? They probably have some blame to share – no doubt. Details of the exact cause are not yet public. However, the bigger problem is – we all know that this is likely to happen again as it has in the past.
The seasonal attacks rang a bell, so I took a quick look at the zero days of the last few holiday seasons – the numbers are indeed startling. In each of the past 9 years, at least one zero-day vulnerability in the wild has been acknowledged by Microsoft AFTER it had already compromised users. Not surprisingly, most of these are exploitable via the browser or documents.
Is this all a coincidence? Many would agree that it isn’t. It’s likely that attackers stockpile zero days and launch them during the holiday season. Simply put, when attackers launch attacks, they’re well aware that they’re playing a game of odds. Releasing an unknown vulnerability at the peak of the holiday season just increases their chances.
So what’s the cure? Sure, you could pay only in cash. In fact, you’re even more secure if you shut down the Internet at home altogether (you’d still be vulnerable to physical attacks, though). However, if you’re reading this blog, that’s most likely not a viable option. Today we need to fight against these odds – and yes, each one of us is THE Target.
Unfortunately, in the world of digital online security today – offense is easier than defense and the odds are against each of us. Our mission @bromium is to change those odds – significantly.
Have a great holiday season and stay safe!
In a recent post on ZDNet, Larry Seltzer makes the case that browser security has peaked, and argues that “that’s probably a good thing”. Browser security may well have peaked, but that’s definitely not a good thing: adding new features can easily make it decrease, and it isn’t great to start with.
IE11 adds support for the oft-vilified WebGL standard (security white-paper), which enables client-side, GPU-assisted rendering of complex graphical elements, including games. Since graphical languages are essentially programming languages, and require the client device to hand its graphics driver to the browser with no understanding of the “program”, you can see why Microsoft for a long time pushed back on the inclusion of WebGL in the browser. (As an aside, client-side graphical command remoting is a hot issue in the desktop virtualization arena, where Citrix and Microsoft vie to outdo each other in the fidelity of hosted Windows apps/desktops. Fortunately, in that arena there is an explicit trust relationship between the client and the server hosting the desktop/app.) To help protect against possible WebGL-based attacks, Microsoft has added client-side sandboxing / sanity checking to its graphics drivers to mitigate the risks of this new interface. However, this is an area of great complexity, and the new interface, though presumably carefully tested by Microsoft, has yet to be fuzzed by malware writers. So I put this very firmly in the category of “let’s wait and see”.
Seltzer’s argument is deceptively simple: In a nutshell, the vendors aren’t adding many new security features, and browser-based attacks are harder, so presumably we’re all OK? This is wrong for several reasons:
Browser vendors may well have done what they can to sandbox the browser, but that does not mean that the browser is a secure gateway to the scary web, or that users are safe. There are limits to the protection that software can offer, and vendors continue to add new features with large code bases:
- An attacker can bypass the browser by attacking the kernel directly, or escaping the sandbox from user-space via a vulnerable Windows service.
- The code surface of the browser and its plug-ins is huge. New code means new vulnerabilities, which will be found and exploited.
- In the specific case of IE, Flash is incorporated into the browser so that it can auto-update (making it more secure), but this exposes the browser to a massive third-party code base and its latent vulnerabilities. Here are some recent exploits for IE10 on Win8, which on their own utterly contradict the notion that browser security is complete. Today the easiest (and admittedly reliable) way to infiltrate the enterprise is via web browsers and email. This is not just because of the ubiquity of Internet Explorer, but also because of the large attack surface of browsers in general, and the fact that all users need to run untrusted code every time they surf the web.
- To make things worse, the scripting engines of browsers provide ample scope for exploit obfuscation and evasion to bypass traditional signature/detection-based defenses, making browsers the best-ROI target for malware deployment.
- Every browser version is touted to be ‘better’ and ‘more secure’ than the previous one – which in some cases is true. Microsoft is undoubtedly doing a great job of closing off older ‘traditional’ exploitation vectors. However, we should not forget that it’s a war of adding functionality, backward compatibility and security in a vastly complicated code base. They are climbing their own ‘mount never-rest’. Here, for example, are some stats for IE9 and IE10.
- Java, Java, Java. Enterprises require legacy JRE support on the client because server-side applications impose version compatibility requirements (and you gamers also need Java to support Minecraft). Even the latest Java releases have had issues. Malware that breaks the JVM bypasses the browser sandbox directly.
- The user can still trivially download, save and execute malware or poisoned content using the browser, leaving protection up to your AV suite (good luck!), the user, or IE protected-mode and UAC – otherwise known as “click to edit/trust/install/run…” and “do not show me this warning again..”
- The clipboard can be used to transfer attacks from unsafe content in the browser to other applications.
The browser vendors are doing good work – perhaps as much as one could ask for. But you’re still vulnerable – as Seltzer acknowledges: “Obviously users still get hacked through browsers, but that’s a different sort of problem, usually involving social engineering and no real software error. It’s just another way of saying we’ve done what we can securing the browser; now we have to secure the user and that may be impossible.” But this is not a good thing.
It is impossible to train the idiot user out of me. Worse, my browser cannot secure me and the OS is vulnerable. There is only one way to radically increase system security given the inevitability of an attack: massively decrease the attack surface of the end point using micro-virtualization. This is the only way to protect the end point from the idiot in me.
Last week’s shocking admission by Adobe – that customer data, including encrypted credit card information for almost 3 million customer accounts, was stolen, along with source code for Adobe Acrobat (which is used to create electronic documents in the PDF format), ColdFusion and ColdFusion Builder – ought to utterly terrify you. The breach itself, discovered as a result of painstaking sleuthing by Brian Krebs and Alex Holden (ie: Adobe was unaware of the compromise), turns the spotlight on a security vendor that we have to trust absolutely, every time we open a document.
Beyond the theft of confidential customer data (which is not the subject of this post), the theft of Adobe’s source code is of grave concern: An attacker can now readily examine the Adobe sandbox for vulnerabilities, and write exploit code to specifically target them. Since these by definition would be vulnerabilities that Adobe itself has not spotted, there is nothing that Adobe can do beforehand to prepare for such attacks. Moreover, the massive installed-base of Adobe products means that even if new versions are quickly delivered to patch new exploits, millions of devices will still be vulnerable.
Interestingly, the problem facing Adobe is no different from the (ongoing) problem Oracle faces with Java. And while the Java bashers quickly picked up on the Apple-led call for users to “disable client Java”, it really isn’t so easy this time around. Are you ready to ban PDFs from your organization? We need to remind ourselves, yet again, that the massive attack surface of all software sandbox technologies represents another failed approach to securing the endpoint. Failure of the sandbox is catastrophic for system security.
So, how bad is it? We know that Adobe has adopted the Chrome sandbox, whose source code is already available in open source, developed and tested by a robust community, and further bolstered by Google’s bug-bounty program. In addition, our research has shown that Chrome is the best software sandbox available. While the Adobe implementation leaves the system open to attack in several more ways than Google Chrome, it is my view (contrary to popular hand-wringing) that the availability of the Adobe sandbox source to attackers does not present the industry with an immediate crisis or any radically increased threat. That said, it will be critical for Adobe and the Chrome team to respond rapidly should evidence of attacks in the wild emerge. The key lesson is this: security-critical code should be available in open source, and it should be exposed to merciless, continuous testing and review by a community comprising diverse – even competitive – economic interests. It’s a better way to develop better code.
But let’s be generous to Google and Adobe. Let’s assume the sandbox is perfect. What’s scary is that software vendors pretend that sandboxing alone can protect you. Even a bug-free sandbox cannot protect a vulnerable endpoint. We have repeatedly shown that malicious code can readily bypass any Windows sandbox using well-known attacks that are impossible for sandbox vendors to protect against. The attacker has more determination and resources than any IT team. So we need a back-stop technology that is vastly more secure than any software containment technique.
Hardware-enforced security will be the next major leap forward in our ability to protect endpoints from attack. We see early signs of this in modern mobile devices, including Windows 8 tablets, which require a TPM (for an attested boot) in order to qualify for the Windows logo. Similar techniques for boot-time security checking are present in iOS. And hardware-enforced security will protect the runtimes of modern OSes and devices too. An early sign of this can be found in a recently granted Apple patent, which describes a way to segregate memory for a “scary application” such as a browser. And as far as I’m aware, the most advanced approach is micro-virtualization. Bromium uses the open source Xen hypervisor to implement micro-virtualization to protect an entire endpoint, but one could just as easily use it to protect a particular targeted application, such as a document renderer, that is vulnerable to attack.
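To make the isolation model concrete, here is a minimal, purely illustrative Python sketch of the idea behind micro-virtualization – the class and method names are my own invention, not Bromium’s or Xen’s API. Each untrusted task runs in its own disposable micro-VM with a least-privilege network policy, and any writes land in a private overlay that is discarded when the task ends:

```python
from dataclasses import dataclass, field

@dataclass
class MicroVM:
    """Toy model of a hardware-isolated micro-VM (names hypothetical).
    One VM per untrusted task; least-privilege networking; disposable state."""
    task: str
    allowed_hosts: frozenset
    overlay: dict = field(default_factory=dict)  # copy-on-write writes

    def fetch(self, host: str) -> bool:
        # Deny-by-default: the task can only reach hosts in its policy,
        # so malware inside the VM cannot reach the intranet or a C&C server.
        return host in self.allowed_hosts

    def write(self, path: str, data: bytes) -> None:
        # Writes go to the VM's private overlay, never to the real desktop.
        self.overlay[path] = data

class Microvisor:
    """Spawns a fresh micro-VM per untrusted task and discards it afterwards."""
    def open_untrusted(self, task: str, allowed_hosts) -> MicroVM:
        return MicroVM(task, frozenset(allowed_hosts))

    def close(self, vm: MicroVM) -> None:
        vm.overlay.clear()  # discarding the overlay erases any implant

mv = Microvisor()
vm = mv.open_untrusted("invoice.pdf", {"cdn.example.com"})
print(vm.fetch("intranet.corp"))        # False: no lateral movement
vm.write(r"C:\Windows\evil.dll", b"..")
mv.close(vm)
print(vm.overlay)                       # {}: nothing persists after the task
```

The point of the sketch is the policy shape, not the mechanism: real micro-virtualization relies on CPU virtualization features (via Xen, in Bromium’s case) rather than a Python object, but the contract is the same – per-task isolation, deny-by-default networking, and no persistent state once the task ends.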
Micro-virtualization, and Bromium vSentry in particular, offers a huge leap forward – making a device vastly more secure because it relies on hardware to contain and isolate untrusted tasks. Most importantly, micro-virtualization works on today’s PCs, laptops and even virtual desktop infrastructure. Once deployed, you can effectively banish concerns you might have about Oracle’s latest foibles, or Adobe’s latest misstep. You can achieve practical, hardware-enforced security now, and stop worrying about the latest zero-day, browser vulnerability, PDF or Office-based attack. And then you can empower your users to be mobile and access the full power of the web.
While no system can ever be perfect, it is easy to show how vSentry makes an endpoint tens of thousands of times more resilient to attack. It doesn’t rely on pre-conceived notions of good or bad, it works with software sandboxes, and all of your existing investments in endpoint and network security. Dramatically increasing the cost to the attacker by massively reducing the attack surface of the system has the net effect of making it too expensive for an attacker to compromise a hardware protected endpoint.
There is no silver bullet in security, but there is a very simple set of steps you can take to achieve a massively more secure endpoint posture that allows you to relax in the face of continued software vendor missteps.