July 28, 2014 / Simon Crosby

In praise of seamless, small-footprint, light-weight, transparent endpoint security

In a recent blog, Rick Holland of Forrester Research takes aim at meaningless vendor epithets such as “light-weight”, “non-invasive” and “small-footprint” that vendors use to describe their endpoint security products. As he astutely observes, what vendor would claim otherwise?

A recovering endpoint security administrator himself, writing against a backdrop of failed technologies like HIPS, Holland points out (and I’m sure desktop IT pros would agree) that empowering the user is always #1 on the CIO’s list of priorities: a solution that reduces a user’s PC to a crawl, or increases calls to the help desk, quickly negates any of its security benefits.

Holland reiterates key requirements for new technologies (I’ve abbreviated and summarized them; see the original):

  1. “New endpoint solutions must show that they can be effective and transparent to users.”
  2. “The administrator’s experience of the solution is also important. UX enhances effectiveness. Scalability is another key consideration…”
  3. “Some solutions focus on prevention (e.g. Bromium…) … but remember … they must preserve UX and empower the administrator. Prevention is ideal, but assuming that adversaries will circumvent your controls, visibility is also important…”
  4. “Just because a solution says it can stop zero days, it doesn’t mean you’re safe. The adversary might target the solution itself … Remember, if it runs code, it can be exploited.”

He’s right of course – and every vendor knows it.   And his arguments identify a critical need – a set of empirical metrics that can help customers trade off cost, user empowerment, security and administrative scalability.

The only metrics available today are (useless) AV rankings based on the percentage of known-bad attacks they detect. (Any product that doesn’t detect 100% of known-bad should be free.) There is no way to gauge the security of systems against unknown attacks. There are also no consistently applied measures for UX or administrative scalability. This makes it difficult to compare AV to new endpoint protection solutions, and almost impossible to trade them off against Endpoint Visibility and Control (EVC) products that really don’t secure the enterprise. Some reasons why:

  • Whereas AV saps performance from the whole system, micro-virtualization, for example, imposes no overhead on IT-supplied apps but instead taxes each untrusted web site or document with the “imperceptible” (my epithet, from real customer feedback) overhead of a new micro-VM – about 20ms. How do we “measure” UX in such an environment?
  • If a sandbox is built into an application (e.g. Adobe Reader), is the overhead accounted to the app or to security, and how will we measure that? How do we measure user empowerment in a world of white-listing? If the app is installed in a security sandbox that gives visibility but doesn’t really secure the endpoint, is this more valuable?
  • When we add EVC products to the mix, it gets harder: It’s easy for any product to deliver an unchanged UX if it doesn’t actually protect the endpoint. But what’s the point of an endpoint security solution that … isn’t?  Can tools that don’t protect the endpoint be compared to solutions that do? (in my view, no.) Is EVC a glorified “Enterprise Breach Detection”, simply measuring the time from compromise to detection? How do we compare that to endpoint protection mechanisms that defeat the attack?
  • Ultimately, EVC tools get an easy ride because they don’t have to protect the endpoint, yet they increase cost and complexity – they need vast databases that are expensive to acquire and run, and they don’t reduce the workload on IT staff who still have to flatten compromised endpoints and reinstall Windows while users strut about frustrated and unproductive.
  • What of user experience? Unlike the world of VDI where a benchmark performance metric such as LoginVSI can be applied consistently across vendor products, in endpoint protection no consistent metrics are available. At Bromium we are adapting LoginVSI to permit us to provide consistent metrics for UX across both native and virtual desktops.
  • How much security is enough? Even the most secure endpoint security solution can be compromised, but is there any evidence of successful attacks in the wild? Is there evidence that pen-tests against a solution have been successful? It is a slippery slope to argue theoretically that every security mechanism can be compromised, and that therefore detecting a breach is all that matters.
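The per-task overhead question in the first bullet is at least measurable in principle. Here is a minimal sketch of the distinction between a whole-system tax and a per-task tax, using an entirely simulated workload (the distribution and the 20ms figure are illustrative; only the 20ms comes from the post):

```python
import random
import statistics

MICRO_VM_SPAWN_MS = 20  # claimed per-task micro-VM overhead

def task_latency_ms():
    # Hypothetical baseline: time to open an untrusted document natively.
    return random.lognormvariate(5.0, 0.4)  # ~150ms median

random.seed(7)
native = [task_latency_ms() for _ in range(10_000)]
# Per-task isolation adds a constant cost to each untrusted task only,
# unlike AV, which drags on everything the system does.
isolated = [t + MICRO_VM_SPAWN_MS for t in native]

def p95(xs):
    return statistics.quantiles(xs, n=20)[-1]

print(f"median native   : {statistics.median(native):7.1f} ms")
print(f"median isolated : {statistics.median(isolated):7.1f} ms")
print(f"p95 delta       : {p95(isolated) - p95(native):7.1f} ms")
```

The point of the sketch: a constant per-task delta shows up identically at every percentile, which is exactly why users report it as imperceptible, whereas a system-wide drag compounds with load.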

Ultimately I believe we need to assess the cost, per user, per year, to deliver a secure, productive endpoint. We should include the cost of IT personnel to deploy and manage the desktop, apps and endpoint security tools, and to remediate when an attacker succeeds. We should include the cost of user-downtime during remediation and the cost of all network appliances, servers and databases.  We need to measure  UX in a consistent way, with real workloads, and get real user input.
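The cost-per-user-per-year metric above can be sketched concretely. Every number and parameter name below is invented for illustration; the structure (staff + infrastructure + breach remediation + downtime, amortized per user) is the point:

```python
def annual_cost_per_user(
    users: int,
    it_staff_cost: float,          # salaries for desktop/security admins
    infra_cost: float,             # appliances, servers, databases
    license_cost_per_user: float,
    breaches_per_year: int,
    remediation_hours: float,      # IT time to flatten/reimage one endpoint
    it_hourly_rate: float,
    downtime_hours: float,         # user downtime per breach
    user_hourly_value: float,
) -> float:
    breach_cost = breaches_per_year * (
        remediation_hours * it_hourly_rate + downtime_hours * user_hourly_value
    )
    return (it_staff_cost + infra_cost + breach_cost) / users + license_cost_per_user

# Illustrative comparison: a detection-centric (EVC) stack with frequent
# remediation vs a prevention-centric stack with a higher per-seat license.
evc = annual_cost_per_user(5000, 900_000, 400_000, 30, 600, 4, 75, 8, 60)
prevention = annual_cost_per_user(5000, 600_000, 150_000, 60, 20, 1, 75, 0, 60)
print(f"EVC stack  : ${evc:,.2f}/user/year")
print(f"Prevention : ${prevention:,.2f}/user/year")
```

Even a toy model like this makes the trade-offs in the bullets above arguable with numbers rather than epithets.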

Ideally our criteria should allow us to trade off architectures. For example: We could ask whether users would be more productive and secure with a better PC and robust endpoint security that protects them no matter what they click on, or with a cheaper device and an EVC solution that doesn’t stop attacks and requires remediation whenever it is attacked. Ultimately, I believe, the criteria should also allow us to account for the millions of dollars spent on proxies, network IDSs, storage and servers that play a role in endpoint security, and to question their utility in the face of new endpoint solutions.

In summary, Rick has done us a favor by calling out the vendor ecosystem for its use of meaningless epithets, and I am optimistic that the security industry can become thoughtful enough to engage in meaningful discussion. I fear, however, that we will have no choice but to continue to use them until there is a decent way to empirically measure our claims. I welcome the opportunity to work together to develop a robust set of metrics that will cut through the nonsense of vendor marketing – I have many more thoughts on the topic.

July 16, 2014 / Simon Crosby

Microvisor + Hypervisor Makes Your VMs Secure by Design

I often get asked whether micro-virtualization can be used with a traditional hypervisor and full-OS “fat” VMs (humor: FAT VMs are another matter).

YES! There are powerful benefits in both client and server scenarios. I’ll focus on the user-centric uses that we currently support in vSentry:

  • VDI and DaaS: Make VDI/DaaS secure without legacy security software that kills scalability and UX.
    • The Microvisor runs nested in a VDI/DaaS desktop VM, running on top of a “root” hypervisor that virtualizes Intel VT/AMD-V. We’ve optimized micro-Xen to run nested on VMware ESX with virtual hardware 9/10 – the most widely deployed virtual infrastructure for VDI/DaaS under both Citrix XenDesktop and VMware View.
    • None of XenServer, Hyper-V (WS12 & Azure), RHEL/KVM, or AWS supports nesting today, though the upstream work in Xen is done. Props to Canonical for their nesting support in KVM/Ubuntu. Today all nesting is software-based. Hardware nesting via Intel VMCS-Shadow will start to arrive in server CPUs soon.
  • Client-hosted Virtual Desktops: Secure your BYO devices, developer desktops and Windows virtual desktops on a Mac or PC:
    • The Microvisor runs nested within a client-hosted Windows desktop VM (as for VDI/DaaS) on VMware Workstation or (on a Mac) VMware Fusion
    • Alternatively the Microvisor can run side-by-side with VMware Workstation/Fusion, effectively sharing VT/AMD-V. This allows us to secure a user desktop (Windows/OS-X) from attack – so that it can securely host an enterprise delivered VM. For this case we have two goals:
      1. Secure the host OS using micro-virtualization
      2. Also host additional full-OS VM(s) for desktop virtualization or test/dev. (with the option of protecting them too, using micro-virtualization)
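As an aside on the KVM nesting support mentioned above: on a Linux host you can check whether nested virtualization is enabled by reading the KVM module parameter. A small sketch (the sysfs paths are standard, but the files exist only where the relevant module is loaded, so treat absence as “unavailable”):

```python
from pathlib import Path

def kvm_nested_enabled() -> bool:
    # kvm_intel exposes a 'nested' parameter (Y/N or 1/0); kvm_amd has the same knob.
    for module in ("kvm_intel", "kvm_amd"):
        p = Path(f"/sys/module/{module}/parameters/nested")
        if p.exists():
            return p.read_text().strip() in ("Y", "1")
    return False  # module not loaded, or not a Linux/KVM host

print("nested virtualization:", "enabled" if kvm_nested_enabled() else "unavailable")
```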

This raises a killer question: Could a single hypervisor run OS VMs and micro-VMs?  YES!   Micro-Xen does this today (though not as a supported feature yet).

Fortunately (as a result of our collaboration with Microsoft starting in 2006, at XenSource), micro-Xen can run Windows VMs saved from Hyper-V in VHD format. I use this to demo the Bromium Management Server (BMS) in a WS12/SQL VM on my vSentry protected laptop.  If you’d like a detailed technical description of how this works, let me know.


July 15, 2014 / Simon Crosby

How do you spell “Polymorphic”?

I guess the answer is “i r o n y”: Last week a Bromium field employee searched for “polymorphic” and was treated to a gloriously literal definition: the site dropped a banking Trojan!

Although the user was unaware of the attack and continued working,  vSentry automatically isolated the attack, erased the malware and alerted Bromium HQ.  The report provided, in real-time, a detailed forensic trace of the malware as it executed, together with an encrypted manifest containing the malware itself.   This allowed the Bromium Labs team to immediately see what had happened.  The LAVA trace is shown below, as it “popped”:


The attack is incredibly noisy – reaching out to scores of C&C sites and DNS servers.   If we turn off visualization of the network traffic and use the tools in LAVA to identify malicious activity, we can immediately zoom in on the crux of the attack, which is pictured below.   The site invokes Java, injects shellcode, and downloads, drops and executes OBPUPDAT.EXE, whose MD5 hash is shown on the screenshot.   The attack also modifies 35 Registry settings to persist, sets a new browser proxy, and starts a process to capture keystrokes.


The attack is a variation on previously delivered banking trojans.  OBPUPDAT.EXE steals user account details and other information delivered to the browser, and captures user passwords.  It can also download malicious software and allow remote access to the compromised device.

The attack was delivered on July 7th. The first AV vendor fix emerged on July 9th, but we don’t know how long the attack existed in the wild. VirusTotal has vendor signatures and analysis.


July 10, 2014 / clintonkarr

Detectible Dysfunction

In 2003, security industry analyst Richard Stiennon famously declared that intrusion detection systems would be obsolete by 2005, writing at the time:

“The underlying problem with IDS is that enterprises are investing in technology to detect intrusions on a network. This implies they are doing something wrong and letting those attacks in.”

To some extent, Stiennon was right: intrusion detection systems have become obsolete, yet his comment remains relevant today. The NIST Cybersecurity Framework, published in October 2013, organizes five basic cybersecurity functions: identify, protect, detect, respond and recover. Three-fifths of this framework (detect, respond and recover) assume compromise will occur.

For the past ten years, threat detection has been a Band-Aid on a bullet wound. The good news is that the industry is finally starting to come around to this realization. Symantec has acknowledged that anti-virus is dead, detecting just 45 percent of cyber-attacks. The Target data breach serves as a cautionary tale since its threat detection systems alerted response teams that failed to prevent the breach.


What is the problem? Why is it so hard to make threat detection solutions work effectively? It turns out, there are a few reasons:

  1. Performance vs. security – Threat detection systems rely on signatures to catch cyber-attacks, but the more signatures an organization has enabled, the more performance takes a hit. Organizations face a dilemma, balancing performance and security, which typically results in partial coverage as some signatures are disabled to maintain performance.
  2. Management is time-consuming – The process of tuning signatures for threat detection solutions is labor-intensive and ongoing because new signatures are released all the time. If organizations don’t take the time to tune signatures, they generate more false positives, which creates a signal-to-noise ratio that results in real threats being overlooked.
  3. Management is error-prone – Once signatures create too much of a performance impact or the volume of false positives becomes too great, organizations tend to deploy threat detection systems in “alert only” mode. The issue with “alert only” threat detection is that it requires security response teams to remain diligent, which the Target breach has demonstrated is virtually impossible.
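The false-positive problem in points 2 and 3 is worth quantifying, because the base rate works against detection. A quick illustration with invented but plausible numbers:

```python
def alert_precision(events_per_day: float, attack_rate: float,
                    detection_rate: float, false_positive_rate: float) -> float:
    """Fraction of alerts that are real attacks (Bayes' rule on event counts)."""
    attacks = events_per_day * attack_rate
    benign = events_per_day - attacks
    true_alerts = attacks * detection_rate
    false_alerts = benign * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# 10M events/day, one-in-a-million are attacks, and a "good" IDS
# (95% detection, 0.1% false positives):
p = alert_precision(10_000_000, 1e-6, 0.95, 0.001)
print(f"precision: {p:.2%}")  # well under 1% of alerts are real attacks
```

With numbers like these, roughly ten thousand alerts a day contain fewer than ten real attacks, which is why “alert only” deployments drown their response teams.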

Ten years later, Richard Stiennon is right: threat detection is obsolete, which is exactly why organizations that rely on it are doing something wrong. Instead of focusing on detecting the attacks that get through, organizations need to focus on protection.

July 8, 2014 / Simon Crosby

If you had only one more security dollar…


what would you spend it on?   Improve endpoint security, or better protect your network or your applications?

This was the topic debated by three Gartner security analysts: Neil MacDonald (endpoint), Greg Young (network) and Joseph Feiman (application) at #GartnerSEC in DC, in June.

Watching Gartner analysts debate each other is fun – much more fun than watching them pontificate.  They live and die by their cred, so the gloves came off pretty early and they landed heavy blows on all three categories:

  • In spite of the promises of network security vendors, it seems pretty easy for malware writers to bypass state-of-the-art network protection; rapid growth in encrypted traffic will increasingly leave network security blind; high false-positive ratios leave network security teams with floods of red alerts; and even if an attack is detected, IT still has to remediate the endpoint. Finally, both “cloud” and “mobility” make the enterprise network less relevant in both detection and attack prevention.
  • Application security is a pipe dream.   It’s been “almost ready” for ages, but it never seems to come closer to reality.  Reason: the complexity of modelling applications in a way that is semantically useful for security.  Moreover, the adoption of cloud and SaaS makes instrumentation of apps even less likely.
  • The endpoint is an unmitigated disaster with failed AV technologies and untrainable users who click on bad things. BYOD, mobility, PC/Mac… all make it worse.

Each analyst did his best to defend his turf too:

  • More hardware ought to solve the network crypto problem (my view: if at all feasible, this will be expensive); better instrumentation and big-data analysis will help to reduce the challenge of picking out the needle from the haystack. And mobile users need to be forced onto the VPN.
  • New endpoint technologies, including isolation of untrusted execution, can transform the trustworthiness of the endpoint – which is responsible for >70% of enterprise breaches.   Alternatively, new approaches to endpoint detection (eg: searching for IOCs) can help to identify compromised systems quicker.
  • Application security could be “a big win”.   A practical approach is to dis-aggregate apps into multiple services in VMs, and to instrument each VM container to look for application-layer security anomalies.

But what of the original question – where can a CISO get the most value for her additional security dollar?

To my mind the answer is easy (if predictable): Micro-virtualization is a single solution that simultaneously addresses the biggest challenges in each of network, endpoint and app security:

  1. Micro-virtualization secures the endpoint – the source of > 70% of enterprise breaches – enabling it to protect itself by design from attacks that originate from the network or untrustworthy attachments or files on removable storage. It also automatically remediates malware.
  2. Micro-virtualization secures the enterprise network from end-point originated attacks. Malware that executes in a hardware-isolated micro-VM cannot access the enterprise network or any  high-value SaaS sites.   Malware can never use a client device to probe the enterprise network.
  3. Micro-virtualization secures vulnerable client applications and web-apps delivered to end users.   Each site or app is independently isolated, with no access to valuable data or networks – protecting the app from an attacked enterprise device/user, preventing credential theft and session hijacking.  It can also enforce key policies including use of crypto, restricting access to networks/sites, and enforcing DLP.

Micro-virtualization delivers the greatest security bang for the buck because this single solution solves the endpoint, network and application security problems for > 70% of enterprise breaches.

Add to this the fact that a micro-virtualized endpoint never needs remediation, protects itself even when using un-patched third party software, and renders a vast swath of kernel zero-day vulnerabilities irrelevant.

Finally, recognize that micro-virtualization empowers users to be productive anywhere, to click on anything, on any network, and – if the endpoint is attacked – it delivers precise, detailed forensic insights, in real time, without false alarms.

A dollar spent on micro-virtualization massively reduces the workload on the security team while making it better informed and strategically aligned with the objectives of the business.  It’s a no-brainer.

July 1, 2014 / Bill Gardner

The Dawn Of A New Era In Corporate Cyber Threats?



Cyber criminals know where the money is and have been attacking businesses in the hopes of getting a big payout for many years. Hacking and manipulating financial systems to steal money or customer credit and banking information to sell on the black market or stealing trade secrets to sell has been the traditional stock in trade of the black hat community. Successful attacks have been very costly to businesses and can run into the hundreds of millions of dollars for a large breach like the one suffered by Target in late 2013.

While a successful cyber attack can be costly, companies have been able to continue operations after a major breach. Despite additional investments in traditional security technologies the costs and frequency of successful attacks continues to rise. Many larger businesses have tried to offset this trend by investing in insurance coverage to help cover the costs of a successful cyber attack and reduce their overall risk. But this approach only makes sense if the business is able to continue to operate after the attack.

What if the hackers that attacked Target or eBay had managed to destroy the data they were able to access rather than just stealing it while leaving it intact? What if a health care provider were to permanently lose all of its patient records, billing records and payroll records? How about the law firm that suddenly finds all of its client records have disappeared, never to be recovered, or the bank that no longer has any record of customer deposits? Would any organization survive the loss of such critical information? Would their disaster recovery and backup procedures protect them and ensure the continuity of the business? Disturbingly, the answer today is clearly “maybe” rather than “of course”.

For a hi-tech software hosting company by the name of Code Spaces, the unthinkable has happened. Hackers penetrated their systems recently and, rather than stealing information, demanded payment in exchange for not destroying their data. Code Spaces personnel attempted to determine the validity and extent of the compromise. The attackers detected these attempts and deleted the vast majority of the company’s data as well as its backups and mirror sites. Management at Code Spaces announced that, due to the scale of the loss and damage, they had no choice but to cease operations and close their doors.

While this might be an isolated incident, my instincts tell me that this is a watershed moment in the war between the criminals and the legitimate business community. Once the cyber criminal community at large realizes the power it can now wield, there is no turning back. And can any business with a fiduciary responsibility to its stakeholders take the chance that a cyber extortionist might follow through on their threats and destroy a company beyond recovery? Only time will tell.

June 25, 2014 / Simon Crosby

Chrome Perfected (2/2): Protect Users and Sites on the Web

In a previous post I described how Bromium makes Chrome fast and massively secure.   vSentry will always protect the endpoint from an attack via the browser – and the attack will be automatically remediated.

But the browser itself manages valuable personal and enterprise data that could be stolen if a hardware-isolated browser task is compromised.   In this post I show how vSentry mitigates these risks to protect enterprises and their users as they browse the web, effectively extending protection from the client to high value applications on the Intranet and the web, and enhancing privacy.

There are two ways micro-virtualization can help:

  1. Stop malware that seeks to use a compromised browser to penetrate deeper into the enterprise from accessing the Intranet and SaaS sites of value to the enterprise or the user (such as their bank).
  2. Stop an attack that compromises the browser (including man-in-the-browser (MIB) and cross-site scripting (XSS)) from stealing cookies, hijacking sessions for sites to which the user is currently logged on, and persisting unwanted cookies.

We rely on the “default deny” architecture of micro-virtualization: Granular, task-centric hardware-isolated micro-VMs and their virtual file-systems and virtual networks.

  • A micro-VM renders a site in an anonymized Windows environment with a random username, a minimal Registry, an empty Windows SAM and no hash to pass, ensuring that an attacker in a micro-VM cannot steal the user’s identity or enterprise credentials.
  • The virtual file system of a micro-VM allows us to precisely control what cookies and DOM storage are accessible to any site.
  • The virtual network of a micro-VM can only access IP services and networks that are permitted given the value of the isolated site {untrusted web, high-value SaaS, Intranet}.

To address the first problem, namely to protect enterprise networks and SaaS sites (and the user’s high-value sites), vSentry applies a simple value-centric network policy: A micro-VM can never access a network/site of higher value than itself (an untrusted web site can never access my bank site, an enterprise SaaS site, or the Bromium Intranet). Thus, if the user clicks on a malicious link that causes malware to execute in a browser micro-VM, there is no way for the malware to reach the corporate DNS, sites on the Intranet, any enterprise SaaS sites or (say) the user’s bank. The virtual network in the micro-VM is completely unable to reach them.
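The value-centric policy described above amounts to a “never read up” rule over a small ordered set of trust levels. A minimal sketch (the level names and ordering are my illustration of the {untrusted web, high-value SaaS, Intranet} set, not Bromium’s implementation):

```python
from enum import IntEnum

class Value(IntEnum):
    UNTRUSTED_WEB = 0
    HIGH_VALUE_SAAS = 1
    INTRANET = 2

def may_access(micro_vm_level: Value, destination_level: Value) -> bool:
    # A micro-VM may never reach a network/site of higher value than itself.
    return destination_level <= micro_vm_level

assert may_access(Value.INTRANET, Value.UNTRUSTED_WEB)       # browsing out is fine
assert not may_access(Value.UNTRUSTED_WEB, Value.INTRANET)   # malware can't reach in
assert not may_access(Value.UNTRUSTED_WEB, Value.HIGH_VALUE_SAAS)
```

Because the check depends only on the micro-VM’s own label, it holds no matter what code runs inside the micro-VM.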

To solve the second problem, namely an attack that attempts to hijack sessions or otherwise leverage a compromised browser in a micro-VM we need a more subtle approach.  Ideally we’d always create a new micro-VM for each site, and only allow it to access its own cookies – but the web doesn’t work like that:

  • Sites may need to share information via the browser.  For example, LinkedIn might allow me to log in using Facebook, and sites offering single sign-on need to pass credentials from the authenticating domain to be available to a second site.  If we prevent this, we risk “breaking the web”.
  • Sites may use 3rd party cookies to deliver legitimate content tailored to the user. For example, a single news site’s front page may contain code from as many as 30 other domains, including advertisers, content providers and social networks. To render the page correctly, the browser must let those domains access their cookies stored on the endpoint. Not doing so also risks “breaking the web”.

We want to protect the user without “breaking” the web.  To achieve this, vSentry automatically manages two kinds of trust relationships:

  • Between sites: By default vSentry implements a restricted sharing policy – only allowing sites that explicitly trust each other to be isolated together and share browser state. For example, when I log into Salesforce, my login is also valid for the small clique of sites that Salesforce explicitly trusts. (You can also use policies to force each site into its own micro-VM, or to behave as Chrome does – allowing many sites to share a single micro-VM.)
  • Between the user and each site: vSentry controls what cookies are available (in the context of the micro-VM) for each site the user visits.
    • A micro-VM rendering a specific site has no access to session cookies for other sites. This ensures that if the browser is compromised, the attacker cannot hijack logins to other sites.
    •  You can also prevent a site from accessing persistent cookies for other sites – only the cookies for the specific site being rendered are accessible in the micro-VM.  Content on the page that is provided by a 3rd party will be unable to access its cookies – effectively ensuring that the 3rd party site believes that it has never seen the user before.
    • Finally, you can decide whether or not persistent cookies dropped by a site are saved when the micro-VM that renders the site is destroyed.   If they are not saved, when the user next visits that site or a page with content from that site, there will be no record of the user’s previous interaction with the site.
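The three cookie rules above can be summarized as a per-micro-VM visibility filter over the cookie jar. A sketch with hypothetical data structures (this is my illustration of the policy, not vSentry’s code):

```python
from dataclasses import dataclass

@dataclass
class Cookie:
    domain: str
    session: bool  # True for session cookies, False for persistent cookies

def visible_cookies(rendering_site: str, jar: list[Cookie],
                    allow_third_party_persistent: bool = False) -> list[Cookie]:
    out = []
    for c in jar:
        if c.session and c.domain != rendering_site:
            continue  # never expose other sites' session cookies (no session hijacking)
        if not c.session and c.domain != rendering_site and not allow_third_party_persistent:
            continue  # optionally hide third-party persistent cookies as well
        out.append(c)
    return out

jar = [Cookie("bank.example", True), Cookie("news.example", False),
       Cookie("ads.example", False)]
# A micro-VM rendering news.example sees only its own cookies by default:
print([c.domain for c in visible_cookies("news.example", jar)])
```

Relaxing `allow_third_party_persistent` corresponds to the trade-off in the text: stricter settings improve privacy at the cost of 3rd-party content treating the user as a first-time visitor.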

Two capabilities – micro-VM virtual networking, and controlled access to cookies and shared browser state – allow us to extend protection beyond the endpoint. Even if an isolated browser task is compromised, vSentry protects networks and applications of value to users and the enterprise.


