September 11, 2014 / Simon Crosby

Goldilocks and the 3 Theres (1/2)


At VMworld, VMware SVP of Security Products Tom Corn described the hypervisor and virtual network environment of a virtual infrastructure platform as the “Goldilocks Zone” for application security in the software-defined data center. He was right. And with an innocuous, kid-friendly soundbite – “the Goldilocks Zone” – VMware served notice on the data center security industry that it fully intends to be the vendor of choice for securing (private) cloud-hosted applications.

This move ought not to surprise us. Back in 2007 VMware opened up APIs for third-party security vendors, inviting them to take advantage of the hypervisor to secure workloads. But an ecosystem failed to emerge – in my view because neither VMware nor the vendors really knew how to take advantage of hypervisor-based introspection, and because virtual switching was still very immature.

Fast-forward seven years to an enterprise virtual infrastructure dominated by VMware, and to an urgent need for cloud security solutions. VMware is firmly in control of the “Three Theres” required for precise control of workload security:

  • Execution context: The typical VM contains a single application. A relatively straightforward understanding of that application’s behavior, coupled with the ability to introspect the VM during execution, offers an opportunity to better secure it.
  • Storage context: The hypervisor owns the storage of each VM. Historically this has been block storage – a VMDK – but increasingly (for example with the CloudVolumes acquisition) layered storage for a guest, comprising multiple VMDKs (and their file systems) mounted dynamically, gives the hypervisor the ability to differentiate and control storage access (for example, writes to a CloudVolumes app VMDK could be prevented or made copy-on-write; a toy sketch of the copy-on-write idea follows this list). As it moves up-stack, the hypervisor has an opportunity to introspect and understand file/volume semantics – think, for example, of the ability to separate user data and settings in a VDI VM.
  • Network context: The vSwitch can control and inspect traffic into a VM in a granular fashion. VMware calls these application-centric network controls “micro-services”. Each application can have unique network security controls applied to it, enhancing the security not only of that workload but of the private cloud in aggregate. Moreover, because of its proximity to the locus of execution, the vSwitch can inspect traffic in ways that are inaccessible to other vendors in the data center ecosystem.
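The copy-on-write idea above is easiest to see in miniature. Here is a toy sketch in Python – no VMware APIs, purely illustrative – of an overlay in which writes land in a private delta layer while the base image (think: an app VMDK) stays pristine and can be restored at will:

```python
class CowOverlay:
    """Toy copy-on-write block store: reads fall through to a read-only
    base image unless the block has been privately overwritten."""

    def __init__(self, base_blocks):
        self.base = base_blocks   # read-only base image (e.g. an app VMDK)
        self.delta = {}           # block index -> private copy

    def read(self, i):
        # Prefer the private copy; otherwise fall through to the base.
        return self.delta.get(i, self.base[i])

    def write(self, i, data):
        # Writes never touch the base image: they go to the delta layer.
        self.delta[i] = data

    def discard_changes(self):
        # Throwing away the delta restores the pristine base image.
        self.delta.clear()


base = [b"app code", b"app config"]
vm_view = CowOverlay(base)
vm_view.write(1, b"tampered")
assert vm_view.read(1) == b"tampered" and base[1] == b"app config"
vm_view.discard_changes()
assert vm_view.read(1) == b"app config"  # base was never modified
```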

There would be no “Goldilocks” story without the 3 Bears and the concept of “just right”. Similarly, there can be no cloud security story without the Goldilocks Zone – a place where execution can be inspected and controlled from each of the three “theres”: execution, storage and networking. Being in full control of all of them is “just right” for delivery of a new generation of cloud security services. It is interesting to note that adding micro-virtualization (via nesting – see part 2) to a traditional hypervisor like ESX provides even more granular isolation and control within each VM – and therefore even finer-grained security.

The “Goldilocks Zone” of security is a unique opportunity for VMware to be the vendor of choice to secure virtualized workloads in the increasingly software defined data center.  None of the other hypervisor vendors is even close in terms of articulating as bold a vision in micro-services, granular storage control and execution control – and hence security. This differentiation is a key strength of VMware’s, and at the same time it points to the end of the road for every traditional datacenter security vendor.  We all know that AV is dead.   We know that a hypervisor is a better place to ensure execution white-lists are enforced, rather than in-kernel.  We now also need to realize that network security appliances will be on the block, together with traditional switching/routing gear.

Part 2 of this post will describe micro-virtualization, micro-services for micro-VMs and micro-VM introspection in more detail.  The similarities are startling.  The conclusion even more so: Virtualization alone (SDDC and PC) has a unique and profound ability to deliver a paradigm shift in enterprise security, securing the enterprise by design.

September 8, 2014 / Simon Crosby

Next-Gen IDS/IPSs: Caught between a ROC and a hard place

The market appears to have revisited its irrational exuberance about next-gen network IDS/IPSs, perhaps because every major security vendor has one (truth be told, throwing traffic at a set of cloud- or appliance-hosted sacrificial VMs isn’t rocket science).

But there’s another challenge too: these devices are caught between a ROC and a hard place – they often overwhelm IT with false alerts and (provably) will fail to detect some genuine attacks. So it is important to understand their strengths and weaknesses and to plan their use carefully.

The tech: Potentially threatening traffic entering the network is forwarded to a VM running on the appliance. The idea is that if the traffic contains malware, the attacker will compromise the VM, and the appliance will detect this and alert the security team. Only a subset of traffic is forwarded, because attempting to execute all traffic in a small number of honeypot VMs is not practically or economically feasible.

  • In passive mode (IDS), the appliance reports information that can help security teams identify a compromised user device, whereas
  • In in-line mode (IPS), the appliance must decide in real time whether the traffic contains malware. It blocks the connection if an attack is detected; if not, it passes the traffic to the client.

If the malware is on an existing black-list (e.g. VirusTotal), detection is easy; if not, detection depends on the vendor’s “advanced” detection capabilities. Here’s the rub:

  • If the user is off-net or mobile, the next-gen IDS/IPS will likely be blind to their activity.
  • Sophisticated malware is often “crypted” (packed or encrypted) to ensure that it bypasses existing black-list (signature-based) detection methods. So, if the bad guy is determined to get in, the standard detection tools won’t help. (The same is true for endpoint AV.) Most vendors therefore claim “advanced execution detection” that aims to identify tell-tale signs of unknown malware when it executes on the appliance.
  • Sophisticated malware is often “sleepy” – and next-gen IDS-aware. It can detect that it is running in a VM and simply wait (sleep) until it reaches an actual endpoint before executing its attack. A next-gen IDS/IPS will therefore fail to detect such an attack.
  • An alert issued by the IDS/IPS relies entirely on the malware actually executing in a honeypot VM. Key questions to ask the vendor include how you can ensure that the software on the appliance is the same as the software on your endpoints. If it isn’t precisely the same, the appliance is basically useless: you may see floods of alerts for attacks that would never execute on your endpoints given their particular patch levels.
  • Finally, several vendors ship their own versions of Windows VMs on their appliances. As Richard Stiennon has pointed out, this likely conflicts with Microsoft’s license terms. You should ensure that your vendor indemnifies your company against any future licensing problems.

Detection’s Limits

Ultimately, next-gen IDS/IPS platforms are detection centric, and detection has fundamental limits that are mathematically provable.  Stick with me – I’ll try to make the theory simple to understand (Here’s a primer, and some state-of-the-art research).

A detector’s accuracy is evaluated by measuring the frequency of its {True Positive, True Negative, False Positive, False Negative} results:

  • TP: The frequency of samples where an attack was correctly identified
  • TN: The frequency where a non-attack was correctly identified
  • FP: The frequency of false alarms, and
  • FN: The frequency of a real attack bypassing the detector.

These can be plotted on a graph called the Receiver Operating Characteristic (ROC), and can be shown as the areas of intersection of two statistical distributions that plot the detection result for both non-attack traffic and real attacks.

[Figure: score distributions for non-attack traffic and real attacks, with the detection threshold shown as a green line]

Every detector has a threshold at which it will trigger an alarm (the green line).  A better detector separates the two curves more cleanly, and careful choice of the threshold is critical for accurate separation of real attacks from normal traffic.  The goal is to accurately detect attacks, without increasing False Positives or False Negatives, but no detector is perfect:

  1. The detector will fail (FN) at some point and the attacker will succeed. (Yep, it’s a definite)
  2. Building a good detector is a careful balance of trading off false positives (which leave security teams swamped) against false negatives (which are very bad news).
  3. Unfortunately, in today’s rapidly moving cyber-landscape, it is impossible to build a reliable detector for polymorphic/crypted malware:

“The challenge of signature–based detection is to model a space on the order of 2^(8n) signatures to catch attacks hidden by polymorphism. To cover thirty-byte decoders requires O(2^240) potential signatures; for comparison there exist an estimated 2^80 atoms in the universe.”
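To make the ROC mechanics concrete, here is a minimal sketch (plain Python; the score distributions are illustrative stand-ins, not data from any real detector) that sweeps a threshold across benign and attack scores and reports the trade-off at each operating point:

```python
import random

random.seed(1)
# Illustrative detector scores: benign traffic scores low, attacks score
# higher, with overlap -- the region where FPs and FNs are unavoidable.
benign  = [random.gauss(0.0, 1.0) for _ in range(10_000)]
attacks = [random.gauss(2.0, 1.0) for _ in range(10_000)]

def rates(threshold):
    """True/false positive rates if we alert on any score >= threshold."""
    tpr = sum(s >= threshold for s in attacks) / len(attacks)
    fpr = sum(s >= threshold for s in benign) / len(benign)
    return tpr, fpr

# Sweeping the threshold traces out the ROC curve: each threshold is one
# possible operating point, trading false alarms against missed attacks.
for t in (0.0, 1.0, 2.0, 3.0):
    tpr, fpr = rates(t)
    print(f"threshold={t:.1f}  TPR={tpr:.3f}  FPR={fpr:.3f}  FNR={1 - tpr:.3f}")
```

No threshold makes both false positives and false negatives vanish at once; moving the green line only slides error from one side to the other.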

The Result: “Compromise-first Detection”

“Compromise-first detection” happens when a detector is unable to distinguish between attack and non-attack traffic, causing significant overlap of the two distributions, as shown below. The ratio of the TPF to the FPF (the true and false positive fractions) is sometimes called the Signal to Noise Ratio (SNR). A low SNR loses True Positives in a sea of False Positives, training IT to ignore warnings.

[Figure: heavily overlapping score distributions – the compromise-first detection regime]

Compromise-first detection is a very big deal. Delays in signature distribution together with detector inaccuracy aid attackers, and the cost of remediation is high: all systems that might have been penetrated must be re-imaged – and if the alert is a false positive, the entire exercise is a waste of time.
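The “sea of False Positives” is just base-rate arithmetic. A back-of-the-envelope sketch (the traffic mix and detector figures below are hypothetical, chosen only to illustrate the effect):

```python
# Hypothetical mix: attacks are a tiny fraction of what the appliance sees.
flows_per_day = 1_000_000
real_attacks  = 10
tpr, fpr      = 0.99, 0.001   # a generously accurate detector

true_alerts  = real_attacks * tpr                    # ~10 per day
false_alerts = (flows_per_day - real_attacks) * fpr  # ~1,000 per day

# Roughly one real alert per hundred false ones -- the SNR that trains
# security teams to ignore the console.
print(f"true alerts/day:  {true_alerts:.0f}")
print(f"false alerts/day: {false_alerts:.0f}")
print(f"P(alert is real): {true_alerts / (true_alerts + false_alerts):.3f}")
```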

The net-net for any network-based detection technology is that it likely:

  • Costs a lot more to run (in terms of increased operational headcount and complexity) than the sticker price on the box.
  • Doesn’t stop attacks that it detects, because operating such appliances in-line impacts performance substantially.
  • Doesn’t deliver alerts that are meaningful given the patch level of your endpoints.
  • Cannot stop the compromise.

Wouldn’t it be so much better if endpoints could simply defeat each attack, accurately inform IT without false alarms, and self-remediate? Well, they can!

September 3, 2014 / clintonkarr

Black Hat Survey: End Users Remain Biggest Security Headache as Compromised Endpoints Increase

Earlier this year, Bromium published “Endpoint Protection: Attitudes and Opinions,” a survey of more than 300 information security professionals. The results revealed that endpoints are vulnerable, anti-virus is ineffective and end users are a weak link.

These results were significant, so earlier this August Bromium conducted a similar survey at Black Hat. The Black Hat survey polled fewer than 100 respondents, so its results are less statistically robust; they are interesting nonetheless.


Similar to our previous research, Bromium found that nearly 75 percent of respondents believe that end users are their biggest security headache. As noted previously, the Verizon Data Breach Investigations Report found that 71 percent of breaches were the result of an attack on end-user devices, so these results should come as no surprise.

User devices can be compromised in a moment by drive-by downloads, system vulnerabilities and e-mail attachments – a challenge that is only exacerbated by mobile workers connecting to untrusted networks – yet it can be time-consuming and expensive for information security teams to fix these problems. The alternative, locking down system resources, is not a popular option because it greatly reduces productivity and degrades the user experience.

Are users your biggest security headache?

Yes: 74%
No: 14%
Don’t know: 11%

It is easy to understand why end users are such a headache when you consider the results of some of the other questions that were asked. Case in point: Bromium research determined that the total number of compromised endpoints has increased for the majority of respondents in the past 12 months.

 

In the past 12 months, has the total number of compromised endpoints in your organization:

Increased: 51%
Stayed the same: 34%
Decreased: 14%

These compromised endpoints create additional work for information security professionals since they have to be cleaned and remediated, which results in lost productivity for both the users and admins. Investing in anti-virus solutions is not enough, as respondents indicated they had to remediate compromised endpoints that had anti-virus on a monthly, weekly or even daily basis.

In the past 12 months, how frequently have you had to remediate a compromised endpoint that had anti-virus installed?

Monthly: 34%
Weekly: 29%
Daily: 20%
Never: 14%
Not sure: 3%

Ultimately, the reason that end users are such a headache for information security professionals is because endpoint protection solutions, such as anti-virus, are so ineffective. The majority of respondents believe their endpoint protection detection rates are less than 50 percent, which would explain why the overwhelming majority of respondents are also not confident in the ability of their current endpoint protection solution to detect unknown threats.

 

What are your current endpoint protection detection rates?

Less than 25 percent: 23%
Between 25 and 50 percent: 34%
Between 50 and 75 percent: 34%
More than 75 percent: 9%

Are you confident in the ability of your current endpoint protection solution to detect unknown threats (e.g. zero-day attacks)?

Yes: 34%
No: 66%

Symantec has declared that antivirus “is dead.” You have to agree when you consider these poor detection rates. Endpoint protection is a multi-billion dollar industry, yet security professionals are not confident in these solutions.

End users will remain a primary target for attacks because of the value they hold. Therefore, the market must adapt to meet the demands of a post-AV era. A defense-in-depth architecture can be undermined by a common vulnerability in the Windows kernel; indeed, Bromium Labs refers to this as LOL (layers on layers). Instead, organizations should invest in complementary advanced threat protection solutions.

Bromium vSentry and LAVA provide an advanced threat protection suite that delivers proactive endpoint protection for the post-AV era. Bromium vSentry isolates each untrusted task in a micro-VM to contain threats, while Bromium LAVA provides real-time visibility and analytics. Bromium micro-virtualization enforces security by design, instead of relying on signatures to detect the undetectable. Bromium is returning confidence to endpoint protection solutions.

August 14, 2014 / Dan Wolff

The Rise and Fall of Enterprise Security

Every day, enterprises are bombarded by rapidly multiplying and morphing advanced threats—and current network and endpoint security solutions aren’t capable of defeating these targeted attacks. This year a major IT analyst wrote: “Advanced targeted attacks are easily bypassing traditional firewalls and signature-based prevention mechanisms. All organizations should now assume that they are in a state of continuous compromise.”

The fundamental problem with security today is that the legacy operating systems and applications we use were developed with little concern for the potential introduction of hostile or “untrustworthy” applications or data. These systems have not kept pace with the growth in connectivity, and our computers still have no way to decide whether a document or an application is trustworthy or hostile. Malware continues to exploit the interaction between and within the software installed on a system to achieve its goals, with little protection provided by the system itself.

To compensate, the IT security industry has responded by developing new technologies to mitigate the threat of the day – whether it’s sandboxing, whitelisting, host web filtering or the latest trend, network sandboxing to identify threats already in the network (see chart below). Security spend has grown 294% since 2006, to $21B (source: Gartner), while reported data breaches have exploded: in 2013 there were 614 reported breaches in North America, disclosing over 91 million records.

[Chart: reported data breaches by year – 2013: 614 reported breaches, 91,982,172 records]

IT has had no choice but to assert control over users – and the networks, applications, media, websites, and documents they use. Every day companies deploy a unique mix of endpoint and network technologies that are invariably complex and expensive, and that often require additional staff just to run them. This approach is imperfect and will surely fail: productive employees must collaborate and communicate, and they often create their own “shadow” infrastructure. When this happens, a single click can lead to the next major cybersecurity breach. It is provably impossible to protect the enterprise against the unknown, undetectable zero-day attack with traditional, legacy cybersecurity tools.

The fact is that users are still getting infected with APTs and other malware, in spite of all of this spending. Looking at the following Virus Bulletin report, you can see how today’s antimalware products get an “F” grade for protection:

[Chart: Virus Bulletin RAP (Reactive and Proactive) test results]

https://www.virusbtn.com/vb100/rap-index.xml

…and these are not advanced threats! I talk to many customers who say their overall protection rate is under 50% – meaning more than half of threats get past their current defenses!

How is this happening? Malware is now designed to evade detection. By leveraging zero day exploits, polymorphism and the rapid evolution of web technology, malware evades “detection” based security solutions and infiltrates the organization by exploiting the inherent trust between operating system components. It may be weeks or months before a successful attack is discovered. Meanwhile valuable information can be stolen or critical infrastructure can be disrupted by the attackers.

Here is a brief overview of key protection technologies and their limitations in dealing with modern attacks.

Intrusion prevention system (IPS)
(IBM, McAfee Network Security Platform, Cisco, et al) Defends networks against known attacks by detecting and blocking their signatures in the network data stream. Includes some behavioral detection for certain threats. Limitations:

• Can’t block without a signature.
• Needs to be implemented at every ingress/egress access point.
• Costly, complex, and noisy, especially for geographically distributed networks.
• Absolutely no protection for mobile users outside the network.
• Mostly signature-based, with reliance on some behavioral tools.
• Encrypted network traffic can essentially blind a network IPS.
• Network admins HATE more bumps in the line, and an IPS adds a bump.

Network Sandboxing
(Damballa, FireEye, McAfee, et al) Detects infiltrations from targeted attacks after the attack is already in the network. Limitations:

• Does not stop or remediate threats to endpoints.
• Costly and noisy.
• Requires expert-level security personnel constantly monitoring events. (See the Target breach for a prime example.)

Web content filtering
(Websense, McAfee, BlueCoat, et al) Blocks access to known malicious websites to protect against web exploits and Trojan attacks. Limitations:

• Only blocks known malicious IP addresses.
• Needs to be implemented at every ingress/egress access point.
• Protection is diminished for mobile users and partners accessing the network remotely.

NAC
(Forescout, Bradford Networks, Cisco, et al) Ensures only ‘clean’ systems access the network; quarantines vulnerable systems and enforces network segmentation. Limitations:

• Complex to deploy and manage.
• False quarantines are common and cause major headaches and IT calls.
• Does not deal with remote users.

SIEM
(McAfee, HP, IBM, et al) Real-time SOC alerting with integrated endpoint intelligence. Limitations:

• Creates copious amounts of data that must be interpreted into actionable intelligence.

Endpoint Antivirus and other detection-based solutions
(Symantec, McAfee, Kaspersky, Trend Micro, Sophos, et al) Detect known threats on endpoints. Limitations:

• Cannot keep up with the rapid influx of new threats and variants.
• Can’t block without a file signature or behavioral rule.
• Detects only known threats or behaviors.
• Many false positives.
• Remediation is usually required even if the threat is detected.
• Limited attack intelligence.

Host intrusion prevention systems (HIPS)
(Symantec, McAfee HIPS, et al) Intercepts many zero-day attacks in real time by detecting common behaviors. Limitations:

• Has a chance of catching a zero-day attack, but can still miss many advanced threats.
• High operations overhead to configure and maintain.

Hardware enhanced detection
(McAfee Deep Defender) Loads as a boot driver and looks for rootkit behaviors before the OS loads. Limitations:

• Only detects/blocks some kernel mode rootkits. Does not block user mode rootkits.
• Consumes ~10% of CPU cycles while providing limited protection.

Application whitelisting
(Bit9, McAfee Application Control) Controls which applications are allowed to install and run on an endpoint by matching programs against a database of authorized “good” applications (the whitelist). Can be an effective way to block execution of malicious executables (a minimal sketch of the core hash lookup follows the limitations below). Limitations:

• Blocks users from downloading and using new tools and programs without IT involvement.
• Not integrated with other security tools, hard to manage, and requires business-process changes, as well as a large database of known-good applications.
• Successful on servers, which don’t change often, but largely unusable on end-user systems.
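
For illustration, the core of any whitelisting scheme is a hash lookup before execution. A minimal sketch (the allowlist digest below is a placeholder, not a real approved binary):

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of binaries IT has approved.
ALLOWLIST = {
    "8f434346648f6b96df89dda901c5176b10a6d83961dd3c1ac88b59b2dc327aa4",
}

def may_execute(path: str) -> bool:
    """Permit a binary to run only if its digest is on the approved list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWLIST
```

The operational pain points above follow directly: every legitimate update changes the digest, so the database must be kept current or users get blocked.
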
Software Sandboxing
(Invincea (Dell Protected Workspace), Sandboxie, Trustware) Creates a “sandbox” environment within the Windows OS to analyze execution of untrusted applications. Restricts the memory and file system resources of the untrusted application by intercepting system calls that could lead to access to sensitive areas of the system being protected. Limitations:

• Advanced malware can bypass any sandbox by taking advantage of kernel-mode vulnerabilities.
• User-mode malware can escape from any sandbox, permitting it to elevate its privileges, disable or bypass other forms of endpoint protection, and compromise the endpoint – including stealing data.
• Changes the user experience, causing support calls and training requirements.

Hardware enabled isolation via micro-VM
(Bromium) Isolates every user task in a hardware-based micro-virtual machine (micro-VM). Limitations:

• No known limitations in defeating zero-day kernel exploits.

I should also mention: end users have emerged as the weak link in enterprise security. With the proliferation of web, email and social communication, users are one click away from compromising their desktop. Mobile laptop users are further exposed, since they get limited protection from corporate network-based security mechanisms. Current defenses can be cumbersome to use and manage. All too frequently employees are given admin rights to enable their free use of any software; unfortunately, this also gives attackers a leg up when going after critical information like credit card numbers and intellectual property.
There is a better way forward

Patching can never keep up. Nor can detection. Or humans for that matter. The Bromium architecture offers the first ever approach that turns the received wisdom of the security industry on its head: Bromium vSentry® uses proprietary micro-virtualization technology to isolate content delivered via Internet browsers, documents, email, and more. Malware that may enter the Bromium Micro-VM® through vulnerable applications or malicious websites is unable to steal data or access either the protected system or the corporate network and is automatically discarded when the web session or document is closed by the user.

Task-level isolation means you can ignore browser vulnerabilities

Bromium vSentry automatically and instantly isolates vulnerable user-initiated tasks, such as opening an unknown web page in a new browser tab or an email attachment from an unknown sender. It can create hundreds of micro-VMs dynamically, in real time, on an endpoint. Users are not prompted to “allow” or “deny” actions and can focus on getting the most from their system without worrying about threats. The endpoint self-remediates, automatically discarding all changes made by the task. There is no need to rush out untested patches, impractical browser usage policies or new technologies that are known to be vulnerable. In short, you can relax knowing that any threats are isolated.

It’s time to stop the merry-go-round and head-scratching and gain control of your infrastructure.

To learn more about Bromium’s game-changing security architecture, please visit http://www.bromium.com.

July 28, 2014 / Simon Crosby

In praise of seamless, small-footprint, light-weight, transparent endpoint security

In a recent blog, Rick Holland of Forrester Research takes aim at meaningless vendor epithets, such as “light-weight”, “non-invasive” and “small-footprint” used to describe their endpoint security products.  As he astutely observes, what vendor would claim otherwise?

A recovering endpoint security administrator writing against a backdrop of failed technologies like HIPS, Holland points out (and I’m sure desktop IT pros would agree) that empowering the user is always #1 on the CIO’s list of priorities, and that a solution that slows a user’s PC to a crawl, or increases calls to the help desk, quickly negates any of its security benefits.

Holland reiterates key requirements for new technologies (I’ve abbreviated and <summarized> them; here’s the original):

  1. “New endpoint solutions must show that they can be effective and transparent to users.”
  2. “The administrator’s experience of the solution is also important. <A good UX enhances effectiveness.> Scalability is another key consideration..”
  3. “Some solutions focus on prevention (e.g. Bromium..) … But remember .. they must <deliver a good UX and empower the administrator>. Prevention is ideal, but assuming that adversaries will circumvent your controls, visibility is also important..”
  4. “Just because a solution says it can stop zero days, it doesn’t mean you’re safe. The adversary might target the solution itself … Remember, if it runs code, it can be exploited.”

He’s right, of course – and every vendor knows it. His arguments identify a critical need: a set of empirical metrics that can help customers trade off cost, user empowerment, security and administrative scalability.

The only metrics available today are (useless) AV rankings based on the percentage of known-bad attacks they detect. (Any product that doesn’t detect 100% of known-bad should be free).  There is no way to gauge security of systems against unknown attacks.  There are also no consistently applied measures for UX or administrative scalability.  This makes it difficult to compare AV to new endpoint protection solutions, and almost impossible to trade them off against Endpoint Visibility and Control (EVC) products that really don’t secure the enterprise. Some reasons why:

  • Whereas AV saps performance from the whole system, micro-virtualization, for example, imposes no overhead on IT-supplied apps but instead taxes each untrusted web site or document with the “imperceptible” (my epithet, from real customer feedback) overhead of a new micro-VM – about 20ms. How do we “measure” UX in such an environment?
  • If a sandbox is built into an application (e.g. Adobe Reader), is the overhead accounted to the app or to security, and how will we measure that? How do we measure user empowerment in a world of white-listing? If the app is installed in a security sandbox that gives visibility but doesn’t really secure the endpoint, is this more valuable?
  • When we add EVC products to the mix, it gets harder: It’s easy for any product to deliver an unchanged UX if it doesn’t actually protect the endpoint. But what’s the point of an endpoint security solution that … isn’t?  Can tools that don’t protect the endpoint be compared to solutions that do? (in my view, no.) Is EVC a glorified “Enterprise Breach Detection”, simply measuring the time from compromise to detection? How do we compare that to endpoint protection mechanisms that defeat the attack?
  • Ultimately, EVC tools get an easy ride because they don’t have to protect the endpoint, yet they increase cost and complexity – they need vast databases that are expensive to acquire and run, and they don’t reduce the workload on IT staff who still have to flatten compromised endpoints and reinstall Windows while users strut about frustrated and unproductive.
  • What of user experience? Unlike the world of VDI where a benchmark performance metric such as LoginVSI can be applied consistently across vendor products, in endpoint protection no consistent metrics are available. At Bromium we are adapting LoginVSI to permit us to provide consistent metrics for UX across both native and virtual desktops.
  • How much security is enough? Even the most secure endpoint security solution can be compromised, but is there any evidence of successful attacks in the wild? Is there evidence that pen-tests against a solution have been successful? It is a slippery slope to argue that because every security mechanism can theoretically be compromised, detecting a breach is all that matters.

Ultimately I believe we need to assess the cost, per user, per year, to deliver a secure, productive endpoint. We should include the cost of IT personnel to deploy and manage the desktop, apps and endpoint security tools, and to remediate when an attacker succeeds. We should include the cost of user-downtime during remediation and the cost of all network appliances, servers and databases.  We need to measure  UX in a consistent way, with real workloads, and get real user input.
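As a strawman, the metric I have in mind looks something like the sketch below. Every figure is a placeholder to be replaced with measured data – the point is the shape of the model, not the numbers:

```python
def annual_cost_per_user(license_cost,        # security licensing, per user/year
                         admin_hours,         # IT time per user/year (deploy/manage)
                         admin_rate,          # loaded hourly cost of IT staff
                         incidents,           # successful compromises per user/year
                         remediation_hours,   # IT hours to re-image, per incident
                         downtime_hours,      # user hours lost per incident
                         user_rate,           # loaded hourly cost of the user
                         infra_cost):         # per-user share of appliances/servers/DBs
    """Strawman: total cost to deliver a secure, productive endpoint."""
    remediation = incidents * (remediation_hours * admin_rate
                               + downtime_hours * user_rate)
    return license_cost + admin_hours * admin_rate + remediation + infra_cost

# Placeholder comparison: prevention-centric vs. detect-and-remediate (EVC).
prevention = annual_cost_per_user(60, 2, 75, 0.1, 0.5, 0.5, 80, 20)
evc        = annual_cost_per_user(40, 4, 75, 3.0, 4.0, 6.0, 80, 120)
print(f"prevention: ${prevention:,.0f}/user/yr   EVC: ${evc:,.0f}/user/yr")
```

Even with charitable inputs, a solution that merely detects pushes most of its true cost into the remediation term.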

Ideally our criteria should allow us to trade off architectures. For example: we could ask whether users would be more productive and secure with a better PC and robust endpoint security that protects them no matter what they click on, or with a cheaper device and an EVC solution that doesn’t stop attacks and requires remediation whenever it is attacked. Ultimately, I believe, the criteria should also allow us to account for the millions of dollars spent on proxies, network IDSs, storage and servers that play a role in endpoint security, and to question their utility in the face of new endpoint solutions.

In summary, Rick has done us a favor by calling out the vendor ecosystem for its use of meaningless epithets, and I am optimistic that the security industry can become more thoughtful and engage in meaningful discussion. I fear, however, that we will have no choice but to continue using these epithets until there is a decent way to empirically measure our claims. I welcome the opportunity to work together to develop a robust set of metrics that will cut through the nonsense of vendor marketing – I have many more thoughts on the topic.

July 16, 2014 / Simon Crosby

Microvisor + Hypervisor Makes Your VMs Secure by Design

I often get asked whether micro-virtualization can be used with a traditional hypervisor and full-OS “fat” VMs (humor: FAT VMs are another matter).

YES! There are powerful benefits in both client and server scenarios. I’ll focus on the user-centric uses that we currently support in vSentry:

  • VDI and DaaS: Make VDI/DaaS secure without legacy security software that kills scalability and UX.
    • The Microvisor runs nested in a VDI/DaaS desktop VM, on top of a “root” hypervisor that virtualizes Intel VT/AMD-V. We’ve optimized micro-Xen to run nested on VMware ESX with virtual hardware 9/10 – the most widely deployed virtual infrastructure for VDI/DaaS under both Citrix XenDesktop and VMware View.
    • None of XenServer, Hyper-V (WS12 & Azure), RHEL/KVM, or AWS supports nesting today, though the upstream work in Xen is done. Props to Canonical for their nesting support in KVM/Ubuntu (a quick host-side check for KVM nesting is sketched after this list). Today all nesting is software-based; hardware nesting via Intel VMCS Shadowing will start to arrive in server CPUs soon.
  • Client-hosted Virtual Desktops: Secure your BYO devices, developer desktops and Windows virtual desktops on a Mac or PC:
    • The Microvisor runs nested within a client-hosted Windows desktop VM (as for VDI/DaaS) on VMware Workstation or (on a Mac) VMware Fusion
    • Alternatively the Microvisor can run side-by-side with VMware Workstation/Fusion, effectively sharing VT/AMD-V. This allows us to secure a user desktop (Windows/OS-X) from attack – so that it can securely host an enterprise delivered VM. For this case we have two goals:
      1. Secure the host OS using micro-virtualization
      2. Also host additional full-OS VM(s) for desktop virtualization or test/dev. (with the option of protecting them too, using micro-virtualization)
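As an aside: on a Linux/KVM host you can quickly check whether the kernel module exposes nested VT-x/AMD-V to guests. A small sketch (Linux only; the module parameter paths are those used by stock kernels):

```python
from pathlib import Path

def nested_virt_status() -> str:
    """Report whether KVM exposes nested VT-x/AMD-V to guests (Linux only)."""
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            value = param.read_text().strip()  # 'Y' or '1' means enabled
            return f"{module}: nested={value}"
    return "KVM module not loaded (or not a Linux/KVM host)"

print(nested_virt_status())
```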

This raises a killer question: Could a single hypervisor run OS VMs and micro-VMs?  YES!   Micro-Xen does this today (though not as a supported feature yet).

Fortunately (as a result of our collaboration with Microsoft starting in 2006, at XenSource), micro-Xen can run Windows VMs saved from Hyper-V in VHD format. I use this to demo the Bromium Management Server (BMS) in a WS12/SQL VM on my vSentry protected laptop.  If you’d like a detailed technical description of how this works, let me know.

 

July 15, 2014 / Simon Crosby

How do you spell “Polymorphic”?

I guess the answer is “i r o n y”:  Last week a Bromium field employee searched for “polymorphic” on dictionary.com and was treated to a gloriously literal definition: The site dropped a banking Trojan!

[Screenshot: the dictionary.com page that served the attack]

Although the user was unaware of the attack and continued working,  vSentry automatically isolated the attack, erased the malware and alerted Bromium HQ.  The report provided, in real-time, a detailed forensic trace of the malware as it executed, together with an encrypted manifest containing the malware itself.   This allowed the Bromium Labs team to immediately see what had happened.  The LAVA trace is shown below, as it “popped”:

[Screenshot: the LAVA trace as it “popped”]

The attack is incredibly noisy – reaching out to scores of C&C sites and DNS servers.   If we turn off visualization of the network traffic and use the tools in LAVA to identify malicious activity, we can immediately zoom in on the crux of the attack, which is pictured below.   The site invokes Java, injects shellcode, and downloads, drops and executes OBPUPDAT.EXE, whose MD5 hash is shown on the screenshot.   The attack also modifies 35 Registry settings to persist, sets a new browser proxy, and starts a process to capture keystrokes.

[Screenshot: the crux of the attack – Java invoked, shellcode injected, OBPUPDAT.EXE dropped and executed]

The attack is a variation on previously delivered banking trojans.  OBPUPDAT.EXE steals user account details and other information delivered to the browser, and captures user passwords.  It can also download malicious software and allow remote access to the compromised device.
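The MD5 shown in the trace is what lets anyone cross-check the sample against VirusTotal. The triage step is trivial; a sketch (the local file path is hypothetical – LAVA reports the hash automatically):

```python
import hashlib

# Hash a captured sample so it can be looked up on VirusTotal.
with open("OBPUPDAT.EXE", "rb") as f:  # hypothetical local copy of the dropper
    data = f.read()

print("MD5:   ", hashlib.md5(data).hexdigest())
print("SHA256:", hashlib.sha256(data).hexdigest())
```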

The attack was delivered by dictionary.com on July 7th. The first AV vendor fix emerged on July 9th, but we don’t know how long the attack existed in the wild. VirusTotal has vendor signatures and analysis.

 
