Client-Side Virtualization Security at Warp Speed!

  • Virtualization-based security stops what next-gen antivirus misses!
  • Now you can have virtualization-based security and peak performance.
  • With more than one billion micro-VMs launched, we’ve had no reported breaches.

In my last article, I discussed how Bromium has made major breakthroughs in client-side virtualization performance and how virtualization-based security is now ready for prime time.

Now let’s lift the hood and check out how the latest virtualization-based security, powered by Bromium, actually performs.

With each major release of Bromium over the last two years, we have significantly reduced the resource footprint of Bromium virtualization and improved the user experience. Back in July 2016, when we released version 3.2 with our initial support for Windows 10 (Windows 7 and 8.1 were already supported), it was the fastest, best-performing edition of Bromium ever. Now, with the release of 3.2 Update 5 in November, we have made a quantum leap forward!

Check out some of the highlights…

Performance at Load: 3.2 vs. 3.2 Update 5

  • 36% reduction in CPU usage using default policies
  • 61% reduction in CPU usage using enhanced policies
  • 17% reduction in Memory usage using default policies
  • 24% reduction in Memory usage using enhanced policies

As you can see, we have made some major improvements, and they come on top of all the performance gains that shipped with the original 3.2 release! Now let’s dig a little deeper.

The Testing Process

Before I dive deeper into the numbers and how we were able to squeeze out so much additional performance, let’s review our testing process. In my previous life architecting and implementing some of the largest VDI systems in the world, I spent a lot of time automating performance tests and conducting scalability analysis. From a testing methodology perspective, the following are critical:

  • The test should be fully automated so that it is consistent and repeatable
  • The test should perform the same tasks a real user would, in the way a real user would
  • The test needs to run long enough to capture meaningful data over time

Automating the test removes human variability and makes each run predictable and consistent; multiple runs are then performed to confirm that the results are valid. For my tests I use a free and popular tool called AutoIt. Using AutoIt, I created a 40+ minute workflow that runs multiple applications and accesses multiple websites at the rate, and in the manner, that a real user would during typical peak activity. Below is a high-level summary of the workflow.

  • Runs Outlook & Skype
  • Opens Adobe PDF documents
  • Works with Word, Excel and PowerPoint documents
  • Browses numerous websites with up to 7 concurrent tabs using IE 11

Approximately half of the automated test exercises IE 11, with the other half working with Outlook, Office, or PDF documents. Detailed Perfmon counters are collected to determine total resource consumption during the test. I have an extensive fleet of laptops (thanks to eBay!) that I use to run the tests across various OS, CPU, disk, and RAM configurations.
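As a rough illustration of that last step, a Perfmon counter log exported to CSV can be reduced to the same three summary metrics reported below: average CPU, peak memory usage, and average IOPS. This is a minimal sketch; the column names and sample data here are assumptions for illustration, not the exact counter set used in these tests (the real counters would be along the lines of "\Processor(_Total)\% Processor Time", "\Memory\Committed Bytes", and "\PhysicalDisk(_Total)\Disk Transfers/sec").

```python
import csv
import io

# Hypothetical Perfmon CSV export: one row per sample interval.
SAMPLE_LOG = """timestamp,cpu_pct,committed_gb,disk_transfers_sec
10:00:00,18.0,5.0,20
10:00:15,22.0,5.1,23
10:00:30,20.0,4.9,20
"""

def summarize(csv_text):
    """Reduce a counter log to average CPU %, peak memory (GB), and average IOPS."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    n = len(rows)
    return {
        "cpu_avg_pct": sum(float(r["cpu_pct"]) for r in rows) / n,
        "peak_mem_gb": max(float(r["committed_gb"]) for r in rows),
        "iops_avg": sum(float(r["disk_transfers_sec"]) for r in rows) / n,
    }

if __name__ == "__main__":
    print(summarize(SAMPLE_LOG))
```

The same reduction works regardless of which tool captured the counters, which keeps comparisons between test runs apples-to-apples.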

Through a series of upcoming articles and videos, we will share performance numbers from multiple scenarios. These tests cover devices ranging from a 6+ year old laptop with a first-generation i3 processor and 4 GB of RAM running Windows 7, to newer i5- and i7-based laptops running Windows 10 with as much as 8 GB of RAM, and even VDI-hosted desktops accessed from thin clients, so stay tuned!

Test System Specifications

The performance data presented in this article was collected on the following system:

  • Lenovo T440s
  • Intel Core i5-4300U 1.9 GHz CPU
  • 8 GB RAM
  • 240 GB SSD
  • Windows 10 with Office 2016

As mentioned previously, I ran the 40+ minute AutoIt workload against the system multiple times under multiple configurations to analyze performance. I tested the following scenarios:

  • No Bromium
  • Bromium 3.2 Default Policies
  • Bromium 3.2 Update 5 Default Policies
  • Bromium 3.2 Update 5 with Tracking Protection Policy

For the Bromium-enabled scenarios, all PDF, Word, Excel, and PowerPoint documents and all IE 11 websites were marked as untrusted and were isolated using Bromium micro-virtualization technology, meaning the majority of the activity in the test occurred inside Bromium-protected micro-VMs. We focused the test on untrusted content so that the Bromium virtualization engine would get a heavier-than-normal workout; this lets us see the true performance impact and resource consumption of Bromium micro-virtualization in a worst-case scenario. In a typical production environment, many websites and internal Office and PDF documents are marked as trusted and run natively on the device, outside of a Bromium-protected VM. For this reason, in production customer environments the Bromium resource impact will often be lower than what is shown in our tests.

Below is a summary of the key performance metrics from the various tests.

  Scenario              CPU Average   Peak MEM Usage   IOPS Average
  No Bromium            20%           5.1 GB RAM       21 IOPS
  Bromium 3.2           33%           7.1 GB RAM       24 IOPS
  Bromium 3.2 U5        22%           5.9 GB RAM       21 IOPS
  Bromium 3.2 U5 + TP   13%           5.4 GB RAM       19 IOPS

First, note how a default deployment of 3.2 U5 dramatically reduces both CPU and memory consumption versus the original 3.2 release. Even with a default install, 3.2 U5 resource usage is not much higher than native, non-Bromium resource usage when the system is under load.

Now take a closer look at the resource usage of Bromium 3.2 U5 with a tracking protection policy versus the scenario without Bromium:

  Scenario              CPU Average   Peak MEM Usage   IOPS Average
  No Bromium            20%           5.1 GB RAM       21 IOPS
  Bromium 3.2 U5 + TP   13%           5.4 GB RAM       19 IOPS

Incredibly, the Bromium-protected system uses 35% less CPU than the non-Bromium system, and also generates roughly 10% fewer IOPS!

It really is quite amazing when you think about it.

We can run numerous websites and untrusted documents inside multiple micro-VMs with less CPU and fewer IOPS than would be consumed natively! We still require approximately 300 MB of additional memory compared to the non-Bromium test at peak load, but that is a very small amount of memory considering we had as many as 9 micro-VMs open concurrently during the workflow (7 websites and 2 Office documents).
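The relative savings quoted here follow directly from the table values; a quick sanity check of the arithmetic:

```python
# Values taken from the measurement tables above.
no_bromium = {"cpu": 20.0, "mem_gb": 5.1, "iops": 21.0}
u5_tp      = {"cpu": 13.0, "mem_gb": 5.4, "iops": 19.0}

# Relative savings of Bromium 3.2 U5 + TP versus the non-Bromium baseline.
cpu_savings = (no_bromium["cpu"] - u5_tp["cpu"]) / no_bromium["cpu"]
iops_savings = (no_bromium["iops"] - u5_tp["iops"]) / no_bromium["iops"]
extra_mem_mb = (u5_tp["mem_gb"] - no_bromium["mem_gb"]) * 1024

print(f"CPU savings: {cpu_savings:.0%}")       # 35%
print(f"IOPS savings: {iops_savings:.0%}")     # ~10%
print(f"Extra memory: {extra_mem_mb:.0f} MB")  # ~300 MB
```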

In addition to the incredible VM-based performance enhancements we have recently introduced, this also shows the power of tracking protection and how it can significantly reduce resource consumption. You might be asking, “What is tracking protection?” Tracking protection is a browser feature native to both Internet Explorer and Firefox that provides built-in ad-blocking capabilities. Stay tuned for a future article on tracking protection and the benefits of ad blocking.
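For the curious, IE’s tracking protection is driven by a Tracking Protection List (TPL), a small text file of block and allow rules. The fragment below is a minimal sketch of the format; the domains are placeholders, not the list used in these tests:

```
msFilterList
: Expires=7
# Block third-party requests to these (placeholder) ad/tracking domains
-d ads.example.com
-d tracker.example.net
# Always allow content from this (placeholder) domain
+d cdn.example.org
```

Blocking ad and tracker requests at the browser means those scripts never run inside the micro-VM at all, which is where the CPU and IOPS savings come from.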

So how do we do it? How does Bromium run so many VMs concurrently on a system while consuming little more than the resources required natively? Stay tuned, as that too will be the subject of a future blog!

In the meantime, please check out the following video on our YouTube channel, created by Andy Winiarski and me. We figured you would find it boring to watch a 40+ minute test workflow, so we created a 5-minute test that shows the performance and resource usage of a Bromium vs. non-Bromium system. Check it out and let us know what you think!
