August 14, 2014 / Dan Wolff

The Rise and Fall of Enterprise Security

Every day, enterprises are bombarded by rapidly multiplying and morphing advanced threats—and current network and endpoint security solutions aren’t capable of defeating these targeted attacks. This year a major IT analyst wrote: “Advanced targeted attacks are easily bypassing traditional firewalls and signature-based prevention mechanisms. All organizations should now assume that they are in a state of continuous compromise.”

The fundamental problem with security today is that the legacy operating systems and applications we use were developed with little concern for the potential introduction of hostile or “untrustworthy” applications or data. Unfortunately these systems have not kept pace with the growth in connectivity, and our computer systems still have no way to decide whether a document or an application is trustworthy or hostile. Malware continues to exploit the interactions between and within the software installed on a system to achieve its goals, with little protection provided by the system itself.
To compensate, the IT security industry has responded by developing new technologies to mitigate the threat of the day, whether it’s sandboxing, whitelisting, host web filtering or the latest trend of network sandboxing to identify threats already in the network (see chart below). Security spending has grown 294% since 2006, to $21B (source: Gartner), while reported data breaches have exploded: in 2013 there were 614 reported breaches in North America, disclosing over 91M records.

[Chart: 2013 saw 614 reported breaches, exposing 91,982,172 records]

IT has had no choice but to assert control over users – and the networks, applications, media, websites, and documents they use. Every day companies deploy a unique mix of endpoint and network technologies that are, without fail, complex and expensive, and that often require adding staff just to run them. This approach is imperfect and will surely fail: productive employees must collaborate and communicate, and they often create their own “shadow” infrastructure. When this happens, a single click can lead to the next major cybersecurity breach. It is provably impossible to protect the enterprise against the unknown, undetectable zero-day attack with traditional, legacy cybersecurity tools.

The fact is that users are still getting infected with APTs and other malware, in spite of all of this spending. Looking at the following Virus Bulletin report, you can see how today’s antimalware products get an “F” grade for protection:

[Chart: Virus Bulletin RAP averages]

https://www.virusbtn.com/vb100/rap-index.xml

…and these are not advanced threats! I talk to many customers who say their overall protection rate is under 50%, meaning more than half of all threats get past their current defenses!

How is this happening? Malware is now designed to evade detection. By leveraging zero-day exploits, polymorphism and the rapid evolution of web technology, malware evades detection-based security solutions and infiltrates the organization by exploiting the inherent trust between operating system components. It may be weeks or months before a successful attack is discovered. Meanwhile, valuable information can be stolen or critical infrastructure disrupted by the attackers.
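
To see why signature matching loses this race, consider a minimal sketch (hypothetical payloads; Python used purely for illustration). A signature engine that exact-matches hashes catches yesterday’s sample but misses a trivially mutated copy – polymorphism automates exactly this mutation:

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Toy signature database: the defender knows the hash of yesterday's sample.
    known_bad = {sha256(b"...exploit payload v1...")}

    def detected(payload: bytes) -> bool:
        """Signature check: exact match against known-bad hashes."""
        return sha256(payload) in known_bad

    print(detected(b"...exploit payload v1..."))  # True: the exact sample is known
    print(detected(b"...exploit payload v2..."))  # False: a one-byte change evades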

Here is a brief overview of key protection technologies and their limitations in dealing with modern attacks.

Intrusion prevention system (IPS)
(IBM, McAfee Network Security Platform, Cisco, et al) Defends networks against known attacks by detecting and blocking their signatures in the network datastream. Includes some behavioral detection for certain threats. Limitations:

• Can’t block without a signature.
• Needs to be implemented at every ingress/egress access point.
• Costly, complex, and noisy, especially for geographically distributed networks.
• Absolutely no protection for mobile users outside of the network.
• Mostly signature-based, with only limited reliance on behavioral tools.
• Encryption of network traffic stream can essentially blind network IPS.
• Network admins HATE to have more bumps in the line, and an IPS adds a bump.

Network Sandboxing
(Damballa, FireEye, McAfee, et al) Detects infiltrations from targeted attacks, after the attack is in the network. Limitations:

• Does not stop or remediate threats to endpoints.
• Costly and noisy.
• Requires expert-level security personnel constantly monitoring events. (See the Target breach for a prime example.)

Web content filtering
(Websense, McAfee, BlueCoat, et al) Blocks access to known malicious websites to protect against web exploits and Trojan attacks. Limitations:

• Only blocks known malicious IP addresses.
• Needs to be implemented at every ingress/egress access point.
• Protection is diminished for mobile users and partners accessing the corporate network.

NAC
(Forescout, Bradford Networks, Cisco, et al) Ensures only ‘clean’ systems access the network; quarantines vulnerable systems and enforces network segmentation. Limitations:

• Complex to deploy and manage.
• False quarantines are common and cause major headaches and IT calls.
• Does not deal with remote users.

SIEM
(McAfee, HP, IBM, et al)
Real-time SOC alerting, integrated endpoint intelligence. Limitations:

• Creates copious amounts of data that must be interpreted into actionable intelligence.

Endpoint Antivirus and other detection-based solutions
(Symantec, McAfee, Kaspersky, Trend Micro, Sophos, et al) Detect known threats on endpoints. Limitations:

• Cannot keep up with the rapid influx of new threats and variants.
• Can’t block without a file signature or behavioral rule.
• Detects only known threats or behaviors.
• Generates many false positives.
• Remediation is usually required even if the threat is detected.
• Provides limited attack intelligence.

Host intrusion prevention systems (HIPS)
(Symantec, McAfee HIPS, et al) Intercepts many zero-day attacks in real time by detecting common behaviors. Limitations:

• Has a chance to catch a zero-day attack, but can still miss many advanced threats.
• High operations overhead to configure and maintain.

Hardware-enhanced detection
(McAfee Deep Defender) Loads as a boot driver and looks for rootkit behaviors before the OS loads. Limitations:

• Only detects/blocks some kernel mode rootkits. Does not block user mode rootkits.
• Consumes ~10% of CPU cycles while providing limited protection.

Application whitelisting
(Bit9, McAfee Application Control) Controls which applications are allowed to install and run on an endpoint by matching authorized programs (the whitelist) to a database of “good” applications. Can be an effective way to block execution of malicious executables. Limitations:

• Blocks users from downloading and using new tools and programs without IT involvement.
• Not integrated with other security tools; hard to manage and requires business-process changes. Also requires a large database of known-good applications.
• Successful on servers, which don’t change often, but is largely unusable on end-user systems.
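
The matching step itself is simple enough to sketch (hypothetical digest values; real products add signed catalogs, certificate trust and update workflows). It also shows why every legitimate patch or new tool forces a database update:

    import hashlib
    from pathlib import Path

    # Hypothetical allowlist: SHA-256 digests of binaries IT has approved.
    APPROVED = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def may_execute(binary: Path) -> bool:
        """Default-deny: a binary runs only if its digest is on the allowlist."""
        digest = hashlib.sha256(binary.read_bytes()).hexdigest()
        return digest in APPROVED
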
Software Sandboxing
(Invincea (Dell Protected Workspace), Sandboxie, Trustware)
Creates a “sandbox” environment within the Windows OS to analyze execution of untrusted applications. Restricts the memory and file system resources of the untrusted application by intercepting system calls that could lead to access to sensitive areas of the system being protected. Limitations:

• Advanced malware can bypass any sandbox by taking advantage of kernel-mode vulnerabilities.
• User-mode malware can escape from any sandbox, elevate its privileges, and disable or bypass other forms of endpoint protection, compromising the endpoint and enabling data theft.
• Changes the user experience, causing support calls and training requirements.
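
As a toy illustration of the “intercepting system calls” idea, here is an in-process stand-in using Python’s audit hooks (real sandboxes hook at the OS level). Because the policy runs in user mode alongside the untrusted code, anything that reaches kernel mode simply steps around it – the limitation described above:

    import sys

    SENSITIVE_PREFIXES = ("/etc/", "C:\\Windows\\")

    def deny_sensitive_opens(event, args):
        # Python's built-in "open" audit event fires with (path, mode, flags).
        if event == "open" and isinstance(args[0], str) \
                and args[0].startswith(SENSITIVE_PREFIXES):
            raise PermissionError(f"sandbox policy: blocked open of {args[0]}")

    sys.addaudithook(deny_sensitive_opens)  # Python 3.8+

    try:
        open("/etc/hosts")  # intercepted before the underlying OS call is made
    except PermissionError as exc:
        print(exc)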

Hardware enabled isolation via micro VM
(Bromium) Isolates every user task in a hardware-based micro-virtual machine (micro-VM). Limitations:

• No known limitations in defeating zero-day kernel exploits.

I should also mention: end users have emerged as the weak link in enterprise security. With the proliferation of web, email and social communication, users are one click away from compromising their desktop. Mobile laptop users are further exposed, since they get limited protection from corporate network-based security mechanisms. Current defenses can be cumbersome to use and manage. All too frequently, employees are given admin rights to enable their free use of any software; unfortunately, this also gives attackers a leg up when going after critical information like credit card numbers and intellectual property.

There is a better way forward

Patching can never keep up. Nor can detection. Or humans, for that matter. The Bromium architecture offers the first-ever approach that turns the received wisdom of the security industry on its head: Bromium vSentry® uses proprietary micro-virtualization technology to isolate content delivered via Internet browsers, documents, email, and more. Malware that may enter the Bromium Micro-VM® through vulnerable applications or malicious websites is unable to steal data or access either the protected system or the corporate network, and it is automatically discarded when the user closes the web session or document.

Task-level isolation means you can ignore browser vulnerabilities

Bromium vSentry automatically and instantly isolates vulnerable user-initiated tasks, such as opening an unknown web page in a new browser tab or an email attachment from an unknown sender. It can create hundreds of micro-VMs dynamically, in real time, on an endpoint. Users are not prompted to “allow” or “deny” actions and can focus on getting the most from their system without worrying about threats. The endpoint self-remediates, automatically discarding all changes made by the task. There is no need to rush out untested patches, impractical browser usage policies or new technologies that are known to be vulnerable. In short, you can relax knowing that any threats are isolated.
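
As a very loose analogy for this “discard on close” behavior (not Bromium’s mechanism, which is hardware virtualization – just a sketch of the idea): run the untrusted task against throwaway state, and delete everything it touched when it exits.

    import subprocess, sys, tempfile

    def run_untrusted(cmd: list) -> int:
        """Run a task against a throwaway working area."""
        with tempfile.TemporaryDirectory() as scratch:
            result = subprocess.run(cmd, cwd=scratch)
        # Leaving the 'with' block deletes the scratch area - and any
        # files the task dropped - so no manual remediation is needed.
        return result.returncode

    # The dropped file does not survive the task.
    run_untrusted([sys.executable, "-c", "open('dropped.exe', 'w').write('x')"])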

It’s time to stop the merry-go-round and head scratching and gain control of your infrastructure.

To learn more about Bromium’s game-changing security architecture, please visit http://www.bromium.com.

July 28, 2014 / Simon Crosby

In praise of seamless, small-footprint, light-weight, transparent endpoint security

In a recent blog, Rick Holland of Forrester Research takes aim at meaningless vendor epithets, such as “light-weight”, “non-invasive” and “small-footprint”, used to describe endpoint security products. As he astutely observes, what vendor would claim otherwise?

A recovering endpoint security administrator writing against a backdrop of failed technologies like HIPS, Holland points out (and I’m sure desktop IT pros would agree) that empowering the user is always #1 on the CIO’s list of priorities, and a solution that slows a user’s PC to a crawl, or increases calls to the help desk, quickly negates any of its security benefits.

Holland reiterates key requirements for new technologies (I’ve abbreviated and summarized them; here’s the original):

  1. “New endpoint solutions must show that they can be effective and transparent to users.”
  2. “The administrator’s experience of the solution is also important. UX enhances effectiveness. Scalability is another key consideration…”
  3. “Some solutions focus on prevention (e.g. Bromium..) … But remember… they must enhance UX and empower the administrator. Prevention is ideal, but assuming that adversaries will circumvent your controls, visibility is also important…”
  4. “Just because a solution says it can stop zero days, it doesn’t mean you’re safe. The adversary might target the solution itself … Remember, if it runs code, it can be exploited.”

He’s right of course – and every vendor knows it.   And his arguments identify a critical need – a set of empirical metrics that can help customers trade off cost, user empowerment, security and administrative scalability.

The only metrics available today are (useless) AV rankings based on the percentage of known-bad attacks they detect. (Any product that doesn’t detect 100% of known-bad should be free).  There is no way to gauge security of systems against unknown attacks.  There are also no consistently applied measures for UX or administrative scalability.  This makes it difficult to compare AV to new endpoint protection solutions, and almost impossible to trade them off against Endpoint Visibility and Control (EVC) products that really don’t secure the enterprise. Some reasons why:

  • Whereas AV saps performance from the whole system, micro-virtualization, for example, imposes no overhead on IT-supplied apps but instead taxes each untrusted web site or document with the “imperceptible” (my epithet, from real customer feedback) overhead of a new micro-VM – about 20ms. How do we “measure” UX in such an environment?
  • If a sandbox is built into an application (eg: Adobe Reader), is the overhead accounted to the app, or to security, and how will we measure that? How do we measure user empowerment in a world of white-listing?  If the app is installed in a security sandbox that gives visibility but doesn’t really secure the endpoint, is this more valuable?
  • When we add EVC products to the mix, it gets harder: It’s easy for any product to deliver an unchanged UX if it doesn’t actually protect the endpoint. But what’s the point of an endpoint security solution that … isn’t?  Can tools that don’t protect the endpoint be compared to solutions that do? (in my view, no.) Is EVC a glorified “Enterprise Breach Detection”, simply measuring the time from compromise to detection? How do we compare that to endpoint protection mechanisms that defeat the attack?
  • Ultimately, EVC tools get an easy ride because they don’t have to protect the endpoint, yet they increase cost and complexity – they need vast databases that are expensive to acquire and run, and they don’t reduce the workload on IT staff who still have to flatten compromised endpoints and reinstall Windows while users strut about frustrated and unproductive.
  • What of user experience? Unlike the world of VDI, where a benchmark performance metric such as LoginVSI can be applied consistently across vendor products, in endpoint protection no consistent metrics are available. At Bromium we are adapting LoginVSI to let us provide consistent UX metrics across both native and virtual desktops (a crude sketch of one such latency measurement follows this list).
  • How much security is enough? Even the most secure endpoint security solution can be compromised, but is there any evidence of successful attacks in the wild? Is there evidence that pen-tests against a solution have been successful? It is a slippery slope to argue theoretically that every security mechanism can be compromised, and that therefore detecting a breach is all that matters.
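
As a crude sketch of what a consistent latency metric might look like (process spawn stands in for micro-VM creation here; a real benchmark would instrument the product itself, LoginVSI-style):

    import statistics, subprocess, sys, time

    # Sample the wall-clock overhead of launching an isolated task many
    # times, then report robust statistics rather than a single number.
    samples_ms = []
    for _ in range(20):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"])
        samples_ms.append((time.perf_counter() - start) * 1000)

    samples_ms.sort()
    print(f"median launch overhead: {statistics.median(samples_ms):.1f} ms")
    print(f"worst case observed:    {samples_ms[-1]:.1f} ms")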

Ultimately I believe we need to assess the cost, per user, per year, to deliver a secure, productive endpoint. We should include the cost of IT personnel to deploy and manage the desktop, apps and endpoint security tools, and to remediate when an attacker succeeds. We should include the cost of user-downtime during remediation and the cost of all network appliances, servers and databases.  We need to measure  UX in a consistent way, with real workloads, and get real user input.
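
A back-of-the-envelope version of that model might look like the following (every figure is a placeholder assumption, not real pricing):

    def cost_per_user_per_year(users, license_per_user, it_staff, infrastructure,
                               incidents_per_year, remediation_hours,
                               downtime_hours, it_rate=75.0, user_rate=50.0):
        """Total cost to deliver a secure, productive endpoint, per user."""
        fixed = license_per_user * users + it_staff + infrastructure
        per_incident = remediation_hours * it_rate + downtime_hours * user_rate
        return (fixed + incidents_per_year * per_incident) / users

    # Placeholder inputs: 5,000 users, 200 reimages a year, 4 IT hours and
    # 8 hours of lost user productivity per incident.
    print(f"${cost_per_user_per_year(5000, 40, 300_000, 250_000, 200, 4, 8):.2f}")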

Ideally our criteria should allow us to trade off architectures. For example: we could ask whether users would be more productive and secure with a better PC and robust endpoint security that protects them no matter what they click on, or with a cheaper device and an EVC solution that doesn’t stop attacks and requires remediation whenever it is attacked. Ultimately, I believe, the criteria should also allow us to account for the millions of dollars spent on proxies, network IDSs, storage and servers that play a role in endpoint security, and to question their utility in the face of new endpoint solutions.

In summary, Rick has done us a favor by calling out the vendor ecosystem for its use of meaningless epithets, and I am optimistic the security industry can become more thoughtful and engage in meaningful discussion. I fear, however, that we will have no choice but to continue to use them until there is a decent way to empirically measure our claims. I welcome the opportunity to work together to develop a robust set of metrics that will cut through the nonsense of vendor marketing – I have many more thoughts on the topic.

July 16, 2014 / Simon Crosby

Microvisor + Hypervisor Makes Your VMs Secure by Design

I often get asked whether micro-virtualization can be used with a traditional hypervisor and full-OS “fat” VMs (humor: FAT VMs are another matter).

YES! There are powerful benefits in both client and server scenarios. I’ll focus on the user-centric uses that we currently support in vSentry:

  • VDI and DaaS: Make VDI/DaaS secure without legacy security software that kills scalability and UX.
    • The Microvisor runs nested in a VDI/DaaS desktop VM, running on top of a “root” hypervisor that virtualizes Intel VT/AMD-V. We’ve optimized micro-Xen to run nested on VMware ESX with virtual hardware 9/10 – the most widely deployed virtual infrastructure for VDI/DaaS under both Citrix XenDesktop and VMware View.
    • None of XenServer, Hyper-V (WS12 & Azure), RHEL/KVM, or AWS supports nesting today, though the upstream work in Xen is done. Props to Canonical for their nesting support in KVM/Ubuntu (a quick host-side check for this is sketched after this list). Today all nesting is software-based. Hardware nesting via Intel VMCS shadowing will start to arrive in server CPUs soon.
  • Client-hosted Virtual Desktops: Secure your BYO devices, developer desktops and Windows virtual desktops on a Mac or PC:
    • The Microvisor runs nested within a client-hosted Windows desktop VM (as for VDI/DaaS) on VMware Workstation or (on a Mac) VMware Fusion
    • Alternatively the Microvisor can run side-by-side with VMware Workstation/Fusion, effectively sharing VT/AMD-V. This allows us to secure a user desktop (Windows/OS X) from attack – so that it can securely host an enterprise-delivered VM. For this case we have two goals:
      1. Secure the host OS using micro-virtualization
      2. Also host additional full-OS VM(s) for desktop virtualization or test/dev (with the option of protecting them too, using micro-virtualization).
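
For the KVM case above, whether software nesting is enabled on a Linux host can be read straight from sysfs (a quick sketch; ESX nesting is configured per-VM via virtual hardware settings instead):

    from pathlib import Path

    def kvm_nested_enabled() -> bool:
        """Check the kvm_intel/kvm_amd module parameter that gates nesting."""
        for module in ("kvm_intel", "kvm_amd"):
            param = Path(f"/sys/module/{module}/parameters/nested")
            if param.exists():
                return param.read_text().strip() in ("1", "Y", "y")
        return False  # KVM not loaded, or an unsupported host

    print("KVM nested virtualization enabled:", kvm_nested_enabled())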

This raises a killer question: Could a single hypervisor run OS VMs and micro-VMs?  YES!   Micro-Xen does this today (though not as a supported feature yet).

Fortunately (as a result of our collaboration with Microsoft starting in 2006, at XenSource), micro-Xen can run Windows VMs saved from Hyper-V in VHD format. I use this to demo the Bromium Management Server (BMS) in a WS12/SQL VM on my vSentry protected laptop.  If you’d like a detailed technical description of how this works, let me know.


July 15, 2014 / Simon Crosby

How do you spell “Polymorphic”?

I guess the answer is “i r o n y”:  Last week a Bromium field employee searched for “polymorphic” on dictionary.com and was treated to a gloriously literal definition: The site dropped a banking Trojan!

[Screenshot: the dictionary.com “polymorphic” page that served the Trojan]

Although the user was unaware of the attack and continued working, vSentry automatically isolated the attack, erased the malware and alerted Bromium HQ. The report provided, in real time, a detailed forensic trace of the malware as it executed, together with an encrypted manifest containing the malware itself. This allowed the Bromium Labs team to immediately see what had happened. The LAVA trace is shown below, as it “popped”:

[Screenshot: LAVA trace of the attack]

The attack is incredibly noisy – reaching out to scores of C&C sites and DNS servers.   If we turn off visualization of the network traffic and use the tools in LAVA to identify malicious activity, we can immediately zoom in on the crux of the attack, which is pictured below.   The site invokes Java, injects shellcode, and downloads, drops and executes OBPUPDAT.EXE, whose MD5 hash is shown on the screenshot.   The attack also modifies 35 Registry settings to persist, sets a new browser proxy, and starts a process to capture keystrokes.

[Screenshot: the crux of the attack – Java invocation, shellcode injection, and the drop and execution of OBPUPDAT.EXE]

The attack is a variation on previously delivered banking trojans.  OBPUPDAT.EXE steals user account details and other information delivered to the browser, and captures user passwords.  It can also download malicious software and allow remote access to the compromised device.

The attack was delivered by dictionary.com on July 7th. The first AV vendor fix emerged on July 9th, but we don’t know how long the attack existed in the wild. VirusTotal has vendor signatures and analysis.
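
Once a hash like OBPUPDAT.EXE’s is published, it becomes a simple indicator of compromise to sweep for. A minimal sketch (placeholder hash only – the real value is on the screenshot and on VirusTotal):

    import hashlib
    from pathlib import Path

    # Placeholder IOC list: substitute the published MD5 of OBPUPDAT.EXE.
    IOC_MD5 = {"d41d8cd98f00b204e9800998ecf8427e"}

    def matches_ioc(path: Path) -> bool:
        """Hash a file on disk and compare it against known indicators."""
        return hashlib.md5(path.read_bytes()).hexdigest() in IOC_MD5

    hits = [p for p in Path(".").rglob("*.exe") if matches_ioc(p)]
    print(hits or "no IOC matches found")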


July 10, 2014 / clintonkarr

Detectible Dysfunction

In 2003, security industry analyst Richard Stiennon famously declared that intrusion detection systems would be obsolete by 2005, writing at the time:

“The underlying problem with IDS is that enterprises are investing in technology to detect intrusions on a network. This implies they are doing something wrong and letting those attacks in.”

To some extent, Stiennon was right: intrusion detection systems have become obsolete, yet his comment remains relevant today. The NIST Cybersecurity Framework, published in February 2014, organizes cybersecurity into five basic functions: identify, protect, detect, respond and recover. Three-fifths of this framework (detect, respond and recover) assume compromise will occur.

For the past ten years, threat detection has been a Band-Aid on a bullet wound. The good news is that the industry is finally starting to come around to this realization. Symantec has acknowledged that anti-virus is dead, detecting just 45 percent of cyber-attacks. The Target data breach serves as a cautionary tale: its threat detection systems alerted response teams, which nevertheless failed to prevent the breach.


What is the problem? Why is it so hard to make threat detection solutions work effectively? It turns out, there are a few reasons:

  1. Performance vs. security – Threat detection systems rely on signatures to catch cyber-attacks, but the more signatures an organization has enabled, the more performance takes a hit. Organizations face a dilemma, balancing performance and security, which typically results in partial coverage as some signatures are disabled to maintain performance.
  2. Management is time-consuming – The process of tuning signatures for threat detection solutions is labor-intensive and ongoing, because new signatures are released all the time. If organizations don’t take the time to tune signatures, they generate more false positives, lowering the signal-to-noise ratio until real threats are overlooked (see the sketch after this list).
  3. Management is error-prone – Once signatures create too much of a performance impact or the volume of false positives becomes too great, organizations tend to deploy threat detection systems in “alert only” mode. The issue with “alert only” threat detection is that it requires the security response team to remain vigilant, which the Target breach has demonstrated is virtually impossible.
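
The arithmetic behind that alert fatigue is worth making explicit (illustrative numbers): even an accurate detector drowns its handful of true alerts in false ones when almost all traffic is benign.

    events_per_day = 1_000_000   # almost all benign
    true_attacks = 10
    detection_rate = 0.99        # fraction of real attacks that raise an alert
    false_positive_rate = 0.001  # fraction of benign events that raise an alert

    true_alerts = true_attacks * detection_rate
    false_alerts = (events_per_day - true_attacks) * false_positive_rate

    precision = true_alerts / (true_alerts + false_alerts)
    print(f"{false_alerts:,.0f} false alerts vs {true_alerts:.0f} real ones")
    print(f"chance a given alert is real: {precision:.1%}")  # about 1%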

Ten years later, Richard Stiennon is right: threat detection is obsolete, and investing in it is exactly the “something wrong” that organizations are doing. Instead of focusing on detecting the attacks that get through, organizations need to focus on protection.

July 8, 2014 / Simon Crosby

If you had only one more security dollar…


what would you spend it on?   Improve endpoint security, or better protect your network or your applications?

This was the topic debated by three Gartner security analysts: Neil MacDonald (endpoint), Greg Young (network) and Joseph Feiman (application) at #GartnerSEC in DC, in June.

Watching Gartner analysts debate each other is fun – much more fun than watching them pontificate.  They live and die by their cred, so the gloves came off pretty early and they landed heavy blows on all three categories:

  • In spite of the promises of network security vendors, it seems pretty easy for malware writers to bypass state-of-the-art network protection; rapid growth in encrypted traffic will increasingly leave network security blind; high false-positive ratios leave network security teams with floods of red alerts; and even if an attack is detected, IT still has to remediate the endpoint. Finally, both “cloud” and “mobility” make the enterprise network less relevant in both detection and attack prevention.
  • Application security is a pipe dream.   It’s been “almost ready” for ages, but it never seems to come closer to reality.  Reason: the complexity of modelling applications in a way that is semantically useful for security.  Moreover, the adoption of cloud and SaaS makes instrumentation of apps even less likely.
  • The endpoint is an unmitigated disaster with failed AV technologies and untrainable users who click on bad things. BYOD, mobility, PC/Mac… all make it worse.

Each analyst did his best to defend his turf too:

  • More hardware ought to solve the network crypto problem (my view: if at all feasible this will be expensive); Better instrumentation and big-data analysis will help to reduce the challenge of picking out the needle from the haystack. And, mobile users need to be forced onto the VPN.
  • New endpoint technologies, including isolation of untrusted execution, can transform the trustworthiness of the endpoint – which is responsible for >70% of enterprise breaches. Alternatively, new approaches to endpoint detection (e.g. searching for IOCs) can help to identify compromised systems more quickly.
  • Application security could be “a big win”.   A practical approach is to dis-aggregate apps into multiple services in VMs, and to instrument each VM container to look for application-layer security anomalies.

But what of the original question – where can a CISO get the most value for her additional security dollar?

To my mind the answer is easy (if predictable): Micro-virtualization is a single solution that simultaneously addresses the biggest challenges in each of network, endpoint and app security:

  1. Micro-virtualization secures the endpoint – the source of > 70% of enterprise breaches – enabling it to protect itself by design from attacks that originate from the network or untrustworthy attachments or files on removable storage. It also automatically remediates malware.
  2. Micro-virtualization secures the enterprise network from end-point originated attacks. Malware that executes in a hardware-isolated micro-VM cannot access the enterprise network or any  high-value SaaS sites.   Malware can never use a client device to probe the enterprise network.
  3. Micro-virtualization secures vulnerable client applications and web-apps delivered to end users.   Each site or app is independently isolated, with no access to valuable data or networks – protecting the app from an attacked enterprise device/user, preventing credential theft and session hijacking.  It can also enforce key policies including use of crypto, restricting access to networks/sites, and enforcing DLP.

Micro-virtualization delivers the greatest security bang for the buck because this single solution solves the endpoint, network and application security problems for > 70% of enterprise breaches.

Add to this the fact that a micro-virtualized endpoint never needs remediation, protects itself even when using un-patched third party software, and renders a vast swath of kernel zero-day vulnerabilities irrelevant.

Finally, recognize that micro-virtualization empowers users to be productive anywhere, to click on anything, on any network, and – if the endpoint is attacked – it delivers precise, detailed forensic insights, in real time, without false alarms.

A dollar spent on micro-virtualization massively reduces the workload on the security team while making it better informed and strategically aligned with the objectives of the business.  It’s a no-brainer.

July 1, 2014 / Bill Gardner

The Dawn Of A New Era In Corporate Cyber Threats?


Cyber criminals know where the money is, and for many years they have been attacking businesses in the hopes of a big payout. Hacking and manipulating financial systems to steal money, stealing customer credit and banking information to sell on the black market, and stealing trade secrets have been the traditional stock in trade of the black-hat community. Successful attacks have been very costly to businesses and can run into the hundreds of millions of dollars for a large breach like the one suffered by Target in late 2013.

While a successful cyber attack can be costly, companies have been able to continue operations after a major breach. Despite additional investments in traditional security technologies, the cost and frequency of successful attacks continue to rise. Many larger businesses have tried to offset this trend by investing in insurance coverage to help cover the costs of a successful cyber attack and reduce their overall risk. But this approach only makes sense if the business is able to continue to operate after the attack.

What if the hackers that attacked Target or eBay had managed to destroy the data they were able to access, rather than just stealing it while leaving it intact? What if a health care provider were to permanently lose all of its patient records, billing records and payroll records? How about the law firm that suddenly finds all of its client records have disappeared, never to be recovered, or the bank that no longer has any record of customer deposits? Would any organization survive the loss of such critical information? Would their disaster recovery and backup procedures protect them and ensure the continuity of the business? Disturbingly, the answer today is clearly “maybe” rather than “of course”.

For a high-tech software hosting company by the name of Code Spaces, the unthinkable has happened. Hackers recently penetrated its systems and, rather than stealing information, demanded payment in exchange for not destroying its data. When Code Spaces personnel attempted to determine the validity and extent of the compromise, the attackers detected these attempts and deleted the vast majority of the company’s data, as well as its backups and mirror sites. Management at Code Spaces announced that, due to the scale of the loss and damage, they had no choice but to cease operations and close their doors.

While this might be an isolated incident, my instincts tell me that this is a watershed moment in the war between the criminals and the legitimate business community. Once the cyber criminal community at large realizes the power it can now wield, there is no turning back. And can any business with a fiduciary responsibility to its stakeholders take the chance that a cyber extortionist might follow through on their threats and destroy the company beyond recovery? Only time will tell.
