In a recent blog post, Rick Holland of Forrester Research takes aim at the meaningless epithets, such as “light-weight”, “non-invasive” and “small-footprint”, that vendors use to describe their endpoint security products. As he astutely observes, what vendor would claim otherwise?
A recovering endpoint security administrator himself, and writing against a backdrop of failed technologies like HIPS, Holland points out (and I’m sure desktop IT pros would agree) that empowering the user is always #1 on the CIO’s list of priorities, and that a solution that reduces a user’s PC to a crawl, or increases calls to the help desk, quickly negates any of its security benefits.
Holland reiterates key requirements for new technologies (I’ve abbreviated and summarized them; see his post for the original):
- “New endpoint solutions must show that they can be effective and transparent to users.”
- “The administrator’s experience of the solution is also important: a good UX enhances effectiveness. Scalability is another key consideration.”
- “Some solutions focus on prevention (e.g. Bromium) … But remember, they must deliver a good UX and empower the administrator. Prevention is ideal, but assuming that adversaries will circumvent your controls, visibility is also important.”
- “Just because a solution says it can stop zero days, it doesn’t mean you’re safe. The adversary might target the solution itself … Remember, if it runs code, it can be exploited.”
He’s right of course – and every vendor knows it. And his arguments identify a critical need – a set of empirical metrics that can help customers trade off cost, user empowerment, security and administrative scalability.
The only metrics available today are (useless) AV rankings based on the percentage of known-bad attacks they detect. (Any product that doesn’t detect 100% of known-bad should be free.) There is no way to gauge the security of systems against unknown attacks. There are also no consistently applied measures for UX or administrative scalability. This makes it difficult to compare AV to new endpoint protection solutions, and almost impossible to trade them off against Endpoint Visibility and Control (EVC) products that really don’t secure the enterprise. Some reasons why:
- Whereas AV saps performance from the whole system, micro-virtualization, for example, imposes no overhead on IT supplied apps but instead taxes each untrusted web site or document with the “imperceptible” (my epithet, from real customer feedback) overhead of a new micro-VM – about 20ms. How do we “measure” UX in such an environment?
- If a sandbox is built into an application (e.g. Adobe Reader), is its overhead accounted to the app or to security, and how will we measure that? How do we measure user empowerment in a world of white-listing? If the app is installed in a security sandbox that gives visibility but doesn’t really secure the endpoint, is that more valuable?
- When we add EVC products to the mix, it gets harder: It’s easy for any product to deliver an unchanged UX if it doesn’t actually protect the endpoint. But what’s the point of an endpoint security solution that … isn’t? Can tools that don’t protect the endpoint be compared to solutions that do? (in my view, no.) Is EVC a glorified “Enterprise Breach Detection”, simply measuring the time from compromise to detection? How do we compare that to endpoint protection mechanisms that defeat the attack?
- Ultimately, EVC tools get an easy ride because they don’t have to protect the endpoint, yet they increase cost and complexity – they need vast databases that are expensive to acquire and run, and they don’t reduce the workload on IT staff, who still have to flatten compromised endpoints and reinstall Windows while users sit idle, frustrated and unproductive.
- What of user experience? Unlike the world of VDI where a benchmark performance metric such as LoginVSI can be applied consistently across vendor products, in endpoint protection no consistent metrics are available. At Bromium we are adapting LoginVSI to permit us to provide consistent metrics for UX across both native and virtual desktops.
- How much security is enough? Even the most robust endpoint security solution can be compromised, but is there any evidence of successful attacks in the wild? Is there evidence that pen-tests against a solution have been successful? It is a slippery slope to argue theoretically that every security mechanism can be compromised, and that therefore detecting a breach is all that matters.
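The per-task versus whole-system distinction raised in the first bullet above can be made concrete with a deliberately simple back-of-the-envelope model. The only figure taken from the text is the ~20ms micro-VM startup cost; the 5% AV slowdown and the 50-opens-per-day workload are my illustrative assumptions, not measured data:

```python
# Illustrative model (assumed numbers, not vendor benchmarks) contrasting a
# resident scanner's whole-system tax with a per-task isolation tax.

AV_SLOWDOWN = 0.05       # assumption: AV consumes ~5% of all active time
MICRO_VM_TAX_MS = 20.0   # per-micro-VM startup cost cited in the text

def av_overhead_ms(active_ms: float) -> float:
    """Overhead of a resident AV scanner, charged across the whole session."""
    return active_ms * AV_SLOWDOWN

def micro_vm_overhead_ms(untrusted_opens: int) -> float:
    """Overhead charged only when an untrusted site or document is opened."""
    return untrusted_opens * MICRO_VM_TAX_MS

# An 8-hour day of active use with 50 untrusted opens:
session_ms = 8 * 60 * 60 * 1000
print(f"whole-system AV tax: {av_overhead_ms(session_ms) / 1000:.0f} s/day")
print(f"per-task micro-VM tax: {micro_vm_overhead_ms(50) / 1000:.1f} s/day")
```

The point is not the specific numbers but the shape of the measurement: one overhead must be sampled continuously across the session, the other can be counted per untrusted event – which is why a single UX metric struggles to cover both.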
Ultimately I believe we need to assess the cost, per user, per year, to deliver a secure, productive endpoint. We should include the cost of IT personnel to deploy and manage the desktop, apps and endpoint security tools, and to remediate when an attacker succeeds. We should include the cost of user-downtime during remediation and the cost of all network appliances, servers and databases. We need to measure UX in a consistent way, with real workloads, and get real user input.
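A cost-per-user-per-year assessment of that kind might be sketched as follows. Every field and figure here is an illustrative assumption of mine (not real pricing or breach data); the structure simply gathers the cost categories named above – licensing, IT labor, infrastructure, remediation and user downtime:

```python
# Hypothetical sketch of a cost-per-user-per-year model for an endpoint
# security stack. All inputs are illustrative assumptions, not real data.

from dataclasses import dataclass

@dataclass
class EndpointCostModel:
    users: int                # managed users/endpoints
    license_per_user: float   # annual software licensing, per user
    infra_annual: float       # appliances, servers, databases (fleet-wide)
    it_hours_per_user: float  # annual admin hours per user (deploy/manage)
    it_hourly_rate: float
    breaches_per_year: float  # expected successful compromises, fleet-wide
    remediation_hours: float  # IT hours to flatten and reimage one endpoint
    downtime_hours: float     # user hours lost per incident
    user_hourly_cost: float   # loaded cost of one lost user-hour

    def annual_cost_per_user(self) -> float:
        it_cost = self.it_hours_per_user * self.it_hourly_rate
        incident_cost = self.breaches_per_year * (
            self.remediation_hours * self.it_hourly_rate
            + self.downtime_hours * self.user_hourly_cost
        )
        total = (
            self.users * (self.license_per_user + it_cost)
            + self.infra_annual
            + incident_cost
        )
        return total / self.users

# Compare a prevention-focused stack (pricier license, few incidents) with a
# detect-and-remediate stack (cheaper license, heavy infrastructure, many
# incidents). The numbers are invented purely to show the trade-off shape.
prevention = EndpointCostModel(users=1000, license_per_user=60,
                               infra_annual=20_000, it_hours_per_user=0.5,
                               it_hourly_rate=80, breaches_per_year=5,
                               remediation_hours=4, downtime_hours=8,
                               user_hourly_cost=50)
evc = EndpointCostModel(users=1000, license_per_user=30,
                        infra_annual=150_000, it_hours_per_user=1.0,
                        it_hourly_rate=80, breaches_per_year=200,
                        remediation_hours=4, downtime_hours=8,
                        user_hourly_cost=50)
print(f"prevention stack: ${prevention.annual_cost_per_user():.2f}/user/yr")
print(f"EVC stack:        ${evc.annual_cost_per_user():.2f}/user/yr")
```

Even a toy model like this forces the question the metrics debate keeps dodging: remediation and downtime are real costs, and a solution that doesn’t prevent attacks must carry them on its side of the ledger.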
Ideally our criteria should allow us to trade off architectures. For example: we could ask whether users would be more productive and secure with a better PC and robust endpoint security that protects them no matter what they click on, or with a cheaper device and an EVC solution that doesn’t stop attacks, and that requires remediation whenever it is attacked. Ultimately, I believe, the criteria should also allow us to account for the millions of dollars spent on proxies, network IDSs, storage and servers that play a role in endpoint security, and to question their utility in the face of new endpoint solutions.
In summary, Rick has done us a favor by calling out the vendor ecosystem for its use of meaningless epithets. I am optimistic that the security industry can become more thoughtful and engage in meaningful discussion. I fear, however, that we will have no choice but to continue to use such epithets until there is a decent way to empirically measure our claims. I welcome the opportunity to work together to develop a robust set of metrics that will cut through the nonsense of vendor marketing – I have many more thoughts on the topic.