It’s disappointing to see Business Week and others fault Target IT for the awful compromise it suffered last year, not least because of the large number of pending lawsuits. It’s even more disappointing to see the security industry react with something just short of glee. After all, “If it could happen to Target, you could be next!”
Whilst serious mistakes were made at Target – both architecturally and procedurally – it is important to ask how a company with an established security practice, whose systems met regulatory standards, and that had just spent $1.6M on state-of-the-art network security appliances (appliances that actually issued alerts for the malware used in the attack) could be tipped over by relatively unsophisticated attackers.
Blaming Target is a bit like blaming you for getting the flu, even though you’ve had a flu shot.
It’s important to hold the vendor ecosystem accountable for its share of the failures. Instead of “How Target Blew It” let’s try “How the Security Vendors Blew It” by not stopping the attack. After all, that is what they promise to do.
The problem is this: Today’s detection-centric products – whether on the network or on an end point – can’t help. At best they bleat whenever they see something suspicious or known to be bad, and that’s useless:
- Security teams spend a lot of time chasing down alerts for ineffectual attacks – those for which an alarm is issued, but that wouldn’t execute given the current patch level of the end point. There is a vast amount of known-bad out there. Microsoft reported that in 2012 and 2013, roughly 18% of end points (those running Microsoft end point protection, so the statistic is probably consumer-biased) encountered malware of some sort each quarter. In a Target-sized enterprise with 100,000 PCs, as many as 18,000 might encounter something scary each quarter. Even if IT detects all of these events, each one represents a potential attack that requires human investigation.
Even if IT can keep up, it is difficult to know if malware actually executed, or how bad the attack might be, so there’s no choice but to do open heart surgery on the end point, and (if it’s a desktop) the user spends the afternoon drinking tea instead of doing useful work. Ultimately, the security team wastes time that could have been better spent on actual threats – those to which the enterprise is actually vulnerable.
- The Target security team appears to have been worried about this and about the consequences of false alerts because they turned off the “automatic quarantine” feature of their network IDS, and ignored early evidence that an attack was under way. A mistake for sure, but my guess is that we can’t blame them – if they had to sideline a user or close a store every time an alert was triggered nobody would get any work done and the business would grind to a halt.
- When a genuine, new attack occurs to which the enterprise is vulnerable, it may well go undetected, because detection can never be perfect. And if a real attack is detected, it can easily become lost in the sea of alerts. Perhaps this is what happened at Target. Looking again at the Microsoft data, although 18% of PCs encounter bad stuff each quarter, remediation was needed on less than 1%. In other words, detection-centric security tools probably generate more than 10x the alarms they should.
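The scale of the noise problem can be made concrete with a back-of-the-envelope calculation. This is a sketch using the Microsoft figures cited above; the 100,000-PC fleet size is the same illustrative assumption used earlier, not a measured number:

```python
# Back-of-the-envelope alert volume, using the Microsoft 2012-2013 figures
# cited above. The fleet size is an illustrative assumption.

fleet_size = 100_000        # end points in a Target-sized enterprise
encounter_rate = 0.18       # fraction encountering malware per quarter
remediation_rate = 0.01     # fraction actually needing remediation (< 1%)

alerts_per_quarter = fleet_size * encounter_rate
real_incidents = fleet_size * remediation_rate   # upper bound
noise_ratio = alerts_per_quarter / real_incidents

print(f"Alerts per quarter:  {alerts_per_quarter:,.0f}")
print(f"Real incidents:      {real_incidents:,.0f} (at most)")
print(f"Alerts per incident: {noise_ratio:.0f}x")
```

At these rates the SOC sees on the order of 18,000 alerts a quarter for at most 1,000 incidents that matter – every alarm is roughly 18x more likely to be noise than signal.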
Wouldn’t you rather know about actual attacks and vulnerabilities first?
I first started to think about this when interviewing sales folk from network security vendors. Time after time I heard this: “All you have to do is plug in the device on a SPAN port and watch the bad stuff coming in, then hand over a purchase order for the customer to sign.”
It is crucial that we consider the entire security life-cycle cost, and not “react” out of fear when a vendor presents a scary looking list of red alerts. Their tools may actually increase both your costs and your risk of compromise by wasting valuable SOC resources on false alarms. The state of the art in network security will generate lots of alarms for known-bad. It might even alert you about “suspicious” traffic. But alarms do not equate to security, and sophisticated malware will still breeze on through. And remember that all bets are off if your users are mobile, because they cannot be protected by your network perimeter.
There is a vastly better way forward
There is lots of malware, and apparently it’s getting worse. Patching can never keep up. Nor can detection. Or humans for that matter. We need to block attacks automatically – without signatures, and without doubt. We need to eliminate false alarms and deliver accurate intelligence for attacks that would actually execute on the end point. And we need to do this while empowering users to drive the business forward. The Bromium architecture offers the first ever approach that turns the received wisdom of the security industry on its head:
- Malware that hits an end point (the user clicks on a bad site, doc, or file) can at worst compromise a micro-VM, without affecting the end point itself. The user is maximally empowered, fully protected, and need not even be aware of the attack.
- If the attack does execute, that’s because the OS and its apps are actually vulnerable to it. That’s useful to know.
- If malware executes, it can do arbitrary damage to the micro-VM that isolates it, but it can’t persist, steal data or access the enterprise network.
- In contrast to the traditional detection-centric approach that has to try to detect malware before it wreaks havoc, Bromium has the relative luxury of relying on the CPU for hardware isolation. So we can wait until malware provably compromises the micro-VM in which it is isolated before even having to decide if it is bad or not. Meanwhile Bromium LAVA will have quietly run the equivalent of a “task-centric black box flight recorder”, tracking every change made by the malware and capturing its payload, C&C network traffic, registry changes, privilege escalations and much more.
- When malware provably compromises a micro-VM running the actual OS and software of the end point it cannot steal any data or access the corporate network. The malware can’t even capture keystrokes or screen-scrape the end point. LAVA issues real-time, accurate STIX threat reports, together with the captured malware, for actual attacks. It gives you everything you need to know: effectively a signature to block malware that will affect specific end points given their current patch level. This is definitely better than having your SOC team scurrying about chasing false alarms.
- Finally, the end point will self-remediate, automatically discarding all changes made by the task. No need for $500/hr remediation services sold by vendors who know their products can’t stop the attack.
It’s time to cut the CISO some slack, and to begin to ask security vendors some hard questions: “Why can’t your product safely block only real attacks?” or, “Given that it can’t reliably identify or block actual attacks, why should I buy your product?”