We are engaged on a Quest for trustworthy computing. From the Trojan War we have learned that a trustworthy device must protect itself – and all of its owner’s interests, personal and corporate – at all times, even though it is vulnerable and must enter domains whose trustworthiness it cannot fathom.
If, like me, you grew up translating the classics, you will recognize my title from Virgil’s Aeneid: “I fear the Greeks even when they bear gifts”. The gift, of course, was the original Trojan Horse. The suspicious chap who uttered these words was Laocoön, high priest of Troy. As he finished, a serpent emerged from the sea and dragged him and his sons to their deaths (it’s tough if the gods are against you).
Today’s world is not much different from the ancient one: Can you tell the difference between an email from an attacker and one from your boss? Can your AV vendor tell whether an attachment contains an attack?
Symantec reported generating over 10 million new attack signatures in 2010, and McAfee has reported that it now sees about 75,000 unique malware variants per day – a number that is growing rapidly. Still more alarming is the fact that the large majority of companies have no security practice whatsoever. Only large enterprises and endpoint-security vendors have high priests.
- Moore’s Law and “the cloud” have delivered vastly more CPU to attackers for mutating attacks than to detection capabilities on the endpoint – all relative to the dominant constant, the latency of delivering new signatures.
- Putting “security in the cloud” gives security vendors benefits of scale and visibility, but there’s no way they can succeed in a world of polymorphic malware. I quote from an excellent paper: “The challenge of signature-based detection is to model a space on the order of O(2^(8n)) signatures to catch attacks hidden by polymorphism. To cover n=30 byte decoders requires O(2^240) potential signatures; for comparison there exist an estimated 10^80 atoms in the universe.” (An n-byte decoder has 8n bits, so a 30-byte decoder can take any of 2^240 possible values.)
- We knew this already: the Halting Problem, restated, says that no program can reliably decide whether an arbitrary other program is good or bad.
- Given the above, heuristic anomaly detection seems attractive, but the challenge in detector design is to balance the rates of False Positives and False Negatives. False Positives scream “watch out!” more often than necessary, frustrating users and administrators and training them to ignore real threats. False Negatives occur when the detector gets it wrong (it will), letting the bad guys in. Understanding the subtleties of Receiver Operating Characteristics well enough to tune a detector is the domain of high priests, like Laocoön.
- Ultimately, detectors will get it wrong. No detector is perfect, and any mechanism that relies on detection in order to block an attack will eventually fail.
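The False Positive/False Negative tradeoff above can be made concrete with a toy sketch. This is not any vendor’s real engine: the anomaly scores below are invented stand-ins for what a heuristic detector might emit for benign and malicious samples, and the threshold sweep shows why tuning is a priestly art – lowering one error rate raises the other.

```python
# Hypothetical anomaly scores in [0, 1]; higher means "more suspicious".
# These lists are illustrative data, not measurements.
benign_scores = [0.05, 0.10, 0.20, 0.30, 0.40, 0.55, 0.60, 0.70]
malicious_scores = [0.35, 0.50, 0.65, 0.75, 0.80, 0.90, 0.95, 0.99]

def rates(threshold):
    """False-positive and false-negative rates if we alert at >= threshold."""
    fp = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    fn = sum(s < threshold for s in malicious_scores) / len(malicious_scores)
    return fp, fn

# Sweep the alert threshold: each setting is one point on the ROC curve.
for t in (0.3, 0.5, 0.7):
    fp, fn = rates(t)
    print(f"threshold={t:.1f}  FP rate={fp:.3f}  FN rate={fn:.3f}")
```

A strict threshold quiets the false alarms but lets more attacks through; a loose one catches more attacks while burying administrators in noise. There is no setting that drives both rates to zero when the score distributions overlap – which, for polymorphic malware, they do.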
I recently met the CISO of a major international security organization. He told me they now require “two of everything, from different vendors”. Two firewalls, two web gateways… You get it: double the cost. He threw his hands in the air when I asked “Why two? How much more secure are you?” Would two high priests have saved Troy?
We have lost the battle to deliver resilient infrastructure. The venerable “.dat” file is over 100MB in size, and your PCs are reduced to a crawl processing useless signatures. Smart users figure out how to disable endpoint protection, and the rest curse you for the unproductive desktop you make them use, before dragging enterprise data onto a personal machine to get their work done.
We’ve learned what we can from Troy. On – toward our destination – Byzantium!