The year was 1987. Bon Jovi’s Livin’ on a Prayer was blasting through every cassette player. Fred Cohen, the man who arguably coined the phrase “computer virus” in his paper, Computer Viruses: Theory and Experiments, concluded that “a program that precisely discerns a virus from any other program by examining its appearance is infeasible”, and “precise detection by behavior is also undecidable”.
Yes, in 1987 we knew that detection, whether by signature or behavior, was the wrong approach to protecting our information and infrastructure from malware. Yet, in the year 2012 we’re still inexplicably listening to Bon Jovi and relying on detection as a means of malware protection.
Let’s set the wayback machine to a time when “write notches” on the jackets of floppy disks were the ultimate form of virus protection. Nothing could be written to the disk while the notch was uncovered. You, the user, made a conscious decision to allow access to the disk. You were the ultimate gatekeeper who actively decided who and what was granted write permission, thereby significantly reducing the likelihood of infection.
This was a sensible approach for the time: A system of relative trust that required you to trust the person who coded or distributed the software, the media itself, and the computing device. In this context, the notion that you could detect malware based on signatures made sense because you believed that the spread of malware would have sufficient trust-based checks along the way to slow the speed of its propagation enough that it could be contained, detected, and removed before it could cause any damage.
This premise was sufficient for the state of technology at the time: First, there weren’t that many places for malware to hide, so it was acceptable to believe that you could detect the delta between good and bad bits. Second, systems weren’t as tightly interconnected as they are today, so the idea of a cure arriving before the malware could strike was a realistic proposition.
There are two dominant mechanisms for detection-based malware protection: signatures and heuristics.
A security tool that relies on signatures is only as good as its signature database, which limits protection to attacks that have already been detected and cataloged. Any attack that takes advantage of a zero-day exploit therefore cannot be blocked by a security method that looks for ghosts of malware past. To further circumvent security tools, malware writers have created a new class of malware with polymorphic (ever-changing) binary signatures, making it much harder to detect.
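The weakness is easy to see in miniature. Below is a minimal sketch of signature-based scanning, assuming a toy in-memory "database" of byte patterns; the signature bytes and the name `Example.A` are invented for illustration, not real malware signatures. A trivially mutated (XOR-encoded) copy of the very same payload no longer matches.

```python
# Toy signature database: byte pattern -> detection name.
# The pattern and name here are hypothetical, for illustration only.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.A",
}

def scan(binary):
    """Return the name of the first matching signature, or None."""
    for pattern, name in SIGNATURES.items():
        if pattern in binary:
            return name
    return None

# A previously seen sample is caught...
assert scan(b"header" + b"\xde\xad\xbe\xef" + b"payload") == "Example.A"

# ...but a "polymorphic" variant that XOR-encodes the same payload
# with a fresh key no longer contains the stored byte pattern.
key = 0x5A
mutated = bytes(b ^ key for b in b"\xde\xad\xbe\xef")
assert scan(b"header" + mutated + b"payload") is None
```

Real scanners are far more sophisticated, of course, but the structural problem is the same: the database can only describe what has already been seen.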
Conversely, heuristic tools look for behavioral signatures – in other words, is the software naughty or nice? This might be a reliable mechanism if application behavior could be blueprinted, but, as applications become more cloud-y, determining intent through behavior becomes impossible. How can you tell if that IP in the cloud is a necessary resource or a command-and-control server? Further, the behavior of targeted malware awaiting a connection to some sub-system can be undetectably benign prior to striking.
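The same limitation can be sketched for heuristics. The following is a minimal behavior-scoring toy, assuming a hand-tuned table of suspicious behaviors and a fixed threshold; the behavior names, weights, and threshold are all invented for illustration. Noisy malware trips the threshold, while targeted malware that merely opens a connection and waits looks just like a cloud-backed application.

```python
# Hypothetical behavior weights and detection threshold,
# invented for illustration of heuristic (behavior-based) scoring.
WEIGHTS = {
    "writes_to_system_dir": 3,
    "disables_updates": 4,
    "opens_remote_connection": 1,  # also normal for cloud apps
    "reads_address_book": 2,
}
THRESHOLD = 5

def risk_score(observed_behaviors):
    """Sum the weights of the behaviors observed at runtime."""
    return sum(WEIGHTS.get(b, 0) for b in observed_behaviors)

def is_suspicious(observed_behaviors):
    return risk_score(observed_behaviors) >= THRESHOLD

# Loud, classic malware behavior trips the threshold...
assert is_suspicious(["writes_to_system_dir", "disables_updates"])

# ...but a dormant, targeted sample that only phones home scores
# the same as a legitimate cloud-connected application.
assert not is_suspicious(["opens_remote_connection"])
```

The sketch makes the dilemma concrete: lower the threshold and every cloud application lights up; raise it and patient, targeted malware stays invisible until it strikes.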
Back in those ol’ codgery days of floppy disks and too much hairspray, when malware moved at the speed of a handshake and 56k bits per second was the speed limit, detection-based protection mechanisms were successful at mitigating potential catastrophes like the Michelangelo and OneHalf viruses (the Stuxnet and Flame of John McAfee’s halcyon days), so we kept chasing signatures until…
It is said that chivalry met its demise at the end of a cannon. Less poetically: don’t bring a sword to a gunfight. And don’t expect a suit of armor to protect you from a cannonball. These may seem obvious. Yet as we witness new paradigms in malware design that are utterly invisible to both signature- and heuristic-based detection, why do we continue to insist on protecting ourselves with these outdated defenses?
More importantly, the intent of malware design has shifted from tomfoolery and one-upmanship to corporate sabotage and nation-state espionage. Advanced attacks come at us from unexpected vectors and exploit unknown vulnerabilities. Just as the Iranians couldn’t protect themselves against Stuxnet because they didn’t know that what it did was possible, our biggest vulnerabilities are our assumptions.
As the technological landscape changes, so must our assumptions. Many battle upsets have been decided because one side changed the rules of combat. Airplanes, for example, added an entirely new dimension to combat. Prior to WWI no one discussed air superiority; since the introduction of the airplane, it has been a primary strategic concern.
This is not about FUD. The intent is not to monger fear, but to remind you of your humanity. You make decisions in your life based on the relative trust you have for the people, devices, and applications you interact with. You don’t look at fingerprint records before you decide if you’re going to share a piece of information with someone else; you rely on a decision-making instinct, a policy you’ve developed for yourself. These same human decision-making traits are integral to you, so they should be innate to your computing devices.
The cardinal rule of humanity: adapt, or perish.
It is time not only to update our armaments but to change the very essence of how we compute; to protect our computing environments without being bogged down by outdated detection methods; to isolate and introspect each task uniquely rather than blocking and detecting monolithically. It is time to implement a system of relative trust. That is our approach here at Bromium. That is why we believe what we are doing is disruptive. We want to make computing devices more trustworthy; protecting those devices is a necessary first step in this transformation.