The Absolute Impossibility of White-listing

I understand whitelisting at a visceral level: I grew up in a society that tried to implement it for humans. South Africa under apartheid was consumed with the task of classifying people by race and then granting (“white”) or denying (“non-white”) access to resources of value: land, jobs, education and civil rights. The system failed because it was evil and indefensible, but it was also extremely expensive to implement in practice, because it required a reliable ability to classify. Who was really “white”? I remember the horror that greeted research findings pointing to substantial intermingling between the oldest and most powerful “white” families and native “black” Africans. This necessitated the creation of additional categories: there were “Colored” (mixed race), “Indian” (immigrants from India and other non-African people of non-European origin) and others. I suspect you are beginning to see the utterly ridiculous nature of this approach, but a final example should do the trick. South Africa loves cricket and rugby, and has long competed on the international scene. But how should the apartheid rulers classify visiting players from India, Pakistan, New Zealand, Australia or the West Indies? They couldn’t be banished to the townships every night, as South African “non-whites” were, so they were granted the status of “honorary whites” for the duration of their stay.

Let’s get back to IT. Application Whitelisting (Gartner calls it Application Control – Neil MacDonald provides a great overview of the vendors) aims to stop any code that is not white-listed from running on an endpoint. I will dig into the tech a bit later, but first let’s acknowledge that, like apartheid, it is founded on a need to centrally control and classify applications that are known-good. Is this possible? Perhaps: there may be thousands of apps within an enterprise, but even with all of their versions they ought to be enumerable and signable. But are they actually known-good, or known non-malicious, or even known-safe? That’s a much harder problem – in fact it is the same problem the AV vendors face in trying to decide whether code is malicious. Ultimately it is undecidable – for the same reasons – and the proof reaches as far back as Gödel.
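
To make the mechanics concrete, here is a minimal sketch – in Python, with a made-up digest and a hypothetical path – of the decision an application-control agent essentially makes at execution time: hash the binary and allow it only if the hash appears on a centrally managed allow-list. Real products layer on code signing, publisher certificates and update workflows, but the core decision is a lookup of this kind.

```python
import hashlib

# Hypothetical, centrally managed allow-list: SHA-256 digests of approved binaries.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example digest
}

def sha256_of(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    """Allow execution only if the binary's hash appears on the allow-list."""
    return sha256_of(path) in APPROVED_HASHES

# Hypothetical enforcement hook:
# if not may_execute("/usr/bin/some_app"):
#     raise PermissionError("blocked: not on the allow-list")
```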

  • Every program has bugs, and many of them are exploited by malware. It’s currently fashionable to pound on Java for its recent vulnerabilities, and many are calling for its removal from the client. But the JVM isn’t malware, it’s just … shoddy software – like all the other software you use. I can easily white-list a massively vulnerable JVM. Would that help to protect me? No: a binary blob of Java that arrives at the client from a web page will run, cannot itself be white-listed, and can trivially compromise the client – including connecting it to a C&C site and, from there, to other computers inside the enterprise to permit exfiltration (see the sketch after this list).
  • There are many examples of so-called “easter eggs” – embedded functionality or even complete programs hidden inside traditional software packages: the complete flight simulator embedded in Excel ’97, “Elvis is not dead” in Lotus Notes 4.0, and many more. A malicious easter egg is obviously easy to create – and impossible to detect.
  • And of course if I were to target your organization, do you think I’d send you an empty email with an attachment titled “virus.exe”?  No, I’d attempt to subvert your mechanisms of trust, delivering malware that is white-listed, at which point all bets are off.
  • The absolute failure of the idea of whitelisting comes from the need to deal with “the outside world”. Every URL points to a different program – perhaps a different program every time it is invoked. Site reputation (white-listing for the web) is useless: reputable sites are used more often, and more successfully, to distribute malware.
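
The Java point above deserves a concrete illustration. Here is a minimal sketch, using Python in place of the JVM purely for brevity and a hypothetical URL as the payload source: the interpreter binary is itself white-listed and passes every hash and signature check, yet it will happily execute whatever bytes arrive at runtime – content the whitelist never saw and never could see.

```python
import urllib.request

# The interpreter running this script is a whitelisted, signed binary that
# application control has already approved.  The allow-list inspected the
# interpreter - it never inspects what the interpreter is asked to run.

# Hypothetical URL standing in for an applet, macro or script fetched by a web page.
UNTRUSTED_URL = "http://example.com/payload.py"

untrusted_code = urllib.request.urlopen(UNTRUSTED_URL).read().decode()

# Nothing below is covered by the whitelist: the only hashable artefact on
# disk is the (approved) interpreter, yet the behaviour is entirely arbitrary.
exec(untrusted_code)
```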

Ultimately, white-listing is no different from – and no better than – black-listing, because it is impossible for either humans or computer systems to reliably distinguish good software from bad.
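
That impossibility is not rhetorical hand-waving. Here is a toy sketch of the classic argument, a variant of the halting-problem construction, in Python with a deliberately silly stand-in classifier naive_is_safe (both names are hypothetical): given any claimed-perfect classifier, one can build a program that asks the classifier about itself and then does the opposite, so no classifier can be both correct and complete.

```python
def naive_is_safe(source):
    """A stand-in 'perfect' classifier; any concrete implementation will do."""
    return "malicious" not in source      # toy heuristic, purely illustrative

def contrarian_source():
    """Construct a program that does the opposite of whatever the classifier predicts."""
    return (
        "if naive_is_safe(SOURCE):\n"
        "    print('doing something malicious')   # classifier said safe, so it lied\n"
        "else:\n"
        "    print('behaving perfectly')          # classifier said unsafe, so it lied\n"
    )

src = contrarian_source()
print("classifier verdict:", "safe" if naive_is_safe(src) else "unsafe")
exec(src, {"naive_is_safe": naive_is_safe, "SOURCE": src})
# Whatever the verdict, the behaviour contradicts it - and the same construction
# works against any classifier, which is the heart of the undecidability argument.
```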

We at Bromium believe that computer systems can be relied on only to ruthlessly enforce simple boundaries of trust – holding the line on the principle of least privilege. This is possible using micro-virtualization and hardware-backed isolation, in concert with attested boot. In other words, the future of security will be founded on hardware protection, eliminating the need to try to decide the undecidable.
