Earlier this month, the Wall Street Journal published a blog post, “CIOs Name Their Top 5 Strategic Priorities,” which collected recommendations from a variety of technology leaders at a CIO Network event. Author Steven Norton notes:
While proposals ran the gamut, consensus seemed to form around two major themes: cybersecurity, and delivering change through effective communication with the rest of the business.
Quickly, here are the top five strategic priorities, according to the report:
- Make security everyone’s business.
- Cyber risk = business risk.
- Be the change agent.
- Have a business-centric vision.
- Anticipate a “cyber 9/11” event.
The full report also highlights “Cybersecurity in the Wake of Sony.” Anecdotally, every CIO in attendance at the CIO Network event – save one – admitted that their organization had been hacked. Cyber attacks have become ubiquitous.
Interestingly, “44 percent of CIOs said their companies now tackle big data projects ‘all the time.’” Of course, as recent Bromium research has indicated, security solutions can often mutate into big data projects, as information security professionals are buried in an avalanche of security alerts.
Back to the topic at hand, the CIO Network event invited two security vendors to speak, who seem to be advocating an approach to information security based on triage rather than prevention. Of course, this should come as no surprise, considering that the products they sell are incapable of completely preventing attacks.
The suggestion that organizations need to be in the position to “detect when something bad happens” is predicated on a broken model of information security. A recent Ponemon report determined that only four percent of security alerts are investigated.
This represents a huge security gap. What is the value of security alerts if information security professionals are not taking the time to investigate and respond to them? Just ask Target (ironically “protected” by one of the vendors speaking at this event). There is no value!
In many cases, detection-based solutions are trivial to evade; in other cases, information security professionals will be unable to respond to the alerts.
It is unconscionable that the security industry continues to push a broken model of detection and response. Of course, it is by selling these broken security solutions that these vendors can return to sell their forensic consulting services after a breach.
If President Obama is building an “Internet Cathedral,” then many of these information security vendors are guilty of selling nothing more than cybersecurity “indulgences.” If, as one vendor described it, the Sony hack was truly unlike anything seen in 17 years, then why did they also describe the attack as not particularly sophisticated? Does incident response now include paying these vendors to make excuses for your organization, even though their products would not have protected against the attack?
Early adopters of information security have already realized that detection and response are failing. Yes, they remain part of the information security model, but a paradigm shift is occurring to implement proactive protection. Protection can be achieved by rethinking some fundamental assumptions about information security. Instead of trying to detect everything that is bad, security should protect everything that is good. Bromium achieves this through micro-virtualization, which isolates all Internet content in secure containers to prevent compromise.
As February comes to a close, we have already seen critical patches from Adobe and Microsoft. Even more concerning, Microsoft has not yet patched a recently disclosed Internet Explorer zero-day. For better or worse, Google’s “Project Zero” is putting pressure on vendors like Microsoft to patch reported vulnerabilities within 90 days, after which the details are publicly disclosed – a policy that has been a source of public friction between the companies. With this forced public disclosure, there is now a risk of zero-day exposure windows stretching into weeks or months.
It is easy to feel sympathy for vendors like Adobe and Microsoft, who serve as the public face for the challenges of patching zero-day vulnerabilities. These organizations work ceaselessly and thanklessly to fix, test and deploy patches for vulnerable applications on an increasingly shortened timeframe. Still, it seems that as soon as one vulnerability is patched, another is reported, like a Sisyphean game of Whack-a-Mole.
Of course, it does not feel like a game for information security teams. The stakes are quite real, but present a complicated dilemma. Even the most Draconian IT teams could not suggest prohibiting the use of these vulnerable applications that are the cornerstone of our modern productivity, yet even the most obtuse information security professional realizes the risk they present.
When critical security vulnerabilities exist in nearly every common application, from document readers to Web browsers, then it should come as no surprise that the frequency of cyber attacks seems to be increasing.
Beyond unpatched vulnerabilities, security patches present their own set of problems. It is dangerous to patch without testing, because deploying a patch that breaks your systems could do more damage than the attack it was meant to prevent. Security patches have never existed in a vacuum.
Where does this leave organizations? There is a risk if you don’t have the patch, but there is also a risk in deploying a patch that you didn’t test. Google’s “Project Zero” is pushing vendors to create patches, but could this pressure create more risk? How can anyone guarantee that Microsoft can fix a bug within 90 days without introducing a new bug or breaking the software?
These conflicting challenges represent a real opportunity for Bromium to take the pressure off of IT teams and software vendors. By leveraging micro-virtualization to isolate vulnerable software, organizations can remain protected while they take the time to test critical patches before they deploy.
At last week’s Cyber Security Summit at Stanford, President Obama sought to reset his administration’s relationship with a tech community alienated by an endless stream of disclosures of the government’s penetration of technology companies to achieve its surveillance goals. He appealed for both sides to unite to build an “Internet Cathedral” that will protect our online society. It’s a nice idea – but who are the priests?
The two sides seem diametrically opposed: The tech sector is annoyed and distrustful, and committed to delivering secure services and products that meet the needs of customers world-wide. The government is necessarily concerned with US-centric notions of security and privacy, and online surveillance is a tool that serves its needs. But if customers suspect that US tech vendors are complicit in US government surveillance it could hit their bottom lines – which represent a substantial component (~8%) of US GDP. And the government isn’t in the driving seat: our online future is clearly in the hands of the tech giants and consumers, not the government.
How can we resolve this? Apple CEO Tim Cook addressed the meeting before the President, making an impassioned commitment on the part of Apple to developing products that protect individuals and their information – rejecting technology that permits a government or commercial entity (a nice dig at Google) to surreptitiously gather data. Cook was the only major tech CEO to attend – Microsoft, Google and Yahoo execs turned down the invitation due to their continued frustration at naive and poorly thought-through moves by the government. Cook stressed that Apple succeeds because it delivers what consumers want – secure devices and services that don’t expose them to unwanted surveillance. His message was doubly impactful: It had the cred of a tech CEO whose company just delivered the biggest ever quarterly earnings of any US company, and the passion of a courageous gay man firmly committed to individual privacy.
Then there was the President. His address started out in a folksy way, with nice compliments to Stanford and innovation in the Bay Area. Clearly aware of the tension between his administration and the tech community, he set out to build a middle ground. The pillar of his address was the idea that we as a society are collectively building an “Internet Cathedral” that must protect our online society and allow us to build robust online institutions. He highlighted the relative infancy of the web – at a mere 28 years – compared to the centuries-old cathedrals in Europe, and reminded us that the latter were enhanced over many generations. Pointing out that the foundations of today’s Internet Cathedral are vulnerable, he called on the tech community to build more secure infrastructure and on educators to train the next generation to be better builders. Technology innovation will enable us to perfect the Internet Cathedral just as flying buttresses, fan vaulted ceilings, and ornate windows did in years past. It’s a seductive analogy.
Obama’s idea that the government and the tech sectors should unite to develop a stronger Internet Cathedral is a fine one, but it is missing (and Obama omitted) any mention of its priests. He signed an executive order to encourage better sharing of threat information between industry and the government, but its implementation will be fraught with issues of trust. If the government sees the tech sector as architects and builders, but reserves for itself the role of appointing the priests, then it is difficult to see how the two sides can agree. And though his appeal is elegant and non-partisan, it is the architects and builders who will decide how the Internet Cathedral is built. At the end of the day, a secure foundation is precisely that – a secure foundation for all Internet users, and not just the US government.
No government should be foolish enough to believe that requiring backdoors in the technology foundations upon which its society and its economy rely will lead to a more secure future for its people. Governments that subvert the tech used by their citizens, or that fail to fully embrace a secure-first approach to technology inevitably leave their citizens less secure.
In January 2015, Bromium conducted a survey of more than 100 information security professionals, focused on the greatest challenges and risks facing their organizations today. The results indicate that end users continue to remain the greatest security risk, thanks to their tendency to click on suspicious and malicious e-mail and URLs.
Bromium published similar research in June 2014, which determined that 72 percent of information security professionals believe end users are their biggest security headache. Today, 79 percent of information security professionals believe that end users are their biggest security headache.
Additionally, the survey highlights the operational challenges information security professionals face as they struggle to manage multiple point solutions, to respond to the security alerts generated by their users on a daily basis, and to detect and remediate compromised endpoints.
It may seem obvious, but nearly 48 percent of information security professionals believe that having to manage multiple point solutions, many of which are redundant, introduces the most cost and complexity into their security. Logically, more solutions cost more money and take more time to manage. Unfortunately, previous Bromium research has demonstrated that deploying multiple solutions—a “defense-in-depth” architecture—may still leave organizations vulnerable to attack if they are based on the same foundation of traditional pattern-matching or detection.
However, one conclusion we may draw from these responses is that information security professionals could reduce the cost and complexity of their security programs by consolidating point solutions and by finding new ways to automate or eliminate time-consuming processes, such as responding to security alerts, detecting and remediating compromised endpoints, and testing and deploying urgent patches.
In fact, approximately 20 percent of information security professionals believe that responding to security alerts introduces the most cost and complexity into their security program, while an additional 20 percent believe it is detecting and remediating compromised endpoints. The results suggest that the manual processes that emerge from managing detection-based solutions, such as antivirus or intrusion detection, are a source of considerable frustration for a significant number of information security professionals.
Read the full report, “Endpoint Protection: Attitudes and Trends 2015.”
When you walk the floors of industry trade shows and speak with security vendors, one of the most prevalent endpoint security myths is “assume you will be compromised.” Of course, this is a fallacy, but as a result of this mantra, the security industry has become obsessed with detection, at the cost of protection.
Unfortunately, a security model based on detection has many shortcomings. Take the Target data breach, for example. By all accounts, Target had deployed technology that did detect the attacks against it, yet did nothing to remediate the situation.
The reason this myth persists is because “assume you will be compromised” is a self-fulfilling prophecy. If you believe you will be compromised then you will make investments in detection and remediation, instead of considering more effective forms of endpoint protection. It is a vicious cycle: assume compromise, invest in detection, compromise occurs because of inadequate protection, threats are detected, incorrect beliefs are validated, repeat into the next budget cycle.
As a result, organizations believe that deploying a multitude of security solutions enables “Defense in Depth.” Bromium Labs has taken to calling this “Layers on Layers” because LOL makes hackers “laugh out loud.” It is important to note that each layer has its own set of limitations and if these limitations are shared across layers, then the number of layers doesn’t matter anymore. In the recent example from Bromium Labs, the focus was exploiting the kernel as that was the common weak link across all the widely used endpoint legacy technologies.
Common endpoint security solutions focus on sandboxes, antivirus (AV), host-based intrusion prevention systems (HIPS), exploit mitigation (EMET), and hardware-assisted security (SMEP), yet a single public exploit for a Windows kernel vulnerability bypasses all of these solutions, even if they are stacked one upon another.
This highlights the weakness of a “defense in depth” architecture. The simultaneous deployment of multiple solutions sharing the same weakness is not satisfactory. The issue is far from theoretical. Modern malware (e.g. TDL4) is already using this particular exploit to gain privileges. Windows kernel vulnerabilities are frequent, and this is not going to change any time soon – we have to live with them and be able to defend against them.
Sophisticated attacks present a significant hurdle for endpoint protection. Sophisticated attacks may incorporate malicious Web sites, email or documents that have been developed to evade detection. Therefore, even diligent security teams may not be alerted to a compromise. This is the shortcoming when you “assume compromise.”
Additionally, emerging technology trends, such as cloud computing and mobile workforces, are relocating corporate assets beyond the corporate perimeter, increasing the need for effective endpoint protection. When a mobile user connects to an untrusted network, it is imperative that attacks don’t slip through the cracks.
Beyond the sophistication of attacks, there is also a balance between security and operations. Primarily, operations is concerned with ensuring that applications run, while security is concerned with compensating for vulnerable technology. For example, an organization may have developed its own legacy application that uses outdated and unpatched versions of Java to run.
Therefore, an effective endpoint protection solution must securely enable both legacy applications and new computing models, defending them from sophisticated new attacks without breaking them. Protection is not enough if we are not also maintaining a great user experience.
The reason it seems like endpoint security is a losing battle is because the current security model is broken. For example, the NIST Cybersecurity Framework organizes five basic cybersecurity functions: identify, protect, detect, respond and recover. Three-fifths of this framework (detect, respond and recover) assume compromise will occur. Similarly, industry analysts promote an advanced threat protection model of prevention, detection and remediation.
For the past two decades, threat detection has been a Band-Aid on a bullet wound. The good news is that it seems the security industry is finally starting to realize that reactive solutions, such as anti-virus, are incapable of detecting and protecting against unknown threats. Even Symantec has admitted that anti-virus is dead.
Threat detection systems rely on signatures to catch cyber-attacks, but the more signatures an organization has enabled, the more performance takes a hit. Organizations face a dilemma, balancing performance and security, which typically results in partial coverage as some signatures are disabled to maintain performance.
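The performance trade-off is easy to see in miniature. The sketch below is a deliberately naive toy, not how any production detection system is implemented, and every name and number in it is illustrative: worst-case matching cost grows with the number of enabled signatures, which is exactly the pressure that leads teams to disable some of them.

```python
# Toy signature matcher: scan work grows linearly with the number of
# enabled signatures. All signatures and sizes here are made up.
SIGNATURES = [b"sig-%04d-payload" % i for i in range(10_000)]

def scan(data: bytes, signatures) -> bool:
    """Return True if any known signature appears in the data."""
    return any(sig in data for sig in signatures)

benign = b"GET /index.html HTTP/1.1\r\n" * 100

print(scan(benign, SIGNATURES))                      # benign traffic: no hit
print(scan(benign + b"sig-9999-payload", SIGNATURES))  # known threat: hit

# Disabling half the signatures halves the worst-case work -- and also
# the coverage, so this same threat now slips through undetected.
print(scan(benign + b"sig-9999-payload", SIGNATURES[:5_000]))
```

The last call returning a miss is the partial-coverage dilemma in code: every signature turned off for performance is a threat the system can no longer see.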
In order to stay ahead of unknown threats, organizations must adopt an architectural model that is proactive. For example, micro-virtualization delivers hardware-enforced isolation, separating user tasks from one another and protecting the system from any attempted changes made by malware.
A robust endpoint protection solution should address the hurdles we discussed earlier, securely enabling legacy applications and new technology initiatives while defending them from sophisticated new attacks. We can conclude that detection has failed because it is a reactive defense that attackers have learned to evade. Ironically, these reactive defenses, such as signature-based detection, actually require quite a lot of activity, with their constant updates and new signatures.
Instead, we should be considering endpoint protection solutions that are passive and proactive. One example is to deploy hardware-isolated micro-virtualization, which provides a secure, isolated container for each task a user performs on an untrusted network or document. Micro-virtualization can protect against known and unknown threats without the need for constant signatures and updates. This approach to containerization on the endpoint also enables superior introspection with real-time threat intelligence, which can provide insight into attempted attacks that can be fed into other security solutions.
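The isolate-and-discard model can be caricatured in a few lines. The following Python sketch is purely conceptual – real micro-virtualization uses hardware CPU features, not dictionaries – but it illustrates the idea: each untrusted task runs against a disposable copy of system state, tampering is observed as threat intelligence, and the copy is thrown away along with any compromise.

```python
import copy

# Conceptual sketch only (NOT an actual micro-virtualization
# implementation): each untrusted task gets a throwaway copy of the
# "golden" system state, so malware changes never touch the host.
GOLDEN_IMAGE = {"hosts_file": "clean", "registry": "clean"}

def run_isolated(task):
    micro_vm = copy.deepcopy(GOLDEN_IMAGE)   # fresh container per task
    task(micro_vm)                           # the task may be malicious
    tampered = {k for k in micro_vm if micro_vm[k] != GOLDEN_IMAGE[k]}
    if tampered:
        # Introspection: the attempted attack becomes threat intelligence.
        print(f"threat intelligence: task tampered with {sorted(tampered)}")
    # The container, and any compromise inside it, is discarded here.

# A hypothetical malicious task that tries to persist in the registry.
run_isolated(lambda vm: vm.update(registry="malware-persistence"))
print(GOLDEN_IMAGE["registry"])  # still "clean": the host was never touched
```

The design point is that nothing needed a signature: the malicious change was contained and observed without the system ever having to recognize the threat in advance.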
Finally, endpoint protection must maintain end-user productivity. It cannot negatively impact performance or the user experience, or else users will work to circumvent its protection. Ideally, the best solutions are invisible and autonomous to end users. They are non-intrusive, they do not negatively impact workflows and they avoid frequent updates.
The impact of recent cyber attacks will be felt for years to come, perhaps having risen to a new level of hurt with the Target and Sony attacks. With a Fortune 500 CEO ousted and a Hollywood movie held hostage, cyber-security is on the minds of chief executives and board members as they gather in their first meetings of 2015. How can a massive organization with complex systems and networks prevent itself from becoming the next Target or Sony? Is there any hope?
Yes, there is hope! However, we have to change the economics of cyber attacks.
Cyber-Security is an Economic Game
In The Art of War, Sun Tzu discusses the economic considerations of war, front and center. The business of cyber-security is also an economic game.
Cyber-crime is red-hot because it makes great economic sense to the adversary. The investment of time and money required for cyber criminals to breach a billion dollar organization is infinitesimally small compared to the payoff. A team of two or three hackers working together for a few weeks with a few thousand dollars of black market software is often enough to breach a Fortune 500.
This reality confounds CISOs who already spend tens of millions of dollars every year on IT security. Your IT security investments are not giving you any leverage!
Antiquated Defenses and Vast Attack Surfaces
Current security architectures were designed in a bygone era when there was a useful notion of an internal network inside the corporations’ buildings, and the Internet outside. The firewall was invented to create a narrow isolating choke point between internal networks and the Internet allowing only a few controlled interactions. All was well!
In today’s world of Mobile, Social and Cloud, the situation is quite different. Your systems routinely run computer programs written by persons unknown. While you may not realize it, each Internet web page is a computer program, as is every email attachment, and even web advertisements. Just about any Internet-connected “rectangle” that you see on an electronic screen is a program. All these external programs are potentially malicious, and can compromise you.
A single bug in over eighty million lines of computer software, in Windows or Mac OS, or in any app, e.g., Office, Java, Adobe, combined with an inevitable mis-click by an unsuspecting employee can compromise your enterprise. You have a massive attack surface, literally countless places for the bad guys to get in! The endpoint is your unguarded front door, where you are being attacked continuously as your employees click away in offices, homes, coffee shops, and hotel rooms.
The endpoint is the weakest economic link in your defenses. Once an endpoint is compromised, the adversary can remotely control the infected computer with the same privileges on your network as one of your legitimate users.
Backfire from Next-Gen Security Investments
Let’s consider the economics of the next-generation firewall. First, the next-gen firewall does absolutely nothing for your riskiest mobile users. Moreover, modern malware tries hard to avoid misbehaving while it is still within your network pipes before reaching an endpoint. The firewall, grasping at straws, generates a large daily stream of seemingly suspicious events. These notifications have to be analyzed and chased down by additional investments in event management systems, and security analysts. The overwhelming majority of these events turn out to be false positives, i.e., wasted money.
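Some back-of-envelope arithmetic makes the point. Every input below is an assumption chosen for illustration – the alert volume, triage time, and team size are not drawn from any cited study – so the conclusion is the shape of the math, not the exact figures.

```python
# Illustrative alert-triage economics: all inputs are assumptions.
alerts_per_day = 10_000     # assumed daily event stream from firewall/SIEM
minutes_per_triage = 15     # assumed time for an analyst to chase one alert
analysts = 5                # assumed size of the security-operations team

analyst_minutes_per_day = analysts * 8 * 60              # 2,400 minutes
investigable = analyst_minutes_per_day // minutes_per_triage
fraction = investigable / alerts_per_day

print(f"{investigable} alerts triaged per day ({fraction:.1%} of the stream)")
# With these inputs, over 98% of alerts are never investigated at all --
# and if most alerts are false positives, most triage minutes are wasted money.
```

Scale the team up tenfold and the math barely moves, which is the reverse leverage described above: the defender pays by the alert while the attacker pays nothing to generate them.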
The bad guys also use this as a weapon, by cranking up the volume of spurious traffic known to generate false positives, while the real attack is carried out elsewhere. This is reverse leverage.
Ultimately, the next-gen firewall becomes a bottleneck, a choke point, unable to keep up with your growing traffic. You have to spend more money on additional hardware that generates even more false-positive events. Vicious cycle.
A New Hope
There is hope. Innovation will resolve this crisis.
You cannot afford to keep doing more of what you have done in the past, or more incremental versions of this stuff. You have to look beyond Security 1.0. In order to level the playing field, organizations must invest in a strategy that will directly impact the economic costs to malicious actors.
Close your eyes and visualize a heat map of risk for your enterprise. In this picture, every one of your endpoints, enterprise owned or employee owned, client or server, on-premise or cloud hosted, is a little red dot. The size and color intensity of the dot is proportional to the amount of information on the endpoint, and the nature and frequency of Internet interactions that each endpoint has. This is the battlefield!
You are looking for products that reduce your exposure. Your investments must protect your information from unknown Internet programs that run on your endpoints, while still supporting such programs seamlessly. This isolation technology must be simple and robust, like disposable gloves in a hospital. It must be designed such that it costs the adversary significant time and money to try to break through. Ideally, you must also be able to fool the adversary into thinking that they have succeeded, while gathering intelligence about the nature of the attack. Techniques like Bromium’s micro-virtualization let you do this.
You will also need new products that let you continuously visualize and monitor your risk at the Internet endpoint level, and provide end-to-end encryption and robust identity authentication. Your compliance, device management, and insider-threat monitoring systems must also work within this framework.
Plan Ahead or Fall Behind
A very senior executive, i.e., you, Mr. CEO, is going to have to micro-manage the plan to mitigate the risk of cyber-attacks. This is a time of great risk to our organizations, so leaders must follow their own business instincts.
How will you figure out the products that will make up your new security architecture? This is quite straightforward: just ask Marc Andreessen, the venture capitalist, or Phil Venables of Goldman Sachs for a list of 5-10 startup companies with innovative Security 2.0 products. Ignore any company that is not run by its founders. You must partner with people with long-term goals aligned with your economic victory against the cyber-adversary, and who are thinking beyond just a quick transaction.
Ask the startup leaders to come and pitch their solutions to you personally. Have them convince you of the efficacy of their approach. If you don’t understand what is being said, or if you don’t see how the proposed solution raises the economic costs to the adversary by orders of magnitude, it is not worth your while. Select what you truly believe in, and then help the startups help you!
Unless you have one already, hire a top-notch CISO as a partner for this project. For suggestions on whom to hire, ask any one of Jim Routh (Aetna), Tim Dawson (JP Morgan Chase), Roland Cloutier (ADP), John Zepper (US Department of Energy), Tim McKnight (GE), Sunil Seshadri (VISA), Mark Morrison (State Street), or Bob Bigman (former CISO of the CIA). These are some of the modern-day Knights of the Round Table in the realm of cyber-security, and they understand the economic principles underlying this fight.
While you transform your security infrastructure to turn the economic odds back against the adversary, your company might look like an “Under Construction” zone. Some users will complain loudly, and you will have to make an effort to have the business running smoothly while the transformation is in play. Nothing worth doing is ever easy, and you must be prepared to see this through. The risk of inaction is worse.
Update: Breaking News: ICANN targeted in a spear phishing attack
Information security becomes increasingly important as the frequency of cyber attacks increases. From Target to Sony, the past 12 months have played host to the largest volume of attacks in recent memory. We are witnessing the rise of the targeted attack, which is frequently accompanied by spear phishing campaigns.
Phishing is not new. I recall receiving suspicious emails and messages on my America Online account in the 1990s, warning that my account would be suspended unless I replied to provide my password. Similar scams persist for online banking, eBay and PayPal. Cyber criminals show no signs of abandoning phishing because it continues to work.
In 2010, Google announced that it had been compromised by spear phishing during “Operation Aurora.” Likewise, RSA fell victim to spear phishing in 2011. More recently, the Target breach in 2013 can be traced back to a spear phishing email. It seems that the easiest way to infect a major enterprise is to ask an employee to click on an infected file.
Spear phishing is insidious because it preys upon the weakest link in information security: users. Social engineering entices users to click on malicious documents and URLs by suggesting they may be related to work, such as budgets, invoices or shipping notifications. Truly advanced attacks may leverage social networking sites, such as LinkedIn, to customize spear phishing emails.
Ultimately, the goal of these spear phishing attacks is to execute undetectable malware, which evades traditional security solutions, such as antivirus. Once the initial endpoint is compromised, the attack can proliferate across the network before exfiltrating data to command and control servers.
This Thursday, December 18, Bromium will be hosting a Webinar, “The Tip of the Spear: Defeating Spear-Phishing.” Join Bromium Sr. Director of Products Bill Gardner to learn:
- Why cybercriminals are ramping up their spear-phishing attacks
- The most common methods used in these attacks to ‘get the click’
- A revolutionary new approach that can actually counter these attacks and secure both your endpoint and your network
Register today: http://learn.bromium.com/webr-tip-of-the-spear.html