17. January 2017 · Evolution of Network Intrusion Detection · Categories: Network Security

Traditional Network Intrusion Detection Systems (NIDS), which became popular in the late 1990s, still have limited security efficacy. I will briefly discuss the issues limiting NIDS effectiveness, attempts at improvements that provided only minimal incremental advances, the underlying design flaw, and a new approach that shows a lot of promise.

The initial problem with signature-based NIDS was “tuning.” When they are tuned too loosely, they generate too many false positives. When tuned too tightly, false negatives become a problem. Furthermore, most organizations don’t have the resources for continual tuning.

In the early 2000s, Security Information and Event Management (SIEM) systems were developed, in part, to address this issue. The idea was to leave the NIDS loosely tuned, and let the SIEM’s analytics figure out what’s real and what’s not by correlating with other log sources and contextual information. Unfortunately, SIEMs still generate too many false positives, and can only alert on predefined patterns of attack.

Over the next 15+ years, there have been several innovations in NIDS and network security. These include (1) using operating system and application data to reduce false positives and reduce tuning efforts, (2) complementing NIDS with sandboxed file detonation, (3) adding machine learning based static file analysis, and (4) reducing the network attack surface with next generation firewalls.

There has also been a school of thought claiming anomaly detection is the answer to complement or even replace signature-based NIDS. Different statistical approaches have been tried over the years, the latest being various types of machine learning.

However, we are still seeing far too many successful cyber attacks that take weeks and even months to detect. The question is why?

The underlying design flaw for the last 20 years in virtually all NIDS is that they are restricted to examining an individual packet at line speed, deciding if it’s good or bad, and then going on to the next packet. There are some NIDS that are session oriented rather than stream oriented, but the decision-making time frame is still line speed. If malware or a protocol violation is detected, the NIDS generates an alert. Typically the alert is sent to a log repository or SIEM for further analysis.

This approach means that network security experts have been limiting themselves to detection algorithms that can run in an appliance (physical or virtual) at line speed. This is the key flaw that must be addressed in order to enable these experts to build an advanced NIDS with dramatically improved efficacy.
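
To make the constraint concrete, here is a minimal sketch in Python (hypothetical data structures, not any vendor's implementation) of the traditional inline model: each packet gets exactly one pass through the current signature set, and only the alert survives.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def line_speed_inspect(packets, signatures, emit_alert):
    """One verdict per packet, at line rate; the packet itself is then discarded."""
    for pkt in packets:
        for name, pattern in signatures.items():
            if pattern in pkt.payload:
                emit_alert({"signature": name, "src": pkt.src, "dst": pkt.dst})
                break
        # No copy of pkt is retained, so a signature published tomorrow
        # can never be applied to today's traffic.

if __name__ == "__main__":
    sigs = {"demo-nop-sled": b"\x90\x90\x90\x90"}
    traffic = [Packet("10.0.0.5", "10.0.0.9", b"GET / HTTP/1.1"),
               Packet("10.0.0.7", "10.0.0.9", b"\x90\x90\x90\x90...")]
    line_speed_inspect(traffic, sigs, print)
```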

The cost effectiveness of cloud computing and storage makes a new kind of NIDS possible. Now all you need on the network, at the perimeter, on internal subnets, and/or in your cloud environment, are lightweight software sensors to capture, filter, and compress full packet streams. The captured packets are stored in the cloud. This means that traditional line speed analysis is only step one.

The full packets are available for retrospective analysis based on new incoming threat intelligence. And because full packets are available, threat intelligence includes not only IP addresses, URLs, and file hashes, but also new signatures built to detect attacks on newly discovered vulnerabilities. Ideally, this is done automatically by the vendor with no effort required by the customer. Adding new signatures from your own sources is also supported.
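
Here is a minimal sketch of what that enables, assuming the captured traffic has been indexed into simple records (the schema below is hypothetical): intelligence that arrives today is applied to traffic captured before that intelligence existed.

```python
def retrospective_scan(stored_records, new_intel):
    """Re-evaluate previously captured traffic against newly arrived indicators."""
    hits = []
    for rec in stored_records:                        # days or weeks of history
        if rec.get("dst_ip") in new_intel.get("bad_ips", set()):
            hits.append((rec, "connection to newly reported bad IP"))
        if rec.get("file_hash") in new_intel.get("bad_hashes", set()):
            hits.append((rec, "transfer of newly reported bad file"))
    return hits

# Example: an indicator published today flags a session captured last week.
history = [{"ts": "2017-01-03T10:00Z", "dst_ip": "203.0.113.7", "file_hash": "ab12cd34"}]
intel = {"bad_ips": {"203.0.113.7"}, "bad_hashes": set()}
print(retrospective_scan(history, intel))
```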

Furthermore, additional methods of analysis are performed, including anomaly detection, machine learning-based static file analysis, and sandboxed file detonation. The solution then uses multiple correlation methods to analyze the results of these different processes. In other words, initially detected weak signals are correlated to generate strong signals with a low false positive rate and without increasing false negatives.
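
For illustration, here is a minimal sketch of the weak-signal idea (the weights and threshold are hypothetical): individually low-confidence detections from different engines are accumulated per host, and only the combined score raises an alert.

```python
from collections import defaultdict

WEIGHTS = {"anomaly": 0.3, "static_ml": 0.4, "sandbox_suspicious": 0.5}
THRESHOLD = 0.9

def correlate(weak_signals):
    """weak_signals: iterable of (host, signal_type) tuples from different engines."""
    scores = defaultdict(float)
    for host, signal in weak_signals:
        scores[host] += WEIGHTS.get(signal, 0.0)
    return [host for host, score in scores.items() if score >= THRESHOLD]

signals = [("host-12", "anomaly"), ("host-12", "static_ml"),
           ("host-12", "sandbox_suspicious"), ("host-40", "anomaly")]
print(correlate(signals))    # only host-12 crosses the threshold
```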

Alerts are categorized using the Lockheed Martin Kill Chain™ to enable SOC analysts to prioritize their efforts. The NIDS user interface provides layered levels of detail, down to the packets if necessary, showing why the alert was generated, thus shortening the time needed to triage each alert.

Finally, new methods of analysis can be added in the cloud without worrying about on-premise appliances having to be forklift upgraded due to added processing and/or memory requirements.

This is the type of solution I recommend you evaluate to reduce the risk of successful cyber attacks. Dare I call it a “Next Generation NIDS?” “Advanced NIDS?” What would you call this solution?

This article was originally posted on LinkedIn. https://www.linkedin.com/pulse/evolution-network-intrusion-detection-bill-frank?trk=mp-author-card

02. November 2016 · Cybersecurity Architecture in Transition to Address Incident Detection and Response · Categories: blog

I would like to discuss the most significant cybersecurity architectural change in the last 15 years. It’s being driven by new products built to respond to the industry’s abysmal incident detection record.

Overview

There are two types of products driving this architectural change. The first is new "domain-level" detection countermeasures, primarily in the endpoint and network domains. They dramatically improve threat detection by (1) expanding the types of detection methods used and (2) leveraging threat intelligence to detect zero-day threats.

The second is a new type of incident detection and response productivity tool, which Gartner would categorize as SOAR (Security Operations, Analysis and Reporting). It provides SOC analysts with (1) playbooks and orchestration to improve analyst productivity, efficiency, and consistency, (2) automated correlation of alerts from SIEMs and the above-mentioned "next-generation" domain-based countermeasures, (3) the ability to query the domain-level countermeasures and other sources of information, such as CMDBs and vulnerability management solutions, via their APIs to provide SOC analysts with improved context, and (4) a graphical user interface, supported by a robust database, that enables rapid SOC analyst investigations.

I am calling the traditional approach we’ve been using for the last 15 years, “Monolithic,” because a SIEM or a single log repository has been at the center of the detection process. I’m calling the new approach “Composite” because there are multiple event repositories – the SIEM/log repository and those of the domain-level countermeasures.

First I will review why this change is needed, and then I’ll go into more detail about how the Composite architecture addresses the incident detection problems we have been experiencing.

Problems with SIEM

Back around 2000, the first Security Information and Event Management (SIEM) solutions appeared. The rationale for the SIEM was the need to consolidate and analyze the events, as represented by logs, being generated by disparate domain technologies such as anti-virus, firewalls, IDSs, and servers. While SIEMs did OK with compliance reporting and forensics investigation, they were poor at incident detection. The question is, why?

First, for the most part, SIEMs are limited to log analysis. While most of the criticism of SIEMs relates to the limitations of rule-based alerting (more on that below), limiting analysis to logs is a problem in itself for two reasons. One, capturing the details of actual activities from logs is difficult. Two, a log often represents just a summary of an event. So SIEMs were handicapped by their data sources.

Additionally, the SIEM's rule-based analysis approach only alerts on predefined, known-bad scenarios. New, creative, unknown scenarios are missed. Tuning rules for a particular organization is also very time-consuming, especially when the number of rules that need to be tuned can run into the hundreds.
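
To make the limitation concrete, here is a minimal sketch (hypothetical event schema, not any SIEM's rule language) of a classic correlation rule: alert on five failed logins followed by a success from the same source within ten minutes. Anything the rule author did not anticipate is never alerted on, and the threshold and window below are exactly the kind of per-rule tuning that multiplies across hundreds of rules.

```python
from datetime import timedelta

def brute_force_rule(events, window=timedelta(minutes=10), threshold=5):
    """events: time-ordered dicts with 'ts' (datetime), 'src', and 'outcome'."""
    alerts = []
    failures = {}                                   # src -> list of failure timestamps
    for e in events:
        if e["outcome"] == "failure":
            failures.setdefault(e["src"], []).append(e["ts"])
        elif e["outcome"] == "success":
            recent = [t for t in failures.get(e["src"], []) if e["ts"] - t <= window]
            if len(recent) >= threshold:
                alerts.append({"rule": "possible brute force", "src": e["src"],
                               "failed_attempts": len(recent), "ts": e["ts"]})
    return alerts
```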

Another issue that is often overlooked is that SIEM vendors are generalists. They know a little about a lot of domains, but don’t have the same in-depth knowledge about a specific domain as a vendor who specializes in that domain. SIEM vendors don’t know as much about endpoint security as endpoint security vendors, and they don’t know as much about network security as network security vendors.

Finally, SIEMs have not addressed two other key issues that have plagued security operations teams for years: (1) lack of consistent, repeatable processes among different SOC analysts, and (2) mind-numbing repetitive manual tasks that beg to be automated. These issues, plus SIEMs' high rate of false positives, sap SOC team morale, which results in high turnover. Considering cybersecurity's "zero unemployment" environment, this is a costly problem indeed.

Problems with Traditional Domain-level Countermeasures

But the industry’s poor incident detection track record is not just due to SIEMs. Traditional domain-level detection products also bear responsibility. Let me explain.

Traditional domain-level detection products, whether endpoint or network, must make their "benign (allow) vs. suspicious (alert but allow) vs. malicious (alert and block)" decisions in microseconds or milliseconds. A file appears. Is it malicious or benign? The anti-virus software on the endpoint must decide in a fraction of a second. Then it's on to the next file. In-line IDS countermeasures face a similar problem. Analyze a packet. Good, suspicious, or malicious? Move on to the next packet. In some out-of-band cases, the countermeasure has the luxury of assembling a complete file before making the decision. File detonation/sandboxing products can take longer, but are still limited to minutes at most. Then it's on to the next file.

So it's really been the combination of the limitations of traditional domain-level countermeasures and SIEMs that has resulted in the poor record of incident detection, high false positive rates, and low morale and high turnover among SOC analysts. But there is hope. I am seeing a new generation of security products built around a new, Composite architecture that addresses these issues.

Next-Generation Domain-level Countermeasures

First, there are new, “next generation,” security domain-level companies that have expanded their analysis timeframe from microseconds to months. A next-gen endpoint product not only analyzes events on the endpoint itself, but collects and stores hundreds of event types for further analysis over time. This also gives the next-gen endpoint product the ability to leverage threat intelligence, i.e. apply new threat intelligence retrospectively over the event repository to detect previously unknown zero-day threats.

A “next-gen” network security vendor collects, analyzes, and stores full packets. With full packets captured, the nature of threat intelligence actually expands beyond IP addresses, URLs, and file hashes to include new signatures. In addition, combinations of “weak” signals can be correlated over time to generate high fidelity alerts.

These domain specific security vendors are also using machine learning and other statistical algorithms to detect malicious scenarios across combinations of multiple events that traditional rule-based analysis would miss.

Finally, these next-gen domain-level countermeasures provide APIs that (1) enable a SOAR product to pull more detailed event information to add context for the SOC analyst, and (2) enable threat hunting with third-party threat intelligence.

Architectural issue created by next-gen domain-level countermeasures

But replacing traditional domain-specific security countermeasures with these next-gen ones actually creates an architectural problem. Instead of having a single Monolithic event repository, i.e. the SIEM or log repository, you have multiple event repositories, because it no longer makes sense to add the next-gen domain-specific event data into what has been your single event repository. Why? First, the analysis of the raw domain data has already been done by the domain product. Second, if you want to access the data, you can via APIs. Third, as already stated, the type of analysis a SIEM does has not been effective at detecting incidents anyway. Fourth, you are already paying the domain-level vendor for storing the data. Why pay the SIEM or log repository vendor to store that data again?

Having said all this, your primary log repository is not going away anytime soon, because you still need it for traditional log sources such as firewalls, Active Directory, and Data Loss Prevention. But, over time, there will be fewer traditional countermeasures as these vendors expand their analysis timeframes. Some are already doing this.

So by embracing these next-gen domain-specific countermeasures, we are creating multiple silos of security information that don't talk to each other. How do we correlate across these different domains if the events they generate are not in a single repository?

Security Operations, Analysis, and Reporting (SOAR)

This issue is addressed by a new type of correlation analysis product, the second architectural component, which Gartner calls Security Operations, Analysis, and Reporting (SOAR). I believe Gartner first published research on this in the fall of 2015. Here are my SOAR solution requirements; a sketch showing how a few of them fit together follows the list:

  1. Receive and correlate alerts from the next-gen domain-level security products.
  2. Query the next-gen domain-level products for more detailed information related to each alert to provide context.
  3. Access CMDBs and vulnerability repositories for additional context.
  4. Receive and correlate alerts from SIEMs. Rule-based alerts from SIEMs need to be correlated by entity, i.e. user.
  5. Query Splunk and other types of log repositories.
  6. Correlate alerts and events from all these sources.
  7. Take threat intelligence feeds and generate queries to the various data repositories. While the next-gen domain specific countermeasures have their own sources of threat intelligence, I fully expect organizations to still subscribe to product independent threat intelligence.
  8. Use a robust database of its own, preferably a graph database, to store all this collected information, and provide fast query responses to SOC analysts pivoting on known data during investigations.
  9. Provide playbooks and the tools to build and customize playbooks to assure consistent incident response processes.
  10. Provide orchestration/automation functions to reduce repetitive manual tasks.
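
To illustrate requirements 1 through 3 and 6, here is a minimal sketch of an enrichment step a SOAR playbook might run. The REST endpoints, field names, and token handling are hypothetical; no specific SOAR, EDR, or CMDB product's API is being described.

```python
import requests

EDR_API = "https://edr.example.local/api/v1"       # hypothetical endpoint
CMDB_API = "https://cmdb.example.local/api/v1"     # hypothetical endpoint

def enrich_alert(alert: dict, token: str) -> dict:
    """Pull context for an incoming alert from a domain-level product and a CMDB."""
    headers = {"Authorization": f"Bearer {token}"}
    host = alert["hostname"]

    # Recent endpoint activity from the next-gen domain-level product.
    edr = requests.get(f"{EDR_API}/hosts/{host}/events",
                       params={"hours": 24}, headers=headers, timeout=30).json()

    # Asset owner and criticality from the CMDB.
    asset = requests.get(f"{CMDB_API}/assets/{host}",
                         headers=headers, timeout=30).json()

    return {
        "alert": alert,
        "recent_events": edr.get("events", []),
        "asset_owner": asset.get("owner"),
        "asset_criticality": asset.get("criticality"),
    }
```

The same pattern extends to vulnerability repositories, log repository queries, and threat intelligence-driven hunts (requirements 4, 5, and 7).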

User and Entity Behavior Analytics (UEBA)

At this point, you may be thinking, where does User and Entity Behavior Analytics (UEBA) fit in? If you are concerned about a true insider threat, i.e. malicious user activity with no malware involved, then UEBA is a must. UEBA solutions definitely fit into the Composite architecture, querying multiple event repositories and sending alerts to the SOAR solution. They should also have APIs so they can be queried by the SOAR solution.

Future Evolution

Looking to the future, I expect the Composite architecture to evolve. Here are some possibilities:

  • A UEBA solution could add SOAR functionality
  • A SIEM solution could add UEBA and/or SOAR functionality
  • A SOAR solution could add log repository and/or SIEM functionality
  • A next-gen domain solution could add SOAR functionality

Summary

To summarize, the traditional Monolithic architecture, consisting of domain-level countermeasures limited to microsecond/millisecond analysis feeding a SIEM for incident detection, has failed. It's being replaced by the Composite model featuring (1) next-gen domain-level countermeasures that play an expanded analysis role, (2) traditional SIEMs and/or primary log repositories that, for the near term, continue to play their roles for traditional security countermeasures, and (3) a SOAR solution at the "top of the stack" that is the SOC analysts' primary incident detection and response tool.

Originally posted on LinkedIn on November 2, 2016

https://www.linkedin.com/pulse/cybersecurity-architecture-transition-bill-frank?trk=mp-author-card

20. July 2015 · The evolution of SIEM · Categories: Uncategorized

In the last several years, a new “category” of log analytics for security has arisen called “User Behavior Analytics.” From my 13-year perspective, UBA is really the evolution of SIEM.

The term “Security Information and Event Management (SIEM)” was defined by Gartner 10 years ago. At the time, some people were arguing between Security Information Management (SIM) and Security Event Management (SEM). Gartner just combined the two and ended that debate.

The focus of SIEM was on consolidating and analyzing log information from disparate sources such as firewalls, intrusion detection systems, operating systems, etc. in order to meet compliance requirements, detect security incidents, and provide forensics.

At the time, the correlation was designed mostly around IP addresses, although some systems could correlate using ports and protocols, and even users. All log sources were in the datacenter. And most correlation was rule-based, although there was some statistical analysis done as early as 2003. Finally, most SIEMs used relational databases to store the logs.

Starting in the late 2000s, organizations began to realize that while they were meeting compliance requirements, they were still being breached due to the limitations of “traditional” SIEM solutions’ incident detection capabilities as follows:

  • They were designed to focus on IP addresses rather than users. At present, correlating by IP addresses is useless given the increasing number of remote and mobile users, and the number of times a day those users' IP addresses can change. Retrofitting the traditional SIEM for user analysis has proven to be difficult.
  • They are notoriously difficult to administer. This is due mostly to the rule-based method of event correlation. Customizing and keeping hundreds of rules up to date is time-consuming. Too often, organizations did not realize this when they purchased the SIEM and therefore under-budgeted the resources to administer it.
  • They tend to generate too many false positives. This is also mostly due to rule-based event correlation. It is particularly insidious because analysts start to ignore alerts when investigating most of them turns out to be a waste of time. This also hurts morale, resulting in high turnover.
  • They miss true positives, either because the generated alerts are simply missed by analysts overwhelmed by too many alerts, or because no rule was built to detect the attacker's activity. The rule-building cycle is usually backward-looking: an incident happens, and then rules are built to detect that situation should it happen again. Since attackers are constantly innovating, the rule-building process is a losing proposition.
  • They tend to have sluggish performance in part due to organizations underestimating, and therefore under-budgeting, infrastructure requirements, and due to the limitations of relational databases.

In the last few years, we have seen a new security log analysis "category" called "User Behavior Analytics" (UBA), which focuses on analyzing user credentials and user-oriented event data. The data stores are almost never relational, and the algorithms are mostly machine learning-based, which is more predictive in nature and requires much less tuning.
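
As a rough illustration of the difference (synthetic numbers, and scikit-learn is assumed to be available), an unsupervised model learns a user's normal activity from features of their events and scores new ones, with no per-rule thresholds to maintain:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per session: hour of day, MB downloaded, distinct hosts touched.
baseline = np.array([[9, 40, 3], [10, 55, 2], [14, 30, 4], [11, 60, 3],
                     [9, 45, 2], [16, 35, 3], [13, 50, 2], [10, 42, 4]])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

new_sessions = np.array([[10, 48, 3],     # consistent with this user's baseline
                         [3, 900, 40]])   # 3 a.m., 900 MB, 40 hosts: anomalous
print(model.predict(new_sessions))        # 1 = normal, -1 = anomaly
```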

Notice how UBA solutions address most of the shortcomings of traditional SIEMs for incident detection. So the question is, why is UBA considered a separate category? It seems to me that UBA is the evolution of SIEM: better user interfaces (in some cases), better algorithms, better log storage systems, and a more appropriate "entity" on which to focus, i.e. users. In addition, UBA solutions can support user data coming from SaaS as well as on-premise applications and controls.

I understand that some UBA vendors’ short-term, go-to-market strategy is to complement the installed SIEM. It seems to me this is the justification for considering UBA and SIEM as separate product categories. But my question is, how many organizations are going to be willing to use two or three different products to analyze logs?

In my view, in 3-5 years there won’t be a separate UBA market. The traditional SIEM vendors are already attempting to add UBA capabilities with varying degrees of success. We are also beginning to see SIEM vendors acquire UBA vendors. We’ll see how successful the integration process will be. A couple of UBA vendors will prosper/survive as SIEM vendors due to a combination of superior user interface, more efficacious analytics, faster and more scalable storage, and lower administrative costs.

07. February 2014 · Jumping to conclusions about the Target breach · Categories: Uncategorized

On Feb 5, 2014 Brian Krebs published a story which provided more details about the Target breach entitled, Target Hackers Broke in Via HVAC Company. The story connects the Target breach to the fact that Target allowed Fazio Mechanical Services, a provider of refrigeration and HVAC systems, to remotely connect to Target stores in the Pennsylvania area. Fazio provides these same services to Trader Joe's, Whole Foods, and BJ's Wholesale Club in Pennsylvania, Maryland, Ohio, Virginia, and West Virginia. Krebs goes on to explain that this practice is common, and why.

Krebs rightly never jumps to a conclusion about how this remote access resulted in the breach, because there are no known facts on which to base such a conclusion. However, that did not stop Network World from publishing a story on Feb 6, 2014 claiming that the Target breach happened because of a basic network segmentation error. The problem with the story is that no one has shown, much less stated, that the attackers' ability to move around the network was due to an error in network segmentation in the Target stores.

In fact, one of the commenters on the Krebs story, "LT," actually stated:

Target does have separate VLANs for Registers, Security cameras, office computers, registry scanners/kiosks, even a separate VLAN for the coupon printers at the registers. The problem is not lack of VLAN’s, they use them everywhere and each VLAN is configured for exactly the number of devices it needs to support. The problem is somehow lateral movement was allowed that allowed the hackers to enter in through the HVAC system and eventually get to the POS VLAN.

So there are really TWO possible conclusions one can draw from this, not just the one Network World jumped to:

  1. There were in fact VLAN configuration errors that more easily allowed the attackers to move around undetected.
  2. The attackers knew how to circumvent VLAN control. For some reason Network World failed to consider this possibility. To me, this is a reasonable alternative. VLAN hopping is a well-understood attack vector.

So one might ask, why was Target relying on VLANs for network segmentation rather than firewalls? Based on my interpretation of the PCI DSS 3.0 Requirements and Security Assessment Procedures published in November 2013, there is no requirement to deploy firewalls in stores. Requirement 1.3 is fairly clear that firewalls are only relevant when there is an Internet (public) connection present. Based on my experience, retail stores do not have direct Internet access. They communicate on “private” networks to internal datacenters. Therefore, the use of VLANs to segment store traffic is not a violation of PCI DSS requirements.

Finally, even if the "stateful inspection" firewalls that PCI DSS specifies were deployed in stores, they would not provide adequate network security control against attackers, as I wrote previously.

28. January 2014 · Prioritizing Vulnerability Remediation – CVSS vs. Threat Intelligence · Categories: Uncategorized

The CVSS vulnerability scoring system is probably the most popular method of prioritizing vulnerability remediation. Unfortunately, it's wildly inaccurate. Dan Geer, CISO of In-Q-Tel, and Michael Roytman, the predictive analytics engineer at Risk I/O, published a paper in December 2013 entitled Measuring vs. Modeling that shows empirically just how bad CVSS is.

The authors had access to 30 million live vulnerabilities across 1.1 million assets from 10,000 organizations. In addition, they had another data set of SIEM logs of 20,000 organizations from which they extracted exploit signatures. They then paired those exploits with vulnerability scans of the same organizations. The time period for their analysis was June to August 2013.

Although the two sets of data come from different organizations, the authors believe the data sets are large enough that correlating them produces significant insights. Maybe more importantly, they say, "Because this is observed data, per se, we contend that it is a better indicator than the qualitative analysis done during CVSS scoring."

The first step of their analysis was to establish a base rate, i.e. the probability that a randomly selected vulnerability is one that resulted in a breach. They determined that the base rate was 2%. Then they used CVSS scores to correlate vulnerabilities to breaches. A CVSSv2 score of 9 resulted in a 2.4% probability, and a CVSSv2 score of 10 resulted in 3.5%.

So how did threat intelligence do? As a proxy for threat intelligence, they used Exploit-DB and Metasploit, individually and combined. The numbers were 12.6%, 25.1%, and 29.2% respectively! Clearly, using Exploit-DB and Metasploit together was almost 10 times better than CVSSv2.
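
For readers who want to check the arithmetic behind the "almost 10 times" claim, here is a quick sketch using the percentages reported above:

```python
base_rate = 0.02            # randomly selected vulnerability tied to a breach
cvss_9, cvss_10 = 0.024, 0.035
exploit_db, metasploit, combined = 0.126, 0.251, 0.292

print(f"CVSSv2 10 vs. base rate:       {cvss_10 / base_rate:.1f}x")   # ~1.8x
print(f"Combined intel vs. base rate:  {combined / base_rate:.1f}x")  # ~14.6x
print(f"Combined intel vs. CVSSv2 10:  {combined / cvss_10:.1f}x")    # ~8.3x
```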

This jibes with other similar work done by Luca Allodi of the University of Trento. He found that 87.8% of vulnerabilities that had a CVSS score of 9 or 10 were never exploited. "Conversely, a large portion of Exploit-DB and Symantec's intelligence go unflagged by CVSS scoring; however, this is still a definitional analysis."

This USENIX paper is well worth reading in its entirety, as are the references it provides.

One caveat: the second author's company, Risk I/O, offers a vulnerability prioritization service based on threat intelligence. You might suspect that this study was performed with the end in mind of proving the value of their service. However, I find it hard to believe that Dan Geer would participate in such a scam. Nor do I think USENIX would be easily fooled. In addition, this study had results similar to Luca Allodi's. I would surely be interested in hearing from anyone who can show that CVSS is a better predictor of vulnerabilities being exploited than threat intelligence.

27. January 2014 · Detecting unknown malware using sandboxing or anomaly detection · Categories: blog

It's been clear for several years that signature-based anti-virus and Intrusion Prevention/Detection controls are not sufficient to detect modern, fast-changing malware. Sandboxing has become a popular (rightfully so) complementary control to detect "unknown" malware, i.e. malware for which no signature exists yet. The concept is straightforward: analyze inbound suspicious files by allowing them to run in a virtual machine environment. While sandboxing has been successful, I believe it's worthwhile to understand its limitations. Here they are:

  • Access to the malware in motion, i.e. on the network, is not always available.
  • Most sandboxing solutions are limited to Windows.
  • Malware authors have developed techniques to discover virtualized or testing environments.
  • Newer malware communication techniques use random, one-time domains and non-HTTP protocols.
  • Sandboxing cannot confirm that malware actually installed and infected the endpoint.
  • Droppers, the first stage of multi-stage malware, are often the only part that is analyzed.

Please check out Damballa’s Webcast on the Shortfalls of Security Sandboxing for more details.

Let me reiterate: I am not saying that sandboxing is not valuable. It surely is. However, due to the limitations listed above, we recommend that it be complemented by a log-based anomaly detection control that analyzes one or more of the following: outbound DNS traffic, all outbound traffic through the firewall and proxy server, user connections to servers, POS terminal connections to servers (for retailers), and application authentications and authorizations. In addition to the different network traffic sources, there is also a variety of statistical approaches available, including supervised and unsupervised machine learning algorithms.
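
As one example of what log-based anomaly detection on outbound DNS might look like, here is a minimal sketch (a toy entropy heuristic, not a production detector) that flags the random-looking, one-time domains mentioned above:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_suspicious_queries(domains, min_len=10, min_entropy=3.5):
    """Flag DNS names whose registered-domain label looks randomly generated."""
    flagged = []
    for d in domains:
        labels = d.rstrip(".").split(".")
        sld = labels[-2] if len(labels) >= 2 else labels[0]
        if len(sld) >= min_len and shannon_entropy(sld) >= min_entropy:
            flagged.append(d)
    return flagged

queries = ["www.google.com", "mail.example.org", "xk2r9qzt7bvm1p.info"]
print(flag_suspicious_queries(queries))   # only the random-looking domain is flagged
```

A detection like this would be just one weak signal; in practice it would be combined with the other traffic sources and statistical approaches listed above.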

So in order to substantially reduce the risk of a data breach from unknown malware, the issue is not sandboxing or anomaly detection, it’s sandboxing and anomaly detection.

20. January 2014 · How Palo Alto Networks could have prevented the Target breach · Categories: blog

Brian Krebs’ recent posts on the Target breach, A First Look at the Target Intrusion, Malware, and A Closer Look at the Target Malware, provide the most detailed and accurate analysis available.

The malware the attackers used captured complete credit card data contained on the mag stripe by “memory scraping.”

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, obviously the POS terminals have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had there been policies defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for the following two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these policies and establish a Positive Control Model (default deny, or application traffic whitelisting). A sketch of this policy model follows.
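
To illustrate what a Positive Control Model means in practice, here is a minimal sketch. The rule structure is hypothetical; it is not PAN-OS syntax or any vendor's policy language. Traffic must match an explicit zone/application/user rule or it is denied by default.

```python
ALLOW_RULES = [
    # (zone_from,  zone_to,       application,   allowed users)
    ("pos-vlan",   "payment-srv", "pos-tx-app",  {"svc-pos"}),
    ("pos-vlan",   "patch-srv",   "ms-update",   {"svc-pos"}),
]

def evaluate(flow: dict) -> str:
    """flow: zone_from, zone_to, app, and user as identified by the firewall."""
    for zone_from, zone_to, app, users in ALLOW_RULES:
        if (flow["zone_from"], flow["zone_to"], flow["app"]) == (zone_from, zone_to, app) \
                and flow["user"] in users:
            return "allow"
    return "deny"   # default deny: anything not explicitly whitelisted is blocked

# A POS terminal reaching an unexpected internal server over an unrecognized
# application is denied, even on a port a stateful firewall would have allowed.
print(evaluate({"zone_from": "pos-vlan", "zone_to": "corp-servers",
                "app": "unknown-tcp", "user": "svc-pos"}))   # deny
```

Under a default-deny policy like this, the POS-to-"control server" traffic described above would have had no matching rule.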

You might very well ask, why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definition to ports, IP addresses, and protocols, which cannot enforce a Positive Control Model. They are simply not detailed enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s. But by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.

In addition, it's possible that the POS terminals would never have become infected in the first place, because the server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note, I am not assuming that the servers used to compromise the POS terminals were the same servers used to collect the credit card data that was breached.

Unfortunately, a control with the capabilities of Palo Alto Networks is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, "Requirement #1: Install and maintain a firewall configuration to protect cardholder data," seems to cover the subject. However, you can fully meet these PCI DSS requirements with a port-based, stateful inspection firewall. But, as I said above, an attacker can easily bypass this 1990s-style network control. Retailers and e-commerce sites need to go beyond PCI DSS to actually protect themselves. What you need is a Next Generation Firewall like Palo Alto Networks', which enables you to define and enforce a Positive Control Model.

06. January 2014 · Two views on FireEye's Mandiant acquisition · Categories: blog

There are two views on the significance of FireEye’s acquisition of Mandiant. One is the consensus typified by Arik Hesseldahl, Why FireEye is the Internet’s New Security Powerhouse. Arik sees the synergy of FireEye’s network-based appliances coupled with Mandiant’s endpoint agents.

Richard Stiennon has a different view, Will FireEye's Acquisition Strategy Work? Richard believes that FireEye's stock price is way overvalued compared to more established players like Check Point and Palo Alto Networks. While FireEye initially led the market with network-based "sandboxing" technology to detect unknown threats, most of the major security vendors have matched or even exceeded FireEye's capabilities. IMHO, you should not even consider a network-based security manufacturer that doesn't provide integrated sandboxing technology to detect unknown threats. Therefore, the only way FireEye can meet Wall Street's revenue expectations is via acquisitions using its inflated stock.

The best strategy for a high-flying public company whose products do not have staying power is to embark on an acquisition spree that juices revenue. In those terms, trading overvalued stock for Mandiant, with estimated 2013 revenue of $150 million, will easily satisfy Wall Street’s demand for continued growth to sustain valuations. FireEye has already locked in 100% growth for 2014.

It will probably take a couple of years to determine who is correct.

04. November 2013 · Response to Stiennon's attack on NIST Cybersecurity Framework · Categories: blog

In late October, NIST issued its Preliminary Cybersecurity Framework based on President Obama’s Executive Order 13636, Improving Critical Infrastructure Cybersecurity.

The NIST Cybersecurity Framework is based on one of the most basic triads of information security – Prevention, Detection, Response. In other words, start by preventing as many threats as possible. But you also must recognize that 100% prevention is not possible, so you need to invest in Detection controls. And of course, there are going to be security incidents, therefore you must invest in Response.

The NIST Framework defines a “Core” that expands on this triad. It defines five basic “Functions” of cybersecurity – Identify, Protect, Detect, Respond, and Recover. Each Function is made up of related Categories and Subcategories.

Richard Stiennon, as always provocative, rails against the NIST Framework, calling it "fatally flawed" because it's "poisoned with Risk Management thinking." He goes on to say:

The problem with frameworks in general is that they are so removed from actually defining what has to be done to solve a problem. The problem with critical infrastructure, which includes oil and gas pipelines, the power grid, and city utilities, is that they are poorly protected against network and computer attacks. Is publishing a turgid high-level framework going to address that problem? Will a nuclear power plant that perfectly adopts the framework be resilient to cyber attack? Are there explicit controls that can be tested to determine if the framework is in place? Sadly, no to all of the above.

He then says:

IT security Risk Management can be summarized briefly:

1. Identify Assets

2. Rank business value of each asset

3. Discover vulnerabilities

4. Reduce the risk to acceptable value by patching and deploying defenses around the most critical assets

He then summarizes the problems with this approach as follows:

1. It is impossible to identify all assets

2. It is impossible to rank the value of each asset

3. It is impossible to determine all vulnerabilities

4. Trying to combine three impossible tasks to manage risk is impossible

Mr. Stiennon’s solution is to focus on Threats.

How many ways has Stiennon gone wrong?

First,  if your Risk Management process is as Stiennon outlines, then your process needs to be updated. Risk Management is surely not just about identifying assets and patching vulnerabilities. Threats are a critical component of Risk Management. Furthermore, while the NIST Framework surely includes identifying assets and patching vulnerabilities, they are only two Subcategories within the rich Identify and Protect Functions. The whole Detect Function is focused on detecting threats!! Therefore Stiennon is completely off-base in his criticism. I wonder if he actually read the NIST document.

Second, all organizations perform Risk Management, either implicitly or explicitly. No organization has enough money to implement every administrative and technical control that is available. And that surely goes for all of the controls recommended by the NIST Framework's Categories and Subcategories. Even organizations that want to fully commit to the NIST Framework will still need to prioritize the order in which controls are implemented. Trade-offs have to be made. Is it better to make these trade-offs implicitly and unsystematically? Or is it better to have an explicit Risk Management process that can be improved over time?

I am surely not saying that we have reached the promised land of cybersecurity risk management, just as we have not in virtually any other field to which risk management is applied. There is a lot of research going on to improve risk management and decision theory. One example is the use of Prospect Theory.

Third, if IT security teams are to communicate successfully with senior management and Boards of Directors, explain to me how else to do it? IT security risks, which are technical in nature, have to be translated into business terms. That means describing how a threat will impact the business, in terms of core business processes. Is Richard saying that an organization cannot and should not expect to identify the IT assets related to a specific business process? I think not.

When we in IT security look for a model to follow, I believe it should be akin to the role of lawyers’ participation in negotiating a business transaction. At some point, the lawyers have done all the negotiating they can. They then have to explain to the business executives responsible for the transaction the risks involved in accepting a particular paragraph or sentence in the contract. In other words, lawyers advise and business executives decide.

In the same way, it is up to IT security folks to explain a particular IT security risk in business terms to the business executive, who will then decide to accept the risk or reduce it by allocating funds to implement the proposed administrative or technical control. And of course meaningful metrics that can show the value of the requested control must be included in the communication process.

Given the importance of information technology to the success of any business, cybersecurity decisions must be elevated to the business level. Risk Management is the language of business executives. While cybersecurity risk management is clearly a young field, we surely cannot give up. We have to work to improve it. I believe the NIST Cybersecurity Framework is a big step in the right direction.

 

23. October 2013 · Detection Controls Beyond Signatures and Rules · Categories: blog

Charles Kolodgy of IDC has a thoughtful post on SecurityCurrent entitled, Defending Against Custom Malware: The Rise of STAP.

STAP (Specialized Threat Analysis and Protection) technical controls are designed to complement, maybe in the future replace, traditional detection controls that require signatures and rules. STAP controls focus on threats/attacks that have not been seen before or that can morph very quickly and therefore are missed by signature-based controls.

Actors such as criminal organizations and nation states are interested in the long haul.  They create specialized malware, intended for a specific target or groups of targets, with the ultimate goal of becoming embedded in the target’s infrastructure.  These threats are nearly always new and never seen before.  This malware is targeted, polymorphic, and dynamic.  It can be delivered via Web page, spear-phishing email, or any other number of avenues.

Mr. Kolodgy breaks STAP controls into three categories:

  • Virtual sandboxing/emulation and behavioral analysis
  • Virtual containerization/isolation
  • Advanced system scanning

Based on Cymbel's research, we would create a fourth category: Advanced log analysis. There is considerable research, and there are funded companies, going beyond traditional rule- and statistical/threshold-based techniques. Many of these efforts are leveraging Hadoop and/or advanced machine learning algorithms.