Category Archives: Network Security

The Changing Landscape of Selling in the Age of SDN

There are massive waves of technology upheaval taking place in the marketplace, causing disruption and challenging technology salespeople who are used to selling in the traditional ways. Cloud, Automation, Mobility, Adaptive Security and the Internet of Things are just a few of the major changes affecting the landscape right now. And while these technologies are certainly challenging in their own right, there is one technology that stands on its own, not only in terms of how technology decisions are made, but also in terms of how technology is bought.

That technology is Software Defined Networking (SDN). SDN is causing a fundamental shift in the way that technology is procured. There is a major shift away from buying point products and technologies which only meet a specific need, and toward looking at the bigger picture, with the aim of having technology procurement fit into a larger ecosystem that provides broader solutions, enabling shorter ROI and better business agility.

Application-Centricity Creates New Stakeholders

The buying process used to be relatively straightforward, and different technology groups within an organization could procure technology within their own silo with little regard to how it fit within the broader ecosystem. Oftentimes, the technology implemented would dictate and limit what applications could be run on it. Now the shift is towards an application-centric buying framework. The overarching driver is that the applications themselves are the driving force behind technology purchases, and these purchases must work in a sophisticated, tightly integrated ecosystem consisting of multiple compute, storage and networking resources. More often than not, security is being woven into that fabric as well. These can no longer be looked at as individual elements; furthermore, new tools that are emerging to orchestrate all of these resources simultaneously are forcing the siloed buyers of the past to work cohesively with other teams and determine whether technology meets the overarching requirements of various applications.

As a result, there are new stakeholders. The new stakeholders are the application architects, the cloud architects, and increasingly the DevOps teams responsible for overseeing the collaboration between developers and IT teams. These stakeholders will be responsible for ensuring that technology purchases are able to seamlessly integrate into a fully orchestrated system.

Adapting to the Changing Landscape

To adapt to the changing landscape of the buyer in this broader ecosystem, we must recognize that we can no longer sell individual pieces of the puzzle. We must understand the whole puzzle itself and be able to sell the various solutions collectively to answer the needs of the buyers, recognizing that the whole is greater than the sum of its parts. Salespeople and technology professionals must be well versed in a diverse range of technologies in order to speak to the technology buyer, with the goal of providing broader solutions that meet the “application-centric” nature these buyers demand.

Furthermore, as solutions move towards a software-based model, there are implications which need to be understood, as business will increasingly be conducted on a subscription basis. This means that we need to understand how to maintain revenue streams, given that subscription-based pricing models and the revenue associated with them are quite different from traditional revenue streams.

The shift also affects the channel, in that partners must come together to deliver solutions to their customers that integrate cohesively into a much larger framework. It’s no longer acceptable to simply deploy gear based on “speeds and feeds”. The channel needs to demonstrate that their solutions work cohesively in a larger ecosystem, and that they have strong partnerships to meet these needs. These partnerships will become increasingly important as customers tie together components from multiple vendors and expect a seamless, integrated and highly orchestrated ecosystem.

In order to capitalize on this changing landscape and maintain revenue streams, a different approach needs to be taken with customers. The first step is identifying the new stakeholders. Once the stakeholders are identified, they need to be approached with an overarching vision of their business problems and needs, with a focus on broader solutions encompassing a diverse set of technologies that work cohesively. The successful technology salesperson of tomorrow will assist their customers by showing them fully orchestrated solutions that meet their needs while driving down costs, enabling them to be more agile and accomplish more in less time.

Carrier Grade NAT and the DoS Consequences

Republished from Corero DDoS Blog:

The Internet has a very long history of utilizing mechanisms that breathe new life into older technologies, stretching them out so that newer technologies may be delayed or obviated altogether. IPv4 addressing, and the well-known depletion associated with it, is one such area that has seen a plethora of mechanisms employed in order to give it more shelf life.

In the early 90s, the IETF gave us Classless Inter-Domain Routing (CIDR), which dramatically slowed the growth of global Internet routing tables and delayed the inevitable IPv4 address depletion. Later came DHCP, another protocol which assisted via the short-term allocation of addresses that would be given back to the provider’s pool after use. In 1996, the IETF was back at it again, creating RFC 1918 private addressing, so that networks could utilize private addresses that didn’t come from the global pool. Utilizing private address space gave network operators a much larger pool to use internally than would otherwise have been available if utilizing globally assigned address space — but if they wanted to connect to the global Internet, they needed something to translate those addresses. This is what necessitated the development of Network Address Translation (NAT).

NAT worked very well for many, many years, and slowed the address depletion a great deal. But in order to perform that translation, you still needed to acquire at least one globally addressable IP. As such, this only served to slow down depletion but not prevent it – carriers were still required to provide that globally addressable IP from their own address space. With the explosive growth of the Internet of Things, carriers likewise began to run out of address space to allocate.

NAT came to the rescue again. Carriers took notice of the success of NAT in enterprise environments and wanted to do the same within their own networks; after all, if it worked for their customers, it should likewise work for the carriers. This prompted the IETF to develop Carrier Grade NAT (CGN), also known as Large Scale NAT (LSN). CGN aims to provide a similar solution for carriers by obviating the need to allocate publicly available address space to their customers. By deploying CGN, carriers could oversubscribe their pool of global IPv4 addresses while still providing seamless connectivity, i.e. no truck roll.

So while the world is spared from address depletion yet again, the use of CGN technologies opens a new can of worms for carriers. No longer does one globally routable IP represent a single enterprise or customer – due to the huge oversubscription which is afforded through CGN, an IP can service potentially thousands of customers.

This brings us to the crossroads of the Denial of Service (DoS) problem. In the past, when a single global IP represented only one customer network, there was typically no collateral damage to other customer networks. If a DoS attack was large enough to impact the carrier’s network or cause collateral damage, the carrier would simply blackhole that customer IP to prevent it from transiting their network. However, with CGN deployments, and potentially thousands of customers being represented by a single IP, blackhole routing is no longer an option.
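
To make the contrast concrete, here is a minimal sketch of that traditional remedy, assuming a Junos-style edge router (the prefix is purely illustrative): a single static discard route is enough to drop all traffic destined to the attacked customer IP.

[edit routing-options]
static {
    /* Illustrative only: blackhole the single customer IP under attack */
    route 203.0.113.57/32 discard;
}

With CGN in place, that same /32 may front thousands of subscribers, so this one-liner would take all of them offline along with the intended victim.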

CGN deployments are vulnerable to DoS in a few different ways. The main issue with CGN is that it must maintain a stateful record of the translations between external addresses and ports and internal addresses and ports. A device which has to maintain these stateful tables is vulnerable to any type of DoS activity that may exhaust the stateful resources. As such, a CGN device may be impacted in both the inbound and the outbound direction. An outbound attack is usually the result of malware on a customer’s machine sending a large amount of traffic towards the Internet and consuming the state tables in the CGN. Inbound attacks usually target a particular customer and take the form of a DoS attack or a Distributed Denial of Service (DDoS) attack. Regardless of the direction of the attack, a large amount of resources is consumed in the CGN state table, which reduces overall port availability. Left unregulated, these attacks can easily cause impact not only to the intended victim, but potentially to the thousands of other customers being serviced by that CGN.
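
To put rough, purely illustrative numbers on that exposure: a single public IPv4 address offers at most 65,535 ports per transport protocol, of which perhaps 64,000 are usable for translations. If a carrier oversubscribes that address across 1,000 subscribers, each subscriber effectively shares about 64 ports. A single piece of malware opening a few thousand outbound sessions can therefore consume the allotment of dozens of neighboring subscribers long before anyone notices that the intended victim is under attack.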

With the inability to simply blackhole a given IP using edge Access Control Lists (ACLs), carriers must look at other options for protecting their customer base. While some CGN implementations have the ability to limit the amount of ports that are allocated to a single customer, these only work in discrete cases and can be difficult to manage. They also do not protect customers if the CGN device is itself the target of the attack.

The solution to this problem is the use of a purpose-built DDoS mitigation device, or what is more commonly referred to as a “scrubbing” device in IT circles. Dedicated DDoS mitigation devices attempt to ensure that everyone plays nicely by limiting the maximum number of sessions to or from a given customer. This is done through thorough analysis of the traffic in flight, rate-limiting or filtering traffic through sophisticated mitigation mechanisms to ensure fairness of the public IP and port availability across all customers. Through the use of dedicated DDoS mitigation devices, CGN devices and their associated customers are protected from service disruptions, while legitimate traffic is still allowed through unencumbered. Lastly, another important aspect of DDoS mitigation technology is that these devices tend to be “bumps in the wire” – that is to say, they don’t have an IP address assigned to them and as such cannot be the target of an attack.

Is DDoS Mitigation as-a-Service Becoming a De Facto Offering for Providers?

Republished from Corero DDoS Blog:

It’s well known in the industry that DDoS attacks are becoming more frequent and increasingly debilitating, turning DDoS mitigation into a mission critical initiative. From the largest of carriers to small and mid-level enterprises, more and more Internet connected businesses are becoming a target of DDoS attacks. What was once a problem that only a select few dealt with is now becoming a regularly occurring burden faced by network operators.

In my daily engagements with various customers of all shapes and sizes, it’s truly interesting to see how the approach to DDoS mitigation is changing. Much of this is the result of DDoS mitigation services shifting from a “nice to have” technology to a “must-have”, essential in order to maintain business continuity and availability.

When I built DDoS mitigation and detection services for Verizon back in 2004, the intent was to build value-added, revenue-producing services to offer subscribers, in an effort to build out our security offerings. For many years, this concept was one that pretty much every provider I worked with was looking into: build a service with the intent of generating new revenue opportunities from customers at a time when traditional avenues such as simple connectivity and bandwidth offerings were contracting.

However, in the past several months, as I interact with everyone from large-scale carriers to data center hosting providers, I am seeing a common thread starting to emerge – that is, attracting new customers and retaining existing ones is becoming more difficult in the absence of differentiated value. Compounding this issue is that customers are starting to expect some of these services as part of their connectivity fees.  What I’m seeing is more and more providers investigating the option of offering DDoS mitigation services to their customers as a virtue of being connected to them, in an effort to attract them away from other providers who have limited service offerings and capabilities.

Could it be that DDoS mitigation services become a standard offering on a provider’s network? Is it feasible that at some point in the future DDoS mitigation will become an inherent capability provided by the service providers?

In order for this approach to become a reality, the economics of the game have to change. Inserting DDoS mitigation elements into the network needs to be reasonably inexpensive in order for carriers and hosting providers to justify the cost. The technology also needs to be simple and as close to automatic as possible, as an inherent service offering will not justify the huge expense and uplift of having a team of operations personnel managing the service. Attacks need to be mitigated dynamically and quickly, without the need for manual intervention or having to pick up a phone to get assistance. And lastly, whatever mechanisms are in place need to ensure a “do no harm” approach, so that there is no collateral damage to good traffic.

At Corero, we believe that we are doing just that: changing not only the economics of the game, but also fundamentally looking at the problem in a different way. Corero enables real-time, algorithmic identification of network anomalies and subsequent mitigation of the attack traffic, eliminating the DDoS challenge before attacks transit the network and ultimately impact downstream customers.

This concept is realized through dynamic mitigation bandwidth licensing – a new economic model which reflects the use of highly scalable DDoS mitigation technology. Modernizing DDoS protection by taking advantage of always-on DDoS mitigation through emerging and proven deployment models – such as dedicated in-line deployment of appliance-based DDoS mitigation at the peering and transit points – is becoming a more common practice with the help of Corero Network Security.

Black Hat OSPF Vulnerabilities: Much Ado About Nothing

Imagine a group of researchers planning to speak at a conference regarding a previously undiscovered vulnerability present in most homes that would allow a thief to rob your home of its valuables with complete ease.  You would probably be interested in hearing what they had to say so you could take the necessary precautions to protect your home.

Now imagine when they presented their findings, they went on to state that it was incredibly easy to do, so long as you left your front door open and also provided them with the security code for any alarm systems.  You would probably find this implausible and simply the proliferation of fear, uncertainty, and doubt.

That’s precisely what happened last week at the well-respected Black Hat security conference in Las Vegas when researchers from the Israel Institute of Technology and Advanced Defense Systems, Ltd. presented their findings of a serious vulnerability present in OSPF.  So serious, in fact, that the researchers stated the only way to properly mitigate the threat, short of fixing the protocol, is to switch to another routing protocol such as RIP or IS-IS.

The researchers went on to present their research of two previously undiscovered methods of poisoning routing information within an OSPF domain such that bogus information could be propagated and used for a multitude of malicious purposes.

The first attack allows for the creation of a remote false adjacency, whereby an unsuspecting victim router goes on to form an adjacency with a phantom router.  Once this adjacency is formed, the attacker can then flood the victim router with bogus Link-State Advertisements (LSAs).

The second attack allows the flooding of what are known as Disguised LSAs, whereby a victim router would accept spoofed LSAs which it believed were generated by another router within the OSPF domain.

While both of these previously unknown attack vectors might raise some eyebrows and create cause for concern, the researchers made some pretty big assumptions regarding the ability to perform such an attack.  The assumptions were that the attacker would first need to insert a router into the network, and once this was accomplished, would then somehow need to obtain the MD5 authentication key that is widely used to secure the communications channel between OSPF neighbors.

Without a doubt, there is some merit to what these researchers have found, and as a protocol junkie I am always interested to learn the inner workings of these protocols.  However, what they’ve completely overlooked is the first tenet of information security, which is to ensure the physical environment is properly secured.  In most environments, gaining physical access to the network might prove difficult enough.  Nonetheless, if an attacker were able to insert such a device, the net effect would be that such attacks would be short-lived, as it would be fairly trivial for a network administrator to isolate the offending device and remove it from the network.

Furthermore, and perhaps the biggest barrier to entry, they’ve failed to recognize how unlikely it is that someone could obtain the authentication key.  While MD5 is known to have some serious deficiencies and to be vulnerable to collision attacks, it still represents a pretty high bar for the would-be attacker.  And various vendors are already rolling out HMAC-SHA for the authentication function within OSPFv2.
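
For readers who want to check where that bar sits in their own networks, here is a minimal sketch of enabling authentication on an OSPF interface, assuming a Junos-style router (the interface name and key are purely illustrative):

[edit protocols ospf area 0.0.0.0]
interface ge-0/0/0.0 {
    authentication {
        md5 1 key "use-a-long-random-key";
    }
}

Where HMAC-SHA is supported, the authentication type is swapped in a similar fashion, though the exact configuration varies by vendor and release.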

In addition, the researchers admitted that in their testing of the first exploit, the victim router tore down the adjacency to the phantom router after 125 seconds because the victim router did not receive the proper acknowledgement packets from the phantom router.  Once again, the net effect is that if the attackers were able to meet the first two assumptions, their attack would still be relatively short-lived.

While OSPF is widely used throughout the Internet as a whole, you probably have better odds of winning the lottery than of actually experiencing an attack such as these researchers have demonstrated.  The researchers presented some pretty bold claims, but at the end of the day didn’t really present anything that wasn’t already widely known about the vulnerabilities inherent within the OSPF protocol, given the assumptions the researchers made.  Those assumptions rest on two barriers to entry, which doesn’t make the attack impossible, but it is nonetheless incredibly difficult and highly implausible.  In my experience, attackers tend to move on to far easier targets.

There are a lot of things to worry about when it comes to protecting your corporate assets, but in my humble opinion, this is not one of them.  So rather than ripping out OSPF as these researchers would have you believe, you might perhaps be better served by taking a second look at your environment and ensuring the physical premises are properly secured.

Juniper SRX Tips :: Altering Default-Deny Behavior

In our previous article, we looked at using apply-groups to alter all the security policies uniformly on an SRX device such that they would all have an implicit logging statement. And while this is fine for all existing policies, it doesn’t log traffic which doesn’t match any explicitly defined security policy.

The reason for this is that in Junos, traffic which doesn’t match an explicitly defined security policy matches against the default-deny policy.  However, given that the default-deny policy is implicitly defined, apply-group configurations are of little benefit, as apply-groups can only be inherited by those elements which have been explicitly defined.

Often in these cases, administrators will simply choose to create their own deny policies with the desired options and place this deny policy as the last policy for traffic going from one zone to another. However, in instances where there are many zones, it might prove too cumbersome and time consuming to manually configure this to accommodate all zones.

Clearly it would be more beneficial to have something akin to the Global Zone in ScreenOS which can be used to match on all traffic which doesn’t match against any of the explicitly defined security policies.  However, at the time of this writing, Global Zone functionality doesn’t exist in Junos.

The good news is that we can use the power of apply-groups once again to our benefit, this time to create an explicitly defined deny policy which will be inherited at the tail-end of all security policies defined within our configuration. Note that this will encompass both Inter-zone as well as Intra-zone traffic.

For this example, let’s assume that we want to log everything that would normally hit the default-deny policy. Let’s start by taking a look at our baseline configuration:

root@ce-1# show security policies
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address Users-subnet;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
}

Here you can see we have a policy allowing all traffic outbound from the Users-subnet in the Trust zone towards the Untrust zone, and another policy allowing inbound HTTP traffic from the Untrust zone towards the Web Server in the Trust zone.  Now, in order to change the default-deny behavior and add additional options, we will use an apply-group to inherit a new policy at the tail-end of all previously defined policies, as follows:

groups {
    default-log {
        security {
            policies {
                from-zone <*> to-zone <*> {
                    policy log-all-else {
                        match {
                            source-address any;
                            destination-address any;
                            application any;
                        }
                        then {
                            deny;
                            log {
                                session-init;
                            }
                        }
                    }
                }
            }
        }
    }
}

Finally, let’s apply our apply-group at the [security policies] stanza within our configuration:

root@ce-1# set security policies apply-groups default-log

Now that we’ve completed the configuration, let’s examine the results of the application of our apply-group by taking a look at our security policies, this time by displaying the inherited configuration:

root@ce-1# show security policies | display inheritance
apply-groups default-log
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address Users-subnet;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
    ##
    ## 'log-all-else' was inherited from group 'default-log'
    ##
    policy log-all-else {
        ##
        ## 'match' was inherited from group 'default-log'
        ##
        match {
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            source-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            destination-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ## Warning: application or application-set must be defined
            ##
            application any;
        }
        ##
        ## 'then' was inherited from group 'default-log'
        ##
        then {
            ##
            ## 'deny' was inherited from group 'default-log'
            ##
            deny;
            ##
            ## 'log' was inherited from group 'default-log'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'default-log'
                ##
                session-init;
            }
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
    ##
    ## 'log-all-else' was inherited from group 'default-log'
    ##
    policy log-all-else {
        ##
        ## 'match' was inherited from group 'default-log'
        ##
        match {
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            source-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            destination-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ## Warning: application or application-set must be defined
            ##
            application any;
        }
        ##
        ## 'then' was inherited from group 'default-log'
        ##
        then {
            ##
            ## 'deny' was inherited from group 'default-log'
            ##
            deny;
            ##
            ## 'log' was inherited from group 'default-log'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'default-log'
                ##
                session-init;
            }
        }
    }
}

Once again, with just a couple of lines of code we can streamline the configuration to a large extent, in this case creating an explicitly defined deny policy which logs all traffic that would otherwise be silently discarded.  And best of all, we can do so without having to resort to manual configuration of each and every zone pair.

In small installations this technique might be of little benefit, but in larger implementations consisting of dozens of zones with a combination of inter-zone, intra-zone, and bidirectional security policies, the benefit of such an approach cannot be overstated.  Not only will this ease the configuration burden, but it will ensure that all traffic which doesn’t match any of the existing security policies is handled in a consistent manner.  Of course, as with previous examples, if there are certain zone pairs that we don’t want to inherit this new default-deny behavior, we can simply utilize the apply-groups-except statement for each of those respective policies.
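
As a rough sketch of what that exclusion might look like (using the zone names from the example above), apply-groups-except is applied at the hierarchy level where the inheritance should be suppressed:

[edit security policies]
from-zone Trust to-zone Untrust {
    apply-groups-except default-log;
}

With this in place, the Trust-to-Untrust context no longer inherits the log-all-else policy, while every other zone pair continues to receive it.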

In our next article we will examine changing the built-in Junos application defaults so that we can customize timers and other parameters.

Juniper SRX Tips :: Uniform Security Policy Modification

Often there are instances where we want to affect all security policies configured on an SRX device.  For example, let’s say that we have thousands of policies configured on our firewall, and we want to enable logging for every single policy.  Obviously this would take some time if we were to do this manually on each and every individual policy, so an easier way is desired.

In ScreenOS we have the concept of a Global zone which acts as a container encompassing all zones, but to date, Junos does not support similar functionality on the SRX. Furthermore, the Global zone doesn’t affect existing policies, but rather is a way to apply a consistent policy to all Inter-zone and Intra-zone traffic that doesn’t match any of the existing policies.

However, despite all of this, there is in fact a methodology we can use to uniformly modify all of the existing security policies on our box, in a manner that is actually much more powerful than what is accomplished in ScreenOS with the Global zone.

Let’s take a look.  First, let’s say we have some policies that we would like to enable logging on:

root@ce-1# show security policies
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address any;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
}

Here you can see we have a policy allowing all traffic outbound from Trust to Untrust, and another policy allowing inbound HTTP traffic from the Untrust zone towards the Web Server in the Trust zone.  Now, let’s enable logging for all of our policies by using an apply-group and matching on all policies from any zone to any other zone.  Note that this will encompass both Inter-zone as well as Intra-zone traffic:

groups {
    global-logging {
        security {
            policies {
                from-zone <*> to-zone <*> {
                    policy <*> {
                        then {
                            log {
                                session-init;
                            }
                        }
                    }
                }
            }
        }
    }
}

Finally, let’s apply our apply-group at the [security policies] stanza within our configuration:

root@ce-1# set security policies apply-groups global-logging

Now that we’ve completed the configuration, let’s examine the results of the application of our apply-group by taking a look at our security policies, this time by displaying the inherited configuration:

root@ce-1# show security policies | display inheritance
apply-groups global-logging
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address any;
            destination-address any;
            application any;
        }
        then {
            permit;
            ##
            ## 'log' was inherited from group 'global-logging'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'global-logging'
                ##
                session-init;
            }
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
            ##
            ## 'log' was inherited from group 'global-logging'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'global-logging'
                ##
                session-init;
            }
        }
    }
}

As you can see, with a couple of lines of code we can alter all of the existing policies on our device without having to resort to manual configuration of each and every one. This type of functionality is perfect when we want a singular set of configuration elements to apply to all of our policies uniformly.  On the other hand, if there are certain policies that we don’t want to inherit these settings, we can simply utilize the apply-groups-except statement for each of those respective policies, as shown in the example below.
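
As a hedged illustration (zone names taken from the example above), excluding a single zone pair from the inherited logging could be done with one statement:

root@ce-1# set security policies from-zone Untrust to-zone Trust apply-groups-except global-logging

After a commit, the allow-web policy would no longer show the inherited log statement under “| display inheritance”, while the Trust-to-Untrust policy continues to inherit it.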

In our next article we will examine how to change the default-deny behavior on the SRX to also include logging of denied packets.

ShortestPathFirst Security Articles Now Featured on Infosec Island

I am very pleased to announce that several of my security articles have been published on Infosec Island.  Infosec Island is an online community focused on the infosec professional, incorporating elements of community discussion and writings from various security practitioners throughout the industry.  It is truly an honor to have been invited to participate and contribute in this forum alongside many well-respected writers and colleagues.

The following articles are now available:

The Misconceptions of Sidejacking with Firesheep

Reality Check: Traditional Perimeter Security is Dead

Man in the Middle (MITM) Attacks Explained 

Several more articles are in the works, and in addition I am working on a very special analysis of the XerXes attack tool used in the attacks against Wikileaks, which will be released sometime next week.

New Cyberspace Bill Proposed to Combat DDoS and Other Attacks

Responding to the firestorm of attacks being launched against Visa, Mastercard, Paypal, and other major institutions, various members of the US Government continue to press for dramatic legislation that would place responsibility for cyberspace policy squarely within the White House.

All of this started with the infamous “Cablegate” incident on November 28th, 2010 when Wikileaks began releasing a large number of private communiqués belonging to members of the US State Department.

A hacktivist known as the Jester launched an application layer attack successfully targeting Wikileaks and bringing it down indefinitely. In addition, Amazon, Visa, Mastercard and several other organizations began to sever relationships with Wikileaks, leaving it few options for successfully continuing operations. Retaliation ensued, and a group known as Anonymous Operations launched their own DDoS attacks against these companies in retribution for supporting the censorship of Wikileaks.

In response to this recent spate of cyber attacks, Senator Tom Carper from Delaware issued a press release calling for more protections for people and companies operating on the Internet. Carper, a key author of the legislation, along with Senators Joe Lieberman of Connecticut and Susan Collins of Maine, is calling for the government and the private sector to begin working together more closely to address the growing threats faced by all organizations conducting business online.

PRESS RELEASE:

WASHINGTON – Today, Sen. Tom Carper (D-Del.) released the following statement in response to the retaliatory cyber attacks on the websites of select companies and organizations recently involved with Wikileaks, including MasterCard and Visa:

“Today’s events, once again, underscore the necessity for more robust cybersecurity efforts in the United States. Time and time again, hackers have demonstrated their ruthless yet effective techniques to attack critical cyber networks, and today they used those sophisticated techniques to bring down two financial giants in MasterCard and Visa. Whether it’s cyber crime or cyber terrorism, clearly the United States needs effective leadership from the federal government to successfully combat these kinds of attacks and mitigate the damage. Legislation I’ve authored along with Senators Joe Lieberman (ID-Conn.) and Susan Collins (R-Maine), Protecting Cyberspace as a National Asset Act of 2010, does just that. This bipartisan bill is a vital tool that America needs to better protect cyber space. It encourages the government and the private sector to work together to address this growing threat and provides the tools and resources for America to be successful in this critical effort.”

The Protecting Cyberspace as a National Asset Act of 2010 would create an Office of Cyber Policy in the White House with a director accountable to the public who would lead all federal cyberspace efforts and devise national cyberspace strategy. A National Center for Cybersecurity and Communications within the Department of Homeland Security, also led by a director accountable to the public, would enforce cybersecurity policies throughout the government and the private sector. The bill would also establish a public/private partnership to set national cyber security priorities and improve national cyber security defenses.

Sources: WGMD, Infosec Island

Man in the Middle (MITM) Attacks Explained: ARP Poisoning

It’s been over 3 weeks since Firesheep was released, and yet there still seem to be many misconceptions about this particular vulnerability.  The most prevalent of these misconceptions is that HTTP Session Hijacking, also known as “sidejacking”, is something limited to wireless networks only.  And this belief is not limited to just session hijacking attacks.  Somewhere along the way a myth was propagated that wired switched networks are somehow impervious to these and other similar types of attacks because of the use of separate collision domains and the inability of an attacker to have unfettered access to the Layer 2 medium.  As I mentioned in my previous article on the Misconceptions About Sidejacking with Firesheep, attacks like these are not relegated strictly to wireless networks; in fact there are many so-called Man-in-the-Middle (MITM) attacks which can be performed on a switched wired network to compromise the imaginary security of a Layer 2 collision domain.

Today we will kick off a series of articles in which we will discuss the different forms of MITM attacks.  MITM attacks come in many forms and essentially allow an attacker to act as a proxy between the victim and any host the victim has established connections with.  It is a form of active eavesdropping in which the attacker is controlling the conversation unbeknownst to the victim.

In this first part, we will discuss and outline a method whereby the Layer 2 network can be compromised, giving an attacker the ability to circumvent the limited protections of a collision domain and thereby observe traffic for which they would normally not have access to.  This is accomplished through an attack known as an ARP Poisoning attack, which is also commonly referred to as an ARP Spoofing attack.

Recall that in switched networks, unique collision domains are created between each host and the switch.  This is unlike a hub, which utilizes a single collision domain for all connected ports.  The primary reason for this behavior in switched networks is to reduce unintended collisions on the medium; however, it has an added benefit in that it makes the network more secure from prying eyes.  In a switched network, traffic from one host to another is normally inspected by the switch, and the traffic is only forwarded out the port associated with the destination MAC address specified in the Ethernet header.  An obvious exception to this behavior is broadcast traffic, which is normally flooded to all ports in the broadcast domain (this normally means all ports on the switch, or if VLANs are being used, all ports in the same VLAN).  In unicast conversations however, sniffing traffic is much more difficult because the traffic is directed towards the appropriate destination and is not seen by hosts which are not part of the conversation.

Now let’s take a look at how the Layer 2 network can be subjugated via an ARP Poisoning attack.  An ARP Poisoning attack can be used to subjugate traffic staying local to the LAN as well as traffic that is going to a remote network.  In this scenario, we will describe traffic that is going off-net, towards some destination off the local network such as an Internet destination.  Under normal circumstances, a host comes online and attempts to associate a Layer 2 MAC address with the Layer 3 address of the default gateway.  The ARP Request is flooded to all ports on the network so that all devices have an opportunity to inspect the ARP Request to determine if their Layer 3 address is the one which is being requested.  In a normal scenario, the device which owns the Layer 3 address will respond with an ARP Reply and indicate the Layer 2 MAC address to associate with that Layer 3 address.

In the example below, we see the host 192.168.1.106 coming online and issuing an ARP Request for the MAC Address of the default gateway 192.168.1.1.  This ARP Request is broadcast to all hosts on the network.  In this example, the network has not been compromised yet, so the proper ARP Response is issued by the router and sent towards the requesting host.  Through this mechanism, the requesting host now associates the Layer 2 MAC Address 00:11:45:02:0A:68 with the Layer 3 IP Address 192.168.1.1.

[Figure: ARP_Spoofing1 – a normal ARP Request/Reply exchange between the host and the default gateway]
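
For comparison with the poisoned captures shown later in this article, a normal exchange on the wire would look roughly like the following (illustrative tcpdump-style output, not an actual capture from this lab):

00:39:58.120034 00:21:6a:6c:9f:ee > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), IPv4 (len 4), Request who-has 192.168.1.1 tell 192.168.1.106

00:39:58.120501 00:11:45:02:0a:68 > 00:21:6a:6c:9f:ee, ethertype ARP (0x0806), IPv4 (len 4), Reply 192.168.1.1 is-at 00:11:45:02:0a:68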

Now that we’ve seen how ARP works in a normal scenario, let’s take a quick look at how a Layer 2 switched network could be compromised in order to force all off-net traffic through an intermediary device – in this case, the attacker’s machine.  By using a tool such as Ettercap, we can easily spoof these ARP messages across the local network and fool unsuspecting devices into routing traffic towards the attacker’s machine.

First, we’ll spoof our own MAC address before launching the attack, because the MAC address is really the only fingerprint which might otherwise indicate the source of the attack; a savvy attacker would therefore likely spoof it to evade detection.  With the following command, we spoof our MAC address to 00:de:ad:be:ef:01:

BackTrackVM:~ sfouant$ sudo ifconfig en1 ether 00:de:ad:be:ef:01

Now that we’ve spoofed our MAC address, we can move on to the second phase of this attack using the ARP Poisoning tool Ettercap.  The following command will launch an ARP poisoning attack on the en1 interface and essentially poison all hosts on the local network by informing them that the gateway address 192.168.1.1 should now be associated with the MAC address of the attacking host (00:de:ad:be:ef:01).  Please note, a detailed analysis of Ettercap is beyond the scope of this article and interested readers are encouraged to read Irongeek’s excellent tutorial on sniffing.

BackTrackVM:~ sfouant$ sudo ettercap -i en1 -T -q -M ARP /192.168.1.1/ //
Password:

ettercap NG-0.7.3 copyright 2001-2004 ALoR & NaGA

Listening on en1... (Ethernet)

   en1 -> 00:DE:AD:BE:EF:01      192.168.1.32       255.255.255.0

SSL dissection needs a valid 'redir_command_on' script in the etter.conf file
Privileges dropped to UID 65534 GID 65534...

   0 plugins (disabled by configure...)
   39 protocol dissectors
   53 ports monitored
 7587 mac vendor fingerprint
 1698 tcp OS fingerprint
 2183 known services

Randomizing 255 hosts for scanning...
Scanning the whole netmask for 255 hosts...
* |==================================================>| 100.00 %

37 hosts added to the hosts list...

ARP poisoning victims:

 GROUP 1 : 192.168.1.1 00:11:45:02:0A:68

 GROUP 2 : ANY (all the hosts in the list)

Starting Unified sniffing...

Text only Interface activated...
Hit 'h' for inline help

Essentially, what we’ve done here is to use Ettercap to send out gratuitous ARPs to all hosts on the network.  Notice that in order for us to observe the full conversation, we must poison not only the victim, but also the router as well.  In the diagram below, you can see that the attacker has sent an ARP Reply to both the router as well as to the victim.  In the case of the ARP Reply to the router, we are telling the router that traffic destined towards the victim should now be sent to the attacker.  Similarly, in the case of the ARP Reply to the victim, we are telling the victim that traffic destined towards the router should likewise be sent to the attacker as well.

[Figure: ARP_Spoofing2 – the attacker poisons the ARP caches of both the victim and the router]

Here are the packet captures from the wire so we can see exactly what is taking place.  We see the attacker’s machine sending an ARP Reply to poison the ARP cache of both the victim’s machine as well as the router.

00:40:23.486347 00:de:ad:be:ef:01 > 00:21:6a:6c:9f:ee, ethertype ARP (0x0806), IPv4 (len 4), Reply 192.168.1.1 is-at 00:de:ad:be:ef:01

00:40:23.496628 00:de:ad:be:ef:01 > 00:11:45:02:0a:68, ethertype ARP (0x0806), IPv4 (len 4), Reply 192.168.1.106 is-at 00:de:ad:be:ef:01

And that’s pretty much all there is to it… once the ARP caches of the unsuspecting victims have been poisoned, the attacker has free rein to sniff any non-encrypted traffic to or from those victim machines, in a manner no different than putting a physical tap on the wire.  Attackers may use ARP Poisoning as part of a larger attack; for example, Firesheep could now be run to perform HTTP Session Hijacking when victims visit unsecured websites such as Facebook or Twitter.

The interesting thing to note is that the attacker’s machine has to keep track of a few details in order to make sure it can reach devices on the local network via their actual MAC addresses.  This is required because the attacker needs to proxy the communication between the victim and the router.  If the attacker’s machine were itself to be poisoned, the attack would not be successful.  Therefore it is imperative that the ARP cache remains intact on the attacker’s machine.  Here we see the actual ARP entries stored on the attacker’s machine.

BackTrackVM:~ sfouant$ netstat -rn | grep default
default 192.168.1.1 UGSc 32 139815 en1

BackTrackVM:~ sfouant$ arp 192.168.1.1
? (192.168.1.1) at 0:11:45:2:a:68 on en1 ifscope [ethernet]

BackTrackVM:~ sfouant$ arp 192.168.1.106
? (192.168.1.106) at 0:21:6a:6c:9f:ee on en1 ifscope [ethernet]

Now that we’ve illustrated how ARP Poisoning works and demonstrated its usage, you are probably interested to know how we can prevent this type of behavior on a local area network. While there are no foolproof mechanisms in existence today which can completely eliminate this threat, there are several things which can be done to monitor the network and limit what an attacker can do.

The first of these is to use some type of passive monitoring on the network, for example, something along the lines of arpwatch.  Arpwatch is a tool that monitors ARP traffic on a network and generates a log of observed IP to MAC address pairings.  This could be used after the fact to do forensic analysis, or it could be coupled with real-time triggered alerts in the event that something important like the supposed default gateway is observed sending out a new MAC address pairing.
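
As a rough illustration (the interface name and log location are assumptions and will vary by system), running arpwatch against the monitored segment and keeping an eye on syslog is typically all it takes:

$ sudo arpwatch -i en1
$ tail -f /var/log/messages | grep arpwatch
arpwatch: changed ethernet address 192.168.1.1 0:de:ad:be:ef:1 (0:11:45:2:a:68)

A “changed ethernet address” or “flip flop” message for the default gateway, as shown above, is exactly the sort of event worth alerting on in real time.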

Static ARP entries are another option but in practice this typically proves much too cumbersome to be effective, especially on large network segments with mobile users.
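
For completeness, pinning the gateway entry on an individual host looks something like the following (addresses taken from the example above; the exact syntax varies slightly between operating systems):

$ sudo arp -s 192.168.1.1 00:11:45:02:0a:68

Because the entry is static, subsequent gratuitous ARP replies claiming a different MAC address for 192.168.1.1 will not overwrite it on that host – which is precisely why this approach works, and also why it scales so poorly.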

Different switch vendors have also incorporated different countermeasures and methods to protect against this type of behavior on their products.  For example, Juniper Networks EX Series Ethernet Switches have a mechanism called Dynamic ARP Inspection, or DAI for short.  DAI tries to prevent ARP spoofing by intercepting ARP packets on untrusted ports and validating them against a DHCP snooping database.  This means that in order to be effective, you must employ DHCP for address assignment in your network.  That being said, often the locations where you want to prevent ARP Poisoning are going to be precisely those locations that employ the use of DHCP, so in practice this is not much of a negating factor.

DAI will check to see if the source MAC address of an ARP packet matches a valid entry in the DHCP snooping database, and if no entry exists, the packet is dropped.  What this means is that a host must get an address assigned via DHCP before it has any opportunity to send any type of ARP packet onto the network.  It’s not foolproof but it is definitely a better option than simply monitoring the network.

[edit ethernet-switching-options]
sfouant@switch# show
secure-access-port {
    interface interface-name {
        dhcp-trusted;
    }
    vlan vlan-name {
        arp-inspection;
        examine-dhcp;
    }
}

Juniper also has a mechanism known as MAC limiting which prevents MAC spoofing by limiting the number of MAC addresses learned on a given access port.  It can also be configured with explicitly defined allowed MAC addresses for a given access port.  As we can see in the configuration below, we can define the specific MAC addresses allowed on a given interface using the ‘allowed-mac’ statement, or we can limit the number of learned addresses on a particular port through the use of the ‘mac-limit’ statement.  We can also choose to generate an error log, drop the offending packets, or shut the port down when there is a violation of our MAC limiting policy.

[edit ethernet-switching-options]
sfouant@switch# show
secure-access-port {
    interface ge-0/0/2.0 {
        allowed-mac [ 00:1b:c0:5e:53:a0 00:1b:c0:5e:53:a4 ];
    }
    interface ge-0/0/3.0 {
        mac-limit 1 action log;
    }
    interface ge-0/0/4.0 {
        mac-limit 2 action drop;
    }
    interface ge-0/0/5.0 {
        mac-limit 3 action shutdown;
    }
}

None of the above methods are completely foolproof but can certainly raise the bar for an attacker wanting to cause havoc on the local network.  One development which shows promise is a relatively new standard known as IEEE 802.1AE (also known as MACsec) which provides source authentication, data integrity, and encryption of devices on a Layer 2 network.  Not only would this encrypt sensitive data on the so-called “last-mile”, it will also prevent various man-in-the-middle attacks as well as passive wiretapping attempts.  Although MACsec does show promise, adoption by the vendor community has been slow and there aren’t any practical implementations of the technology that are immediately available for widespread deployment.

All in all, Layer 2 switched networks are not nearly as secure as many seem to think, and subjugating the network can be accomplished relatively easily by anyone with even the most basic of computer skills.  Ettercap is one such tool that easily allows a Layer 2 switched network to be compromised by poisoning the ARP cache of unsuspecting victims.  My hope is that this article might help raise awareness of the issues inherent in Layer 2 switched networks, and illustrate just how susceptible they are to compromise.  Please note that I do not condone attacks of this nature and have only written this article to raise awareness.

The Misconceptions of Sidejacking with Firesheep

Unless you’ve been hiding under a rock for the past few days, you are probably well aware of the recent activity around a new Firefox extension developed by a pair of researchers that brings the issue of session hijacking front and center.  The duo behind this extension, Eric Butler and Ian “craSH” Gallagher, developed the software in order to demonstrate the vulnerabilities inherent in many web sites that don’t fully implement encryption.  The browser extension, dubbed “Firesheep”, essentially enables an attacker to grab other people’s credentials and use them to gain access to various web sites.

As Butler explains on his blog, “When logging into a Website you usually start by submitting your username and password.  The server then checks to see if an account matching this information exists and if so, replies back to you with a “cookie” which is used by your browser for all subsequent requests”.

While it is very common for many websites to encrypt your initial logon and protect your password, that is typically where the protection stops.  In other words, they’ll encrypt the initial login, but then leave the rest of the traffic unencrypted.  Since the rest of the communication is unencrypted, the user’s cookie is left vulnerable.  Through a technique known as HTTP session hijacking (also known as “sidejacking”), an attacker gains access to the user’s cookie and can then access the resources of that website while masquerading as that user.
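
As a purely illustrative example (the site, cookie name, and value are made up), the vulnerable piece is nothing more than a header like this travelling in cleartext with every request, which an eavesdropper can copy into their own browser:

GET /home HTTP/1.1
Host: www.example.com
Cookie: session_id=8f2a9c41d7b3; logged_in=true

The server has no way of knowing whether that Cookie header came from the legitimate user or from someone replaying it, which is exactly what sidejacking exploits.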

What Firesheep does is essentially snoop the traffic on the network for these important cookies and allow an attacker to log into sites such as Twitter and Facebook with the simple click of a mouse.  Once they have the cookie, they basically have full control over the account and are pretty much free to do whatever they want.

Now there seems to be a great deal of misinformation spreading pretty virulently about this particular issue, so I’d like to take a moment to set the record straight so that everyone will understand the true nature of this vulnerability.

First off, this is not an issue in which only wireless networks are vulnerable.  Surprisingly enough, there are some pretty respectable names in the industry propagating this misinformation – everybody from the infamous Bruce Schneier to Brian Krebs, and even Eric Butler himself.  Perhaps the reason for these misconceptions is that it is just so much easier to exploit this vulnerability on wireless networks, or perhaps it’s that many people are simply not aware that switched networks can be compromised as well.

Whatever the reason may be, the fact of the matter is that this vulnerability can be exploited on both wired AND wireless networks.  On wired networks it does take quite a bit of extra work, which might explain the widespread belief that this is something only prevalent on wireless networks.  Wired networks can be subjugated through a technique known as ‘ARP Spoofing’ which can be done with a wide variety of tools such as Dsniff or Ettercap.

Second, this particular vulnerability doesn’t just affect OPEN wireless networks.  In fact, any wireless network in which the wireless encryption key has been compromised can fall victim to this attack.  If you think that WEP encryption offers any protection against this, you would be wrong.  Once an attacker has cracked the WEP key, your HTTP sessions can still be compromised.

Finally, there seems to be an impression among a majority of people that this is some type of new exploit.  In fact, HTTP Session Hijacking is not a new attack vector and has been known to exist since 2004.  The authors have merely streamlined the process by incorporating it into a simple browser plug-in, which makes it simple for anyone with even the most basic networking knowledge to accomplish.  Without tools like this, an attacker would need to sniff the traffic, parse through it to grab the HTTP session keys, and understand how to bring those session keys into the web browser to complete the exploit.

So now that we’ve managed to clear the air and hopefully eliminate some of the false information that is circulating, what can you do to secure your traffic?  The only truly effective fix for these problems is for web sites to implement full end-to-end encryption using HTTPS, but that doesn’t seem to be happening anytime soon.  In the meantime, if you are a Firefox user, a fantastic plug-in called HTTPS Everywhere will rewrite all your requests to HTTPS, thereby encrypting your communications (ahem… you are using Firefox, aren’t you?).  If for some reason you just can’t seem to get away from using Internet Explorer, you can still encrypt your communications using an application like Tor, which encrypts the data between your computer and any one of a number of Tor exit nodes.

By the way, in case you are interested, Firesheep is named after the famous Wall of Sheep at Defcon, which displays selected details of unencrypted logins and other sessions over the event’s Wi-Fi network from people who, by attending Defcon, should know better than to ever send anything unencrypted over a public Wi-Fi network.