ShortestPathFirst Network Architecture and Design, and Information Security Best Practices


Carrier Grade NAT and the DoS Consequences

Written by Stefan Fouant

Republished from Corero DDoS Blog:

The Internet has a very long history of utilizing mechanisms that breathe new life into older technologies, stretching them out so that newer technologies may be delayed or obviated altogether. IPv4 addressing, and the well-known depletion associated with it, is one such area that has seen a plethora of mechanisms employed to give it more shelf life.

In the early 90s, the IETF gave us Classless Inter-Domain Routing (CIDR), which dramatically slowed the growth of global Internet routing tables and delayed the inevitable IPv4 address depletion. Later came DHCP, another protocol which assisted via the use of short term allocation of addresses which would be given back to the provider's pool after use. In 1996, the IETF was back at it again, creating RFC 1918 private addressing, so that networks could utilize private addresses that didn't come from the global pool. Utilizing private address space gave network operators a much larger pool to use internally than would otherwise have been available if utilizing globally assigned address space -- but if they wanted to connect to the global Internet, they needed something to translate those addresses. This is what necessitated the development of Network Address Translation (NAT).

NAT worked very well for many, many years and slowed the address depletion a great deal. But in order to perform that translation, you still needed to acquire at least one globally addressable IP. As such, this only served to slow down depletion, not prevent it - carriers were still required to provide that globally addressable IP from their own address space. With the explosive growth of the Internet of Things, carriers likewise began to run out of address space to allocate.

NAT came to the rescue again. Carriers took notice of the success of NAT in enterprise environments and wanted to do the same within their own networks; after all, if it worked for customers, it should likewise work for carriers. This prompted the IETF to develop Carrier Grade NAT (CGN), also known as Large Scale NAT (LSN). CGN aims to provide a similar solution for carriers by obviating the need to allocate publicly available address space to their customers. By deploying CGN, carriers could oversubscribe their pool of global IPv4 addresses while still providing seamless connectivity, i.e. no truck-roll.

So while the world is spared from address depletion yet again, the use of CGN technologies opens a new can of worms for carriers. No longer does one globally routable IP represent a single enterprise or customer - due to the huge oversubscription which is afforded through CGN, an IP can service potentially thousands of customers.

This brings us to the crossroads of the Denial of Service (DoS) problem. In the past, when a single global IP represented only one customer network, there was typically no collateral damage to other customer networks. If the DoS was large enough to impact the carrier's network, or if there was collateral damage, the carrier would simply blackhole that customer IP to prevent it from transiting their network. However, with CGN deployments, and potentially thousands of customers being represented by a single IP, blackhole routing is no longer an option.

CGN deployments are vulnerable to DoS in a few different ways. The main issue with CGN is that it must maintain a stateful record of the translations between external addresses and ports and internal addresses and ports. A device which has to maintain these stateful tables is vulnerable to any type of DoS activity that may exhaust the stateful resources. As such, a CGN device may be impacted in both the inbound and the outbound direction. An outbound attack is usually the result of malware on a customer's machine sending a large amount of traffic towards the Internet and consuming the state tables in the CGN. Inbound attacks usually target a particular customer and take the form of a DoS attack or a Distributed Denial of Service (DDoS) attack. Regardless of the direction of the attack, a large share of the CGN state table is consumed, which reduces overall port availability. Left unregulated, these attacks can easily impact not only the intended victim, but potentially the thousands of other customers being serviced by that CGN.
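To make the exhaustion risk concrete, here is a back-of-envelope sketch; the port counts, oversubscription ratio, and session numbers are illustrative assumptions, not vendor figures:

```python
# Illustrative sketch (not any vendor's implementation) of why a CGN's shared
# port pool is a DoS target: one public IPv4 address offers roughly 64k usable
# ports, and every active translation consumes one of them.

EPHEMERAL_PORTS = 65535 - 1024          # usable ports behind one public IP
CUSTOMERS_PER_IP = 1000                  # assumed CGN oversubscription ratio

# Fair share if ports were divided evenly across customers:
fair_share = EPHEMERAL_PORTS // CUSTOMERS_PER_IP

# A single infected host opening translations as fast as it can:
malware_sessions = 50000

remaining = max(0, EPHEMERAL_PORTS - malware_sessions)
customers_still_served = remaining // fair_share

print(f"fair share per customer: {fair_share} ports")        # 64 ports
print(f"ports left after one infected host: {remaining}")    # 14511
print(f"customers still getting a full share: {customers_still_served}")
```

Even with modest assumptions, one runaway host can starve the vast majority of the customers sharing that public IP.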

With the inability to simply blackhole a given IP using edge Access Control Lists (ACLs), carriers must look at other options for protecting their customer base. While some CGN implementations have the ability to limit the amount of ports that are allocated to a single customer, these only work in discrete cases and can be difficult to manage. They also do not protect customers if the CGN device is itself the target of the attack.

The solution to this problem is the use of a purpose-built DDoS mitigation device, or what is more commonly referred to as a "scrubbing" device in IT circles. Dedicated DDoS mitigation devices attempt to enforce that everyone plays nicely by limiting the maximum number of sessions to or from a given customer. This is done through thorough analysis of the traffic in flight, rate-limiting or filtering traffic through sophisticated mitigation mechanisms to ensure fairness of public IP and port availability across all customers. Through the use of dedicated DDoS mitigation devices, CGN devices and their associated customers are protected from service disruptions, while legitimate traffic is still allowed through unencumbered. Lastly, another important aspect of DDoS mitigation devices is that they tend to be "bumps in the wire" -- that is to say, they don't have an IP address assigned to them and as such cannot themselves be the target of an attack.
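As a toy illustration of the per-customer fairness idea described above, the sketch below caps concurrent sessions per customer. The class name and limits are hypothetical; real scrubbing devices use far more sophisticated behavioral analysis than a simple counter.

```python
# Toy sketch of the fairness policy a mitigation device enforces: cap the
# number of concurrent sessions any one customer may hold, so a single
# infected host cannot monopolize the CGN's shared translation table.

from collections import defaultdict

class SessionLimiter:
    def __init__(self, max_sessions_per_customer: int):
        self.limit = max_sessions_per_customer
        self.active = defaultdict(int)   # customer id -> open session count

    def open_session(self, customer: str) -> bool:
        """Admit the session only if the customer is under its quota."""
        if self.active[customer] >= self.limit:
            return False                 # quota exhausted: drop or rate-limit
        self.active[customer] += 1
        return True

    def close_session(self, customer: str) -> None:
        if self.active[customer] > 0:
            self.active[customer] -= 1

limiter = SessionLimiter(max_sessions_per_customer=3)
results = [limiter.open_session("infected-host") for _ in range(5)]
print(results)   # the 4th and 5th attempts are refused
print(limiter.open_session("well-behaved-host"))  # other customers unaffected
```

The key design point is that state is tracked per customer, not per public IP, so one abuser's exhaustion never spills over onto its neighbors behind the same address.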


Is DDoS Mitigation as-a-Service Becoming a De Facto Offering for Providers?

Written by Stefan Fouant

Republished from Corero DDoS Blog:

It’s well known in the industry that DDoS attacks are becoming more frequent and increasingly debilitating, turning DDoS mitigation into a mission critical initiative. From the largest of carriers to small and mid-level enterprises, more and more Internet connected businesses are becoming a target of DDoS attacks. What was once a problem that only a select few dealt with is now becoming a regularly occurring burden faced by network operators.

In my daily engagements with various customers of all shapes and sizes, it’s truly interesting to see how the approach to DDoS mitigation is changing. Much of this is the result of DDoS mitigation services shifting from a “nice to have” technology to a “must-have”, essential in order to maintain business continuity and availability.

When I built DDoS mitigation and detection services for Verizon back in 2004, the intent was to offer value-added, revenue-producing services to subscribers, in an effort to build out our security offerings. For many years, this concept was one that pretty much every provider I worked with was looking into: build a service with the intent of generating new revenue from customers as traditional avenues, such as simple connectivity and bandwidth offerings, were contracting.

However, in the past several months, as I interact with everyone from large-scale carriers to data center hosting providers, I am seeing a common thread starting to emerge: attracting new customers and retaining existing ones is becoming more difficult in the absence of differentiated value. Compounding this issue, customers are starting to expect some of these services as part of their connectivity fees. What I’m seeing is more and more providers investigating the option of offering DDoS mitigation services to their customers by virtue of being connected to them, in an effort to attract them away from other providers with limited service offerings and capabilities.

Could it be that DDoS mitigation services become a standard offering on a provider’s network? Is it feasible that at some point in the future DDoS mitigation will become an inherent capability provided by the service providers?

In order for this approach to become a reality, the economics of the game have to change. Inserting DDoS mitigation elements into the network needs to be reasonably inexpensive in order for carriers and hosting providers to justify the cost. The technology also needs to be simple and as close to automatic as possible, as an inherent service offering will not justify the huge expense and uplift of a team of operations personnel managing the service. Attacks need to be mitigated dynamically and quickly, without manual intervention or the need to pick up a phone to get assistance. And lastly, whatever mechanisms are in place need to ensure a "do no harm" approach, with no collateral damage to good traffic.

At Corero, we believe that we are doing just that: changing not only the economics of the game, but also fundamentally looking at the problem in a different way. Corero enables real-time, algorithmic identification of network anomalies and subsequent mitigation of the attack traffic, eliminating the DDoS challenge before attacks transit the network and ultimately impact downstream customers.

This concept is realized through dynamic mitigation bandwidth licensing, a new economic model built around highly scalable DDoS mitigation technology. Modernizing DDoS protection by taking advantage of always-on mitigation through emerging and proven deployment models, such as dedicated in-line appliance-based mitigation at peering and transit points, is becoming a more common practice with the help of Corero Network Security.


New Cyberspace Bill Proposed to Combat DDoS and Other Attacks

Written by Stefan Fouant

Responding to the firestorm of attacks being launched against Visa, Mastercard, Paypal, and other major institutions, various members of the US Government continue to press for dramatic legislation that would place responsibility for cyberspace policy squarely within the White House.

All of this started with the infamous "Cablegate" incident on November 28th, 2010 when Wikileaks began releasing a large number of private communiqués belonging to members of the US State Department.

A hacktivist known as the Jester launched an application-layer attack successfully targeting Wikileaks and bringing it down indefinitely. In addition, Amazon, Visa, Mastercard, and several other organizations began to sever relationships with Wikileaks, leaving them few options for successfully continuing operations. Retaliation ensued, and a group known as Anonymous Operations launched their own DDoS attacks against these companies in retribution for supporting the censorship of Wikileaks.

In response to this recent spate of cyber attacks, Senator Tom Carper of Delaware issued a press release calling for greater protection for people and companies operating on the Internet. Carper, a key author of the legislation, along with Senators Joe Lieberman of Connecticut and Susan Collins of Maine, is calling for the government and the private sector to begin working together more closely to address the growing threats faced by all organizations conducting business online.


WASHINGTON – Today, Sen. Tom Carper (D-Del.) released the following statement in response to the retaliatory cyber attacks on the websites of select companies and organizations recently involved with Wikileaks, including MasterCard and Visa:

“Today’s events, once again, underscore the necessity for more robust cybersecurity efforts in the United States. Time and time again, hackers have demonstrated their ruthless yet effective techniques to attack critical cyber networks, and today they used those sophisticated techniques to bring down two financial giants in MasterCard and Visa. Whether it’s cyber crime or cyber terrorism, clearly the United States needs effective leadership from the federal government to successfully combat these kinds of attacks and mitigate the damage. Legislation I’ve authored along with Senators Joe Lieberman (ID-Conn.) and Susan Collins (R-Maine), Protecting Cyberspace as a National Asset Act of 2010, does just that. This bipartisan bill is a vital tool that America needs to better protect cyber space. It encourages the government and the private sector to work together to address this growing threat and provides the tools and resources for America to be successful in this critical effort.”

The Protecting Cyberspace as a National Asset Act of 2010 would create an Office of Cyber Policy in the White House with a director accountable to the public who would lead all federal cyberspace efforts and devise national cyberspace strategy. A National Center for Cybersecurity and Communications within the Department of Homeland Security, also led by a director accountable to the public, would enforce cybersecurity policies throughout the government and the private sector. The bill would also establish a public/private partnership to set national cyber security priorities and improve national cyber security defenses.

Sources: WGMD, InfoSecIsland


CyberWarfare – Defending the Electronic Frontier in the 21st Century and Beyond

Written by Stefan Fouant

During the reign of the Roman Empire, it was said that all roads led to Rome.  While these roads facilitated free trade and were essential to the expansive growth of the Roman Empire, they were also a double-edged sword, creating convenient new avenues that the Empire's enemies could easily maneuver against its best interests.

Similar corollaries can be drawn to the Internet, a project initially developed by the United States through the Advanced Research Projects Agency (ARPA, later DARPA) in the late 20th century.  As we continue to move into the 21st century, the Internet is emerging as the new battlefield on the international stage.  As Internet connectivity becomes commoditized and the barrier to entry is further reduced, it becomes easier for potential adversaries of the United States to wield the power of the Internet to launch attacks against US infrastructure and national interests, disrupting the flow of information and leaving destruction in its wake.  It is becoming increasingly possible for our adversaries to cause not only extraordinary economic havoc, but also loss of life, as critical services such as E911 and other emergency services become more dependent upon the Internet.  At the same time, a perfect storm is brewing, because the resources currently allocated to preparing ourselves for this advancing threat are stretched to the limit and largely focused on obsolete technologies.

On August 10th, 2010, in McLean, VA, a series of panelists will discuss this ever-increasing reality and the potential that wars in the future won't be fought so much on the traditional battlefield, but rather electronically, targeting critical components of a national infrastructure.  The panelists will discuss what the US Government has done to secure certain core components of our national infrastructure and what remains to be done, and the session will also serve as a "call to arms" to better secure our national cyber landscape.  The members of this panel have all been intimately involved in the network security industry for many years and are engaged in the implementation of their solutions at the ground level.  Their insights will provide invaluable viewpoints on this very real and emerging threat to the attendees of the presentation.

For more information, please take a look at the Event details at  This event will be hosted by the Capital Technology Management Hub and will be moderated by Stefan Fouant.


An Overview of BGP FlowSpec

Written by Stefan Fouant

I have given this presentation a few times in the last year and was asked to make it available for public consumption. Essentially, this is a brief overview of RFC 5575, entitled “Dissemination of Flow Specification Rules”, written by Danny McPherson, Jared Mauch, and others. This standard had somewhat of a rocky beginning, as there was limited vendor support, but recently it appears to have picked up quite a bit of steam, with Cisco announcing support for the protocol in the very near future. The benefit of BGP FlowSpec is that it allows BGP speakers to use a new BGP NLRI to define flow filter information, which can then be advertised to upstream neighbors via BGP. The primary and immediate motivation of the protocol is to provide intra- and inter-provider distribution of traffic filtering rules to filter DoS and DDoS attacks; however, it can be used for a wide variety of applications in which filtering information must be dynamically distributed throughout a network. I will probably make additional modifications to these slides as the protocol gains a more significant foothold throughout the vendor community and as Service Providers gain more practical deployment experience. As with my other presentations, I will eventually add a voice-over to turn this into a slide-cast.
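As a rough illustration of the concept, the hedged sketch below models a FlowSpec rule as a set of match components (destination prefix, IP protocol, destination port, loosely following RFC 5575's component types 1, 3, and 5) paired with a traffic-rate action. The class and field names are my own, not protocol-defined, and real NLRI encoding is considerably more involved:

```python
# Hedged sketch of the FlowSpec idea: match components plus an action.
# A traffic-rate of 0 (carried in a real deployment as an extended
# community) means "discard matching traffic at the advertising edge".

from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class FlowSpecRule:
    dest_prefix: str      # roughly component type 1 in RFC 5575
    protocol: int         # roughly component type 3 (17 = UDP)
    dest_port: int        # roughly component type 5
    rate_bps: float       # traffic-rate action; 0 means drop

    def matches(self, dst_ip: str, proto: int, dport: int) -> bool:
        return (ip_address(dst_ip) in ip_network(self.dest_prefix)
                and proto == self.protocol
                and dport == self.dest_port)

# A rule a victim's network might advertise upstream to drop a UDP flood:
drop_rule = FlowSpecRule("192.0.2.0/24", protocol=17, dest_port=53, rate_bps=0)

print(drop_rule.matches("192.0.2.10", 17, 53))    # attack traffic matches
print(drop_rule.matches("198.51.100.5", 17, 53))  # other destinations do not
```

The point is that the filter travels with BGP itself, so the drop decision can be pushed to the far edge of an upstream network before the attack consumes the victim's links.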


Hardening DNS Against Reflection Attacks

Written by Stefan Fouant

Generally, the most prevalent types of attacks, and the ones most likely to target a DNS infrastructure, are reflection/amplification attacks, which use a DNS server's resources against other third parties.  Understanding the attack vector employed in most reflection attacks is necessary in order to understand how to harden an environment against these types of attacks.

In a typical DNS reflection attack, attackers send a large number of queries from a spoofed source address.  The address which is spoofed is typically that of the victim.  When the requests are received by the nameserver, all ensuing responses to these queries are directed back towards the spoofed IP of the victim.  The amplification factor comes into play because the attacker will typically query for a record of a large size, typically a root record, which by its very nature is large.  By sending in small queries, on the order of around 90 Bytes, an attacker can typically get a multiplication factor of five times that of the original query.  This allows an attacker to use a smaller number of hosts in the botnet and cause considerably more impact than would otherwise be possible if those devices were sending traffic directly towards the victim.
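The arithmetic behind that multiplication factor is easy to sketch. The bot count and query rate below are illustrative assumptions; the 90-byte query and 5x factor come from the text above:

```python
# Back-of-envelope reflection arithmetic: each bot spends a small spoofed
# query and the victim receives a response roughly five times larger.

QUERY_BYTES = 90           # small spoofed query, per the text
AMPLIFICATION = 5          # rough multiplier for a root-referral response
RESPONSE_BYTES = QUERY_BYTES * AMPLIFICATION   # ~450 bytes at the victim

bots = 1000                # assumed botnet size
qps_per_bot = 100          # assumed query rate per bot

attacker_upstream_bps = bots * qps_per_bot * QUERY_BYTES * 8
victim_inbound_bps = bots * qps_per_bot * RESPONSE_BYTES * 8

print(f"attacker spends : {attacker_upstream_bps / 1e6:.0f} Mbps")  # 72 Mbps
print(f"victim receives : {victim_inbound_bps / 1e6:.0f} Mbps")     # 360 Mbps
```

The reflectors, not the bots, deliver the bulk of the traffic, which is why the same botnet hits several times harder through DNS than it could directly.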

Because the purpose of this attack is to reflect packets back towards the victim, all of the source IPs of the DNS queries contain the same address.  This makes it fairly easy to spot a reflection attack utilizing your infrastructure.  A simple observation of a spike in DNS queries from a singular IP is a clear indication that this is going on.  One would think that these types of attacks could be mitigated fairly easily by implementing simple ACLs on routers to block incoming queries from the spoofed IP, and in fact that is a method commonly used by network administrators.  However, most security experts suggest that the filtering should actually be implemented on the nameserver itself; in fact this has been considered an industry best practice for quite some time now.  The reason is that implementing an ACL on a router is largely a reactive countermeasure which can only be deployed after the fact.  An administrator still needs to identify the target of the attack before filters can be put in place; furthermore, these filters cause a legitimate denial of service whenever that particular victim attempts to query anything for which the nameserver is actually authoritative.  As an alternative to ACLs, the configurations proposed below can eliminate this problem in its entirety.

At the very onset of an investigation into the vulnerabilities of a DNS infrastructure and its potential remedies, one of the first things a network administrator must determine is whether the nameservers allow anyone in the outside world to query for root (.).  Using the example below, one can easily check whether a nameserver responds with a root-referral when queried for root.  If you see something along these lines, you can be fairly certain your nameserver is responding with a root-referral:

/usr/bin/dig . NS @DNS.EXAMPLE.COM

; <<>> DiG 9.2.4 <<>> . NS @DNS.EXAMPLE.COM
; (2 servers found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54368
;; flags: qr rd; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 13

;; QUESTION SECTION:
;. IN NS

;; Query time: 43 msec
;; WHEN: Thu Nov 12 17:01:51 2009
;; MSG SIZE rcvd: 436

Normally, most Internet-facing authoritative nameservers should not respond to recursive third-party queries for root.  If an authoritative nameserver responds to queries for root with a root-referral, attackers will likely use your nameservers as an amplification vector to launch attacks.  It is better to remove the temptation altogether by not allowing these in the first place.  Furthermore, instead of responding with the root-referral, nameservers should be configured to respond with REFUSED, SERVFAIL, or another similar type of message.  The reason for this is simple: a REFUSED message is only on the order of 50 Bytes.  If a countermeasure such as this is not employed, attackers can send in a relatively small spoofed query and get roughly a 512 Byte response.  If we respond with a REFUSED message instead, the amplification factor is on the order of 1:1.  From an efficiency standpoint this provides little if any amplification, and the attackers will simply look for other providers who don't apply such stringent security measures.
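Plugging the numbers from the paragraph above into a quick sketch shows why REFUSED removes the incentive:

```python
# Amplification factors for the two response policies discussed above,
# using the approximate sizes from the text.

QUERY = 90          # bytes in the small spoofed query
REFERRAL = 512      # bytes in a root-referral response
REFUSED = 50        # bytes in a REFUSED response

print(f"root-referral amplification: {REFERRAL / QUERY:.1f}x")  # ~5.7x
print(f"REFUSED amplification      : {REFUSED / QUERY:.1f}x")   # ~0.6x
```

With REFUSED, the reflector actually emits fewer bytes than it receives, so the attacker gains nothing by bouncing traffic off that server.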

NOTE: Of course, if you are in the business of providing recursive DNS service, that is an entirely different story.  In that case, network administrators should take extra precautions by enabling this function strictly on the recursive DNS servers (not the authoritative ones) and combining firewall filters to limit recursive service to only the IP blocks of paying customers.

While we're on the subject, another current best practice in the industry is to apply a similar methodology to requests for records for which a given DNS server is not authoritative.  Some resolvers may respond to these requests by providing a root-referral, and in the worst cases a resolver may actually perform a recursive query on behalf of the original source.  An authoritative server should respond to any non-existent domain request with REFUSED or a similar message, rather than providing a root-referral or performing a recursive query.  In fact, BCP 140 (Preventing Use of Recursive Nameservers in Reflector Attacks, RFC 5358) looked at this problem and concluded that sending REFUSED was the best general guidance that could be given.  While BCP 140 applies to recursive servers, returning REFUSED to queries which are not within the namespace served by authoritative servers is entirely consistent with it.  You can generally find out whether your nameserver allows recursion, or whether it responds with a root-referral, by using a command such as the following:

/usr/bin/dig +recurse @yournameserver_ip

If you see a response which looks like the large root-referral response above, or any response other than REFUSED or SERVFAIL, you should take steps to harden your resolver.  One can also look for an "RA" entry in the "Flags" section of the DNS response, which indicates whether the resolver allows recursion.
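For those inspecting responses by hand, the RA bit lives in the flags field, bytes 2-3 of the 12-byte DNS header defined in RFC 1035. The sketch below decodes it from raw response bytes; the function name is my own:

```python
# Decode the DNS header flags relevant to the checks above: QR is 0x8000,
# RA ("recursion available") is 0x0080, and the low four bits carry the
# RCODE (0 = NOERROR, 5 = REFUSED), per RFC 1035.

def inspect_flags(flag_bytes: bytes) -> dict:
    flags = int.from_bytes(flag_bytes, "big")
    return {
        "is_response": bool(flags & 0x8000),          # QR bit
        "recursion_available": bool(flags & 0x0080),  # RA bit: open resolver?
        "rcode": flags & 0x000F,
    }

# A response with QR, RD, and RA set, RCODE 0: a resolver offering recursion.
print(inspect_flags(bytes([0x81, 0x80])))
# A hardened server answering REFUSED (RCODE 5) with RA clear.
print(inspect_flags(bytes([0x81, 0x05])))
```

Seeing RA set on an Internet-facing authoritative server is exactly the warning sign the paragraph above describes.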

In conclusion, there are several steps which allow administrators to prevent their infrastructure from being used as an unwitting pawn in attacks against third parties.  Firewall filters are effective, but reactive in nature.  Therefore, it is recommended to follow the steps below in order to effectively harden a DNS infrastructure against reflection attacks.

SYNOPSIS - Steps to prevent a DNS infrastructure from being used for reflection/amplification type attacks:

  1. Disable querying for root on authoritative resolvers; return REFUSED instead
  2. Use ACLs to limit queries to recursive servers to paying customers only
  3. Apply BCP 140 or similar rules so that queries for non-existent domains are answered with REFUSED

An excellent whitepaper entitled "Anatomy of Recent DNS Reflector Attacks from the Victim and Reflector Point of View" by Ken Silva, Frank Scalzo and Piet Barber covers this topic in more detail.


Preventing DNS Fragmentation and Large DNS Packet Attacks

Written by Stefan Fouant

Oftentimes, attackers will attempt very rudimentary attacks against DNS resolvers in an attempt to cause a Denial of Service. It is not uncommon to see attackers crafting a DoS attack composed mostly of UDP packets destined to port 53 with invalid payloads and the ‘more-fragments’ bit set. In other cases, the packets may have the ‘more-fragments’ bit unset but arrive at specific lengths, typically larger than the average size of a normal DNS packet.

Many flow analysis tools and IDP products have the ability to look for IP fragmentation misuse based on parameters that the operator may set; these tools are invaluable as an early warning system to alert the network administrator that their infrastructure is under attack.

Insofar as mitigating these types of attacks is concerned, a few simple approaches can be utilized by a network administrator to filter this traffic using simple ACLs or firewall filters on routers or other equipment capable of filtering at Layers 3 and 4. Normally, it is not typical to see DNS queries which are fragmented, therefore the following Juniper firewall filter should effectively filter the fragmented packets:

term DNS-Fragments {
    from {
        destination-prefix-list {
            dns-servers;    /* substitute your own DNS server prefix-list */
        }
        fragment-flags more-fragments;
        destination-port 53;
    }
    then {
        count dns-fragments;
        discard;
    }
}


For packets with the ‘more-fragments’ bit unset, or where all packets within the attack flows share a common packet size (typically large, on the order of 540 bytes or greater), a network administrator can easily filter those on the router as well. Normally we should not expect to see queries of this size, so the following could be used effectively to filter these types of attacks as well.  In this example we are filtering UDP or TCP packets destined to port 53 with a size of either 540 bytes or 1480 bytes:

term DNS-InvalidSize {
    from {
        destination-prefix-list {
            dns-servers;    /* substitute your own DNS server prefix-list */
        }
        packet-length [ 540 1480 ];
        destination-port 53;
    }
    then {
        count dns-InvalidSize;
        discard;
    }
}


NOTE: It is more than likely a network administrator would need to adjust the above packet sizes after analysis of the packet size used in the attack vector using whatever flow reporting or network visibility tools are in use, since it is unlikely an attacker would use the exact same packet sizes listed in the example above.


Danny McPherson talks with Verizon Business about Arbor’s 2007 Worldwide Infrastructure Security Report

Written by Stefan Fouant

Danny McPherson, Chief Security Officer at Arbor Networks, talks with Verizon Business’ Stefan Fouant about the complex set of challenges facing the service provider community.  What are the most pressing security issues facing providers today, and how do they plan to address them? Listen to this podcast to learn more.