ShortestPathFirst Network Architecture and Design, and Information Security Best Practices


Carrier Grade NAT and the DoS Consequences

Written by Stefan Fouant

Republished from the Corero DDoS Blog.

The Internet has a long history of employing mechanisms that breathe new life into older technologies, stretching them out so that newer technologies may be delayed or obviated altogether. IPv4 addressing, and the well-known depletion associated with it, is one such area that has seen a plethora of mechanisms employed to give it more shelf life.

In the early 90s, the IETF gave us Classless Inter-Domain Routing (CIDR), which dramatically slowed the growth of global Internet routing tables and delayed the inevitable IPv4 address depletion. Later came DHCP, another protocol which assisted via short-term allocation of addresses that would be returned to the provider's pool after use. In 1996, the IETF was back at it again, creating RFC 1918 private addressing, so that networks could utilize private addresses that didn't come from the global pool. Utilizing private address space gave network operators a much larger pool to use internally than would otherwise have been available from globally assigned address space -- but if they wanted to connect to the global Internet, they needed something to translate those addresses. This is what necessitated the development of Network Address Translation (NAT).
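The three RFC 1918 blocks (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) are easy to check programmatically; here is a minimal sketch using Python's standard ipaddress module (the function name is mine, purely for illustration):

```python
import ipaddress

# The three private ranges defined by RFC 1918
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls inside one of the RFC 1918 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("192.168.1.10"))  # True: private, must be NATed to reach the Internet
print(is_rfc1918("8.8.8.8"))       # False: globally routable
```

Any host numbered from one of these blocks needs a translator in the path before its packets can transit the global Internet, which is exactly the role NAT fills.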

NAT worked very well for many, many years, and slowed the address depletion a great deal. But in order to perform that translation, you still needed to acquire at least one globally addressable IP. As such, this only served to slow depletion, not prevent it - carriers were still required to provide that globally addressable IP from their own address space. With the explosive growth of the Internet of Things, carriers likewise began to run out of address space to allocate.

NAT came to the rescue again. Carriers took notice of the success of NAT in enterprise environments and wanted to do the same within their own networks; after all, if it worked for their customers, it should likewise work for the carriers. This prompted the IETF to develop Carrier Grade NAT (CGN), also known as Large Scale NAT (LSN). CGN aims to provide a similar solution for carriers by obviating the need to allocate publicly available address space to their customers. By deploying CGN, carriers could oversubscribe their pool of global IPv4 addresses while still providing seamless connectivity, i.e. no truck roll.

So while the world is spared from address depletion yet again, the use of CGN technologies opens a new can of worms for carriers. No longer does one globally routable IP represent a single enterprise or customer - due to the huge oversubscription which is afforded through CGN, an IP can service potentially thousands of customers.
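The scale of that oversubscription is easy to quantify: a single public IPv4 address offers roughly 64,000 usable ports per transport protocol, so the per-subscriber port budget shrinks quickly as more customers share the address. A back-of-the-envelope sketch (the subscriber counts here are illustrative, not drawn from any particular deployment):

```python
# Roughly 64k usable ports per public IP per transport protocol (TCP or UDP),
# skipping the well-known port range
USABLE_PORTS = 65536 - 1024

def ports_per_subscriber(public_ips: int, subscribers: int) -> int:
    """Average external-port budget per subscriber under CGN oversubscription."""
    return (public_ips * USABLE_PORTS) // subscribers

# One public IP shared by 1,000 subscribers leaves ~64 ports each
print(ports_per_subscriber(1, 1000))  # 64
print(ports_per_subscriber(1, 4000))  # 16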

This brings us to the crossroads of the Denial of Service (DoS) problem. In the past, when a single global IP represented only one customer network, there was typically no collateral damage to other customer networks. If a DoS attack was large enough to impact the carrier's network, or if there was collateral damage, the carrier would simply blackhole that customer IP to prevent it from transiting their network. However, with CGN deployments, and potentially thousands of customers being represented by a single IP, blackhole routing is no longer an option.

CGN deployments are vulnerable to DoS in a few different ways. The main issue with CGN is that it must maintain a stateful record of the translations between external addresses and ports and internal addresses and ports. A device which has to maintain these stateful tables is vulnerable to any type of DoS activity that may exhaust the stateful resources. As such, a CGN device may be impacted in both the inbound and the outbound direction. An outbound attack is usually the result of malware on a customer's machine sending a large amount of traffic towards the Internet and consuming the state tables in the CGN. Inbound attacks usually target a particular customer and take the form of a DoS attack or a Distributed Denial of Service (DDoS) attack. Regardless of the direction of the attack, a large share of the CGN's state-table resources is consumed, which reduces overall port availability. Left unregulated, these attacks can easily impact not only the intended victim, but potentially the thousands of other customers being serviced by that CGN.
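To make the exhaustion mechanics concrete, here is a toy model of a CGN translation table -- a deliberately simplified sketch of my own, not any vendor's implementation. Each new flow consumes one table entry, and once the table is full, new flows from every subscriber fail, not just the attacker's:

```python
class ToyCGNTable:
    """A deliberately simplified CGN state table: one entry per flow."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = {}  # (subscriber, flow_id) -> allocated external port
        self.next_port = 1024

    def open_flow(self, subscriber: str, flow_id: int) -> bool:
        """Try to allocate a translation entry; False means the table is full."""
        if len(self.entries) >= self.capacity:
            return False  # exhausted: EVERY subscriber is now affected
        self.entries[(subscriber, flow_id)] = self.next_port
        self.next_port += 1
        return True

table = ToyCGNTable(capacity=1000)

# An infected host opens flows until the table fills up...
for i in range(1000):
    table.open_flow("infected-host", i)

# ...and now an innocent subscriber cannot get a translation at all.
print(table.open_flow("innocent-host", 1))  # False
```

The point of the model is the shared fate: the state table is a common resource, so one misbehaving host's flows and an innocent subscriber's flows compete for exactly the same entries.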

With the inability to simply blackhole a given IP using edge Access Control Lists (ACLs), carriers must look at other options for protecting their customer base. While some CGN implementations have the ability to limit the number of ports that are allocated to a single customer, these limits only work in discrete cases and can be difficult to manage. They also do not protect customers if the CGN device is itself the target of the attack.
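The per-customer port limit mentioned above amounts to a quota check on top of the state table. Extending the earlier toy model with such a quota (again, an illustrative sketch with made-up parameter names, not a vendor feature description):

```python
from collections import defaultdict

class QuotaCGNTable:
    """Toy CGN state table that caps translation entries per subscriber."""

    def __init__(self, capacity: int, per_subscriber_limit: int):
        self.capacity = capacity
        self.limit = per_subscriber_limit
        self.counts = defaultdict(int)  # subscriber -> entries in use
        self.total = 0

    def open_flow(self, subscriber: str) -> bool:
        """Allocate an entry unless the table or the subscriber's quota is full."""
        if self.total >= self.capacity or self.counts[subscriber] >= self.limit:
            return False
        self.counts[subscriber] += 1
        self.total += 1
        return True

table = QuotaCGNTable(capacity=1000, per_subscriber_limit=64)

# The infected host is now stopped at its quota...
opened = sum(table.open_flow("infected-host") for _ in range(500))
print(opened)  # 64

# ...leaving room for everyone else.
print(table.open_flow("innocent-host"))  # True
```

Even in this idealized form the limitation is visible: the quota protects the table from one greedy subscriber, but an inbound attack aimed at the CGN device itself never consumes a subscriber quota in the first place, which is why quotas alone are not sufficient.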

The solution to this problem is the use of a purpose-built DDoS mitigation device, or what is more commonly referred to as a "scrubbing" device in IT circles. Dedicated DDoS mitigation devices attempt to enforce that everyone plays nicely by limiting the maximum number of sessions to or from a given customer. This is done through thorough analysis of the traffic in flight, rate-limiting or filtering traffic through sophisticated mitigation mechanisms to ensure fair use of the public IP and port availability across all customers. Through the use of dedicated DDoS mitigation devices, CGN devices and their associated customers are protected from service disruptions, while legitimate traffic is still allowed through unencumbered. Lastly, another important aspect of dedicated DDoS mitigation devices is that they tend to be "bumps in the wire" -- that is to say, they don't have an IP address assigned to them and as such cannot themselves be the target of an attack.


IETF Provides New Guidance on IPv6 End-Site Addressing

Written by Stefan Fouant

I've always been at odds with the recommendation in RFC 3177 towards allocating /48 IPv6 prefixes to end-sites.  To me this seemed rather short-sighted, akin to saying that 640K of memory should be enough for anybody.  It's essentially equivalent to giving out /12s in the IPv4 world, which in this day and age might seem completely ridiculous, but let us not forget that in the early days of IPv4 it wasn't uncommon to get a /16 or even a /8 in some cases.

Granted, I know there are quite a few more usable bits in IPv6 than there are in IPv4, but allocating huge swaths of address space simply because it's there and we haven't thought of all the myriad ways it could be used in the future just seems outright wasteful.

So you can imagine my surprise and also my elation last week when the IETF published RFC 6177 entitled 'IPv6 Address Assignment to End Sites'.  In it, the general recommendation of allocating /48s to end-sites that has long been the de facto standard since the original publication of RFC 3177 in 2001 has finally been reversed.

It seems that sanity has finally prevailed and the IAB/IESG have decided to take a more pragmatic approach towards address allocation in IPv6.  The recommendations in RFC 6177 attempt to balance the conservation of IPv6 addresses while at the same time continuing to make it easy for IPv6 adopters to get the address space that they require without requiring complex renumbering and dealing with other scaling inefficiencies in the long term.  It is clear that acting too conservatively and allocating very small address spaces could act as a disincentive and possibly stifle widespread adoption of IPv6.

The current recommendations for address allocations are as follows:

  • /48 in the general case, except for very large subscribers
  • /64 when it is known that one and only one subnet is needed by design
  • /128 when it is absolutely known that one and only one device is connecting
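The arithmetic behind these prefix sizes is straightforward with Python's standard ipaddress module. For example, counting how many /48 end-site assignments fit inside a /32 ISP allocation (a common RIR allocation size, used here purely as an example, with the 2001:db8::/32 documentation prefix as a stand-in):

```python
import ipaddress

isp_block = ipaddress.ip_network("2001:db8::/32")  # documentation prefix as a stand-in

# Number of /48 end-site assignments that fit in a /32: 2^(48-32)
num_48s = 2 ** (48 - isp_block.prefixlen)
print(num_48s)  # 65536

# Each /48 in turn holds 2^(64-48) = 65536 /64 subnets for the end-site
first_site = next(isp_block.subnets(new_prefix=48))
print(first_site)        # 2001:db8::/48
print(2 ** (64 - 48))    # 65536 /64 subnets per /48 site
```

Seen this way, the recommendations trade off two exponents: a /48 leaves an end-site 65,536 subnets to grow into, while a /64 pins it to exactly one subnet and a /128 to a single interface.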

It goes on to state other recommendations and offers guidance to operators with regards to when to allocate certain prefix lengths.  But essentially, what this means is that individual network operators now have more options regarding which prefix size to allocate, allowing them to move away from strict general guidelines.  In essence, operators make the decision as to what prefix size to allocate based on an analysis of the needs of particular customers.

Perhaps this practical conservation may never be needed given the vast amount of address space available in IPv6, but maybe, just maybe, in the very distant future, if IPv6 is still in widespread use, it could very well be due to some of these recommendations being put in place today.  After all, 640K did turn out to be a rather small number, didn't it?


IPv4 Address Exhaustion Causing Harmful Effects on the Earth

Written by Stefan Fouant

Today, I received a very disturbing email on NANOG which was forwarded from a recipient on the Global Environment Watch (GEW) mailing list.  If this is true, we all need to take steps to make an orderly and smooth transition to IPv6 as quickly as possible, lest we suffer from the harmful effects described in this email.

From: Stephen H. Inden
To: Global Environment Watch (GEW) mailing list
Date: Fri, 1 Apr 2011 00:19:08 +0200
Subject: IPv4 Address Exhaustion Effects on the Earth

At a ceremony held on February 3, 2011 the Internet Assigned Numbers Authority (IANA) allocated the remaining last five /8s of IPv4 address space to the Regional Internet Registries (RIRs). With this action, the free pool of available IPv4 addresses was completely depleted.

Since then, several scientists have been studying the effects of this massive IPv4 usage (now at its peak) on the Earth.

While measuring electromagnetic fields emanating from the world's largest IPv4 Tier-1 backbones, NASA scientists calculated how the IPv4 exhaustion is affecting the Earth's rotation, length of day and planet's shape.

Dr. Ron F. Stevens, of NASA's Goddard Space Flight Center, said all packet switching based communications have some effect on the Earth's rotation. It's just they are usually barely noticeable. Until now.

"Every packet affects the Earth's rotation, from a small ping to a huge multi-terabyte download.  The problem with IPv4 is its variable length header and tiny address space that can cause an electromagnetic unbalance on transmission lines.  The widespread adoption of Network Address Translation (NAT) on IPv4 networks is making the problem even worse, since it concentrates the electromagnetic unbalance.  This problem is not noticeable with IPv6 because of its fixed header size and bigger 128 bits address space", Dr. Stevens said.

Over the past few years, Dr. Stevens has been measuring the IPv4 growing effects in changing the Earth's rotation in both length of day, as well as gravitational field.  When IPv4 allocation reached its peak, last February, he found out that the length of day decreased by 2.128 microseconds.  The electromagnetic unbalance is also affecting the Earth's shape -- the Earth's oblateness (flattening on the top and bulging at the Equator) is decreasing by a small amount every year because of the increasing IPv4 usage.

The researcher concluded that IPv4 usage has reached its peak and is causing harmful effects on the Earth:

"IPv4 is, indeed, harmful.  Not only 32 bits for its address space has proven too small and prone to inadequate solutions like NAT, it is now clear that its electromagnetic effects on the Earth are real and measurable."

The solution?

"I'm convinced that the only permanent solution is to adopt IPv6 as fast as we can", says Dr. Stevens.


CyberWarfare – Defending the Electronic Frontier in the 21st Century and Beyond

Written by Stefan Fouant

During the reign of the Roman Empire, it was said that all roads led to Rome.  While these roads facilitated free-trade and were essential to the expansive growth of the Roman Empire, they also introduced a double-edged sword by creating convenient new avenues that could easily be maneuvered by the Empire's enemies against its best interests.

Similar conclusions could be drawn about the Internet, a project initially developed by the United States through the Advanced Research Projects Agency (ARPA, later DARPA) in the late 20th century.  As we continue to move into the 21st century, the Internet is emerging as the new battlefield on the international stage.  As Internet connectivity becomes commoditized and the barrier to entry is further reduced, it becomes easier for potential adversaries of the United States to wield the power of the Internet to launch attacks against US infrastructure and national interests, disrupting the flow of information and leaving destruction in its wake.  It is becoming increasingly possible for our adversaries to cause not only extraordinary economic havoc, but also loss of life, as critical services such as E911 and other emergency services become more dependent upon the Internet.  At the same time, a perfect storm is brewing: the resources currently allocated to preparing for this advancing threat are stretched to the limit and largely focused on obsolete technologies that are considerably out of date.

On August 10th, 2010, in McLean, VA, a series of panelists will discuss this ever-increasing reality and the potential that wars in the future won't be fought so much on the traditional battlefield but rather electronically, targeting critical components of a national infrastructure.  The panelists will discuss what has been done by the US Government to secure certain core components of our national infrastructure and what remains to be done, and will also serve as a "call to arms" to better secure our national cyber landscape.  The members of this panel have all been intimately involved in the networking security industry for many years and are engaged in the implementation of their solutions at the ground level.  Their insights will provide attendees with invaluable viewpoints on this very real and emerging threat.

For more information, please take a look at the event details.  This event will be hosted by the Capital Technology Management Hub and will be moderated by Stefan Fouant.


Book Review :: IS-IS: Deployment in IP Networks

Written by Stefan Fouant

IS-IS: Deployment in IP Networks
by Russ White, Alvaro Retana
Hardcover: 320 pages
Publisher: Pearson Education
ISBN-13: 978-0201657722

2 stars :: Better off choosing an alternative selection

As IS-IS is one of the more esoteric protocols, understood only by a few people in large scale ISP environments, I thought this book would be a welcome addition to my library, as there isn't much else on the market covering this protocol. There are of course ISO 10589 and RFC 1195, which cover these protocols, but seeing as this is a short book I thought it might be able to shed some light on an otherwise complex protocol.

In reviewing this book I've come up disappointed in general. There are certainly a few golden nuggets and I give the book a couple of stars just for attempting to bridge the gap between the purely theoretical and the purely vendor specific. However, the book comes up short on most other points. Often times I found myself wanting to scrap this book in favor of some of the other selections on the market, but since I have respect for these authors I read the whole book hoping that they might be able to redeem themselves by the time I finished.

Obviously the authors have a great deal of knowledge about the subject, and I don't fault them entirely; the quality of the editing is poor, with many grammatical and syntactical errors littered throughout the text. There are abundant instances throughout the book where the diagrams do not match the text describing them. I was rather disappointed, because I usually find that Addison-Wesley publishes some of the best texts on the market.

All in all, I thought this book could have been a lot better than it was. After all, these authors have several other titles under their belts, most notably "Advanced IP Network Design". But in this case, I would say that you are better off looking for other similar titles available on the market, such as Jeff Doyle's "Routing TCP/IP Volume 1" or "The Complete IS-IS Routing Protocol" by Hannes Gredler and Walter Goralski.


IP Routing Tutorial

Written by Stefan Fouant

I've given this tutorial quite a few times now, and several people have asked me to make it publicly available. It is very much geared towards non-technical folks who would like to have a better understanding of how routing in the Internet works. It covers a brief history of the Internet and the evolution of dynamic routing protocols, as well as high-level coverage of link-state vs. distance-vector IGPs, in addition to discussing EGPs and their role in the exchange of routing information between Autonomous Systems. It also has a few slides on QoS, MPLS, and IPv6. I would still like to make some modifications to a few of the slides, as well as add more content around MPLS and the future of IP.  I will also eventually add a voice-over to turn this into a true slide-cast.  In the meantime, I've put it up on my SlideShare account for those of you who would like access to it.


Reflections on the Past :: ConneXions – The Interoperability Report

Written by Stefan Fouant

Way back in 1987, before there was the "Internet Protocol Journal" and other notable publications covering various aspects of Internet technologies, there was "ConneXions - The Interoperability Report".   At the time, aside from reading RFCs or Internet-Drafts for more information on various protocols, this publication was the de facto resource for informative analysis of various protocols and their operation.  A quick glance at the articles and you'll see long-time industry heavy-hitters such as Doug Comer, Jon Postel, and Vint Cerf listed as authors.  This is an invaluable resource for those of you who want to understand the history and evolution of various Internet protocols commonly in use today.