ShortestPathFirst Network Architecture and Design, and Information Security Best Practices

11Aug/11

Black Hat OSPF Vulnerabilities: Much Ado About Nothing

Imagine a group of researchers planning to speak at a conference regarding a previously undiscovered vulnerability present in most homes that would allow a thief to rob your home of its valuables with complete ease.  You would probably be interested in hearing what they had to say so you could take the necessary precautions to protect your home.

Now imagine when they presented their findings, they went on to state that it was incredibly easy to do, so long as you left your front door open and also provided them with the security code for any alarm systems.  You would probably find this implausible and simply the proliferation of fear, uncertainty, and doubt.

That's precisely what happened last week at the well-respected Black Hat security conference in Las Vegas, when researchers from the Israel Institute of Technology and Advanced Defense Systems, Ltd. presented their findings on a serious vulnerability in OSPF.  So serious, in fact, that the researchers stated the only way to properly mitigate the threat, short of fixing the protocol, is to switch to another routing protocol such as RIP or IS-IS.

The researchers went on to present two previously unknown methods of poisoning routing information within an OSPF domain, such that bogus information could be propagated and used for a multitude of malicious purposes.

The first attack allows for the creation of a remote false adjacency, whereby an unsuspecting victim router goes on to form an adjacency with a phantom router.  Once this adjacency is formed, the attacker can then flood the victim router with bogus Link-State Advertisements (LSAs).

The second attack allows the flooding of what are known as Disguised LSAs, whereby a victim router would accept spoofed LSAs which it believed were generated by another router within the OSPF domain.

While both of these previously unknown attack vectors might raise some eyebrows and create cause for concern, the researchers made some pretty big assumptions about an attacker's ability to perform such an attack.  The assumptions were that one would first need to insert a router into the network, and once this was accomplished, one would then somehow need to obtain the MD5 authentication key that is widely used to secure the communications channel between OSPF neighbors.

Without a doubt, there is some merit to what these researchers have found, and as a protocol junkie I am always interested in learning the inner workings of these protocols.  However, what they've completely overlooked is the first tenet of information security, which is to ensure the physical environment is properly secured.  In most environments, gaining physical access to the network might prove difficult enough.  Nonetheless, even if an attacker were able to insert such a device, the net effect is that such attacks would be short-lived, as it would be fairly trivial for a network administrator to isolate the offending device and remove it from the network.

Furthermore, and perhaps the biggest barrier to entry, they've failed to recognize how unlikely it is that an attacker could obtain the authentication key in the first place.  While MD5 is known to have some serious deficiencies and is vulnerable to collision attacks, it still represents a pretty high bar for the would-be attacker.  And various vendors are already rolling out HMAC-SHA for the authentication function within OSPFv2.

In addition, the researchers admitted that in their testing of the first exploit, the victim router tore down the adjacency to the phantom router after 125 seconds because the victim router did not receive the proper acknowledgement packets from the phantom router.  Once again, the net effect is that if the attackers were able to meet the first two assumptions, their attack would still be relatively short-lived.

While OSPF is widely used throughout the Internet as a whole, you probably have better odds of winning the lottery than actually experiencing an attack such as the ones these researchers demonstrated.  The researchers presented some pretty bold claims, but at the end of the day, given their assumptions, they didn't really present anything that wasn't already widely known about the vulnerabilities inherent in the OSPF protocol.  Their attack rests on two barriers to entry, which don't make it impossible, but do make it incredibly difficult and highly implausible.  In my experience, attackers tend to move on to far easier targets.

There are a lot of things to worry about when it comes to protecting your corporate assets, but in my humble opinion, this is not one of them.  So rather than ripping out OSPF as these researchers suggest, you might be better served by taking a second look at your environment and ensuring the physical premises are properly secured.


21Jun/11

JNCIE Tips from the Field :: Summarization Made Easy

Today we'll start a series of articles covering tips and techniques that might be utilized by JNCIE candidates, whether pursuing the JNCIE-SP, JNCIE-ENT, or even the JNCIE-SEC.  The tips and techniques I will be covering might prove useful during a lab attempt, but they could also be used in real-world scenarios to save time, minimize configuration burden, and eliminate mistakes that might otherwise be made.  I want everyone to understand that what I am about to write is simply a technique.  I am not divulging any materials or topics which are covered under NDA.

NOTE: For full disclosure, I must reveal that I am an employee of Juniper Networks in their Education Services department.  As such, I take the responsibility of protecting the content and integrity of the exam, as well as the certification credentials, very seriously.  I would never reveal anything which would allow a candidate to have in-depth knowledge of any specific topics or questions that may appear on the exam.  Not only that, I worked REALLY, REALLY hard to achieve my JNCIE certifications, and I believe everyone else should too! It's certainly more rewarding that way, don't you think?!

So without further delay, let's take a look at today's technique.

It is well known that summarization is a key aspect of any type of practical exam involving routing.  As those who have taken a CCIE Routing & Switching or CCIE Service Provider exam can attest, summarization is one thing every expert-level candidate needs to master.  It is no different with Juniper.  In fact, Juniper's certification web page clearly lists the requirement to "Filter/summarize specific routes" as one of the JNCIE-ENT exam objectives.

What I will show you next is a technique which I find quite handy when attempting to determine the best summary for a given set of routes, without having to resort to pen and paper and figuring it out the old-fashioned way, i.e., looking at prefixes in binary.  This technique instead allows you to use the power of Junos to your advantage.  It will also reveal a fundamental difference between IOS and Junos, and highlights why I believe Junos to be a more flexible, powerful, and superior network operating system.  You simply can't do what I am about to do on a Cisco platform running IOS.

So let's start off by looking at a diagram.  Let's say we have a network that has several OSPF areas, and we must summarize some information for each respective area towards the backbone without encompassing routing information that might exist outside of that area.

[Diagram: OSPF summarization example]

Here we can see we have a backbone area, consisting of two routers, P1 and P2.  P1 is acting as an ABR for Area 1 and is connected to both R1 and R2. P2 is acting as an ABR for Area 2 and is connected to R3.  As you can see from the diagram, I have configured more than a single set of IP addresses on many of the physical interfaces as well as the loopbacks.  This way I can represent many more networks and therefore create multiple Network LSAs for purposes of summarization.

So let's assume that we need to create the largest aggregate possible for a particular area and advertise only that aggregate towards the core, without encompassing any routes which might be outside the area the summary describes.  Normally, one would take a look at the diagram, get out a pen and paper, and start a lengthy exercise of supernetting based on binary addresses.  This can take several minutes or more, and that is valuable time that could certainly be spent on a wide variety of other more important tasks, like setting up MPLS LSPs or troubleshooting that Layer 2 VPN connectivity.  So let's take a look at a simple trick that takes advantage of Junos to determine what the summary should be.
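For comparison, the binary exercise itself is easy to automate off-box.  Here is a minimal Python sketch, using the standard ipaddress module, that computes the smallest single CIDR block covering a list of prefixes; the sample prefixes are assumptions drawn from the Area 1 topology in the diagram, not a complete inventory:

```python
import ipaddress

def smallest_supernet(prefixes):
    """Return the smallest single CIDR block that covers every prefix."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    lo = min(int(n.network_address) for n in nets)
    hi = max(int(n.broadcast_address) for n in nets)
    # Shorten the mask one bit at a time until the block spans lo..hi.
    plen = max(n.prefixlen for n in nets)
    while plen > 0:
        block = ipaddress.ip_network((lo, plen), strict=False)
        if int(block.broadcast_address) >= hi:
            return block
        plen -= 1
    return ipaddress.ip_network("0.0.0.0/0")

# Assumed Area 1 prefixes: loopbacks plus point-to-point links.
area1 = ["168.10.32.1/32", "168.10.32.4/32", "168.10.48.4/30",
         "168.10.48.8/30", "168.10.60.0/30", "168.10.60.8/30"]
print(smallest_supernet(area1))  # 168.10.32.0/19
```

The Junos trick below reaches the same answer interactively, which is handy when you don't have a scripting environment at hand during a lab.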

What we are going to do is take advantage of a feature inside Junos which automatically shows us the range of prefixes matching a given CIDR block.  The Junos operating system has built-in route matching functionality which allows us to specify a given CIDR block and returns all routes with a mask length equal to or greater than the one specified.  So, applying this principle, we want to look at the diagram for a particular area, choose the lowest IP within that area as our base, and then apply a subnet mask to it which attempts to encompass that route as well as others.

For example, looking at this diagram, we see that the lowest IP address being used in Area 1 is the 168.10.32.1 address assigned to R1's loopback.  So let's start by using this as our base for our summary, and then simply apply a subnet mask to it which we think might encompass additional routes:

sfouant@p1# run show route 168.10.32.0/22   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:48, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:48, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:43, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:43, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0


Note: We can do this on any router within Area 1 since the Link-State Database is the same on all devices, but I prefer to perform the work on the ABR since this is where I will be performing the aggregation.  Also, the ABR may have other local and/or direct routes (or perhaps routes from other protocol sources) so we want to see things from the perspective of the ABR.

What we see here is that we've determined a summary which in fact encompasses all the loopback addresses on both R1 and R2, but we need to keep going because it doesn't yet incorporate the Gigabit Ethernet links between the devices:

sfouant@p1# run show route 168.10.32.0/21   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:45, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:45, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0


Not quite. Let's keep trying:

sfouant@p1# run show route 168.10.32.0/20   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:55, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:55, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0


Nope, still not there yet. Let's try again:

sfouant@p1# run show route 168.10.32.0/19   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:58, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:58, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:53, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:53, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0
168.10.48.4/30     *[OSPF/10] 00:36:26, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.48.8/30     *[Direct/0] 8w4d 19:36:13
                    > via ge-0/0/1.0
168.10.48.9/32     *[Local/0] 8w4d 19:36:13
                      Local via ge-0/0/1.0
168.10.60.0/30     *[Direct/0] 8w4d 19:51:31
                    > via ge-0/0/0.0
168.10.60.1/32     *[Local/0] 8w4d 19:51:31
                      Local via ge-0/0/0.0
168.10.60.4/30     *[OSPF/10] 00:36:26, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.60.8/30     *[Direct/0] 8w4d 19:36:13
                    > via ge-0/0/1.0
168.10.60.9/32     *[Local/0] 8w4d 19:36:13
                      Local via ge-0/0/1.0


Ok, this looks more like it.  Here we can see we have all the Gigabit Ethernet links connecting all devices, as well as the loopback addresses.  This might be a suitable summary.  Let's keep going to see what happens:

sfouant@p1# run show route 168.10.32.0/18   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.0.1/32      *[Direct/0] 8w4d 19:51:37
                    > via lo0.0
168.10.0.2/32      *[Direct/0] 8w4d 19:36:19
                    > via lo0.0
168.10.0.3/32      *[OSPF/10] 00:28:41, metric 1
                    > to 168.10.18.2 via fe-0/0/2.0
                      to 168.10.26.2 via fe-0/0/2.0
168.10.0.4/32      *[OSPF/10] 00:28:41, metric 1
                    > to 168.10.18.2 via fe-0/0/2.0
                      to 168.10.26.2 via fe-0/0/2.0
168.10.18.0/30     *[Direct/0] 8w4d 19:05:28
                    > via fe-0/0/2.0
168.10.18.1/32     *[Local/0] 8w4d 19:05:28
                      Local via fe-0/0/2.0
168.10.26.0/30     *[Direct/0] 8w4d 19:05:28
                    > via fe-0/0/2.0
168.10.26.1/32     *[Local/0] 8w4d 19:05:28
                      Local via fe-0/0/2.0
168.10.32.1/32     *[OSPF/10] 8w4d 19:05:04, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:05:04, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:59, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:59, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0
168.10.48.4/30     *[OSPF/10] 00:36:32, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.48.8/30     *[Direct/0] 8w4d 19:36:19
                    > via ge-0/0/1.0
168.10.48.9/32     *[Local/0] 8w4d 19:36:19
                      Local via ge-0/0/1.0
168.10.60.0/30     *[Direct/0] 8w4d 19:51:37
                    > via ge-0/0/0.0
168.10.60.1/32     *[Local/0] 8w4d 19:51:37
                      Local via ge-0/0/0.0
168.10.60.4/30     *[OSPF/10] 00:36:32, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.60.8/30     *[Direct/0] 8w4d 19:36:19
                    > via ge-0/0/1.0
168.10.60.9/32     *[Local/0] 8w4d 19:36:19
                      Local via ge-0/0/1.0


Clearly from this command, we can see we have now gone beyond what might be considered a suitable summary, because we are now encompassing routes that exist within the backbone Area 0.  So it should be clear from this simple set of commands that 168.10.32.0/19 would be a suitable summary for Area 1.
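The same containment logic is easy to double-check off-box.  The following Python sketch (the sample prefixes are assumptions taken from the route output above) shows that the /19 covers only the Area 1 routes, while widening to a /18 pulls in backbone prefixes, just as the command output demonstrated:

```python
import ipaddress

def covered(summary, prefixes):
    """Return the prefixes that fall inside the (masked) summary block."""
    # strict=False masks the base down, mirroring how 'show route' treats
    # 168.10.32.0/18 as the 168.10.0.0/18 block.
    block = ipaddress.ip_network(summary, strict=False)
    return [p for p in prefixes
            if ipaddress.ip_network(p).subnet_of(block)]

area1    = ["168.10.32.1/32", "168.10.48.4/30", "168.10.60.8/30"]
backbone = ["168.10.0.1/32", "168.10.18.0/30", "168.10.26.0/30"]

print(covered("168.10.32.0/19", area1 + backbone))  # only the Area 1 routes
print(covered("168.10.32.0/18", area1 + backbone))  # backbone routes leak in
```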

We could easily apply a similar example to Area 2 to quickly determine what the best summary would be.  We see from looking at the diagram the lowest IP within Area 2 is the 168.10.96.1 loopback address applied to R3.  When we use that as our base and go through the steps above, we can find our summary:

sfouant@p2# run show route 168.10.96.0/19   
inet.0: 27 destinations, 27 routes (27 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.96.1/32     *[OSPF/10] 01:00:57, metric 1
                      to 168.10.112.2 via fe-0/0/3.0
                    > to 168.10.118.2 via fe-0/0/3.0
168.10.96.2/32     *[OSPF/10] 01:00:57, metric 1
                    > to 168.10.112.2 via fe-0/0/3.0
                      to 168.10.118.2 via fe-0/0/3.0
168.10.112.0/30    *[Direct/0] 01:13:48
                    > via fe-0/0/3.0
168.10.112.1/32    *[Local/0] 01:13:48
                      Local via fe-0/0/3.0
168.10.118.0/30    *[Direct/0] 01:13:48
                    > via fe-0/0/3.0
168.10.118.1/32    *[Local/0] 01:13:48
                      Local via fe-0/0/3.0


And there you have it! As you can see, it's really quite simple, and if you haven't stumbled upon this already you may be saying to yourself, "Why didn't I think of that before?"  I hear from many candidates that they spend considerable time determining summaries the old-fashioned way, and I always ask myself why.  As you can see, there is an easier way!

Clearly the benefit of a technique such as the above is to easily find the route that best summarizes a bunch of more specific routes.  The utility of such an approach, while very useful during a practical exam, might be considerably lessened in the real world, where it is likely that hierarchy has already been built into the network and you have network diagrams at your disposal.  On the other hand, there may be situations where you inherit a network that was designed with hierarchy in mind but where summarization was never employed, or was employed improperly.  In such cases, the above technique can be a real time saver, allowing you to spend less time doing binary math and more time doing the fun stuff - like troubleshooting why that MPLS LSP isn't getting established!

Stay tuned for additional articles covering time saving tips and techniques which can be used during your next lab attempt!  Good luck, and may the force be with you!


30Jul/10

Interview with Chris Grundemann, Author of "Day One: Exploring IPv6"

Spend a little time in Juniper, ARIN, or a wide variety of other networking forums, and you'll likely see the name Chris Grundemann.  Recently, we had the opportunity to catch up with him and discuss his involvement in deploying IPv6 at tw telecom, as well as his recently published Juniper booklet entitled "Day One: Exploring IPv6".


Thanks Chris for joining us today.  Tell us a little bit about yourself and your career experience, and specifically tell us about your day-to-day experience working with IPv6.

Certainly. Career-wise, I am currently engaged as a Network Architect with tw telecom inc., where I am responsible for setting forward-looking architectures and leading various technology development efforts. I am also the Founding Chair of the Colorado Chapter of the Internet Society, Founding Editor of Burning with the Bush, and an active participant (and current AC nominee) in the ARIN policy development process. Obviously I am also the author of the Juniper "Day One: Exploring IPv6" booklet.

My day-to-day experience with IPv6 is actually pretty minimal at this point. Last year while I was still on the IP backbone team here at tw telecom, I rolled out IPv6 across all of our PE routers - in one night. Since then, there has been very little technical work needed from a networking perspective. We still have plenty of work to fully operationalize IPv6 but it is mostly systems and process issues now, much less exciting.

For any readers who are interested, you can find a lot more about me on my personal site. This includes links to my Facebook and LinkedIn profiles, so feel free to send me an invite to connect!

You rolled out IPv6 across all of your PE routers in a single night! That's a pretty big accomplishment. Would you say that Juniper's implementation of IPv6 made it easy to deploy and support IPv6 across a large number of devices?

Thanks! There was of course plenty of preparation leading up to that night, but we "flipped the switch" all at once and it went extremely smoothly.

All of Juniper's carrier routers forward IPv6 in hardware, which is huge. Also, IPv6 was integrated into Junos very well, most of the commands are similar if not the same between IPv4 and IPv6. This makes it really easy operationally speaking. So, yes, I would definitely agree that Juniper's implementation of IPv6 makes it easy to deploy and scale.

Ok, so let's specifically talk about the current state of affairs with IPv6.  Hurricane Electric, one of the leading providers of IPv6 connectivity, states that as of the time of this writing we have less than a year remaining until complete IPv4 exhaustion.  This is based on the fact that there are only sixteen /8 network blocks available for allocation (approximately 6%).    We've heard figures such as this for many years now, but techniques like NAT have allowed people to extend the length of the existing IPv4 address pool.  Based on your experience working with IPv6 and also your involvement with ARIN, can you help us to understand what is fact and what is fiction - how long do you really think we have before total address exhaustion becomes a reality and customers will have no choice but to start looking at IPv6 for future deployments?

Let me re-phrase your query into two distinct questions if I may: How long do we have with IPv4 and when will network operators be forced to consider IPv6 deployment? The answers are very different so I think they should be addressed individually.

First, How long do we have with IPv4? As you state, Hurricane Electric's widget gives us less than a year. But let's start with a quick level-set. There are actually three distinct points leading up to what I would call "complete IPv4 exhaustion." The first is IANA unallocated pool exhaustion. This is the point when the global pool of IPv4 /8s designated for unicast routing reaches 5 remaining and subsequently each of the 5 RIRs receives one (thus depleting the unallocated pool completely). The second point is RIR exhaustion, when the Regional Internet Registries can no longer allocate nor assign IPv4 addresses that they received from IANA (because they don't have any). Finally, true exhaustion happens when the ISPs/LIRs exhaust their remaining IPv4 addresses and end users simply cannot get a routable IPv4 address.

As I understand it, Hurricane Electric is getting their data from the IPv4 Address Report built by Geoff Huston and are predicting the date of the first point; exhaustion of the IANA IPv4 unallocated address pool. As of today that date is projected to be 1 July, 2011 - less than a year away. However, this projection is based on the current and historical run-rate, on how fast we have consumed IPv4 addresses up to this point. Because so many folks have not paid attention to IPv6 and are still wholly dependent on IPv4, it is quite likely that the run-rate will increase, perhaps drastically, as we get closer to IANA unallocated pool exhaustion. If this happens, we actually have much less than one year before reaching that first point.

Predicting the second point gets a little murkier, because different folks define this point differently. Should we declare that RIR exhaustion is upon us when the first RIR runs out of unallocated IPv4 address space? When the last one does? Perhaps when the RIR for your region has no unallocated IPv4 to give you? Mr. Huston projects the date "where the first RIR has exhausted its available pool of addresses" and since he has already done all the work, it is a convenient place to set the bar. As of today that date is predicted to be 20 January, 2012. Remember again that this does not take into account any possible run on IPv4 addresses that may happen between now and then and that other RIRs will have IANA allocated IPv4 space for some time after that date.

The final point is the hardest one to pin down. This is mostly because it would be very hard, if not impossible to quantify how much currently allocated/assigned address space is unused or underused.

Many ISPs may be able to feed off of current reserves for months or even years, while many more will run out of IPv4 addresses within weeks of receiving their last traditional allocation from their RIR.

You also have to take into account things like IPv4 address transfers which are now allowed in many regions, other possible policy changes and transition technologies such as carrier-grade-NAT (CGN). All of these things pull IPv4 use in different directions. So no one can intelligently predict this final date.

Although I cannot tell you that IPv4 will be dead in X years, there are some very important facts that we should not overlook. The first is that Geoff Huston's projections have remained quite consistent over the past two years, and the time remaining has steadily decreased over those two years. The second is that we are running out of usable IPv4 addresses. NAT was a stopgap to allow folks time to adopt IPv6. That time has largely been wasted, unfortunately. The bottom line is that IPv4 will continue to become more expensive to use on interconnected networks while IPv6 continues to become less expensive.

This is where the second question comes into play: When will network operators be forced to look at IPv6 deployment? The truth is that they should be looking into it now. If you are not adding only IPv6 capable hardware and software to your network now - you are going to be forced to spend extra money upgrading sooner than you would like. As IPv4 becomes ever more expensive (both directly as ISPs charge more for it or you are forced to pay for addresses through a transfer and indirectly as CGNs and other transition mechanisms drive up operational costs), many will turn to IPv6 - more and more as the next two years play out. Businesses that have IPv6 capable networks now will have a competitive advantage over those who are forced to upgrade their network to get IPv6 connectivity.

An often overlooked aspect of this question is security. If your network is not IPv6 enabled today, you likely have IPv6 traffic being tunneled right through your firewalls. Another is mobile access - very soon, mobile phone operators will be migrating to 4G technologies that take advantage of IPv6 addressing for all new phones on their networks. These IPv6 mobile devices will be reaching your website(s) via IPv6; if you want them to have the best possible experience, your site needs to be running IPv6 natively.  As soon as a website is IPv6 only, ISPs will be required to provide IPv6 connectivity or lose customers to those who do.

So, in short, the answer really is now. Everyone should be thinking of IPv6 when planning all future network deployments, starting now (if not yesterday).

Many industry experts are already speculating that an IPv4 black market will exist because of the depletion of IPv4 address space and the lack of a large IPv6 installed base. Do you suspect there will be a black market for IPv4 addresses and what impacts might this have?

The answer varies a bit depending on how you choose to define black market. Under many definitions, that market already exists. I think that it already impacts us and that it will get worse as we near and ultimately cross the free pool depletion threshold. Think spammers and phishers operating out of address blocks that they beg, borrow, steal and often "rent" or "buy." There are also instances where much more legitimate businesses make back-room deals for the use of IP addresses.

Overall some of the most negative impacts surround the WHOIS database, and the integrity of the data it contains. When folks get addresses through grey or black markets, instead of from the RIR they are probably not going to report proper reassignment registration information to the RIR. This leads to stale WHOIS data which makes troubleshooting and abuse reporting much harder for operators and investigation much harder for law enforcement. I helped author a recently adopted ARIN policy change to start addressing some of this and am actually spearheading an effort to continue that work with another policy proposal in the ARIN region as we speak.

Another concern is prefix hijacking, this is not really a black market issue but is another facet of the problems we will face more and more as IPv4 gets more expensive, unless and until IPv6 adoption picks up across the board.

There is a lot of work going on right now within ARIN, the IETF and other RIRs to try and limit the impacts of any IPv4 black market, other abuses and also ease the overall IPv4-IPv6 transition. Anyone interested in this work should join the ARIN-PPML (Public Policy Mailing List) [http://lists.arin.net/mailman/listinfo/arin-ppml] (or their local equivalent) and/or show up at a meeting; and join the conversation. ARIN and the other RIRs are open, transparent, ground-up organizations and your voice can make a huge impact.

It has been observed that the number of Autonomous Systems supporting IPv6 as well as IPv6 DNS Queries, in the form of AAAA records, have significantly increased in the last several years.  Have we reached that critical mass where widespread adoption is imminent and if so what can we expect to see in the next few years?

I think that widespread adoption is imminent now but I don't believe that it is an issue of critical mass, more an issue of network operators starting to see the dates we discussed above nearing. I make the distinction because there is still very little IPv6 traffic actually flowing. What I think is happening is that the folks who have been watching this are getting a sense of urgency that is finally breaking through to the folks who write the checks. The real critical mass moment will be when we see IPv6 traffic levels really climbing and IPv4 traffic growth starting to slow. I think you can expect to see that in the next few years. Within five certainly and probably within three.

Let's talk for a minute about the Juniper "Day One" guides.  Can you tell us what they are all about, and more specifically, tell us a little bit about the "Day One: Exploring IPv6" guide that you've written.  What is it all about and what can potential readers hope to gain from reading it?

The Day One guides are exactly what the name implies: booklets that give you everything you need to know to get through your first day working on the covered topic. They are hands-on, example-driven, cut-to-the-chase primers on all sorts of topics surrounding Juniper Networks gear and the Junos OS. In "Exploring IPv6" I tried to really provide that common-sense starting point for implementing IPv6. The booklet covers enabling IPv6, adding IPv6 addresses to interfaces, configuring static routes in IPv6, and implementing IPv6 IGPs (RIPng, IS-IS and OSPFv3), as well as all the basic verification and troubleshooting that surrounds those topics. If you follow along through the book examples and work through all of the "try it yourself" exercises, you should gain a solid understanding of IPv6 LANs and how IPv6 is implemented in JUNOS, as well as a great general, vendor-agnostic view of IPv6 itself and how it differs from IPv4.

Tell us a little bit about the "Advanced IPv6" Day One Guide that you are currently working on?  What should the network practitioner hope to gain from reading it?

Advanced IPv6 is kind of a "Day Two" guide on IPv6 in JUNOS. It continues right where Exploring IPv6 left off and moves onto more advanced topics such as BGP, VRRP, CoS, Multicast and system management. It takes you another big step towards being able to fully implement IPv6 in a production environment.

After you are done with "Advanced IPv6" do you have any other writing aspirations?

I do. Writing is definitely work but I am finding that it's work I really enjoy. Hopefully others like my writing in these Day One booklets and that gives me the opportunity to continue writing!

Thanks for joining us today Chris.  This has been extremely informative and we are all really excited about reading your next Day One guide and are anxiously awaiting its arrival!


27Jul/10

Book Review :: JUNOS High Availability: Best Practices for High Network Uptime

JUNOS High Availability: Best Practices for High Network Uptime
by James Sonderegger, Orin Blomberg, Kieran Milne, Senad Palislamovic
Paperback: 688 pages
Publisher: O'Reilly Media
ISBN-13: 978-0596523046

5 stars: High Praises for JUNOS High Availability

Building a network capable of providing connectivity for simple business applications is a fairly straightforward and well-understood process. However, building networks capable of surviving varying degrees of failure and providing connectivity for mission-critical applications is a completely different story. After all, what separates a good network from a great network is how well it can withstand failures and how rapidly it can respond to them.

While there are a great many books and resources available to assist the network designer in establishing simple network connectivity, there aren't many which discuss the protocols, technologies, and myriad ways in which high availability can be achieved, much less tie it all together into one consistent thread.  "JUNOS High Availability" does just that, in essence providing a single, concise resource covering all of the bits and pieces required in highly available networks, allowing the network designer to build networks capable of sustaining five, six, or even seven nines of uptime.
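To put those availability targets in perspective, the arithmetic behind "nines" is simple. The following quick calculation (my own illustration, not from the book) shows how little downtime each additional nine permits per year:

```python
def annual_downtime_seconds(nines: int) -> float:
    """Maximum downtime per 365-day year at an availability of `nines` nines
    (e.g. five nines = 99.999% availability)."""
    seconds_per_year = 365 * 24 * 3600  # 31,536,000 seconds
    return seconds_per_year * 10.0 ** -nines

for n in (5, 6, 7):
    print(f"{n} nines allows {annual_downtime_seconds(n):.2f} seconds of downtime per year")
```

Five nines works out to roughly 5.3 minutes of downtime per year; seven nines leaves only about 3 seconds.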

In general, there are a lot of misconceptions and misunderstandings amongst Network Engineers with regards to implementing high availability in Junos. One only needs to look at the fact that Graceful Restart (GR) protocol extensions and Graceful Routing Engine Switchover (GRES) are often mistaken for the same thing, thanks in no small part to the fact that these two technologies share similar letters in their acronyms. This book does a good job of clarifying the difference between the two and steers clear of the pitfalls typically prevalent in coverage of the subject matter. The chapter on 'Control Plane High Availability' covers the technical underpinnings of the underlying architecture on most Juniper platforms; coverage of topics like the separation between the control and forwarding planes, and kernel replication between the Master and Backup Routing Engine give the reader a solid foundation to understand concepts like Non-Stop Routing, Non-Stop Bridging, and In-Service Software Upgrades (ISSU). In particular I found this book to be very useful on several consulting engagements in which seamless high availability was required during software upgrades as the chapter on 'Painless Software Upgrades' discusses the methodology for achieving ISSU and provides a checklist of things to be performed before, during, and after the upgrade process. Similarly, I found the chapter on 'Fast High Availability Protocols' to be very informative as well, providing excellent coverage of BFD, as well as the differences between Fast Reroute vs. Link and Node Protection.

Overall I feel this book is a valuable addition to any networking library and I reference it often when I need to implement certain high availability mechanisms, or simply to evaluate the applicability of a given mechanism versus another for a certain deployment. The inclusion of factoring costs into a high availability design is a welcome addition and one that all too many authors fail to cover. Naturally, it only makes sense that costs should be factored into the equation, even when high availability is the desired end-state, in order to ensure that ultimately the business is profitable. If I had to make one suggestion for this book, it would be that there should be additional coverage of implementing High Availability on the SRX Series Services Gateways using JSRP, as this is a fundamental high availability component within Juniper's line of security products. To the authors' credit, however, this book was written just as the SRX line was being released, so I don't fault the authors for providing limited coverage. Perhaps more substantial coverage could be provided in the future if a Second Edition is published.

The bottom line is this - if you are a Network Engineer or Architect responsible for the continuous operation or design of mission-critical networks, "JUNOS High Availability" will undoubtedly serve as an invaluable resource. In my opinion, the chapters on 'Control Plane High Availability', 'Painless Software Upgrades', and 'Fast High Availability Protocols' are alone worth the entire purchase price of the book. The fact that you get a wealth of information beyond that in addition to the configuration examples provided makes this book a compelling addition to any networking library.


1Feb/10

What’s the BFD with BFD?

Many networks today are striving for "five nines" high availability and beyond. What this means is that network operators must configure the network to detect and respond to network failures as quickly as possible, preferably on the order of milliseconds. This is in contrast to the failure detection inherent in most routing protocols, which is typically on the order of several seconds or more. For example, the default hold-time for BGP in JUNOS is 90 seconds, which means that in certain scenarios BGP will have to wait for upwards of 90 seconds before a failure is detected, during which time a large percentage of traffic may be blackholed. It is only after the failure is detected that BGP can reconverge on a new best path.

Another example is OSPF, which has a default dead interval of 40 seconds, or IS-IS, which has a default hold-time of 9 seconds (for DIS routers) and 27 seconds (for non-DIS routers). For many environments supporting mission-critical data, Voice/Video, or other real-time applications, any failure that takes whole seconds to detect is far too long.

While it is possible to lower timers in OSPF or IS-IS to such an extent that a failure between two neighbors can be detected rather quickly (~1 second), it comes at a cost of increased protocol state and considerable burden on the Routing Engine's CPU.  As an example, let us consider the situation in which a router has several hundred neighbors. Maintaining subsecond Hello messages for all of these neighbors will dramatically increase the amount of work that the Routing Engine must perform. Therefore, it is a widely accepted view that a reduction in IGP timers is not the overall best solution to solve the problem of fast failure detection.

Another reason that adjusting protocol timers is not the best solution is that there are many protocols which don't support a reduction of timers to such an extent that fast failure detection can be realized. For example, the minimum BGP holdtime is 20 seconds, which means that the best an operator can hope for is a bare minimum of 20 seconds for failure detection.

Furthermore, lowering protocol timers does nothing for situations in which there is no protocol at all; for example, Ethernet environments in which two nodes are connected via a switch, as can be seen in the figure below.  In this type of environment, R1 has no idea that R2 is not reachable, since R1's local Ethernet segment connected to the switch remains up.  Therefore, R1 can't rely on an 'Interface Down' event to trigger reconvergence onto a new path and instead must wait for higher-layer protocol timers to expire before determining that the neighbor is not reachable.  (Note to the astute reader: yes, Ethernet OAM is certainly one way to deal with this situation, but that is a discussion beyond the scope of this article.)

[Figure: L2 Connectivity: R1 and R2 connected via an Ethernet switch]

Essentially, at the root of the problem is either a lack of suitable protocols for fast failure detection of lower layers, or worse, no protocol at all.  The solution to this was the development of Bidirectional Forwarding Detection, or BFD, developed jointly by Cisco and Juniper.  It has been widely deployed and is continuing to gain widespread acceptance, with more and more protocols being adapted to use BFD for fast failure detection. 

So what is the Big Freaking Deal with Bidirectional Forwarding Detection anyway, and why are so many operators implementing it in their networks?  BFD is a simple hello protocol with the express purpose of rapidly detecting failures at lower layers.  The developers wanted to create a low-overhead mechanism for exchanging hellos between two neighbors without all the nonessential bits which are typical in an IGP Hello or a BGP Keepalive.  Furthermore, the method developed had to be able to quickly detect faults in the bidirectional path between two neighbors in the forwarding plane.  Originally, BFD was developed to provide a simple mechanism for fault identification in the end-to-end path between two neighbors on Ethernet links, as in the example above, prior to the development of Ethernet OAM capabilities.

Once BFD was developed, the protocol designers quickly found that it could be used for numerous applications beyond simply Ethernet.  In fact, one of the main benefits of BFD is that it provides a common method to provide for failure detection for a large number of protocols, allowing a singular, centralized method which can be reused.  In other words, let routing protocols do what they do best - exchange routing information and recalculate routing tables as necessary, but not perform identification of faults at lower layers.  An offshoot of this is that it allows network operators to actually configure higher protocol timer values for their IGPs, further reducing the burden placed on the Routing Engine.

BFD timers can be tuned such that failure detection can be realized in just a few milliseconds, allowing for failure and reconvergence to take place in similar timeframes to that of SONET Automatic Protection Switching.  A word of caution - while BFD can dramatically decrease the time it takes to detect a failure, operators should be careful when setting the intervals too low.  Very aggressive BFD timers could cause a link to be declared down even when there is only a slight variance in the link quality, which could cause flapping and other disastrous behavior to ensue.  The best current practice with regards to BFD timers is to set a transmit and receive interval of 300ms and a multiplier of 3, which equates to 900ms for failure detection.  This is generally considered fine for most environments, and only the most stringent of environments should need to set their timers more aggressively than this.
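To make the timer arithmetic concrete, here is a toy Python model of BFD-style liveness detection (purely illustrative; this is not the RFC 5880 state machine, and the class and method names are invented). The detection time is simply the receive interval multiplied by the multiplier:

```python
import time

class BfdSessionModel:
    """Toy model of BFD liveness detection: a session is considered down
    when no hello has arrived within rx_interval * multiplier."""

    def __init__(self, rx_interval_ms: int, multiplier: int):
        # Detection time in seconds, e.g. 300ms * 3 = 0.9s
        self.detection_time_s = rx_interval_ms * multiplier / 1000.0
        self.last_rx = time.monotonic()

    def receive_hello(self) -> None:
        # A hello from the neighbor resets the detection timer
        self.last_rx = time.monotonic()

    def is_up(self) -> bool:
        # Down once the detection time elapses with no hello received
        return time.monotonic() - self.last_rx <= self.detection_time_s

# The recommended 300ms interval with a multiplier of 3 gives 900ms detection
session = BfdSessionModel(rx_interval_ms=300, multiplier=3)
assert session.detection_time_s == 0.9
```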

One question that is commonly asked is how BFD can send hello packets at millisecond intervals without becoming a burden on the router.  The answer lies in the fact that BFD was intended to be lightweight and to run in the forwarding plane, as opposed to the control plane (as is the case with routing protocols).  While early implementations of BFD ran in the control plane, most newer implementations run in the forwarding plane, taking advantage of the dedicated processors built into the forwarding plane and alleviating the burden which would otherwise be placed on the RE.  In JUNOS releases prior to 9.4, BFD Hello packets were generated by RPD running on the RE.  In order to enable BFD to operate in the PFE in those earlier versions, the Periodic Packet Management Daemon (PPMD) had to be given this responsibility, using the command 'set routing-options ppm delegate-processing'.  In JUNOS 9.4 and higher this is the default behavior and BFD Hello packets are automatically handled by PPMD operating within the PFE.


18Jan/10

Facilitating Firewall Filter Configuration in JUNOS using ‘apply-path’

Undoubtedly, one of the coolest features in JUNOS is the apply-path statement. Using apply-path, an operator can configure a prefix-list which comprises IP prefixes linked to a defined path within JUNOS. This facilitates tasks like configuring firewall filters to allow traffic from configured BGP neighbors, making them highly dynamic.

For example, suppose we have the following configuration:

system {
    ntp {
        server 192.168.70.10;
        server 192.168.72.10;
    }
}
protocols {
    bgp {
        group ibgp {
            type internal;
            neighbor 192.168.50.100;
            neighbor 192.168.50.101;
            neighbor 192.168.50.102;
            neighbor 192.168.50.103;
            neighbor 192.168.50.104;
            neighbor 192.168.50.105;
        }
    }
}

Let's assume we want to restrict BGP and NTP traffic to only those peers with whom we wish to peer.  Without the use of apply-path, we'd have to configure a prefix-list or add individual source-address statements comprising all of the above addresses and add that to our firewall filter configuration, as in the following:

policy-options {
    prefix-list bgp-peers {
        192.168.50.100/32;
        192.168.50.101/32;
        192.168.50.102/32;
        192.168.50.103/32;
        192.168.50.104/32;
        192.168.50.105/32;
    }
    prefix-list ntp {
        192.168.70.10/32;
        192.168.72.10/32;
    }
}
firewall {
    filter re-protect {
        term bgp {
            from {
                prefix-list {
                    bgp-peers;
                }
                protocol tcp;
                port bgp;
            }
            then accept;
        }
        term ntp {
            from {
                prefix-list {
                    ntp;
                }
                protocol udp;
                port ntp;
            }
            then accept;
        }
    }
}

As you can see, that's a lot of redundant typing, especially considering we've already configured these addresses under their respective configuration stanzas within JUNOS.  The above example might not seem like a lot, but on an ISP edge router it wouldn't be uncommon to see several hundred such BGP neighbor statements.  Now multiply that across all of the devices in your network and you will quickly see that prefix-list maintenance can become cumbersome.

Enter the apply-path statement.  The idea behind the apply-path statement was to create a method whereby prefix-lists could be created dynamically by simply referencing other portions of the configuration, thus eliminating most of the effort required to maintain a prefix list.  The path consists of elements separated by spaces.  Each element matches a specific keyword or identifier within the configuration, and you can use wildcards to match more than one identifier.  Wildcards must be enclosed in angle brackets, for example, <*>.
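To illustrate how path elements and wildcards walk the configuration hierarchy, here is a small Python model (my own sketch; JUNOS performs this expansion internally, and the nested-dict layout below is just a stand-in for the real config tree):

```python
def expand_apply_path(config: dict, path: str) -> list:
    """Walk a nested-dict stand-in for the config hierarchy.  Each path
    element matches a keyword or identifier; '<*>' matches any identifier
    at that level.  Returns the identifiers matched by the final element."""
    elems = path.split()
    nodes = [config]
    for elem in elems[:-1]:
        nxt = []
        for node in nodes:
            if not isinstance(node, dict):
                continue  # leaf values (e.g. "internal") have no children
            if elem == "<*>":
                nxt.extend(node.values())
            elif elem in node:
                nxt.append(node[elem])
        nodes = nxt
    matched = []
    for node in nodes:
        if not isinstance(node, dict):
            continue
        if elems[-1] == "<*>":
            matched.extend(node)  # all identifiers at this level
        elif elems[-1] in node:
            matched.append(elems[-1])
    return matched

# Stand-in for the configuration shown earlier
config = {
    "system": {"ntp": {"server": {"192.168.70.10": {}, "192.168.72.10": {}}}},
    "protocols": {"bgp": {"group": {"ibgp": {
        "type": "internal",
        "neighbor": {"192.168.50.100": {}, "192.168.50.101": {}},
    }}}},
}
assert expand_apply_path(config, "system ntp server <*>") == ["192.168.70.10", "192.168.72.10"]
```

The wildcard in "group <*>" descends into every BGP group, so neighbors added to any group automatically appear in the expanded prefix-list.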

As an example, let us take a look at how the above prefix-list definition could be simplified:

policy-options {
    prefix-list bgp-peers {
        apply-path "protocols bgp group <*> neighbor <*>";
    }
    prefix-list ntp {
        apply-path "system ntp server <*>";
    }
}

With the above we've eliminated 8 lines of configuration and replaced them with 2.  The apply-path statements tell each prefix-list to be generated dynamically from the "protocols bgp group <*> neighbor <*>" and "system ntp server <*>" portions of the configuration.

Although this simplifies the configuration greatly, it comes at the expense of being able to easily identify which addresses are part of a prefix-list.  In order to verify that the apply-path statement is working correctly, and to identify the addresses which are part of the prefix-list, we can use the "show | display inheritance" command, as in the following:

root@jncie-lab# show | display inheritance
prefix-list bgp-peers {
    ##
    ## apply-path was expanded to:
    ##     192.168.50.100/32;
    ##     192.168.50.101/32;
    ##     192.168.50.102/32;
    ##     192.168.50.103/32;
    ##     192.168.50.104/32;
    ##     192.168.50.105/32;
    ##
    apply-path "protocols bgp group <*> neighbor <*>";
}
prefix-list ntp {
    ##
    ## apply-path was expanded to:
    ##     192.168.70.10/32;
    ##     192.168.72.10/32;
    ##
    apply-path "system ntp server <*>";
}

In closing, the use of apply-path in an operational network assists in hardening the network and can dramatically reduce the operational overhead of maintaining prefix-lists.  In addition, it helps to eliminate configuration errors in which the addresses added to a prefix-list don't match those configured in the respective portions of the JUNOS configuration.


8Dec/09

Implementing Provider-Provisioned VPNs using Route Reflectors

MPLS/BGP Provider-Provisioned VPNs, such as those proposed in RFC 4364 (formerly RFC 2547) or draft-kompella variants, suffer from some scalability issues due to the fact that all PE routers are required to have a full iBGP mesh in order to exchange VPN-IPv4 NLRI and associated VPN label information.  In a modern network consisting of a large number of PE devices, it becomes readily apparent that this requirement can quickly become unmanageable.

The formula to compute the number of sessions for an iBGP full mesh is n * (n-1)/2.  10 PE devices would only require a total of 45 iBGP sessions (10 * 9/2 = 45).  However, by simply adding 5 additional PEs into this environment, the total number of sessions more than doubles to 105 (15 * 14/2 = 105).  Scalability issues arise because maintaining this number of iBGP sessions on each PE is an operational nightmare, and control plane resources are quickly exhausted.
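The quadratic growth is easy to verify with a quick sanity check of the formula:

```python
def ibgp_full_mesh_sessions(n: int) -> int:
    """Number of iBGP sessions required for a full mesh of n routers: n*(n-1)/2."""
    return n * (n - 1) // 2

assert ibgp_full_mesh_sessions(10) == 45
assert ibgp_full_mesh_sessions(15) == 105   # 5 more PEs more than doubles the mesh
assert ibgp_full_mesh_sessions(100) == 4950
```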

An alternative to this that has gained widespread adoption is to utilize Route Reflectors to reflect the VPN-IPv4 NLRI and associated VPN label between PE devices.  However, several issues arise when using Route Reflectors in such an environment.  In a normal environment without the use of Route Reflectors, MPLS tunnels exist between each PE router such that when the VPN-IPv4 NLRI and associated VPN label are received, a PE router can recurse through its routing table to find the underlying MPLS tunnel used to reach the remote BGP next-hop within the VPN-IPv4 NLRI.  In the Route Reflection model, the Route Reflector typically doesn't have an MPLS tunnel to each PE for which it is receiving VPN-IPv4 NLRI.  Therefore, these routes never become active and are not candidates for reflection back to other client and non-client peers.

A few methods have been developed to circumvent this issue.  One method is to simply define MPLS tunnels from the Route Reflector to each PE.  This solves the problem by allowing the Route Reflector to find a recursive match (i.e. an MPLS tunnel) in order to reach the remote PE.  However, this approach suffers from the drawback that it requires a large number of MPLS tunnels to be configured whose only purpose is to allow the received VPN-IPv4 NLRI to be considered active.  Remember, these tunnels are completely useless for the actual forwarding of data; they are only used within the control plane to instantiate routes.

An alternative and much more graceful solution to this problem is to configure the Route Reflector with a static discard route within the routing table which is used to reference BGP next-hops in MPLS environments (inet.3 for example in JUNOS).  This static discard route only serves to function as a recursive match when incoming VPN-IPv4 NLRI are received for the express purpose of making these routes active and therefore candidates for redistribution.  In JUNOS, one can accomplish this using the following configuration:

routing-options {
    rib inet.3 {
        static {
            route 0.0.0.0/0 discard;
        }
    }
}

With the above, any VPN-IPv4 NLRI received from a PE router is immediately made active due to the fact that a static route has been created in inet.3 which is the routing table used in JUNOS to recurse for BGP next-hops in MPLS environments.
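The effect of that discard route can be modeled with a simple longest-match check (an illustrative sketch of next-hop resolution, not actual JUNOS behavior):

```python
import ipaddress

def next_hop_resolves(inet3_prefixes, bgp_next_hop: str) -> bool:
    """A BGP route can only become active if its next-hop is covered by
    some prefix in the resolution table (inet.3 in this model)."""
    ip = ipaddress.ip_address(bgp_next_hop)
    return any(ip in ipaddress.ip_network(p) for p in inet3_prefixes)

# RR with no MPLS tunnels: VPN-IPv4 routes with next-hop 10.0.0.2 stay inactive
assert not next_hop_resolves([], "10.0.0.2")

# With the static discard default in inet.3, every next-hop resolves,
# so all received VPN-IPv4 routes become candidates for reflection
assert next_hop_resolves(["0.0.0.0/0"], "10.0.0.2")
```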

An excellent whitepaper entitled "BGP Route Reflection in Layer 3 VPN Networks" expands upon this and describes the benefits of using Route Reflection in such environments. It also builds the case for using a distributed Route Reflection design to further enhance scalability and redundancy.

One thing to keep in mind is that with the Route Reflector approach, we have merely moved the problem set from the PE device to the Route Reflector.  Although it minimizes the number of iBGP sessions required on PE devices, the Route Reflector must be capable of supporting a large number of iBGP sessions and, in addition, must be able to store all of the VPN-IPv4 NLRI for all of the VPNs it is servicing.  It is highly recommended that adequate amounts of memory are in place on the Route Reflector in order to store this large amount of routing information.

Finally, while using Route Reflectors is an acceptable solution in the interim to addressing scaling concerns with Provider-Provisioned VPNs, it is not clear if this approach is sufficient for the long term.  There are several other options being examined, with some of them outlined in a presentation entitled "2547 L3VPN Control Plane Scaling" given at APRICOT in Kyoto, Japan in 2005.


3Dec/09

Book Review :: IS-IS: Deployment in IP Networks

IS-IS: Deployment in IP Networks
by Russ White, Alvaro Retana
Hardcover: 320 pages
Publisher: Pearson Education
ISBN-13: 978-0201657722

2 stars: Better off choosing an alternative selection

As IS-IS is one of the more esoteric protocols, understood well by only a few people in large-scale ISP environments, I thought this book would be a welcome addition to my library, as there isn't much else on the market covering this protocol. There are of course ISO 10589 and RFC 1195, which cover the protocol itself, but seeing as this is a short book I thought it might be able to shed some light on an otherwise complex protocol.

In reviewing this book I came away disappointed in general. There are certainly a few golden nuggets, and I give the book a couple of stars just for attempting to bridge the gap between the purely theoretical and the purely vendor-specific. However, the book comes up short on most other points. Oftentimes I found myself wanting to scrap this book in favor of some of the other selections on the market, but since I have respect for these authors I read the whole book hoping that they might redeem themselves by the time I finished.

Obviously the authors have a great deal of knowledge about the subject, and I don't fault them entirely: the quality of the editing is poor, with many grammatical and syntactical errors littered throughout the text, and there are abundant instances throughout the book where the diagrams do not match the text describing them. I was rather disappointed because I usually find that Addison-Wesley publishes some of the best texts on the market.

All in all, I thought this book could have been a lot better than it was. After all, these authors have several other titles under their belt, most notably "Advanced IP Network Design". But in this case, I would say that you are better off looking for other similar titles available on the market, such as Jeff Doyle's "Routing TCP/IP Volume 1" or "The Complete IS-IS Routing Protocol" by Hannes Gredler and Walter Goralski.


15Nov/09

An Overview of BGP FlowSpec

I have given this presentation a few times in the last year and was asked to make it available for public consumption. Essentially, this is a brief overview of RFC 5575, entitled “Dissemination of Flow Specification Rules”, written by Danny McPherson, Jared Mauch, and others. This standard had somewhat of a rocky beginning, as there was limited vendor support, but recently it appears to have picked up quite a bit of steam, with Cisco announcing support for the protocol in the very near future. The benefit of BGP Flow Spec is that it allows BGP speakers to use a new BGP NLRI to define flow filter information which can then be advertised to upstream neighbors via BGP. The primary and immediate motivation of this protocol is to provide intra- and inter-provider distribution of traffic filtering rules to filter DoS and DDoS attacks; however, it can be used for a wide variety of applications in which filtering information must be dynamically distributed throughout a network. I will probably make additional modifications to these slides as the protocol gains a more significant foothold throughout the vendor community and as Service Providers gain more practical deployment experience. As with my other presentations, I will eventually add a voice-over to turn this into a slide-cast.
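To give a taste of what the new NLRI looks like on the wire, here is a minimal sketch of encoding a destination-prefix component as described in RFC 5575 (illustrative only; a real implementation must handle all the component types and the two-octet length form for longer NLRI):

```python
def encode_dest_prefix(prefix: str) -> bytes:
    """Encode a Flow Spec type-1 (destination prefix) component:
    one octet for the type, one for the prefix length in bits,
    then only the significant octets of the prefix."""
    addr, plen_str = prefix.split("/")
    plen = int(plen_str)
    octets = bytes(int(o) for o in addr.split("."))[: (plen + 7) // 8]
    return bytes([1, plen]) + octets

def encode_flowspec_nlri(components: bytes) -> bytes:
    """Prepend the one-octet NLRI length (valid while total length < 240)."""
    assert len(components) < 240
    return bytes([len(components)]) + components

# A rule matching traffic destined to 10.0.0.0/8
nlri = encode_flowspec_nlri(encode_dest_prefix("10.0.0.0/8"))
assert nlri == b"\x03\x01\x08\x0a"
```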


13Nov/09

IP Routing Tutorial

I've given this tutorial quite a few times now and several people have asked me to make it publicly available. This is very much geared towards non-technical folks who would like to have a better understanding of how routing in the Internet works. It covers a brief history of the Internet and the evolution of dynamic routing protocols, as well as high-level coverage of link-state vs. distance-vector IGPs, in addition to discussing EGPs and their role in the exchange of routing information between Autonomous Systems. It also has a few slides on QoS, MPLS, and IPv6. I would still like to make some modifications to a few of the slides, as well as add more content around MPLS and the future of IP.  I will also eventually add a voice-over to turn this into a true slide-cast.  In the meantime, I've put it up on my slideshare account for those of you who would like to have access to it.
