What’s a Steiner Tree?

Any of you who have worked with VPLS or NG-MVPNs are likely already familiar with using Point-to-Multipoint (P2MP) LSPs to get traffic from a single ingress PE to multiple egress PEs.  P2MP LSPs are desirable in these cases because they reduce unnecessary replication by replicating traffic only where absolutely required, for example where a given P2MP LSP must diverge in order to reach two different PEs.

However, the sub-LSPs which make up a given P2MP LSP typically traverse the shortest path from ingress to egress, subject to whatever user-defined constraints have been configured.  While this is fine for many applications, some deployments call for further optimization in order to realize additional bandwidth savings.

We will take a look at something called a Steiner Tree, which can help the network operator realize these additional savings when warranted, reducing the overall bandwidth used in the network and fundamentally changing the way in which paths are computed.

Let’s start by taking a look at a simple example in which RSVP is used to signal a particular P2MP LSP, but no constraints are defined.  All the links in this network have a metric of 10.  In this case, the sub-LSPs will simply traverse along the shortest path in the network, as can be seen in the diagram below.

Here we see a P2MP LSP where PE1 is the ingress PE and PE2, PE3, and PE4 are all egress nodes.  Since no constraints have been defined, the calculated ERO for each of the sub-LSPs follows the shortest path: one sub-LSP takes the PE1-P1-P2-PE2 path, another takes the PE1-P1-P3-PE3 path, and the third takes the PE1-P1-P4-PE4 path.  In this case, each sub-LSP has a total end-to-end cost of 30.

Shortest Tree
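To make the path selection above concrete, here is a minimal sketch, in Python using only the standard library, of the unconstrained computation: a plain Dijkstra run from the ingress toward each egress.  The topology, node names, and uniform metric of 10 are reconstructed from the description and diagram above; they are assumptions for illustration, not output from any actual router.

```python
# Hypothetical reconstruction of the example topology; every link has a
# metric of 10, as stated in the article.
import heapq

LINKS = [
    ("PE1", "P1"), ("P1", "P2"), ("P1", "P3"), ("P1", "P4"),
    ("P2", "PE2"), ("P3", "PE3"), ("P4", "PE4"),
    ("PE3", "PE2"), ("PE3", "PE4"),
]
METRIC = 10

def neighbors(node):
    for a, b in LINKS:
        if a == node:
            yield b
        elif b == node:
            yield a

def shortest_path(src, dst):
    """Plain Dijkstra; returns (cost, path) from src to dst."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr in neighbors(node):
            if nbr not in seen:
                heapq.heappush(heap, (cost + METRIC, nbr, path + [nbr]))
    raise ValueError("no path between %s and %s" % (src, dst))

# One unconstrained sub-LSP per egress, exactly as in the diagram:
for egress in ("PE2", "PE3", "PE4"):
    cost, path = shortest_path("PE1", egress)
    print("sub-LSP to %s: %s (cost %d)" % (egress, "-".join(path), cost))
```

Running this prints the three paths from the diagram, each with a cost of 30.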

Under many circumstances this type of tree would be perfectly acceptable, especially when the end goal is to minimize end-to-end latency.  However, there are other cases where we may want to introduce additional hops in an effort to reduce overall bandwidth utilization.  This is where the concept of a minimum-cost tree, otherwise known as a Steiner Tree, comes into play.

This may seem counter-intuitive at first; after all, doesn’t a shortest-path tree attempt to minimize cost?  The answer is yes, but it usually does so only by looking at cost in terms of end-to-end metrics or hop counts through a network.  Once you understand the mechanics of the Steiner Tree algorithm, and how it attempts to minimize the total number of interconnects, it starts to make more sense.

According to Wikipedia, “the Steiner tree problem, or the minimum Steiner tree problem, named after Jakob Steiner, is a problem in combinatorial optimization, which may be formulated in a number of settings, with the common part being that it is required to find the shortest interconnect for a given set of objects”.

That’s a pretty fancy way of saying it attempts to keep paths as short as possible while reducing the total number of interconnects between all devices to only those that are absolutely required.

Steiner Tree optimizations are very useful where an ingress PE must send large amounts of data to multiple egress PEs and it is preferable to keep overall bandwidth utilization low, perhaps because of usage-based billing in which every reduction in circuit utilization saves money.

Let’s take a look at an example, once again using the same network as before, but this time performing a Steiner Tree optimization in which cost is measured in terms of overall bandwidth utilization.  We still have the requirement to build the P2MP LSP from PE1 to PE2, PE3, and PE4.  This time, however, we are going to compute an ERO such that replication takes place only where absolutely necessary, reducing the total number of interconnects and hence overall bandwidth utilization.

After performing a Steiner Tree path computation, we determine that PE3 is a more logical choice to perform the replication towards PE2 and PE4, even though this increases the overall end-to-end metric cost to 40.  The reason is that we have now eliminated the bandwidth utilization on the P1-P2, P2-PE2, P1-P4, and P4-PE4 links.  In effect, we’ve gone from utilizing bandwidth across seven links to only five.  If the P2MP LSP were servicing a 100 Mbps video stream, we would have just reduced overall bandwidth utilization on the network as a whole by 200 Mbps.

Steiner Tree
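The exact Steiner tree problem is NP-hard, so practical implementations rely on approximations.  Continuing the earlier sketch (this reuses LINKS and shortest_path from the previous snippet), here is an illustration of one common heuristic, in the spirit of Takahashi-Matsuyama: grow the tree outward from the ingress by repeatedly attaching whichever terminal is cheapest to reach from any node already in the tree.  This is an illustrative approximation under the same assumed topology, not the algorithm any particular vendor implements.

```python
def edge(a, b):
    # Normalize an undirected link so it can live in a set.
    return tuple(sorted((a, b)))

def steiner_heuristic(source, terminals):
    """Greedy minimum-cost-tree approximation: repeatedly attach the
    remaining terminal that is cheapest to reach from the tree so far."""
    tree_nodes, tree_edges = {source}, set()
    remaining = list(terminals)
    while remaining:
        # Cheapest attachment path from the current tree to any remaining
        # terminal; ties break in list order, which here happens to
        # reproduce the tree described in the article.
        term, (cost, path) = min(
            ((t, min(shortest_path(n, t) for n in tree_nodes))
             for t in remaining),
            key=lambda item: item[1][0],
        )
        for a, b in zip(path, path[1:]):
            tree_edges.add(edge(a, b))
        tree_nodes.update(path)
        remaining.remove(term)
    return tree_edges

# Shortest-path tree: the union of the three per-egress shortest paths.
spt_edges = set()
for egress in ("PE2", "PE3", "PE4"):
    _, path = shortest_path("PE1", egress)
    spt_edges.update(edge(a, b) for a, b in zip(path, path[1:]))

steiner_edges = steiner_heuristic("PE1", ["PE3", "PE2", "PE4"])
print("shortest-path tree uses %d links" % len(spt_edges))     # 7
print("minimum-cost tree uses %d links" % len(steiner_edges))  # 5
```

Note how the heuristic lands on the five-link tree through PE3, trading a higher end-to-end metric for fewer interconnects, which is exactly the trade-off described above.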

One of the interesting side effects of this approach is that PE3 is now not only an egress node but also a transit node (for the sub-LSPs terminating at PE2 and PE4).  Because of this, the Penultimate Hop Popping (PHP) behavior on P3 is different in these scenarios: we don’t want P3 popping the outer label before sending frames to PE3, since PE3 may need to accommodate labeled packets heading to PE2 or PE4.  We will cover some of this in a subsequent article on the signaling mechanisms inherent in P2MP LSPs and the corresponding changes to MPLS forwarding state.

Path computation for P2MP LSPs can be complex, especially when the goal is to create Steiner Trees.  The added complexity comes from the fact that the placement of each sub-LSP has a direct bearing on the placement of the others, whereas when shortest-path trees are calculated each sub-LSP may be signaled along its own unique path without regard to the placement of other sub-LSPs.

As with traditional LSPs, similar methods of determining the paths through the network, and hence the ERO, can be used, e.g. manual, offline computation.

The easiest approach would be to use constructs like Link Coloring (Affinity Groups for you Cisco wonks) to influence path selection, for example by coloring the PE1-P1, P1-P3, P3-PE3, PE3-PE2, and PE3-PE4 links with an included color, or by coloring the remaining links with a different color and excluding that color in the LSP configuration.
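Conceptually, that trick amounts to pruning the topology before running the ordinary shortest-path computation.  The self-contained sketch below illustrates the idea: a toy “CSPF” that discards every link not carrying the included color and then runs Dijkstra over what is left.  The link set and color assignment mirror the example above and are assumptions for illustration; real implementations evaluate admin-group constraints inside the path computation rather than by literal pre-filtering.

```python
import heapq

def cspf_path(src, dst, links, metric=10):
    """Dijkstra restricted to the supplied (already pruned) link set."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr in adj.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
    return None  # no path satisfies the color constraint

# Only the links carrying the included color survive the pruning:
INCLUDED = [
    ("PE1", "P1"), ("P1", "P3"), ("P3", "PE3"),
    ("PE3", "PE2"), ("PE3", "PE4"),
]

for egress in ("PE2", "PE3", "PE4"):
    print(egress, cspf_path("PE1", egress, INCLUDED))
```

The resulting EROs trace the Steiner-like tree through PE3.  Notice also that if PE3’s links are removed from INCLUDED, cspf_path() returns None, which previews the failure-scenario problem discussed below.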

However, this approach is merely a trick.  We are feeding elements into the CSPF algorithm such that the shortest path it calculates essentially mimics that of a Steiner Tree.  In other words, it’s not a true Steiner Tree computation, because the goal was never to reduce the total number of interconnects, but simply to utilize only links of an included color.

Furthermore, such an approach doesn’t easily accommodate failure scenarios in which PE3 goes down.  Even though Fast Reroute or Link/Node Protection may be desired, if the remaining links do not carry the included colors, routers may be unable to compute an ERO for signaling.

One workaround is to configure your Fast Reroute Detours or your Link/Node Protection Bypass LSPs with more relaxed constraints, such that any potential path might be used.  More commonly, however, additional computations are performed using traditional offline approaches (modeling tools such as those provided by vendors like WANDL, OPNET, or Cariden) that factor in both steady-state and failure scenarios to help the operator determine the optimal placement of all elements.

An interesting side note is that there are some pretty significant developments underway whereby online computation can be performed in such a way as to optimize all P2MP LSPs network-wide, using what are known as Path Computation Elements (PCEs).  A PCE is essentially any entity capable of computing paths for any set of LSPs in a network by applying various constraints.  PCEs look to be especially useful in large carrier networks consisting of many LSPs, and especially so in the case of Steiner Tree P2MP LSPs, where the placement of each sub-LSP is highly dependent on the others.  See the charter of the PCE Working Group in the IETF for more information on this and other related developments.

As a side note, it should be fairly evident that in order to perform path optimization on anything other than shortest-path trees (i.e. Steiner Trees or any other type of tree based on user-defined constraints), RSVP signaling must be used in order to signal a path along the computed ERO.  LDP can certainly be used to build P2MP LSPs (aka mLDP), but much like traditional LSPs built via LDP, those paths simply follow the IGP.

Stay tuned for more articles on P2MP LSPs and some of the other underpinnings behind many of the next-generation MPLS services being commonly deployed…

Black Hat OSPF Vulnerabilities: Much Ado About Nothing

Imagine a group of researchers planning to speak at a conference regarding a previously undiscovered vulnerability present in most homes that would allow a thief to rob your home of its valuables with complete ease.  You would probably be interested in hearing what they had to say so you could take the necessary precautions to protect your home.

Now imagine that when they presented their findings, they went on to state that the robbery was incredibly easy to pull off, so long as you left your front door open and also provided them with the security code for any alarm systems.  You would probably find this implausible and dismiss it as nothing more than the proliferation of fear, uncertainty, and doubt.

That’s precisely what happened last week at the well-respected Black Hat security conference in Las Vegas, where researchers from the Israel Institute of Technology and Advanced Defense Systems, Ltd. presented their findings of a serious vulnerability present in OSPF.  So serious, in fact, that the researchers stated the only way to properly mitigate the threat, short of fixing the protocol, is to switch to another routing protocol such as RIP or IS-IS.

The researchers went on to present two previously undisclosed methods of poisoning routing information within an OSPF domain, such that bogus information could be propagated and used for a multitude of malicious purposes.

The first attack allows for the creation of a remote false adjacency, whereby an unsuspecting victim router goes on to form an adjacency with a phantom router.  Once this adjacency is formed, the attacker can then flood the victim router with bogus Link-State Advertisements (LSAs).

The second attack allows the flooding of what are known as Disguised LSAs, whereby a victim router would accept spoofed LSAs which it believed were generated by another router within the OSPF domain.

While both of these previously unknown attack vectors might raise some eyebrows and create cause for concern, the researchers made some pretty big assumptions about what it takes to mount such an attack: one would first need to insert a router into the network, and once this was accomplished, one would then somehow need to obtain the MD5 authentication key that is widely used to secure the communications channel between OSPF neighbors.

Without a doubt, there is some merit to what these researchers have found, and as a protocol junkie I am always interested to learn the inner workings of these protocols.  However, what they’ve completely overlooked is the first tenet of information security, which is to ensure the physical environment is properly secured.  In most environments, gaining physical access to the network might prove difficult enough.  Nonetheless, even if an attacker were able to insert such a device, the net effect is that the attack would be short-lived, as it would be fairly trivial for a network administrator to isolate the offending device and remove it from the network.

Furthermore, and perhaps the biggest barrier to entry, they’ve failed to recognize just how unlikely it is that someone could obtain the authentication key in the first place.  While MD5 is known to have some serious deficiencies and is vulnerable to collision attacks, it still represents a pretty high bar for the would-be attacker.  And various vendors are already rolling out HMAC-SHA for the authentication function within OSPFv2.

In addition, the researchers admitted that in their testing of the first exploit, the victim router tore down the adjacency to the phantom router after 125 seconds because the victim router did not receive the proper acknowledgement packets from the phantom router.  Once again, the net effect is that if the attackers were able to meet the first two assumptions, their attack would still be relatively short-lived.

While OSPF is widely used throughout the Internet as a whole, you probably have better odds of winning the lottery than actually experiencing an attack like the ones these researchers demonstrated.  The researchers presented some pretty bold claims, but at the end of the day, given the assumptions they made, they didn’t really present anything that wasn’t already widely known about the vulnerabilities inherent in the OSPF protocol.  Their attack rests on two substantial barriers to entry, which doesn’t make it impossible, but it does make it incredibly difficult and highly implausible.  In my experience, attackers tend to move on to far easier targets.

There are a lot of things to worry about when it comes to protecting your corporate assets, but in my humble opinion, this is not one of them.  So rather than ripping out OSPF as these researchers suggest, you might be better served by taking a second look at your environment and ensuring the physical premises are properly secured.

JNCIE Tips from the Field :: Summarization Made Easy

Today we’ll start a series of articles covering tips and techniques that might be utilized by JNCIE candidates, whether pursuing the JNCIE-SP, JNCIE-ENT, or even the JNCIE-SEC.  The tips and techniques I will be covering might prove useful during a lab attempt, but they could also be used in real-world scenarios to save time, minimize configuration burden, and eliminate mistakes that might otherwise be made.  I want everyone to understand that what I am about to write is simply a technique.  I am not divulging any materials or topics which are covered under NDA.


IETF Provides New Guidance on IPv6 End-Site Addressing

I’ve always been at odds with the recommendation in RFC 3177 towards allocating /48 IPv6 prefixes to end sites.  To me this seemed rather short-sighted, akin to saying that 640K of memory should be enough for anybody.  It’s roughly equivalent to giving out /12s in the IPv4 world, which in this day and age would seem completely ridiculous, but let us not forget that in the early days of IPv4 it wasn’t uncommon to get a /16 or even a /8 in some cases.

Granted, I know there are quite a few more usable bits in IPv6 than there are in IPv4, but allocating huge swaths of address space simply because it’s there and we haven’t thought of all the myriad ways it could be used in the future just seems outright wasteful.

So you can imagine my surprise, and also my elation, last week when the IETF published RFC 6177, entitled “IPv6 Address Assignment to End Sites”.  In it, the general recommendation of allocating /48s to end sites, which had been the de facto standard since the original publication of RFC 3177 in 2001, has finally been reversed.

It seems that sanity has finally prevailed and the IAB/IESG have decided to take a more pragmatic approach towards address allocation in IPv6.  The recommendations in RFC 6177 attempt to balance conservation of IPv6 addresses against continuing to make it easy for IPv6 adopters to get the address space they require, without complex renumbering or other long-term scaling inefficiencies.  It is clear that acting too conservatively and allocating very small address spaces could act as a disincentive and possibly stifle widespread adoption of IPv6.

The new recommendations for address allocation are as follows (the short sketch after the list puts these prefix sizes in perspective):

  • /48 in the general case, except for very large subscribers
  • /64 when it is known that one and only one subnet is needed by design
  • /128 when it is absolutely known that one and only one device is connecting
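For a sense of scale, here is a quick back-of-the-envelope check using Python’s standard ipaddress module and the 2001:db8::/32 documentation space; the specific prefixes are purely illustrative:

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")  # general end-site assignment
print(site.num_addresses)            # 2**80 addresses in a /48
print(2 ** (64 - site.prefixlen))    # 65536 possible /64 subnets per /48

lan = ipaddress.ip_network("2001:db8:0:1::/64")  # single-subnet case
print(lan.num_addresses)             # 2**64 addresses on that one subnet

host = ipaddress.ip_network("2001:db8::1/128")   # single-device case
print(host.num_addresses)            # exactly 1
```

Even the “conservative” /48 still leaves room for 65,536 LAN-sized subnets, which gives a feel for the balance RFC 6177 is trying to strike.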

It goes on to state other recommendations and offers guidance to operators with regard to when to allocate certain prefix lengths.  Essentially, this means that individual network operators now have more options regarding which prefix size to allocate, and can move away from strict general guidelines.  In essence, operators make the decision as to what prefix size to allocate based on an analysis of the needs of particular customers.

Perhaps this practical conservation will never be needed given the staggering amount of address space available in IPv6, but maybe, just maybe, in the very distant future, if IPv6 is still in widespread use, it will be due in part to some of these recommendations being put in place today.  After all, 640K did turn out to be a rather small number, didn’t it?

IPv4 Address Exhaustion Causing Harmful Effects on the Earth

Today, I received a very disturbing email on NANOG which was forwarded from a recipient on the Global Environment Watch (GEW) mailing list.  If this is true, we all need to take steps to make an orderly and smooth transition to IPv6 as quickly as possible, lest we suffer from the harmful effects described in this email.



Bluehost IPv6 Epic Fail

Recently, I had a conversation with my hosting provider to determine whether they had IPv6 support, as I’m interested in getting my web site set up and reachable via IPv6.  Below is a copy of the conversation I had with their customer support, which clearly indicates we’ve got a long way to go before IPv6 is ready for the masses:

Bluehost Support: Hi, how can I help you?
Stefan: Hi, I am a hosting customer and I was wondering if you currently have support for IPv6?
Bluehost Support: Let me look into that for you.
Bluehost Support: Yes, we do support IPv6.
Stefan: Great! Is there a cost associated with that and how do I go about setting that up?
Bluehost Support: I am sorry, we cannot give you IPv6 until our IP5 runs out.
Stefan: Wait, you just told me you have support for IPv6.  What the heck is IP5?
Bluehost Support: IP5 is the version before IP6.  We can’t give you an IP6 until our IP5 runs out. I am sorry for the misunderstanding.

Interview with Chris Grundemann, Author of ‘Day One: Exploring IPv6’

Spend a little time in Juniper, ARIN, or a wide variety of other networking forums, and you’ll likely see the name Chris Grundemann.  Recently, we had the opportunity to catch up with him and discuss the nature of his involvement in deploying IPv6 at tw telecom, as well as his recently published Juniper booklet entitled “Day One: Exploring IPv6”.


Thanks Chris for joining us today.  Tell us a little bit about yourself and your career experience, and specifically tell us about your day-to-day experience working with IPv6.

Certainly. Career-wise, I am currently engaged as a Network Architect with tw telecom inc. where I am responsible for setting forward looking architectures and leading various technology development efforts. I am also the Founding Chair of the Colorado Chapter of the Internet Society, Founding Editor of Burning with the Bush and an active participant (and current AC nominee) in the ARIN policy development process. Obviously I am also the author of the Juniper “Day One: Exploring IPv6” booklet.

My day-to-day experience with IPv6 is actually pretty minimal at this point. Last year while I was still on the IP backbone team here at tw telecom, I rolled out IPv6 across all of our PE routers – in one night. Since then, there has been very little technical work needed from a networking perspective. We still have plenty of work to fully operationalize IPv6 but it is mostly systems and process issues now, much less exciting.

For any readers who are interested, you can find a lot more about me on my personal site. This includes links to my Facebook and LinkedIn profiles, so feel free to send me an invite to connect!

You rolled out IPv6 across all of your PE routers in a single night! That’s a pretty big accomplishment. Would you say that Juniper’s implementation of IPv6 made it easy to deploy and support IPv6 across a large number of devices?

Thanks! There was of course plenty of preparation leading up to that night, but we “flipped the switch” all at once and it went extremely smooth.

All of Juniper’s carrier routers forward IPv6 in hardware, which is huge. Also, IPv6 was integrated into Junos very well, most of the commands are similar if not the same between IPv4 and IPv6. This makes it really easy operationally speaking. So, yes, I would definitely agree that Juniper’s implementation of IPv6 makes it easy to deploy and scale.

Ok, so let’s specifically talk about the current state of affairs with IPv6.  Hurricane Electric, one of the leading providers of IPv6 connectivity, states that as of the time of this writing we have less than a year remaining until complete IPv4 exhaustion.  This is based on the fact that there are only sixteen /8 network blocks available for allocation (approximately 6%).    We’ve heard figures such as this for many years now, but techniques like NAT have allowed people to extend the length of the existing IPv4 address pool.  Based on your experience working with IPv6 and also your involvement with ARIN, can you help us to understand what is fact and what is fiction – how long do you really think we have before total address exhaustion becomes a reality and customers will have no choice but to start looking at IPv6 for future deployments?

Let me re-phrase your query into two distinct questions if I may: How long do we have with IPv4 and when will network operators be forced to consider IPv6 deployment? The answers are very different so I think they should be addressed individually.

First, How long do we have with IPv4? As you state, Hurricane Electric’s widget gives us less than a year. But let’s start with a quick level-set. There are actually three distinct points leading up to what I would call “complete IPv4 exhaustion.” The first is IANA unallocated pool exhaustion. This is the point when the global pool of IPv4 /8s designated for unicast routing reaches 5 remaining and subsequently each of the 5 RIRs receives one (thus depleting the unallocated pool completely). The second point is RIR exhaustion, when the Regional Internet Registries can no longer allocate nor assign IPv4 addresses that they received from IANA (because they don’t have any). Finally, true exhaustion happens when the ISPs/LIRs exhaust their remaining IPv4 addresses and end users simply cannot get a routable IPv4 address.

As I understand it, Hurricane Electric is getting their data from the IPv4 Address Report built by Geoff Huston and are predicting the date of the first point; exhaustion of the IANA IPv4 unallocated address pool. As of today that date is projected to be 1 July, 2011 – less than a year away. However, this projection is based on the current and historical run-rate, on how fast we have consumed IPv4 addresses up to this point. Because so many folks have not paid attention to IPv6 and are still wholly dependent on IPv4, it is quite likely that the run-rate will increase, perhaps drastically, as we get closer to IANA unallocated pool exhaustion. If this happens, we actually have much less than one year before reaching that first point.

Predicting the second point gets a little murkier, because different folks define this point differently. Should we declare that RIR exhaustion is upon us when the first RIR runs out of unallocated IPv4 address space? When the last one does? Perhaps when the RIR for your region has no unallocated IPv4 to give you? Mr. Huston projects the date “where the first RIR has exhausted its available pool of addresses” and since he has already done all the work, it is a convenient place to set the bar. As of today that date is predicted to be 20 January, 2012. Remember again that this does not take into account any possible run on IPv4 addresses that may happen between now and then and that other RIRs will have IANA allocated IPv4 space for some time after that date.

The final point is the hardest one to pin down.  This is mostly because it would be very hard, if not impossible, to quantify how much currently allocated/assigned address space is unused or underused.

Many ISPs may be able to feed off of current reserves for months or even years, while many more will run out of IPv4 addresses within weeks of receiving their last traditional allocation from their RIR.

You also have to take into account things like IPv4 address transfers which are now allowed in many regions, other possible policy changes and transition technologies such as carrier-grade-NAT (CGN). All of these things pull IPv4 use in different directions. So no one can intelligently predict this final date.

Although I cannot tell you that IPv4 will be dead in X years, there are some very important facts that we should not overlook. The first is that Geoff Huston’s projections have remained quite consistent over the past two years, and the time remaining has steadily decreased over those two years. The second is that we are running out of usable IPv4 addresses. NAT was a stopgap to allow folks time to adopt IPv6. That time has largely been wasted, unfortunately. The bottom line is that IPv4 will continue to become more expensive to use on interconnected networks while IPv6 continues to become less expensive.

This is where the second question comes into play: When will network operators be forced to look at IPv6 deployment? The truth is that they should be looking into it now. If you are not adding only IPv6-capable hardware and software to your network now, you are going to be forced to spend extra money upgrading sooner than you would like. As IPv4 becomes ever more expensive (both directly, as ISPs charge more for it or you are forced to pay for addresses through a transfer, and indirectly, as CGNs and other transition mechanisms drive up operational costs), many will turn to IPv6, more and more as the next two years play out. Businesses that have IPv6-capable networks now will have a competitive advantage over those who are forced to upgrade their networks to get IPv6 connectivity.

An often overlooked aspect of this question is security. If your network is not IPv6 enabled today, you likely have IPv6 traffic being tunneled right through your firewalls. Another is mobile access: very soon, mobile phone operators will be migrating to 4G technologies that take advantage of IPv6 addressing for all new phones on their networks. These IPv6 mobile devices will be reaching your website(s) via IPv6; if you want them to have the best possible experience, your site needs to be running IPv6 natively.  As soon as a website is IPv6-only, ISPs will be required to provide IPv6 connectivity or lose customers to those who do.

So, in short, the answer really is now. Everyone should be thinking of IPv6 when planning all future network deployments, starting now (if not yesterday).

Many industry experts are already speculating that an IPv4 black market will exist because of the depletion of IPv4 address space and the lack of a large IPv6 installed base. Do you suspect there will be a black market for IPv4 addresses and what impacts might this have?

The answer varies a bit depending on how you choose to define black market. Under many definitions, that market already exists. I think that it already impacts us and that it will get worse as we near and ultimately cross the free pool depletion threshold. Think spammers and phishers operating out of address blocks that they beg, borrow, steal and often “rent” or “buy.” There are also instances where much more legitimate businesses make back-room deals for the use of IP addresses.

Overall, some of the most negative impacts surround the WHOIS database and the integrity of the data it contains. When folks get addresses through grey or black markets, instead of from the RIR, they are probably not going to report proper reassignment registration information to the RIR. This leads to stale WHOIS data, which makes troubleshooting and abuse reporting much harder for operators and investigation much harder for law enforcement. I helped author a recently adopted ARIN policy change to start addressing some of this and am actually spearheading an effort to continue that work with another policy proposal in the ARIN region as we speak.

Another concern is prefix hijacking. This is not really a black market issue, but it is another facet of the problems we will face more and more as IPv4 gets more expensive, unless and until IPv6 adoption picks up across the board.

There is a lot of work going on right now within ARIN, the IETF and other RIRs to try and limit the impacts of any IPv4 black market, other abuses and also ease the overall IPv4-IPv6 transition. Anyone interested in this work should join the ARIN-PPML (Public Policy Mailing List) [http://lists.arin.net/mailman/listinfo/arin-ppml] (or their local equivalent) and/or show up at a meeting; and join the conversation. ARIN and the other RIRs are open, transparent, ground-up organizations and your voice can make a huge impact.

It has been observed that the number of Autonomous Systems supporting IPv6, as well as the number of IPv6 DNS queries in the form of AAAA records, has significantly increased in the last several years.  Have we reached that critical mass where widespread adoption is imminent, and if so, what can we expect to see in the next few years?

I think that widespread adoption is imminent now but I don’t believe that it is an issue of critical mass, more an issue of network operators starting to see the dates we discussed above nearing. I make the distinction because there is still very little IPv6 traffic actually flowing. What I think is happening is that the folks who have been watching this are getting a sense of urgency that is finally breaking through to the folks who write the checks. The real critical mass moment will be when we see IPv6 traffic levels really climbing and IPv4 traffic growth starting to slow. I think you can expect to see that in the next few years. Within five certainly and probably within three.

Let’s talk for a minute about the Juniper “Day One” guides.  Can you tell us what they are all about, and more specifically, tell us a little bit about the “Day One: Exploring IPv6” guide that you’ve written.  What is it all about and what can potential readers hope to gain from reading it?

The Day One guides are exactly what the name implies, they are booklets that give you everything you need to know to get through your first day working on the covered topic. They are hands-on, example-driven, cut-to-the-chase primers on all sorts of topics surrounding Juniper Networks gear and the Junos OS. In “Exploring IPv6” I tried to really provide that common sense starting point for implementing IPv6. The booklet covers enabling IPv6, adding IPv6 addresses to interfaces, configuring static routes in IPv6, implementing IPv6 IGPs (RIPng, IS-IS and OSPF v3) as well as all the basic verification and troubleshooting that surrounds those topics. If you follow along through the book examples and work through all of the “try it yourself” exercises, you should gain a solid understanding of IPv6 LANs and how IPv6 is implemented in JUNOS, as well as a great general / vendor-agnostic view of IPv6 itself and how it differs from IPv4.

Tell us a little bit about the “Advanced IPv6” Day One guide that you are currently working on.  What should the network practitioner hope to gain from reading it?

Advanced IPv6 is kind of a “Day Two” guide on IPv6 in JUNOS. It continues right where Exploring IPv6 left off and moves onto more advanced topics such as BGP, VRRP, CoS, Multicast and system management. It takes you another big step towards being able to fully implement IPv6 in a production environment.

After you are done with “Advanced IPv6,” do you have any other writing aspirations?

I do. Writing is definitely work but I am finding that it’s work I really enjoy. Hopefully others like my writing in these Day One booklets and that gives me the opportunity to continue writing!

Thanks for joining us today Chris.  This has been extremely informative and we are all really excited about reading your next Day One guide and are anxiously awaiting its arrival!