ShortestPathFirst Network Architecture and Design, and Information Security Best Practices

21 September 2011

What’s a Steiner Tree?

Any of you who have worked with VPLS or NG-MVPNs are likely already familiar with using Point-to-Multipoint (P2MP) LSPs to get traffic from a single ingress PE to multiple egress PEs.  P2MP LSPs are desirable in these cases because they reduce unnecessary replication, replicating traffic only where absolutely required, for example where a given P2MP LSP must diverge in order to reach two different PEs.

However, typically the sub-LSPs which are part of a given P2MP LSP traverse the shortest path from ingress to egress based on whatever user-defined constraints have been configured.  While this is fine for many applications, further optimization may be required to realize additional bandwidth savings.

We will take a look at something called a Steiner Tree, which can help the network operator realize these additional savings when warranted, reducing the overall bandwidth used in the network and fundamentally changing the way in which paths are computed.

Let's start by taking a look at a simple example in which RSVP is used to signal a particular P2MP LSP, but no constraints are defined.  All the links in this network have a metric of 10.  In this case, the sub-LSPs will simply traverse along the shortest path in the network, as can be seen in the diagram below.

Here we see a P2MP LSP where PE1 is the ingress PE and PE2, PE3, and PE4 are all egress nodes.  Since no constraints have been defined, the calculated ERO for each of the sub-LSPs will follow the shortest path: one sub-LSP takes the PE1-P1-P2-PE2 path, another takes the PE1-P1-P3-PE3 path, and the third takes the PE1-P1-P4-PE4 path.  In this case, each sub-LSP has a total end-to-end cost of 30.

Shortest Tree

Under many circumstances this type of tree would be perfectly acceptable, especially when the end goal is to minimize end-to-end latency.  However, there are other cases where we may want to introduce additional hops in an effort to reduce overall bandwidth utilization.  This is where the concept of a minimum-cost tree, otherwise known as a Steiner Tree, comes into play.

This may seem counter-intuitive at first; after all, doesn't a shortest-path tree attempt to minimize costs?  The answer is yes, but it usually only does so by looking at costs in terms of end-to-end metrics or hops through a network.  Once you understand the mechanics of the Steiner Tree algorithm, and how it attempts to minimize the total number of interconnects, it starts to make more sense.

According to Wikipedia, "the Steiner tree problem, or the minimum Steiner tree problem, named after Jakob Steiner, is a problem in combinatorial optimization, which may be formulated in a number of settings, with the common part being that it is required to find the shortest interconnect for a given set of objects".

That's a pretty fancy way of saying it's attempting to optimize the path to be the shortest path possible while at the same time reducing the total number of interconnects between all devices to only those that are absolutely required.

Steiner Tree optimizations are very useful where an ingress PE must send large amounts of data to multiple PEs and it is preferable to keep overall bandwidth utilization low, perhaps because of usage-based billing scenarios which require that circuit utilization be kept as low as possible in order to save money.

Let's take a look at an example, once again using the same network as before, but this time performing a Steiner Tree optimization whereby cost is measured in terms of overall bandwidth utilization.  In this case we still see that we have the requirement to build the P2MP LSP from PE1 to PE2, PE3, and PE4.  However, this time we are going to compute an ERO such that replication will only take place where absolutely necessary in order to reduce the total number of interconnects and hence overall bandwidth utilization.

After performing a Steiner Tree path computation, we determine that PE3 is a more logical choice to perform the replication to PE2 and PE4, even though it increases the overall end-to-end metric cost to 40.  The reason is that we have now eliminated the bandwidth utilization on the P1-P2, P2-PE2, P1-P4, and P4-PE4 links.  In effect, we've gone from utilizing bandwidth across seven links to only five.  If the P2MP LSP were servicing a 100 Mbps video stream, we have just reduced overall bandwidth utilization on the network as a whole by 200 Mbps.

Steiner Tree

One of the interesting side-effects of this approach is that PE3 is no longer just an egress node; it is now also a transit node for the sub-LSPs terminating at PE2 and PE4.  Because of this, the Penultimate Hop Popping (PHP) behavior on P3 is different in these scenarios: we don't want P3 popping the outer label before sending frames to PE3, since PE3 may need to accommodate labeled packets heading to PE2 or PE4.  We will cover some of this in a subsequent article on the signaling mechanisms inherent in P2MP LSPs and some of the changes to the behavior in MPLS forwarding state.

Path computation for P2MP LSPs can be complex, especially when the goal is to create Steiner Trees.  The reason for this added complexity is that the placement of one sub-LSP has a direct correlation with that of the other sub-LSPs, which is contrary to what happens when shortest-path trees are calculated, where each sub-LSP may be signaled along its own unique path without regard to the placement of other sub-LSPs.

As with traditional LSPs, similar methods of determining the paths through the network, and hence the ERO, can be used, for example manual configuration of strict hops or offline computation.
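To illustrate the manual option, a sub-LSP can be pinned to a path by defining an explicit path of strict hops and referencing it from the sub-LSP definition.  The snippet below is only a rough sketch, using invented addresses for the P1, P3, and PE3 hops and a hypothetical P2MP LSP name; it shows the shape of the configuration rather than the actual topology addressing:

protocols {
    mpls {
        /* strict hops: P1, then P3, then PE3; PE2 is reached via the LSP's "to" address */
        path via-p3-pe3 {
            10.0.12.2 strict;
            10.0.23.2 strict;
            10.0.35.2 strict;
        }
        label-switched-path pe1-to-pe2 {
            to 192.168.0.2;
            p2mp p2mp-video;
            primary via-p3-pe3;
        }
    }
}

The other sub-LSPs belonging to the same p2mp tunnel would be defined in the same manner, each with its own explicit path where needed.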

The easiest approach would be to use constructs like Link Coloring (Affinity Groups for you Cisco wonks) to influence path selection, for example, by coloring the PE1-P1, P1-P3, P3-PE3, PE3-PE2, and PE3-PE4 links with an included color, or coloring the remaining links with a different color and excluding that color from the LSP configuration.
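A rough sketch of what that coloring might look like on PE1 is shown below.  The admin-group name and value, the interface, and the destination address are all invented for the example, and the same color would of course also need to be applied to the relevant links on P1, P3, and PE3:

protocols {
    mpls {
        admin-groups {
            steiner 10;
        }
        /* PE1-P1 link, colored so CSPF will prefer it */
        interface ge-0/0/0.0 {
            admin-group steiner;
        }
        label-switched-path pe1-to-pe4 {
            to 192.168.0.4;
            p2mp p2mp-video;
            admin-group {
                include-any steiner;
            }
        }
    }
}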

However, this approach is merely a trick.  We are feeding elements into the CSPF algorithm such that the shortest path which is calculated essentially mimics that of a Steiner Tree.  In other words, it's not a true Steiner Tree calculation because the goal was not to reduce the total number of interconnects, but rather to only utilize links of an included color.

Furthermore, such an approach doesn't easily accommodate failure scenarios in which PE3 may go down, because even though Fast Reroute or Link/Node Protection may be desired, if the remaining links do not carry the included colors the routers may be unable to compute an ERO for signaling the protection paths.

Workarounds are to configure your Fast Reroute Detours or your Link/Node Protection Bypass LSPs with more relaxed constraints, such that any potential path might be used.  More commonly, however, additional computations are performed using traditional offline approaches (modeling tools such as those provided by WANDL, OPNET, or Cariden) which factor in both steady-state and failure scenarios to assist the operator in determining optimal placement of all elements.

An interesting side note is that there are some pretty significant developments underway whereby online computation can be performed in such a way as to optimize all P2MP LSPs network-wide, using something known as Path Computation Elements (PCEs).  A PCE is essentially any entity capable of performing path computation for a set of paths throughout a network by applying various constraints.  It looks to be especially useful in large carrier networks consisting of many LSPs, and especially so in the case of Steiner Tree P2MP LSPs, where sub-LSP placement is highly dependent on the placement of the other sub-LSPs.  See the charter of the PCE Working Group in the IETF for more information on this and other related developments.

As a side note, it should be fairly evident that in order to perform path optimizations on anything other than shortest-path trees (i.e. Steiner Trees or any other type of tree based on user-defined constraints), RSVP signaling must be used in order to signal a path along the computed ERO.  LDP certainly can be used to build P2MP LSPs (aka mLDP); however, much like traditional LSPs built via LDP, the path simply follows the IGP shortest path.

Stay tuned as we will cover more exciting articles on P2MP LSPs and some of the other underpinnings behind many of the next generation MPLS services being commonly deployed.


12 September 2011

Preparation Tips for the JNCIE-SEC Exam

Not a day goes by since I passed the JNCIE-SEC exam that I don't receive an inquiry in one form or another regarding how I prepared for it.  It seems that there is an incredible amount of interest in this exam, especially from all those die-hard ScreenOS folks that are now converting to Junos.  So instead of constantly repeating myself, I figured I'd just put it up on the blog so others can benefit (leaving me more time to do other things, 'heh).

NOTE: For full disclosure, I must reveal that I am a Technical Trainer and Certification Proctor at Juniper Networks.  As such, I take EXTRA responsibility towards protecting the content and integrity of the exam and I take the certification credentials very seriously.  Not only that, I worked REALLY, REALLY hard to achieve my JNCIE certifications, and I believe everyone else should too! I therefore kindly ask that candidates refrain from asking me questions which would be considered a violation of the NDA.  Also, I should add that although I work for Juniper, the viewpoints expressed in this article are my own and may not necessarily be shared by my employer.

Let's first start by looking at the exam objectives and then we will move on to the materials I used for preparation and the hardware requirements for building out a lab which would provide for sufficient preparation.

Exam Objectives

Detailed exam objectives are listed on Juniper's JNCIE-SEC Exam Objectives certification page.  Familiarize yourself with these objectives and try to focus your study towards mastering all of these objectives.  Learn to read between the lines to identify if additional subject matter might need to be explored for full preparation.

In Junos there is typically more than one way to accomplish a given task, so you would be wise to learn all the different ways of accomplishing a goal to achieve complete mastery of the subject matter.  For example, can you accomplish bidirectional address translation similar to Static NAT by instead using Source NAT and Destination NAT?  What are the benefits and caveats of each approach?
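As a rough sketch of that particular example (with hypothetical zone names and addresses, and not intended as a model answer), a 1:1 source NAT rule paired with a matching destination NAT rule can approximate what a single static NAT rule would otherwise provide:

set security nat source pool public-host address 203.0.113.10/32
set security nat source rule-set trust-to-untrust from zone Trust
set security nat source rule-set trust-to-untrust to zone Untrust
set security nat source rule-set trust-to-untrust rule host-out match source-address 10.1.1.10/32
set security nat source rule-set trust-to-untrust rule host-out then source-nat pool public-host
set security nat destination pool private-host address 10.1.1.10/32
set security nat destination rule-set from-untrust from zone Untrust
set security nat destination rule-set from-untrust rule host-in match destination-address 203.0.113.10/32
set security nat destination rule-set from-untrust rule host-in then destination-nat pool private-host

One obvious caveat is that you now have two mappings to keep in sync, whereas a static NAT rule gives you both directions of translation from a single rule; and in either case proxy ARP may be needed if the translated address does not belong to an interface.  Thinking through trade-offs like these is exactly the kind of exercise that pays off in the lab.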

The current Junos software release that is used throughout the exam is Junos 11.1.  A quick glance through the release notes may be useful to familiarize yourself with some of the new features introduced in this version.

Study Materials

First and foremost, you are going to want to get your hands on the official Juniper courseware for all of the requisite curriculum listed under the Junos Security track.

If you are unable to attend all of these courses in person, one of the cool things is that Juniper now lets you purchase the course materials for self-study purposes.  Basically you get access to everything that you would normally receive in the class, minus the instructor and access to the lab gear of course.

NOTE: While it is possible to order the materials for self-study, I strongly advocate taking the actual training if you can do so as the instructors tend to augment the subject matter with additional details, first-hand observations and experience not normally found in the materials.  Furthermore, as is the case in classes I normally teach, we tend to reveal tips and techniques which might be useful in certification attempts.

To augment the above, I would highly advise reading the book 'Junos Security' by Rob Cameron, Brad Woodberg, Patricio Giecco, Tim Eberhard, and James Quinn.  I'll be writing a review of this book in a subsequent post but for now I can't overemphasize how important this book was in my preparations. In fact, I would advise reading it twice for good measure.  There is a lot of good coverage in this book.  The majority of what you can expect to see in the exam is covered in this book, and what might be missing is adequately covered in the official courseware material.

I would also suggest making note of the links below.  You would be well advised to make use of both of these links during your preparation.  The first link is the JumpStation to a wide variety of SRX knowledge base articles and the second link provides detailed coverage of configuring High Availability across a number of different SRX platforms.  Familiarize yourself with the subtle differences in HA configuration across all the different platforms, as you don't want your first exposure to these differences to be during an examination attempt.

Before moving on to the lab setup, I want to mention that we will be offering JNCIE-SEC bootcamps sometime in the future.  Although there is currently no committed date for such an offering, when available you will get in-depth coverage of the types of topics you will expect to see on the exam in addition to a simulated lab on the final day of class.  Stay tuned for more information regarding our bootcamp offerings on Juniper's Learning Portal.

Lab Buildout

A common question asked throughout the forums is what type of lab setup is required for adequate preparation.  I can tell you that I personally prepared with only two SRX210s and a single SRX100, but it slowed down my preparations immensely due to constantly having to rearrange and reconfigure the lab setup to accommodate different topologies (hub-and-spoke vs. full-mesh, clustered vs. non-clustered, etc.).  If you can spring for it, I would say purchase as many devices as you possibly can so you can build out a clustered SRX while leaving others standalone and build complex VPN topologies.  This way you can spend more of your time learning new features rather than having to rearrange your lab setup.

One of the benefits of having the smaller branch devices is that they are fairly portable.  In fact, as seen in the picture to the left, I was able to set up my lab during a trip from DC to New York on an Amtrak train in business class (although others did give me funny looks).  As you can see, even during a 3-hour trip, I was able to make use of this time for study preparations.

I would also strongly advise purchasing at least one device with the High Memory option as this will let you run the full gamut of IPS and UTM capabilities, assuming you've got the licenses.  Speaking of licenses, you can acquire trial licenses from Juniper which are valid for a period of 4 weeks, so I would advise holding off on activating these until you are completely ready.  Trial licenses are tied to a device's serial number, and although they are only valid for a period of 4 weeks, you can fetch a trial license once per year for each device serial number.

You can find SRX devices on eBay for as little as a few hundred dollars a piece, so building out a lab doesn't have to break the bank.  And the cool thing is that when you are done you can resell them for a fair market value so in the long term you really shouldn't have to spend that much getting a decent lab built out.

Once you have your lab completely set up, I would strongly advise going through all the labs in the official courseware as these are indicative of the types of things you will likely see on the exam.  Unlike the JNCIE-ENT and JNCIE-SP labs, in this lab it really helps to incorporate some type of client and server throughout the topology so that various features such as NAT and Stateful Firewall Policy can be properly tested.  In lieu of this, and with a bit of creative license, you could actually use one of your SRX platforms with a few Virtual Routers configured to simulate both clients and servers, connected to the Trust ports on the other devices throughout your topology.  This won't be on par with having access to real clients and servers, but the idea is to be able to generate sufficient flows to properly trigger things like NAT rules or firewall policy.  A lot can be simulated by simply using 'telnet' and specifying the destination port required to trigger a particular rule on a downstream device.
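For example, assuming a virtual router named clients-vr on the test box and a downstream policy that permits traffic to TCP port 8080, a flow could be generated from the simulated client and then verified on the device under test with something along these lines (the hostnames, address, and port here are purely illustrative):

root@srx-test> telnet 172.16.1.10 port 8080 routing-instance clients-vr

root@srx-dut> show security flow session destination-port 8080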

Final Notes

A question most often asked is how long should it take to prepare.  The answer to that question really depends on your Junos experience level and background.  If you already have previous working experience with Junos or a JNCIE, I would expect about 4-6 months should be sufficient for adequate preparation.  Otherwise if you are new to Junos or transitioning over from ScreenOS, I wouldn't even suggest starting exam preparations until you've had at least 1-2 years experience working with Junos and the SRX platforms.

Overall, this might seem like a long time but you'd be amazed at how quickly a few months can go by - if you can carve out even just an hour each day over the course of several months you will be infinitely better served than having to do a bunch of cramming in the last few weeks before your exam.  Remember, slow and steady wins the race here... it's a marathon, not a 100-meter dash.

Last but not least, and this may seem a bit silly, but it is really important to try to get to bed early the night before the exam and get a decent night's rest.  If you're not adequately prepared by the night before the exam, cramming all night isn't going to do you any good.  Also, wake up early enough to ensure you can get a good breakfast.  Based on personal experience I can tell you that this makes a big difference.  I strongly advise oatmeal since it's low on the Glycemic Index and will give you a slow, steady release of energy throughout the morning - the perfect way to ensure your mind is focused and you don't have any of those mid-morning dips in energy levels or mental acuity.

A little tidbit that not many folks are aware of - you can bring your own keyboards when you sit the exam as you might find the keyboards we provide to be difficult to use.  This is one of those little things that can really make a difference when you are used to running all those EMACS command sequences on a keyboard you are familiar with.

I will be proctoring this exam so for those of you attempting to sit the exam in our Herndon office, I look forward to meeting you and wish you the best in your upcoming attempt.  With a little bit of luck and a lot of preparation, you may find success and achieve the highly sought-after JNCIE-SEC designation.  Good luck and may the force be with you!


11 August 2011

Black Hat OSPF Vulnerabilities: Much Ado About Nothing

Imagine a group of researchers planning to speak at a conference regarding a previously undiscovered vulnerability present in most homes that would allow a thief to rob your home of its valuables with complete ease.  You would probably be interested in hearing what they had to say so you could take the necessary precautions to protect your home.

Now imagine when they presented their findings, they went on to state that it was incredibly easy to do, so long as you left your front door open and also provided them with the security code for any alarm systems.  You would probably find this implausible and simply the proliferation of fear, uncertainty, and doubt.

That's precisely what happened last week at the well-respected Black Hat security conference in Las Vegas when researchers from the Israel Institute of Technology and Advanced Defense Systems, Ltd. presented their findings of a serious vulnerability present in OSPF.  So serious in fact, the researchers stated the only way to properly mitigate the threat, short of fixing the protocol, is to switch to another routing protocol such as RIP or IS-IS.

The researchers went on to present their research of two previously undiscovered methods of poisoning routing information within an OSPF domain such that bogus information could be propagated and used for a multitude of malicious purposes.

The first attack allows for the creation of a remote false adjacency, whereby an unsuspecting victim router goes on to form an adjacency with a phantom router.  Once this adjacency is formed, the attacker can then flood the victim router with bogus Link-State Advertisements (LSAs).

The second attack allows the flooding of what are known as Disguised LSAs, whereby a victim router would accept spoofed LSAs which it believed were generated by another router within the OSPF domain.

While both of these previously unknown attack vectors might raise some eyebrows and create cause for concern, the researchers made some pretty big assumptions regarding the ability to perform such an attack.  The assumptions were that in order to perform such an attack, one would first need to insert a router into the network, and once this was accomplished they would then somehow need to obtain the MD5 authentication key that is widely used to secure the communications channel between OSPF neighbors.

Without a doubt, there is some merit to what these researchers have found, and as a protocol junkie I am always interested in learning the inner workings of these protocols.  However, what they've completely overlooked is the first tenet of information security, which is to ensure the physical environment is properly secured.  In most environments, gaining physical access to the network might prove difficult enough.  Nonetheless, if an attacker were able to insert such a device, the net effect would be that such attacks would be short-lived, as it would be fairly trivial for a network administrator to isolate the offending device and remove it from the network.

Furthermore, and perhaps the biggest barrier to entry, they've failed to recognize how unlikely it is that someone could obtain the authentication key in the first place.  While MD5 is known to have some serious deficiencies and is vulnerable to collision attacks, it still represents a pretty high bar for the would-be attacker.  And various vendors are already rolling out HMAC-SHA for the authentication function within OSPFv2.
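For reference, enabling MD5 authentication on an OSPF interface in Junos takes only a few lines; the sketch below uses a placeholder interface and key:

protocols {
    ospf {
        area 0.0.0.0 {
            interface ge-0/0/0.0 {
                authentication {
                    md5 1 key "use-a-strong-key-here";
                }
            }
        }
    }
}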

In addition, the researchers admitted that in their testing of the first exploit, the victim router tore down the adjacency to the phantom router after 125 seconds because the victim router did not receive the proper acknowledgement packets from the phantom router.  Once again, the net effect is that if the attackers were able to meet the first two assumptions, their attack would still be relatively short-lived.

While OSPF is widely used throughout the Internet as a whole, you probably have better odds of winning the lottery than actually experiencing an attack such as these researchers have demonstrated.  The researchers presented some pretty bold claims, but at the end of the day, given the assumptions they made, they didn't really present anything that wasn't already widely known about the vulnerabilities inherent within the OSPF protocol.  Their attack rests on two significant barriers to entry, which doesn't make it impossible, but it is nonetheless incredibly difficult and highly implausible.  In my experience, attackers tend to move on to far easier targets.

There are a lot of things to worry about when it comes to protecting your corporate assets, but in my humble opinion, this is not one of them.  So rather than ripping out OSPF as these researchers would have you believe, you might perhaps be better served by taking a second look at your environment and ensuring the physical premises are properly secured.


7 July 2011

Juniper SRX Tips :: Altering Default-Deny Behavior

In our previous article, we looked at using apply-groups to alter all the security policies uniformly on an SRX device such that they would all have an implicit logging statement. And while this is fine for all existing policies, it doesn't log traffic which doesn't match any explicitly defined security policy.

The reason for this is that in Junos, traffic which doesn't match an explicitly defined security policy matches against the default-deny policy.  However, because the default-deny policy is implicitly defined, apply-group configurations are of little benefit here, as apply-groups can only be inherited by those elements which have been explicitly defined.

Often in these cases, administrators will simply choose to create their own deny policies with the desired options and place this deny policy as the last policy for traffic going from one zone to another. However, in instances where there are many zones, it might prove too cumbersome and time consuming to manually configure this to accommodate all zones.

Clearly it would be more beneficial to have something akin to the Global Zone in ScreenOS which can be used to match on all traffic which doesn't match against any of the explicitly defined security policies.  However, at the time of this writing, Global Zone functionality doesn't exist in Junos.

The good news is that we can use the power of apply-groups once again to our benefit, this time to create an explicitly defined deny policy which will be inherited at the tail-end of all security policies defined within our configuration. Note that this will encompass both Inter-zone as well as Intra-zone traffic.

For this example, let's assume that we want to log everything that would normally hit the default-deny policy. Let's start by taking a look at our baseline configuration:

root@ce-1# show security policies
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address Users-subnet;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
}

Here you can see we have a policy allowing all traffic outbound from the Users-subnet in the Trust zone towards the Untrust zone, and another policy allowing inbound HTTP traffic from the Untrust zone towards the Web Server in the Trust zone.  Now, in order to change the default-deny behavior and add additional options, we will use an apply-group to inherit a new policy at the tail-end of all previously defined policies, as follows:

groups {
    default-log {
        security {
            policies {
                from-zone <*> to-zone <*> {
                    policy log-all-else {
                        match {
                            source-address any;
                            destination-address any;
                            application any;
                        }
                        then {
                            deny;
                            log {
                                session-init;
                            }
                        }
                    }
                }
            }
        }
    }
}

Finally, let's apply our apply-group at the [security policies] stanza within our configuration:

root@ce-1# set security policies apply-groups default-log

Now that we've completed the configuration, let's examine the results of the application of our apply-group by taking a look at our security policies, this time by displaying the inherited configuration:

root@ce-1# show security policies | display inheritance
apply-groups default-log
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address Users-subnet;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
    ##
    ## 'log-all-else' was inherited from group 'default-log'
    ##
    policy log-all-else {
        ##
        ## 'match' was inherited from group 'default-log'
        ##
        match {
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            source-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            destination-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ## Warning: application or application-set must be defined
            ##
            application any;
        }
        ##
        ## 'then' was inherited from group 'default-log'
        ##
        then {
            ##
            ## 'deny' was inherited from group 'default-log'
            ##
            deny;
            ##
            ## 'log' was inherited from group 'default-log'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'default-log'
                ##
                session-init;
            }
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
    ##
    ## 'log-all-else' was inherited from group 'default-log'
    ##
    policy log-all-else {
        ##
        ## 'match' was inherited from group 'default-log'
        ##
        match {
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            source-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            destination-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ## Warning: application or application-set must be defined
            ##
            application any;
        }
        ##
        ## 'then' was inherited from group 'default-log'
        ##
        then {
            ##
            ## 'deny' was inherited from group 'default-log'
            ##
            deny;
            ##
            ## 'log' was inherited from group 'default-log'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'default-log'
                ##
                session-init;
            }
        }
    }
}

Once again, with just a couple of lines of code we can streamline the configuration to a large extent, in this case creating an explicitly defined deny policy which logs all traffic that would otherwise be silently discarded.  And best of all, we can do so without having to resort to manual configuration of a deny policy for each and every zone pair.

In small installations this technique might be of little benefit, but in larger implementations consisting of dozens of zones with a combination of Inter-zone, Intra-zone, and bidirectional security policies, the benefit of such an approach cannot be overstated.  Not only will this ease the configuration burden, it will also ensure that all traffic which doesn't match any of the existing security policies is handled in a consistent manner.  Of course, as with previous examples, if there are certain zone pairs that we don't want to inherit this new default-deny, we can simply utilize the apply-groups-except statement for each of them.

In our next article we will examine changing the built-in Junos application defaults so that we can customize timers and other parameters.



29 June 2011

Juniper SRX Tips :: Uniform Security Policy Modification

Often there are instances where we want to affect all security policies configured on an SRX device.  For example, let's say that we have thousands of policies configured on our firewall, and we want to enable logging for every single policy.  Obviously this would take some time if we were to do this manually on each and every individual policy, so an easier way is desired.

In ScreenOS we have the concept of a Global zone which acts as a container encompassing all zones, but to date, Junos does not support similar functionality on the SRX.  Furthermore, the Global zone doesn't affect existing policies but rather is a way to apply a consistent policy to all Inter-zone and Intra-zone traffic that doesn't match any of the existing policies.

However, despite all of this, there is in fact a methodology we can use to uniformly modify all of the existing security policies on our box, in a manner that is actually much more powerful than what is accomplished in ScreenOS with the Global zone.

Let's take a look.  First, let's say we have some policies that we would like to enable logging on:

root@ce-1# show security policies
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address any;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
}

Here you can see we have a policy allowing all traffic outbound from Trust to Untrust, and another policy allowing inbound HTTP traffic from the Untrust zone towards the Web Server in the Trust zone.  Now, let's enable logging for all of our policies by using an apply-group and matching on all policies from any zone to any other zone.  Note that this will encompass both Inter-zone as well as Intra-zone traffic:

groups {
    global-logging {
        security {
            policies {
                from-zone <*> to-zone <*> {
                    policy <*> {
                        then {
                            log {
                                session-init;
                            }
                        }
                    }
                }
            }
        }
    }
}

Finally, let's apply our apply-group at the [security policies] stanza within our configuration:

root@ce-1# set security policies apply-groups global-logging

Now that we've completed the configuration, let's examine the results of the application of our apply-group by taking a look at our security policies, this time by displaying the inherited configuration:

root@ce-1# show security policies | display inheritance
apply-groups global-logging
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address any;
            destination-address any;
            application any;
        }
        then {
            permit;
            ##
            ## 'log' was inherited from group 'global-logging'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'global-logging'
                ##
                session-init;
            }
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
            ##
            ## 'log' was inherited from group 'global-logging'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'global-logging'
                ##
                session-init;
            }
        }
    }
}

As you can see, with a couple of lines of code we can alter all of the existing policies on our device without having to resort to manual configuration of each and every one. This type of functionality is perfect when we want to have a singular set of configuration elements apply to all of our policies uniformly.  On the other hand, if there are certain policies that we don't want to inherit these settings, we can simply utilize the apply-groups-except statement for each of those respective policies.
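For instance, to exempt the allow-web policy from the global-logging group while leaving everything else inherited, something like the following should do the trick (adjust the zone and policy names to your own configuration):

root@ce-1# set security policies from-zone Untrust to-zone Trust policy allow-web apply-groups-except global-logging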

In our next article we will examine how to change the default-deny behavior on the SRX to also include logging of denied packets.


21 June 2011

JNCIE Tips from the Field :: Summarization Made Easy

Today we'll start with a series of articles covering tips and techniques that might be utilized by JNCIE candidates, whether pursuing the JNCIE-SP, JNCIE-ENT, or even the JNCIE-SEC.  The tips and techniques I will be covering might prove to be useful during a lab attempt but could also be used in real-world scenarios to save time and minimize configuration burden in addition to eliminating mistakes that might otherwise be made.  I want everyone to understand that what I am about to write is simply a technique.  I am not divulging any materials or topics which are covered under NDA.

NOTE: For full disclosure, I must reveal that I am an employee of Juniper Networks in their Education Services department.  As such, I take the responsibility of protecting the content and integrity of the exam as well as the certification credentials very seriously.  I would never reveal anything which would allow a candidate to have in-depth knowledge of any specific topics or questions that may appear on the exam.  Not only that, I worked REALLY, REALLY hard to achieve my JNCIE certifications, and I believe everyone else should too! It's certainly more rewarding that way too don't you think?!

So without further delay, let's take a look at today's technique.

It is well known that summarization is a key aspect of any type of practical exam involving routing of some sort.  As anyone who has taken a CCIE Routing & Switching or CCIE Service Provider exam can attest, summarization is one thing every expert-level candidate needs to master.  It is no different with Juniper.  In fact, Juniper's certification web page clearly lists as one of the JNCIE-ENT exam objectives the requirement to "Filter/summarize specific routes".

What I will show you next is a technique which I find quite handy when attempting to determine the best summary for a given set of routes, and you can do so without having to resort to pen and paper and figuring it out the old-fashioned way, i.e. looking at prefixes in binary. This technique, rather, allows you to use the power of Junos to your advantage to perform these tasks.  What I will reveal will also show you a fundamental difference between IOS and Junos and highlights why I believe Junos to be a more flexible, powerful, and superior network operating system.  You simply can't do what I am about to do on a Cisco platform running IOS.

So let's start off by looking at a diagram.  Let's say we have a network that has several OSPF areas, and we must summarize some information for each respective area towards the backbone without encompassing routing information that might exist outside of that area.

OSPF Summarization Example

Here we can see we have a backbone area, consisting of two routers, P1 and P2.  P1 is acting as an ABR for Area 1 and is connected to both R1 and R2. P2 is acting as an ABR for Area 2 and is connected to R3.  As you can see from the diagram, I have configured more than a single set of IP addresses on many of the physical interfaces as well as the loopbacks.  This way I can represent many more networks and therefore create multiple Network LSAs for purposes of summarization.

So let's assume that we need to create the largest aggregate possible for a particular area and advertise only that aggregate towards the core, without encompassing any routes which might be outside the area that the summary describes.  Now normally, one would take a look at the diagram, get out a pen and paper, and start a lengthy exercise of supernetting based on binary addresses.  This can take several minutes or more and is valuable time that could certainly be used on a wide variety of other more important tasks, like setting up MPLS LSPs or troubleshooting that Layer 2 VPN connectivity.  So let's take a look at a simple trick that actually takes advantage of Junos to determine what the summary should be.

What we are going to do is take advantage of a feature inside Junos which automatically shows us the range of prefixes that fall within a given CIDR block.  The Junos operating system has built-in route matching functionality which allows us to specify a given CIDR block and returns all routes with a mask length equal to or greater than the one specified.  So by applying this principle, what we want to do is look at the diagram for a particular area, choose the lowest IP within that area as our base, and then apply a subnet mask to it which attempts to encompass that route as well as others.

For example, looking at this diagram, we see that the lowest IP address being used in Area 1 is the 168.10.32.1 address assigned to R1's loopback.  So let's start by using this as our base for our summary, and then simply apply a subnet mask to it which we think might encompass additional routes:

sfouant@p1# run show route 168.10.32.0/22   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:48, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:48, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:43, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:43, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0


Note: We can do this on any router within Area 1 since the Link-State Database is the same on all devices, but I prefer to perform the work on the ABR since this is where I will be performing the aggregation.  Also, the ABR may have other local and/or direct routes (or perhaps routes from other protocol sources) so we want to see things from the perspective of the ABR.

What we see here is that we have determined a summary which encompasses all the loopback addresses on both R1 and R2, but we need to keep going because this doesn't incorporate the Gigabit Ethernet links between the devices:

sfouant@p1# run show route 168.10.32.0/21   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:45, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:45, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0


Not quite. Let's keep trying:

sfouant@p1# run show route 168.10.32.0/20   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:55, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:55, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:50, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0


Nope, still not there yet. Let's try again:

sfouant@p1# run show route 168.10.32.0/19   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.32.1/32     *[OSPF/10] 8w4d 19:04:58, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:04:58, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:53, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:53, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0
168.10.48.4/30     *[OSPF/10] 00:36:26, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.48.8/30     *[Direct/0] 8w4d 19:36:13
                    > via ge-0/0/1.0
168.10.48.9/32     *[Local/0] 8w4d 19:36:13
                      Local via ge-0/0/1.0
168.10.60.0/30     *[Direct/0] 8w4d 19:51:31
                    > via ge-0/0/0.0
168.10.60.1/32     *[Local/0] 8w4d 19:51:31
                      Local via ge-0/0/0.0
168.10.60.4/30     *[OSPF/10] 00:36:26, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.60.8/30     *[Direct/0] 8w4d 19:36:13
                    > via ge-0/0/1.0
168.10.60.9/32     *[Local/0] 8w4d 19:36:13
                      Local via ge-0/0/1.0


Ok, this looks more like it.  Here we can see we have all the Gigabit Ethernet links connecting all devices, as well as the loopback addresses.  This might be a suitable summary.  Let's keep going to see what happens:

sfouant@p1# run show route 168.10.32.0/18   
inet.0: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.0.1/32      *[Direct/0] 8w4d 19:51:37
                    > via lo0.0
168.10.0.2/32      *[Direct/0] 8w4d 19:36:19
                    > via lo0.0
168.10.0.3/32      *[OSPF/10] 00:28:41, metric 1
                    > to 168.10.18.2 via fe-0/0/2.0
                      to 168.10.26.2 via fe-0/0/2.0
168.10.0.4/32      *[OSPF/10] 00:28:41, metric 1
                    > to 168.10.18.2 via fe-0/0/2.0
                      to 168.10.26.2 via fe-0/0/2.0
168.10.18.0/30     *[Direct/0] 8w4d 19:05:28
                    > via fe-0/0/2.0
168.10.18.1/32     *[Local/0] 8w4d 19:05:28
                      Local via fe-0/0/2.0
168.10.26.0/30     *[Direct/0] 8w4d 19:05:28
                    > via fe-0/0/2.0
168.10.26.1/32     *[Local/0] 8w4d 19:05:28
                      Local via fe-0/0/2.0
168.10.32.1/32     *[OSPF/10] 8w4d 19:05:04, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.2/32     *[OSPF/10] 8w4d 19:05:04, metric 1
                    > to 168.10.60.2 via ge-0/0/0.0
168.10.32.3/32     *[OSPF/10] 8w4d 19:04:59, metric 1
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
168.10.32.4/32     *[OSPF/10] 8w4d 19:04:59, metric 1
                    > to 168.10.48.10 via ge-0/0/1.0
                      to 168.10.60.10 via ge-0/0/1.0
168.10.48.4/30     *[OSPF/10] 00:36:32, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.48.8/30     *[Direct/0] 8w4d 19:36:19
                    > via ge-0/0/1.0
168.10.48.9/32     *[Local/0] 8w4d 19:36:19
                      Local via ge-0/0/1.0
168.10.60.0/30     *[Direct/0] 8w4d 19:51:37
                    > via ge-0/0/0.0
168.10.60.1/32     *[Local/0] 8w4d 19:51:37
                      Local via ge-0/0/0.0
168.10.60.4/30     *[OSPF/10] 00:36:32, metric 2
                      to 168.10.48.10 via ge-0/0/1.0
                    > to 168.10.60.10 via ge-0/0/1.0
                      to 168.10.60.2 via ge-0/0/0.0
168.10.60.8/30     *[Direct/0] 8w4d 19:36:19
                    > via ge-0/0/1.0
168.10.60.9/32     *[Local/0] 8w4d 19:36:19
                      Local via ge-0/0/1.0


Clearly from this command, we can see we have now gone beyond what might be considered a suitable summary, because we are now encompassing routes that exist within the backbone Area 0.  So it should be clear from this simple set of commands that 168.10.32.0/19 is a suitable prefix to use for our summary.

We could easily apply a similar example to Area 2 to quickly determine what the best summary would be.  We see from looking at the diagram the lowest IP within Area 2 is the 168.10.96.1 loopback address applied to R3.  When we use that as our base and go through the steps above, we can find our summary:

sfouant@p2# run show route 168.10.96.0/19   
inet.0: 27 destinations, 27 routes (27 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
168.10.96.1/32     *[OSPF/10] 01:00:57, metric 1
                      to 168.10.112.2 via fe-0/0/3.0
                    > to 168.10.118.2 via fe-0/0/3.0
168.10.96.2/32     *[OSPF/10] 01:00:57, metric 1
                    > to 168.10.112.2 via fe-0/0/3.0
                      to 168.10.118.2 via fe-0/0/3.0
168.10.112.0/30    *[Direct/0] 01:13:48
                    > via fe-0/0/3.0
168.10.112.1/32    *[Local/0] 01:13:48
                      Local via fe-0/0/3.0
168.10.118.0/30    *[Direct/0] 01:13:48
                    > via fe-0/0/3.0
168.10.118.1/32    *[Local/0] 01:13:48
                      Local via fe-0/0/3.0


And there you have it! It's really quite simple, and if you haven't stumbled upon this already you may be saying to yourself, "Why didn't I think of that before?"  I hear from many candidates that they spend considerable time determining summaries the old-fashioned way, and I always ask myself why.  As you can see, there is an easier way!
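As a closing thought, once the summaries have been identified, applying them on the ABRs is a single statement per area.  The following is just a sketch which assumes Area 1 is configured on P1 and Area 2 on P2, numbered 0.0.0.1 and 0.0.0.2 respectively:

sfouant@p1# set protocols ospf area 0.0.0.1 area-range 168.10.32.0/19

sfouant@p2# set protocols ospf area 0.0.0.2 area-range 168.10.96.0/19

With these in place, each ABR advertises a single summary into the backbone in place of the more specific prefixes within its area.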

Clearly the benefit of using a technique such as the above is to easily find the routes that best summarize a bunch of more specific routes.  The utility of such an approach, while very useful during a practical exam, might be considerably lessened in the real world, where it is likely that hierarchy has already been built into the network and you have network diagrams at your disposal.  On the other hand, there may be situations where you inherit a network that was developed with hierarchy in mind but where summarization was never employed, or was employed improperly.  In such cases, the above technique can be a real time saver, allowing you to spend less time doing binary math and more time doing the fun stuff - like troubleshooting why that MPLS LSP isn't getting established!

Stay tuned for additional articles covering time saving tips and techniques which can be used during your next lab attempt!  Good luck, and may the force be with you!


20 June 2011

Day One Guide: Junos Tips, Techniques, and Templates 2011

I am happy to announce that Juniper has just released a new Day One Guide entitled "Junos Tips, Techniques, and Templates 2011". For this particular Day One Guide, Juniper Networks Books and J-Net joined forces and requested the best and brightest Junos tips and techniques from the Junos user community.  In fact, the book was created after a thorough selection process which included reviewing over 300 tips submitted by over 100 individuals on the J-Net community boards at forums.juniper.net.

I am honored that Juniper accepted my contributions and decided to include them in this guide.  My contribution, "Automatically Allow Configured BGP Peers in a Loopback Firewall Filter", covers how to configure a Junos prefix-list in conjunction with the apply-path feature to parse the configuration and dynamically build a list of matching prefixes for use in a firewall filter.
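For those curious, the general shape of the technique is sketched below.  This is a generic illustration with made-up names rather than the exact tip as published:

policy-options {
    prefix-list configured-bgp-peers {
        /* expands to every neighbor address found under any BGP group */
        apply-path "protocols bgp group <*> neighbor <*>";
    }
}
firewall {
    family inet {
        filter protect-re {
            term allow-bgp {
                from {
                    source-prefix-list {
                        configured-bgp-peers;
                    }
                    protocol tcp;
                    port bgp;
                }
                then accept;
            }
            /* additional terms for other routing-engine traffic, ending in a deny */
        }
    }
}

The filter would then be applied as an input filter on lo0.0, so only peers that actually exist in the configuration can reach the routing engine on the BGP port.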

Outside of my meager contribution, this guide is chock full of dozens of useful tips and techniques and is an indispensable guide for anyone involved in managing Juniper platforms on a daily basis.

Junos Tips, Techniques, and Templates 2011 can be ordered on Amazon in hardcopy or Kindle edition, and is also available as a free download in PDF format. Enjoy!


11 April 2011

IETF Provides New Guidance on IPv6 End-Site Addressing

I've always been at odds with the recommendation in RFC 3177 towards allocating /48 IPv6 prefixes to end-sites.  To me this seemed rather short-sighted, akin to saying that 640K of memory should be enough for anybody.  It's essentially equivalent to giving out /12s in the IPv4 world which in this day and age might seem completely ridiculous, but let us not forget that in the early days of IPv4 it wasn't uncommon to get a /16 or even a /8 in some cases.

Granted, I know there are quite a few more usable bits in IPv6 than there are in IPv4, but allocating huge swaths of address space simply because it's there and we haven't thought of all the myriad ways it could be used in the future just seems outright wasteful.

So you can imagine my surprise and also my elation last week when the IETF published RFC 6177, entitled 'IPv6 Address Assignment to End Sites'.  In it, the general recommendation of allocating /48s to end-sites, which has long been the de facto standard since the original publication of RFC 3177 in 2001, has finally been reversed.

It seems that sanity has finally prevailed and the IAB/IESG have decided to take a more pragmatic approach towards address allocation in IPv6.  The recommendations in RFC 6177 attempt to balance the conservation of IPv6 addresses while at the same time continuing to make it easy for IPv6 adopters to get the address space that they require without requiring complex renumbering and dealing with other scaling inefficiencies in the long term.  It is clear that acting too conservatively and allocating very small address spaces could act as a disincentive and possibly stifle widespread adoption of IPv6.

The current recommendations for address allocations are as follows:

  • /48 in the general case, except for very large subscribers
  • /64 when it is known that one and only one subnet is needed by design
  • /128 when it is absolutely known that one and only one device is connecting

It goes on to state other recommendations and offers guidance to operators with regard to when to allocate certain prefix lengths.  But essentially, what this means is that individual network operators now have more options regarding which prefix size to allocate, and can move away from strict general guidelines.  In essence, operators make the decision as to what prefix size to allocate based on an analysis of the needs of particular customers.

Perhaps this practical conservation may never be needed given the vast address space available in IPv6, but maybe, just maybe, in the very distant future if IPv6 is still in widespread use, it could very well be due to some of these recommendations being put in place today.  After all, 640K did turn out to be a rather small number, didn't it?


1 April 2011

IPv4 Address Exhaustion Causing Harmful Effects on the Earth

Today, I received a very disturbing email on NANOG which was forwarded from a recipient on the Global Environment Watch (GEW) mailing list.  If this is true, we all need to take steps to make an orderly and smooth transition to IPv6 as quickly as possible, lest we suffer from the harmful effects described in this email.


From: Stephen H. Inden
To: Global Environment Watch (GEW) mailing list
Date: Fri, 1 Apr 2011 00:19:08 +0200
Subject: IPv4 Address Exhaustion Effects on the Earth

At a ceremony held on February 3, 2011 the Internet Assigned Numbers Authority (IANA) allocated the remaining last five /8s of IPv4 address space to the Regional Internet Registries (RIRs). With this action, the free pool of available IPv4 addresses was completely depleted.

Since then, several scientists have been studying the effects of this massive IPv4 usage (now at its peak) on the Earth.

While measuring electromagnetic fields emanating from the world's largest IPv4 Tier-1 backbones, NASA scientists calculated how the IPv4 exhaustion is affecting the Earth's rotation, length of day and planet's shape.

Dr. Ron F. Stevens, of NASA's Goddard Space Flight Center, said all packet switching based communications have some effect on the Earth's rotation. It's just they are usually barely noticeable. Until now.

"Every packet affects the Earth's rotation, from a small ping to a huge multi-terabyte download.  The problem with IPv4 is its variable length header and tiny address space that can cause an electromagnetic unbalance on transmission lines.  The widespread adoption of Network Address Translation (NAT) on IPv4 networks is making the problem even worse, since it concentrates the electromagnetic unbalance.  This problem is not noticeable with IPv6 because of its fixed header size and bigger 128 bits address space", Dr. Stevens said.

Over the past few years, Dr. Stevens has been measuring the IPv4 growing effects in changing the Earth's rotation in both length of day, as well as gravitational field.  When IPv4 allocation reached its peak, last February, he found out that the length of day decreased by 2.128 microseconds.  The electromagnetic unbalance is also affecting the Earth's shape -- the Earth's oblateness (flattening on the top and bulging at the Equator) is decreasing by a small amount every year because of the increasing IPv4 usage.

The researcher concluded that IPv4 usage has reached its peak and is causing harmful effects on the Earth:

"IPv4 is, indeed, harmful.  Not only 32 bits for its address space has proven too small and prone to inadequate solutions like NAT, it is now clear that its electromagnetic effects on the Earth are real and measurable."

The solution?

"I'm convinced that the only permanent solution is to adopt IPv6 as fast as we can", says Dr. Stevens.


16Mar/112

Preparation Tips for the JNCIE-ER Exam

As many of you know, Juniper is currently undergoing a massive effort to update their certification program.  The previous 'Enterprise Routing' track is changing to 'Enterprise Routing and Switching', incorporating elements from the previous certification track along with new elements essential to Enterprise switching, such as Spanning-Tree, VLANs, and Layer 2 Security, as well as High Availability features like Virtual Chassis.  We can expect that many topics like Firewalling and NAT will be removed from this exam, as they will more properly appear in the Security track.

Although the new JNCIE-ENT certification is planned for release in August 2011, many of you are currently pursuing the existing JNCIE-ER before time runs out.  The good news is that Juniper plans to continue offering the existing JNCIE-ER exam until October 2011, so there is still quite a bit of time for those who are interested in attaining this certification.

There's hardly a single day that goes by that I don't receive an email from someone currently pursuing the JNCIE-ER asking to learn from my experiences and test-preparation techniques.  And although this exam will only be available for another seven months, I thought I'd write about my preparation and experience with it so those candidates might benefit - not to mention it saves me from having to keep repeating myself over and over again...

Building the Lab

For this particular exam, you are really going to need to get your hands on several J-Series routers, or at the very least some M/T/MX-Series routers with Adaptive Services capabilities (NOTE: this may require additional hardware on non-J-Series devices, such as an Adaptive Services PIC or a Multiservices PIC).  While it's possible to do a lot of the routing preparation with Olives, a good majority of the exam covers services such as Firewalling, NAT, and IPsec.  Without the right hardware, a candidate cannot properly prepare for these sections, as performing these functions on an Olive is impossible: an Olive has no hardware PFE and no Services PICs or modules, and therefore no sp- interface, which is required to create interface-style and next-hop-style service sets.
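
To make that dependency concrete, here is a minimal, hypothetical interface-style service-set sketch of the kind you will be building throughout your preparation (names are illustrative only, and this is not taken from the exam or any official material); the service-interface statement is precisely what an Olive cannot satisfy, since there is no sp- interface to reference:

services {
    stateful-firewall {
        rule ALLOW-ALL {
            match-direction input-output;
            term 1 {
                then {
                    accept;
                }
            }
        }
    }
    service-set DEMO-SET {
        stateful-firewall-rules ALLOW-ALL;
        interface-service {
            service-interface sp-0/0/0;
        }
    }
}

The service set is then applied to a transit interface under 'family inet service input' and 'family inet service output' (or, for next-hop style, you would instead define inside and outside sp- logical units under next-hop-service).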

If you happen to have a bunch of SSG 300-Series or SSG 500-Series devices (the models ending in an 'M') in your environment, you may be in luck.  These devices can be converted to an equivalent J-Series box running Junos.  For example, an SSG 320M can be converted to a J2320, and an SSG 350M can be converted to a J2350.

The easiest way to do this is to boot the SSG platform from a USB flash drive which has been loaded with a Junos image.  An easy way to build a bootable Junos image onto a USB flash drive is to insert the drive into a working J-Series device and issue the following command:

request system snapshot as-primary partition media usb

This will copy all the appropriate system files and Junos image onto the flash drive and prepare it for booting on another device.

Once this has been done and the USB flash drive inserted into the SSG, the following commands can be issued to force the SSG to boot into Junos rather than ScreenOS:

set boot junos usb
save
reset

NOTE: The SSG 300M-series or SSG 500M-series device must be running ScreenOS version 6.1 or later in order for you to perform the conversion.  If your device is running an earlier ScreenOS version, you must first upgrade it to ScreenOS 6.1 or later.

A more thorough explanation of the upgrade process can be found here: Converting SSG 300M-series and SSG 500M-series Security Devices to J-series Services Routers with a USB Storage Device.

Exam Preparation Materials

In terms of exam study materials, here is what I used for the exam:

  • 'JUNOS Enterprise Routing' by Harry Reynolds and Doug Marschke.  Read it twice if you can.
  • 'Advanced Juniper Networks Routing in the Enterprise' courseware and labs which used to be available for free on the Juniper FastTrack site.  These are no longer available publicly, but can likely be found with a little digging.  I definitely recommend going through the labs because they are extremely representative of the types of things that you are likely to see on the exam.
  • The 'Adaptive Services' chapter in the JUNOS 'Services Interfaces Configuration Guide' - it's 500 pages, but it will definitely educate candidates on all the variants of Junos services.
  • The 'JNCIP-M Study Guide' by Harry Reynolds is another really useful addition.  The labs in this book will really help with routing policy and configuration of OSPF, RIP, and BGP.
  • Probably the *single* most useful preparation tip I can give to anyone is to take the JNCIE-ER Bootcamp and/or the Remote Proctored lab exams offered by Proteus Networks.  I haven't personally taken the bootcamp, but I did see the materials from a colleague who sat through it, and after sitting the exam I can tell you their Bootcamp is spot on.  I did take their remote proctored lab exams, and I was not disappointed with my experience.  Rick Schenderlein was my proctor with Proteus, and he really took the time to help me understand the areas where I could use improvement.

As with all Expert-level lab exams, a very important tip is to make sure you read the full exam in its entirety before configuring a single element.  This is truly an expert-level exam - one which requires you to think through your design decisions.  There are often things later on in the exam which require you to go back and reconfigure something you've already set up in a previous section.  Thinking your design through fully in advance by reading ahead will save you time when that happens.

All in all, I didn't think the exam was that tough, but I also had 12+ years of experience working with Junos and a JNCIE-M certification prior to sitting it.  If you've already got the JNCIE-M, I think it's actually possible to prepare for and pass this exam in just a few short months, since there is considerable overlap between the two.  In my case, I finished the exam in a little over 5 hours and spent another 1-2 hours going over everything just to make sure I had it right.  I've heard that most people going in are pretty much down to the wire on time, so I'm not sure what happened in my case, but I simply attribute it to being over-prepared and having spent about a full year of non-stop preparation between the JNCIP-M, JNCIE-M, and JNCIE-ER exams.  The trick here, as with preparation for anything, is to be consistent and develop a schedule you can live with - a few hours a day over a span of several months will serve you infinitely better than studying hundreds of hours in the few weeks before your exam.  Slow and steady wins the race here... you'll be surprised at how quickly a few months can go by when you're motivated and committed to something!

I hope this helps those of you who are pursuing JNCIE-ER certifications, and I wish you the best of luck in your endeavors!
