How to Spot a Fake Facebook Account

Ever get a friend request from someone you don’t know and have never met before? More often than not, these accounts are created by criminals looking to harvest your personal information, or scam you in some other fashion.

It typically starts when you receive a friend request from someone you don’t know and with whom you have no friends in common:

1-fake-facebook-account

One dead giveaway is their list of Recently Added Friends. In this case, the person has accumulated a lot of new friends in a very short period of time. Notice they are all guys; guys are more willing to accept a friend request from someone they don’t know, especially if it’s attached to a pretty face.

Also note the absence of any mutual friends:

2-fake-facebook-account

Another telltale sign is that all of the pictures associated with the account were added quite recently, in this case within the last hour. This indicates a new account, not one that has been legitimate for a long time:

3-fake-facebook-account

NOTE: Pictures have been masked to protect the innocent. In this case, some unknowing girl has had her pictures harvested by the scammer to run this scheme.

If they don’t respond when you ask how they know you, that’s another giveaway:

4-fake-facebook-account

When faced with a potentially fake account, you can report it to Facebook’s Security Team, which will review the account to determine its legitimacy. Report an account by clicking the button next to the Message button and choosing Report, then select the options to report the profile as a fake account:

5-fake-facebook-account

6-fake-facebook-account

7-fake-facebook-account

If all goes well, within a few days you should receive the following from the Facebook Security Team:
8-fake-facebook-account

In short, be smart, pay attention to the clues, and guard yourself. Be wary of accepting friend requests from someone you don’t know.

This has been a public service announcement.

The Changing Landscape of Selling in the Age of SDN

There are massive waves of technology upheaval taking place in the marketplace, causing disruption and challenging technology salespeople who are used to selling in traditional ways. Cloud, Automation, Mobility, Adaptive Security and the Internet of Things are just a few of the major changes affecting the landscape right now. And while these technologies are certainly challenging in their own right, there is one technology that stands on its own, not only in terms of how technology decisions are made, but also in how technology is bought.

That technology is Software Defined Networking (SDN). SDN is causing a fundamental shift in the way that technology is procured. Buyers are moving away from point products and technologies that meet only a specific need, and are instead looking at the bigger picture, aiming for technology purchases that fit into a larger ecosystem, one that provides broader solutions, faster ROI, and better business agility.

Application-Centricity Creates New Stakeholders

The buying process used to be relatively straightforward, and different technology groups within an organization could procure technology within their own silo with little regard for how it fit within the broader ecosystem. Oftentimes, the technology implemented would dictate and limit which applications could be run on it. Now the shift is toward an application-centric buying framework. The overarching driver is that the applications themselves are the driving force behind technology purchases, and these purchases must work in a tightly integrated, sophisticated ecosystem consisting of multiple compute, storage and networking resources. More often than not, security is being woven into that fabric as well. No longer can these be looked at as individual elements. Furthermore, new tools emerging to orchestrate all of these resources simultaneously are forcing the siloed buyers of the past to work cohesively with other teams to determine whether technology meets the overarching requirements of the various applications.

As a result, there are new stakeholders: the application architects, the cloud architects, and, increasingly, the DevOps teams responsible for overseeing the collaboration between developers and IT teams. These stakeholders will be responsible for ensuring that technology purchases can seamlessly integrate into a fully orchestrated system.

Adapting to the Changing Landscape

To adapt to the changing landscape of the buyer in this broader ecosystem, we must understand that we can no longer sell individual pieces of the puzzle. We must understand the whole puzzle itself and be able to sell the various solutions collectively, answering the needs of the buyers and recognizing that the whole is greater than the sum of its parts. Salespeople and technology professionals must be well versed in a diverse range of technologies in order to speak to the technology buyer, with the goal of providing broader solutions that meet the application-centric nature these buyers demand.

Furthermore, as solutions move toward a software-based model, there are implications that need to be understood, because business will increasingly be conducted on a subscription basis. This means we need to understand how to maintain revenue streams, given that subscription-based pricing models, and the revenue associated with them, are quite different from traditional revenue streams.

This shift also affects the channel: partners must come together to deliver solutions that integrate cohesively into a much larger framework. It’s no longer acceptable to simply deploy gear based on “speeds and feeds”. The channel needs to show that its solutions work cohesively in a larger ecosystem and that it has strong partnerships to meet these needs. These partnerships are going to be increasingly important as customers tie together components from multiple vendors and expect a seamless, integrated and highly orchestrated ecosystem.

In order to capitalize on this changing landscape and maintain revenue streams, a different approach needs to be taken with customers. The first step is identifying the new stakeholders. Once the stakeholders are identified, they will need to be approached with an overarching vision of their business problems and needs, with a focus on broader solutions encompassing a diverse set of technologies that must work cohesively. The successful technology salesperson of tomorrow will assist customers by showing them fully orchestrated solutions that meet their needs while driving down costs, enabling them to be more agile and accomplish more in less time.

Book Review :: Juniper QFX5100 Series: A Comprehensive Guide to Building Next-Generation Networks

Juniper QFX5100 Series: A Comprehensive Guide to Building Next-Generation Networks
by Douglas Richard Hanks, Jr.
Paperback: 310 pages
Publisher: O’Reilly Media
ISBN-13: 978-1491949573

Rating: 5 stars

Much more than just a book about the QFX5100

This was an easy weekend read, and quite honestly I never thought I’d say this about a technical book, but I literally could not put it down. Doug has amassed a wealth of great information, approaching the subject matter with brevity and applying the Goldilocks principle: not too much and not too little, but just the right amount of information.

Do not be confused by the title — this is not JUST a book about the QFX5100 series. As the subtitle might indicate, it’s more of a book on building next-gen networks, specifically Data Center networks, and serves as a fantastic primer on the various protocols and technologies that are becoming the mainstay of this world.

As the networking world works overtime to catch up to the virtualization offered by storage and compute resources, the reader tasked with creating the network of tomorrow will appreciate the coverage of building fabrics of varying dimensions, whether it’s Juniper’s Virtual Chassis Fabric for building small to medium-sized Ethernet fabrics or Clos fabrics for building extremely large IP underlay networks; the coverage is top notch.  Readers will get a thorough introduction to the concepts of VXLAN and overlay networking with VTEPs using controllers such as Juniper’s Contrail or VMware’s NSX and their respective control plane signaling mechanisms such as EVPN and OVSDB.

I sincerely appreciated the in-depth coverage of the architecture of the QFX5100 series, the Broadcom Trident II chipset, as well as the inside look at the control plane virtualization that takes place on the QFX5100 itself (apparently, Juniper is really taking virtualization to heart).  I also enjoyed the chapter on Performance and Scaling, which covers the options for modifying latency through the box (cut-through vs. store-and-forward) as well as options for tailoring the Unified Forwarding Table to fit the needs of individual networks. The chapter on Network Automation is also a nice addition, with coverage of various automation tools such as PyEZ, Ansible, Puppet and Chef, just to name a few.

The astute reader familiar with Juniper’s website will recognize that a few of the chapters comprising this book are borrowed from various white papers that Doug has authored; however, all in all, there is quite a bit more information in this book than can be gleaned from Juniper’s public-facing collateral. There were a few minor grammatical and technical inconsistencies (largely text that didn’t match diagrams), but this did not detract from the overall value of the book, and I can only ascribe it to the fact that Doug did not use me as a technical editor on this book. <hint hint>

Last but not least, although not specifically mentioned, I believe this book, as well as his QFX10000 book, will prove to be an invaluable resource for anyone preparing for the JNCDS-DC, JNCIP-DC, or the upcoming JNCIE-DC exams, as I strongly believe it covers technical content relevant to all three.

All in all, an excellent resource, and one that I am sure to reference regularly in my day-to-day engagements working with customers building out next-generation data center networks. I thoroughly enjoyed reading this book and am looking forward to reading his recent book on the QFX10000 series.

 

Carrier Grade NAT and the DoS Consequences

Republished from Corero DDoS Blog:

The Internet has a very long history of employing mechanisms that breathe new life into older technologies, stretching them out so that newer technologies may be delayed or obviated altogether. IPv4 addressing, and the well-known depletion associated with it, is one such area that has seen a plethora of mechanisms employed to give it more shelf life.

In the early 90s, the IETF gave us Classless Inter-Domain Routing (CIDR), which dramatically slowed the growth of the global Internet routing tables and delayed the inevitable IPv4 address depletion. Later came DHCP, another protocol that assisted through short-term allocation of addresses which would be returned to the provider’s pool after use. In 1996, the IETF was back at it again with RFC 1918 private addressing, so that networks could utilize private addresses that didn’t come from the global pool. Utilizing private address space gave network operators a much larger pool to use internally than would otherwise have been available with globally assigned address space, but if they wanted to connect to the global Internet, they needed something to translate those addresses. This is what necessitated the development of Network Address Translation (NAT).

NAT worked very well for many, many years and slowed address depletion a great deal. But in order to perform that translation, you still needed to acquire at least one globally addressable IP. As such, NAT only served to slow depletion, not prevent it – carriers were still required to provide that globally addressable IP from their own address space. With the explosive growth of the Internet of Things, carriers likewise began to run out of address space to allocate.

NAT came to the rescue again. Carriers took notice of the success of NAT in enterprise environments and wanted to do the same within their own networks; after all, if it worked for customers, it should work for carriers as well. This prompted the IETF to develop Carrier Grade NAT (CGN), also known as Large Scale NAT (LSN). CGN aims to provide a similar solution for carriers by obviating the need to allocate publicly available address space to their customers. By deploying CGN, carriers can oversubscribe their pool of global IPv4 addresses while still providing seamless connectivity, i.e. no truck roll.
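To make that oversubscription concrete, here is a minimal, hypothetical sketch in SRX-style Junos syntax (zone names, pool names, and addresses are invented for illustration): an entire block of RFC 6598 shared address space is translated behind a single public address, with port translation providing the many-to-one multiplexing described above.

# Hypothetical CGN-style source NAT sketch; names and addresses are illustrative.
set security nat source pool cgn-pool address 203.0.113.10/32
set security nat source rule-set cgn-out from zone subscribers
set security nat source rule-set cgn-out to zone internet
set security nat source rule-set cgn-out rule subs-out match source-address 100.64.0.0/10
set security nat source rule-set cgn-out rule subs-out then source-nat pool cgn-pool

Port translation is enabled by default on a pool-based source NAT rule like this, which is exactly what allows thousands of subscribers to share that one public address; a production CGN would of course run on carrier-class hardware and a much larger pool.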

So while the world is spared from address depletion yet again, the use of CGN technologies opens a new can of worms for carriers. No longer does one globally routable IP represent a single enterprise or customer – due to the huge oversubscription which is afforded through CGN, an IP can service potentially thousands of customers.

This brings us to the crossroads of the Denial of Service (DoS) problem. In the past, when a single global IP represented only one customer network, there was typically no collateral damage to other customer networks. If a DoS attack was large enough to impact the carrier’s network, or if there was collateral damage, the carrier would simply blackhole that customer IP to prevent it from transiting their network. However, with CGN deployments, and potentially thousands of customers being represented by a single IP, blackhole routing is no longer an option.

CGN deployments are vulnerable to DoS in a few different ways. The main issue is that CGN must maintain a stateful record of the translations between external addresses and ports and internal addresses and ports. A device which has to maintain these stateful tables is vulnerable to any type of DoS activity that may exhaust those resources, so a CGN device may be impacted in both the inbound and the outbound direction. An outbound attack is usually the result of malware on a customer’s machine sending a large amount of traffic towards the Internet and consuming the state tables in the CGN. Inbound attacks usually target a particular customer and take the form of a DoS attack or a Distributed Denial of Service (DDoS) attack. Regardless of the direction of the attack, a large amount of state-table resources is consumed in the CGN, which reduces overall port availability. Left unregulated, these attacks can easily impact not only the intended victim, but potentially the thousands of other customers being serviced by that CGN.

With the inability to simply blackhole a given IP using edge Access Control Lists (ACLs), carriers must look at other options for protecting their customer base. While some CGN implementations have the ability to limit the number of ports that are allocated to a single customer, these limits only work in discrete cases and can be difficult to manage. They also do not protect customers if the CGN device is itself the target of the attack.
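As a rough illustration of that per-subscriber port limiting, some Junos-based NAT implementations support port block allocation on a source NAT pool; the exact knob names and availability vary by platform and release, so treat the line below as an assumption-laden sketch rather than a recipe.

# Assumed syntax; verify against your platform and release documentation.
# Each subscriber is handed fixed-size blocks of ports from the shared pool,
# bounding how much of that pool a single infected host can consume.
set security nat source pool cgn-pool port block-allocation block-size 128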

The solution to this problem is the use of a purpose-built DDoS mitigation device, or what is more commonly referred to as a “scrubbing” device in IT circles. Dedicated DDoS mitigation devices attempt to enforce that everyone plays nicely by limiting the maximum number of sessions to or from a given customer. This is done by thoroughly analyzing the traffic in flight and rate-limiting or filtering it through sophisticated mitigation mechanisms to ensure fair use of the public IPs and port availability across all customers. Through the use of dedicated DDoS mitigation devices, CGN devices and their associated customers are protected from service disruptions, while legitimate traffic is still allowed through unencumbered. Lastly, another important aspect of these devices is that they tend to be “bumps in the wire”; that is to say, they don’t have an IP address assigned to them and as such cannot themselves be the target of an attack.

Is DDoS Mitigation as-a-Service Becoming a De Facto Offering for Providers?

Republished from Corero DDoS Blog:

It’s well known in the industry that DDoS attacks are becoming more frequent and increasingly debilitating, turning DDoS mitigation into a mission critical initiative. From the largest of carriers to small and mid-level enterprises, more and more Internet connected businesses are becoming a target of DDoS attacks. What was once a problem that only a select few dealt with is now becoming a regularly occurring burden faced by network operators.

In my daily engagements with various customers of all shapes and sizes, it’s truly interesting to see how the approach to DDoS mitigation is changing. Much of this is the result of DDoS mitigation services shifting from a “nice to have” technology to a “must-have”, essential in order to maintain business continuity and availability.

When I built DDoS mitigation and detection services for Verizon back in 2004, the intent was to offer value-added, revenue-producing services to subscribers, in an effort to build out our security offerings. For many years, this concept was one that pretty much every provider I worked with was looking into: build a service with the intent of generating new revenue opportunities from customers when traditional avenues such as simple connectivity and bandwidth offerings were contracting.

However, in the past several months, as I interact with everyone from large-scale carriers to data center hosting providers, I am seeing a common thread start to emerge: attracting new customers and retaining existing ones is becoming more difficult in the absence of differentiated value. Compounding this issue, customers are starting to expect some of these services as part of their connectivity fees.  What I’m seeing is more and more providers investigating the option of offering DDoS mitigation services to their customers simply by virtue of being connected to them, in an effort to attract them away from other providers with limited service offerings and capabilities.

Could it be that DDoS mitigation services become a standard offering on a provider’s network? Is it feasible that at some point in the future DDoS mitigation will become an inherent capability provided by the service providers?

In order for this approach to become a reality, the economics of the game have to change. Inserting DDoS mitigation elements into the network needs to be reasonably inexpensive in order for carriers and hosting providers to justify the cost. The technology also needs to be simple and as close to automatic as possible, as an inherent service offering will not justify the huge expense and uplift of having a team of operations personnel managing the service. Attacks need to be mitigated dynamically and quickly, without manual intervention or the need to pick up a phone to get assistance. And lastly, whatever mechanisms are in place need to ensure a “do no harm” approach, with no collateral damage to good traffic.

At Corero, we believe that we are doing just that: changing not only the economics of the game, but also the way the problem is fundamentally approached. Corero enables real-time, algorithmic identification of network anomalies and subsequent mitigation of the attack traffic, eliminating the DDoS challenge before attacks transit the network and ultimately impact downstream customers.

This concept is realized through dynamic mitigation bandwidth licensing, a new economic model built around highly scalable DDoS mitigation technology. Modernizing DDoS protection, specifically by taking advantage of always-on DDoS mitigation through emerging and proven deployment models such as dedicated in-line appliance-based mitigation at peering and transit points, is becoming a more common practice with the help of Corero Network Security.

Juniper Networks Announces New Network Design Training Curriculum and Certification Program

Juniper took a big step forward in rounding out its certification programs by announcing a new Design Training and Certification curriculum, focusing on best practices and techniques that can be used across the spectrum of network architecture and design. The program is also slated to include technologies around software-defined networking (SDN) and network functions virtualization (NFV).

This is a huge step forward for Juniper’s training and certification program and will round out its education portfolio with something similar to Cisco’s design certification. Furthermore, with the advent of network automation, and with SDN and NFV technologies becoming more commonplace, the benefits of such a training and certification curriculum can’t be overstated.

The design curriculum will eventually include a portfolio of training offerings, starting with the first course, the Juniper Networks Design Fundamentals course, which is available now. These courses and their corresponding design certifications will focus on the latest techniques, resources and tools that companies can use to fully design, secure, and automate their networks. Trainings will range from design fundamentals through more advanced courses covering the design-specific requirements of data center and WAN networks. The first certification, the Juniper Networks Certified Design Associate (JNCDA), is open for registration now and will eventually be followed by certifications at the Specialist (JNCDS) and Professional (JNCDP) levels.

This looks to be a very exciting offering indeed. It should help those interested in Juniper technologies keep pace with the myriad changes taking place in the networking world and assist them in making proper design choices. I look forward to analyzing these materials and providing an update to the community once I’ve had an opportunity to take a look at them.

What’s a Steiner Tree?

Any of you who have worked with VPLS or NG-MVPNs are likely already familiar with using Point-to-Multipoint (P2MP) LSPs to get traffic from a single ingress PE to multiple egress PEs.  The reason P2MP LSPs are desirable in these cases is that they can reduce unnecessary replication by replicating only where absolutely required, for example where a given P2MP LSP must diverge in order to reach two different PEs.

However, the sub-LSPs which are part of a given P2MP LSP typically traverse the shortest path from ingress to egress based on whatever user-defined constraints have been configured.  While this is fine for many applications, further optimization might be required so that additional bandwidth savings can be realized.

We will take a look at something called a Steiner Tree, which can help the network operator realize these additional savings, when warranted, reducing the overall bandwidth used in the network and fundamentally changing the way in which paths are computed.

Let’s start by taking a look at a simple example in which RSVP is used to signal a particular P2MP LSP, but no constraints are defined.  All the links in this network have a metric of 10.  In this case, the sub-LSPs will simply traverse along the shortest path in the network, as can be seen in the diagram below.

Here we see a P2MP LSP where PE1 is the ingress PE and PE2, PE3, and PE4 are all egress nodes.  Since no constraints have been defined, the calculated ERO for each of the sub-LSPs will follow the shortest path: one sub-LSP takes the PE1-P1-P2-PE2 path, another takes the PE1-P1-P3-PE3 path, and the third takes the PE1-P1-P4-PE4 path.  In this case, each sub-LSP has a total end-to-end cost of 30.
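For reference, a P2MP LSP like this is signaled in Junos by configuring several ordinary RSVP LSPs that share a common p2mp name. The minimal, hypothetical sketch below (loopback addresses and names are invented for illustration) shows how PE1 might group the three sub-LSPs described above:

protocols {
    mpls {
        /* Three sub-LSPs sharing the same p2mp name form one P2MP LSP */
        label-switched-path to-pe2 {
            to 10.255.0.2;          /* PE2 loopback (hypothetical) */
            p2mp video-dist;
        }
        label-switched-path to-pe3 {
            to 10.255.0.3;          /* PE3 loopback (hypothetical) */
            p2mp video-dist;
        }
        label-switched-path to-pe4 {
            to 10.255.0.4;          /* PE4 loopback (hypothetical) */
            p2mp video-dist;
        }
    }
}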

Shortest Tree

Under many circumstances this type of tree would be perfectly acceptable, especially when the end goal is to minimize end-to-end latency.  However, there are other cases where we may want to introduce additional hops in an effort to reduce overall bandwidth utilization.  This is where the concept of a minimum-cost tree, otherwise known as a Steiner Tree, comes into play.

This may seem counter-intuitive at first; after all, doesn’t a shortest-path tree attempt to minimize costs?  The answer is yes, but it usually only does so by looking at costs in terms of end-to-end metrics or hops through a network.  Once you understand the mechanics of the Steiner Tree algorithm, and how it attempts to minimize the total number of interconnects, it starts to make more sense.

According to Wikipedia, “the Steiner tree problem, or the minimum Steiner tree problem, named after Jakob Steiner, is a problem in combinatorial optimization, which may be formulated in a number of settings, with the common part being that it is required to find the shortest interconnect for a given set of objects”.

That’s a pretty fancy way of saying it’s attempting to optimize the path to be the shortest path possible while at the same time reducing the total number of interconnects between all devices to only those that are absolutely required.

Steiner Tree optimizations are very useful where an ingress PE must send large amounts of data to multiple PEs and it is preferable to ensure that overall bandwidth utilization is reduced, perhaps because of usage-based billing scenarios which require that overall circuit utilization be reduced as much as possible in order to save money.

Let’s take a look at an example, once again using the same network as before, but this time performing a Steiner Tree optimization whereby cost is measured in terms of overall bandwidth utilization.  In this case we still see that we have the requirement to build the P2MP LSP from PE1 to PE2, PE3, and PE4.  However, this time we are going to compute an ERO such that replication will only take place where absolutely necessary in order to reduce the total number of interconnects and hence overall bandwidth utilization.

After performing a Steiner Tree path computation, we determine that PE3 is a more logical choice to perform the replication to PE2 and PE4, even though it increases the overall end-to-end metric cost to 40.  The reason for this is we have now effectively eliminated the bandwidth utilization on the P1-P2, P2-PE2, P1-P4, and P4-PE4 links.  In effect, we’ve gone from utilizing bandwidth across seven links to only five.  If the P2MP LSP was servicing a 100 Mbps video stream, we have just effectively reduced overall bandwidth utilization on the network as a whole by 200 Mbps.

Steiner Tree

One of the interesting side effects of this approach is that PE3 is now not only an egress node but also a transit node (for the sub-LSPs terminating at PE2 and PE4).  Because of this, in these types of scenarios the Penultimate Hop Popping (PHP) behavior on P3 is different: we don’t want it popping the outer label before sending frames to PE3, since PE3 may need to accommodate labeled packets heading to PE2 or PE4.  We will cover some of this in a subsequent article on the signaling mechanisms inherent in P2MP LSPs and some of the changes to the behavior in MPLS forwarding state.

Path computation for P2MP LSPs can be complex, especially when the goal is to create Steiner Trees.  The reason for this added complexity is that sub-LSP placement has a direct correlation with other sub-LSPs, which is contrary to what happens when shortest-path trees are calculated, where each sub-LSP may be signaled along its own unique path without regard to the placement of other sub-LSPs.

As with traditional LSPs, similar methods of determining the paths through the network, and hence the ERO, can be used, e.g. manual, offline computation.

The easiest approach would be to use constructs like Link Coloring (Affinity Groups for you Cisco wonks) to influence path selection, for example, by coloring the PE1-P1, P1-P3, P3-PE3, PE3-PE2, and PE3-PE4 links with an included color, or coloring the remaining links with a different color and excluding that color from the LSP configuration.
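As a rough sketch of that link-coloring trick, the Junos configuration below (interface names, addresses, and the color name are all hypothetical) colors one link with an admin group and constrains a sub-LSP to use only links carrying that color; the same admin-group and include-any statements would be repeated on each colored link and on each sub-LSP:

protocols {
    mpls {
        admin-groups {
            steiner 1;                  /* define the "color" */
        }
        interface ge-0/0/1.0 {
            admin-group steiner;        /* e.g. the PE1-P1 link */
        }
        label-switched-path to-pe3 {
            to 10.255.0.3;
            p2mp video-dist;
            admin-group {
                include-any steiner;    /* CSPF may only use colored links */
            }
        }
    }
}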

However, this approach is merely a trick.  We are feeding elements into the CSPF algorithm such that the shortest path which is calculated essentially mimics that of a Steiner Tree.  In other words, it’s not a true Steiner Tree calculation because the goal was not to reduce the total number of interconnects, but rather to only utilize links of an included color.

Furthermore, such an approach doesn’t easily accommodate failure scenarios in which PE3 may go down, because even though Fast Reroute or Link/Node Protection may be desired, if the remaining links do not have the included colors they may be unable to compute an ERO for signaling.

Workarounds to this approach are to configure your Fast Reroute Detours or your Link/Node Protection Bypass LSPs with more relaxed constraints, such that any potential path might be used.  However, more commonly what you’ll see is some type of additional computation performed using traditional offline approaches (using modeling tools such as those provided by vendors like WANDL, OPNET, or Cariden) which factor in both steady-state and failure scenarios to assist the operator in determining optimal placement of all elements.

An interesting side note is that there are some pretty significant developments underway whereby online computation can be performed in such a way as to optimize all P2MP LSPs network-wide, using something known as Path Computation Elements (PCEs).  These are essentially entities capable of performing path computation for any set of paths throughout a network by applying various constraints.  This looks to be especially useful in large carrier networks consisting of many LSPs, and especially so in the case of Steiner Tree P2MP LSPs, where the placement of each sub-LSP is highly dependent on the others.  See the charter of the PCE Working Group in the IETF for more information on this and other related developments.

As a side note, it should be fairly evident that in order to perform path optimizations on anything other than shortest-path trees (i.e. Steiner Trees or any other type of tree based on user-defined constraints), RSVP signaling must be used to signal a path along the computed ERO.  LDP can certainly be used to build P2MP LSPs (aka mLDP); however, much like traditional LSPs built via LDP, the path simply follows the IGP.

Stay tuned as we will cover more exciting articles on P2MP LSPs and some of the other underpinnings behind many of the next generation MPLS services being commonly deployed…

Preparation Tips for the JNCIE-SEC Exam

Not a day goes by since passing the JNCIE-SEC exam that I don’t receive an inquiry in one form or another regarding how I prepared for it.  It seems that there is an incredible amount of interest in this exam, especially from all those die-hard ScreenOS folks who are now converting to Junos.  So instead of constantly repeating myself, I figured I’d just put it up on the blog so others can benefit (leaving me more time to do other things, ‘heh).

NOTE: For full disclosure, I must reveal that I am a Technical Trainer and Certification Proctor at Juniper Networks.  As such, I take EXTRA responsibility for protecting the content and integrity of the exam, and I take the certification credentials very seriously.  Not only that, I worked REALLY, REALLY hard to achieve my JNCIE certifications, and I believe everyone else should too! As such, I kindly ask that candidates refrain from asking me questions which would be considered a violation of the NDA.  I should also add that although I work for Juniper, the viewpoints expressed in this article are my own and may not necessarily be shared by my employer.

Let’s first start by looking at the exam objectives and then we will move on to the materials I used for preparation and the hardware requirements for building out a lab which would provide for sufficient preparation.

Exam Objectives

Detailed exam objectives are listed on Juniper’s JNCIE-SEC Exam Objectives certification page.  Familiarize yourself with these objectives and try to focus your study towards mastering all of these objectives.  Learn to read between the lines to identify if additional subject matter might need to be explored for full preparation.

In Junos there is typically more than one way to accomplish a given task, so you would be wise to learn all the different approaches in order to achieve complete mastery of the subject matter.  For example, can you accomplish bidirectional address translation similar to Static NAT by instead using Source NAT and Destination NAT?  What are the benefits and caveats of each approach?
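As a hint of what that exercise looks like, here is a minimal, hypothetical sketch (addresses, pool names, and zone names are invented) that pairs a destination NAT rule for inbound traffic with a source NAT rule for outbound traffic, so that one internal host is consistently represented by one public address, roughly approximating a single static NAT rule:

# Inbound: translate the public address to the internal host.
set security nat destination pool web-internal address 10.1.1.10/32
set security nat destination rule-set from-untrust from zone Untrust
set security nat destination rule-set from-untrust rule inbound-web match destination-address 203.0.113.10/32
set security nat destination rule-set from-untrust rule inbound-web then destination-nat pool web-internal

# Outbound: translate the internal host to the same public address.
set security nat source pool web-external address 203.0.113.10/32
set security nat source pool web-external port no-translation
set security nat source rule-set trust-to-untrust from zone Trust to zone Untrust
set security nat source rule-set trust-to-untrust rule outbound-web match source-address 10.1.1.10/32
set security nat source rule-set trust-to-untrust rule outbound-web then source-nat pool web-external

Working through why this is (or is not) equivalent to a static NAT rule (proxy ARP handling, port preservation, and rule ordering, for instance) is exactly the kind of exploration the exam objectives reward.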

The current Junos software release that is used throughout the exam is Junos 11.1.  A quick glance through the release notes may be useful to familiarize yourself with some of the new features introduced in this version.

Study Materials

First and foremost, you are going to want to get your hands on the official Juniper courseware for all the requisite curriculum listed under the Junos Security track.  Specifically the following:

If you are unable to attend all of these courses in person, one of the cool things is that Juniper now lets you purchase the course materials for self-study purposes.  Basically you get access to everything that you would normally receive in the class, minus the instructor and access to the lab gear of course.

NOTE: While it is possible to order the materials for self-study, I strongly advocate taking the actual training if you can do so as the instructors tend to augment the subject matter with additional details, first-hand observations and experience not normally found in the materials.  Furthermore, as is the case in classes I normally teach, we tend to reveal tips and techniques which might be useful in certification attempts.

To augment the above, I would highly advise reading the book ‘Junos Security’ by Rob Cameron, Brad Woodberg, Patricio Giecco, Tim Eberhard, and James Quinn.  I’ll be writing a review of it in a subsequent post, but for now I can’t overemphasize how important this book was in my preparations. In fact, I would advise reading it twice for good measure.  There is a lot of good coverage here: the majority of what you can expect to see in the exam is covered in this book, and what might be missing is adequately covered in the official courseware material.

I would also suggest making note of the links below, and you would be well advised to make use of both of them during your preparation.  The first link is the JumpStation to a wide variety of SRX knowledge base articles, and the second link provides detailed coverage of configuring High Availability across a number of different SRX platforms.  Familiarize yourself with the subtle differences in HA configuration across all the different platforms, as you don’t want your first exposure to these differences to be during an examination attempt.

Before moving on to the lab setup, I want to mention that we will be offering JNCIE-SEC bootcamps sometime in the future.  Although there is currently no committed date for such an offering, when available you will get in-depth coverage of the types of topics you will expect to see on the exam in addition to a simulated lab on the final day of class.  Stay tuned for more information regarding our bootcamp offerings on Juniper’s Learning Portal.

Lab Buildout

A common question asked throughout the forums is what type of lab setup is required for adequate preparation.  I can tell you that I personally prepared with only two SRX210s and a single SRX100, but it slowed down my preparations immensely due to constantly having to rearrange and reconfigure the lab setup to accommodate different topologies (hub-and-spoke vs. full-mesh, clustered vs. non-clustered, etc.).  If you can spring for it, I would say purchase as many devices as you possibly can, so that you can build out a clustered SRX while leaving others standalone and build complex VPN topologies.  This way you can spend more of your time learning new features rather than rearranging your lab setup.

amtrak_lab

One of the benefits of having the smaller branch devices is that they are fairly portable.  In fact, as seen in the picture to the left, I was able to set up my lab during a trip from DC to New York on an Amtrak train in business class (although others did give me funny looks).  As you can see, even during a 3-hour trip, I was able to make use of this time for study preparations.

I would also strongly advise purchasing at least one device with the High Memory option, as this will let you run the full gamut of IPS and UTM capabilities, assuming you’ve got the licenses.  Speaking of licenses, you can acquire trial licenses from Juniper which are valid for a period of 4 weeks, so I would advise holding off on activating these until you are completely ready.  Trial licenses are tied to a device’s serial number, and although they are only valid for 4 weeks, you can fetch a trial license once per year for each device serial number.

You can find SRX devices on eBay for as little as a few hundred dollars a piece, so building out a lab doesn’t have to break the bank.  And the cool thing is that when you are done you can resell them for a fair market value so in the long term you really shouldn’t have to spend that much getting a decent lab built out.

Once you have your lab completely set up, I would strongly advise going through all the labs in the official courseware, as these are indicative of the types of things you will likely see on the exam.  Unlike the JNCIE-ENT and JNCIE-SP labs, in this lab it really helps to have some type of client and server incorporated throughout the topology so that various features such as NAT and stateful firewall policy can be properly tested.  In lieu of this, and with a bit of creative license, you could actually use one of your SRX platforms with a few Virtual Routers configured to simulate both clients and servers, connected to the Trust ports on the other devices throughout your topology.  This won’t give you full parity with access to real clients and servers, but the idea is to be able to generate sufficient flows to properly trigger things like NAT rules or firewall policy.  A lot can be simulated by simply using ‘telnet’ and specifying the destination port required to trigger a particular rule on a downstream device.
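As a rough illustration of that trick, the sketch below (instance name, interface, and address are all hypothetical) carves out a virtual-router routing instance to act as a "client" and then sources a telnet connection from it toward an arbitrary port, exercising whatever policy or NAT rule sits downstream:

# Hypothetical "client" virtual router on a spare SRX interface.
set routing-instances client-vr instance-type virtual-router
set routing-instances client-vr interface ge-0/0/5.0

# From operational mode, generate a flow toward the port your policy matches on.
root@srx> telnet 203.0.113.10 port 8080 routing-instance client-vr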

Final Notes

A question most often asked is how long it should take to prepare.  The answer really depends on your Junos experience level and background.  If you already have working experience with Junos or already hold a JNCIE, I would expect about 4-6 months to be sufficient for adequate preparation.  Otherwise, if you are new to Junos or transitioning over from ScreenOS, I wouldn’t even suggest starting exam preparations until you’ve had at least 1-2 years of experience working with Junos and the SRX platforms.

Overall, this might seem like a long time but you’d be amazed at how quickly a few months can go by – if you can carve out even just an hour each day over the course of several months you will be infinitely better served than having to do a bunch of cramming in the last few weeks before your exam.  Remember, slow and steady wins the race here… it’s a marathon, not a 100-meter dash.

Last but not least, and this may seem a bit silly, but it is really important to get to bed early the night before the exam and get a decent night’s rest.  If you’re not adequately prepared by that point, cramming all night isn’t going to do you any good.  Also, wake up early enough to ensure you can get a good breakfast.  Based on personal experience, I can tell you that this makes a big difference.  I strongly advise oatmeal since it’s low on the Glycemic Index and will give you a slow, steady release of energy throughout the morning – the perfect way to ensure your mind is focused and you don’t have any of those mid-morning dips in energy levels or mental acuity.

A little tidbit that not many folks are aware of – you can bring your own keyboard when you sit the exam, as you might find the keyboards we provide difficult to use.  This is one of those little things that can really make a difference when you are used to running all those EMACS command sequences on a keyboard you are familiar with.

I will be proctoring this exam so for those of you attempting to sit the exam in our Herndon office, I look forward to meeting you and wish you the best in your upcoming attempt.  With a little bit of luck and a lot of preparation, you may find success and achieve the highly sought-after JNCIE-SEC designation.  Good luck and may the force be with you!!!

Black Hat OSPF Vulnerabilities: Much Ado About Nothing

Imagine a group of researchers planning to speak at a conference regarding a previously undiscovered vulnerability present in most homes that would allow a thief to rob your home of its valuables with complete ease.  You would probably be interested in hearing what they had to say so you could take the necessary precautions to protect your home.

Now imagine when they presented their findings, they went on to state that it was incredibly easy to do, so long as you left your front door open and also provided them with the security code for any alarm systems.  You would probably find this implausible and simply the proliferation of fear, uncertainty, and doubt.

That’s precisely what happened last week at the well-respected Black Hat security conference in Las Vegas when researchers from the Israel Institute of Technology and Advanced Defense Systems, Ltd. presented their findings of a serious vulnerability present in OSPF.  So serious in fact, the researchers stated the only way to properly mitigate the threat, short of fixing the protocol, is to switch to another routing protocol such as RIP or IS-IS.

The researchers went on to present their research of two previously undiscovered methods of poisoning routing information within an OSPF domain such that bogus information could be propagated and used for a multitude of malicious purposes.

The first attack allows for the creation of a remote false adjacency, whereby an unsuspecting victim router goes on to form an adjacency with a phantom router.  Once this adjacency is formed, the attacker can then flood the victim router with bogus Link-State Advertisements (LSAs).

The second attack allows the flooding of what are known as Disguised LSAs, whereby a victim router would accept spoofed LSAs which it believed were generated by another router within the OSPF domain.

While both of these previously unknown attack vectors might raise some eyebrows and create cause for concern, the researchers made some pretty big assumptions about what it takes to carry out such an attack: one would first need to insert a router into the network, and once this was accomplished one would then somehow need to obtain the MD5 authentication key that is widely used to secure the communications channel between OSPF neighbors.

Without a doubt, there is some merit to what these researchers have found, and as a protocol junkie I am always interested to learn the inner workings of these protocols.  However, what they’ve completely overlooked is the first tenet of information security, which is to ensure the physical environment is properly secured.  In most environments, gaining physical access to the network might prove difficult enough.  Nonetheless, if an attacker were able to insert such a device, the net effect would be that such attacks would be short-lived, as it would be fairly trivial for a network administrator to isolate the offending device and remove it from the network.

Furthermore, and perhaps the biggest barrier to entry, they’ve failed to recognize how unlikely it is that someone could obtain the authentication key in the first place.  While MD5 is known to have some serious deficiencies and is vulnerable to collision attacks, it still represents a pretty high bar for the would-be attacker.  And various vendors are already rolling out HMAC-SHA for the authentication function within OSPFv2.
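For context, enabling neighbor authentication is a one-line affair on most platforms. The hedged Junos-style sketch below (interface name and key are placeholders) shows per-interface MD5 authentication for OSPFv2, which is precisely the key material the attack assumes is already in the attacker's hands:

# Placeholder interface and key; use per-interface keys and rotate them periodically.
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0 authentication md5 1 key "use-a-strong-shared-secret"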

In addition, the researchers admitted that in their testing of the first exploit, the victim router tore down the adjacency to the phantom router after 125 seconds because the victim router did not receive the proper acknowledgement packets from the phantom router.  Once again, the net effect is that if the attackers were able to meet the first two assumptions, their attack would still be relatively short-lived.

While OSPF is widely used throughout the Internet as a whole, you probably have better odds of winning the lottery than actually experiencing an attack such as these researchers have demonstrated.  The researchers presented some pretty bold claims, but at the end of the day didn’t really present anything that wasn’t already widely known about the vulnerabilities inherent within the OSPF protocol, given the assumptions they made.  Those assumptions rest on two barriers to entry, which don’t make an attack impossible, but do make it incredibly difficult and highly implausible.  In my experience, attackers tend to move on to far easier targets.

There are a lot of things to worry about when it comes to protecting your corporate assets, but in my humble opinion, this is not one of them.  So rather than ripping out OSPF as these researchers would have you believe, you might perhaps be better served by taking a second look at your environment and ensuring the physical premises are properly secured.

Juniper SRX Tips :: Altering Default-Deny Behavior

In our previous article, we looked at using apply-groups to alter all the security policies uniformly on an SRX device such that they would all have an implicit logging statement. And while this is fine for all existing policies, it doesn’t log traffic which doesn’t match any explicitly defined security policy.

The reason is that in Junos, traffic which doesn’t match an explicitly defined security policy matches against the default-deny policy.  However, because the default-deny policy is implicitly defined, apply-group configurations are of little benefit here, as apply-groups can only be inherited by elements which have been explicitly defined.

Often in these cases, administrators will simply choose to create their own deny policies with the desired options and place this deny policy as the last policy for traffic going from one zone to another. However, in instances where there are many zones, it might prove too cumbersome and time-consuming to manually configure this for every zone pair.

Clearly it would be more beneficial to have something akin to the Global Zone in ScreenOS which can be used to match on all traffic which doesn’t match against any of the explicitly defined security policies.  However, at the time of this writing, Global Zone functionality doesn’t exist in Junos.

The good news is that we can use the power of apply-groups once again to our benefit, this time to create an explicitly defined deny policy which will be inherited at the tail-end of all security policies defined within our configuration. Note that this will encompass both Inter-zone as well as Intra-zone traffic.

For this example, let’s assume that we want to log everything that would normally hit the default-deny policy. Let’s start by taking a look at our baseline configuration:

root@ce-1# show security policies
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address Users-subnet;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
}

Here you can see we have a policy allowing all traffic outbound from the Users-subnet in the Trust zone towards the Untrust zone, and another policy allowing inbound HTTP traffic from the Untrust zone towards the Web Server in the Trust zone.  Now, in order to change the default-deny behavior and add additional options, we will use an apply-group to inherit a new policy at the tail-end of all previously defined policies, as follows:

groups {
    default-log {
        security {
            policies {
                from-zone <*> to-zone <*> {
                    policy log-all-else {
                        match {
                            source-address any;
                            destination-address any;
                            application any;
                        }
                        then {
                            deny;
                            log {
                                session-init;
                            }
                        }
                    }
                }
            }
        }
    }
}

Finally, let’s apply our apply-group at the [security policies] stanza within our configuration:

root@ce-1# set security policies apply-groups default-log

Now that we’ve completed the configuration, let’s examine the results of the application of our apply-group by taking a look at our security policies, this time by displaying the inherited configuration:

root@ce-1# show security policies | display inheritance
apply-groups default-log
from-zone Trust to-zone Untrust {
    policy allow-outbound {
        match {
            source-address Users-subnet;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
    ##
    ## 'log-all-else' was inherited from group 'default-log'
    ##
    policy log-all-else {
        ##
        ## 'match' was inherited from group 'default-log'
        ##
        match {
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            source-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            destination-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ## Warning: application or application-set must be defined
            ##
            application any;
        }
        ##
        ## 'then' was inherited from group 'default-log'
        ##
        then {
            ##
            ## 'deny' was inherited from group 'default-log'
            ##
            deny;
            ##
            ## 'log' was inherited from group 'default-log'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'default-log'
                ##
                session-init;
            }
        }
    }
}
from-zone Untrust to-zone Trust {
    policy allow-web {
        match {
            source-address any;
            destination-address web-server;
            application junos-http;
        }
        then {
            permit;
        }
    }
    ##
    ## 'log-all-else' was inherited from group 'default-log'
    ##
    policy log-all-else {
        ##
        ## 'match' was inherited from group 'default-log'
        ##
        match {
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            source-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ##
            destination-address any;
            ##
            ## 'any' was inherited from group 'default-log'
            ## Warning: application or application-set must be defined
            ##
            application any;
        }
        ##
        ## 'then' was inherited from group 'default-log'
        ##
        then {
            ##
            ## 'deny' was inherited from group 'default-log'
            ##
            deny;
            ##
            ## 'log' was inherited from group 'default-log'
            ##
            log {
                ##
                ## 'session-init' was inherited from group 'default-log'
                ##
                session-init;
            }
        }
    }
}

Once again, with just a couple of lines of code we can streamline the configuration to a large extent, in this case creating an explicitly defined deny policy which logs all traffic that would otherwise be silently discarded.  And best of all, we can do so without having to resort to manual configuration of each and every zone pair.

In small installations this technique might be of little benefit, but in larger implementations consisting of dozens of zones with a combination of inter-zone, intra-zone, and bidirectional security policies, the benefit of such an approach cannot be overstated.  Not only will this ease the configuration burden, but it will ensure that all traffic which doesn’t match any of the existing security policies is handled in a consistent manner.  Of course, as with previous examples, if there are certain zone pairs that we don’t want to inherit this new default-deny policy, we can simply utilize the apply-group-except statement for each of those respective contexts, as shown below.
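For example, assuming apply-group-except is permitted at the from-zone/to-zone hierarchy level (worth verifying on your release), excluding a single zone pair from the inherited policy might look like this:

root@ce-1# set security policies from-zone Untrust to-zone Trust apply-group-except default-log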

In our next article we will examine changing the built-in Junos application defaults so that we can customize timers and other parameters.

