Category Archives: Data Center

The Changing Landscape of Selling in the Age of SDN

There are massive waves of technology upheaval taking place in the marketplace, causing disruption and challenging technology salespeople who are used to selling in traditional ways. Cloud, Automation, Mobility, Adaptive Security and the Internet of Things are just a few of the major changes affecting the landscape right now. And while these technologies are certainly challenging in their own right, there is one technology that stands on its own, reshaping not only how technology decisions are made, but also how technology is bought.

That technology is Software Defined Networking (SDN). SDN is causing a fundamental shift in the way technology is procured. Buyers are moving away from point products that meet only a specific need and are instead looking at the bigger picture, aiming for purchases that fit into a larger ecosystem, deliver broader solutions, shorten time to ROI and improve business agility.

Application-Centricity Creates New Stakeholders

The buying process used to be relatively straightforward: different technology groups within an organization could procure technology within their own silo with little regard for how it fit into the broader ecosystem. Oftentimes, the technology implemented would dictate and limit which applications could run on it. Now the shift is towards an application-centric buying framework. The applications themselves are the driving force behind technology purchases, and these purchases must work in a tightly integrated, sophisticated ecosystem of compute, storage and networking resources. More often than not, security is being woven into that fabric as well. These can no longer be looked at as individual elements; furthermore, the new tools emerging to orchestrate all of these resources simultaneously are forcing the siloed buyers of the past to work cohesively with other teams and determine whether a technology meets the overarching requirements of the various applications.

As a result, there are new stakeholders: the application architects, the cloud architects, and increasingly the DevOps teams responsible for overseeing the collaboration between developers and IT. These stakeholders will be responsible for ensuring that technology purchases can seamlessly integrate into a fully orchestrated system.

Adapting to the Changing Landscape

To adapt to this changing buyer landscape, we must recognize that we can no longer sell individual pieces of the puzzle. We must understand the whole puzzle itself and be able to sell the various solutions collectively in a way that answers the needs of the buyers, recognizing that the whole is greater than the sum of its parts. Salespeople and technology professionals must be well versed in a diverse range of technologies in order to speak credibly to the technology buyer, with a goal of providing broader solutions that meet the “application-centric” nature these buyers demand.

Furthermore, as solutions move towards a software-based model, there are implications to understand, because business will increasingly be conducted on a “subscription-based” model. We need to understand how to maintain revenue streams, given that subscription-based pricing models and the revenue associated with them are quite different from traditional revenue streams.

It also affects the channel, in that various partners must come together to deliver solutions that integrate cohesively into a much larger framework. It’s no longer acceptable to simply deploy gear based on “speeds and feeds”. The channel needs to demonstrate that its solutions work cohesively in a larger ecosystem and that it has strong partnerships in place to meet these needs. These partnerships will become increasingly important as customers tie together components from multiple vendors and expect a seamless, integrated and highly orchestrated ecosystem.

In order to capitalize on this changing landscape and maintain revenue streams, a different approach needs to be taken with customers. The first step is identifying the new stakeholders. Once the stakeholders are identified, they need to be approached with an overarching vision of their business problems and needs, with a focus on broader solutions encompassing a diverse set of technologies that work cohesively together. The successful technology salesperson of tomorrow will assist customers by showing them fully orchestrated solutions that meet their needs while driving down costs, enabling them to be more agile and to accomplish more in less time.

Book Review :: Juniper QFX5100 Series: A Comprehensive Guide to Building Next-Generation Networks

Juniper QFX5100 Series: A Comprehensive Guide to Building Next-Generation Networks
by Douglas Richard Hanks, Jr.
Paperback: 310 pages
Publisher: O’Reilly Media
ISBN-13: 978-1491949573

Rating: 5 out of 5 stars

Much more than just a book about the QFX5100

This was an easy weekend read, and quite honestly I never thought I’d say this about a technical book, but I literally could not put it down. Doug has amassed a wealth of great information, approaching the subject matter with brevity and applying the Goldilocks principle: not too much and not too little, but just the right amount of information.

Do not be confused by the title — this is not JUST a book about the QFX5100 series. As the subtitle might indicate, it’s more of a book on building next-gen networks, specifically Data Center networks, and serves as a fantastic primer on the various protocols and technologies that are becoming the mainstay of this world.

As the networking world works overtime to catch up to the virtualization already offered by storage and compute resources, the reader tasked with creating the network of tomorrow will appreciate the coverage of building fabrics of varying dimensions: whether it’s Juniper’s Virtual Chassis Fabric for building small to medium-sized Ethernet fabrics, or Clos fabrics for building extremely large IP underlay networks, the coverage is top notch. Readers will get a thorough introduction to the concepts of VXLAN and overlay networking with VTEPs using controllers such as Juniper’s Contrail or VMware’s NSX, and their respective control plane signaling mechanisms such as EVPN and OVSDB.

I sincerely appreciated the in-depth coverage of the architecture of the QFX5100 series and the Broadcom Trident II chipset, as well as the inside look at the control plane virtualization that takes place on the QFX5100 itself (apparently, Juniper is really taking virtualization to heart). I also enjoyed the chapter on Performance and Scaling, which covers the options for tuning latency through the box (cut-through vs. store-and-forward) as well as options for tailoring the Unified Forwarding Table to fit the needs of individual networks. The chapter on Network Automation is a nice addition as well, with coverage of various automation tools such as PyEZ, Ansible, Puppet and Chef, just to name a few.
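For readers who haven’t played with PyEZ before, a tiny sketch of what that style of automation looks like might help. This is purely an illustrative snippet using the junos-eznc library against a hypothetical lab switch; the hostname and credentials are placeholders I made up, not anything taken from the book:

```python
# Illustrative only: connect to a (hypothetical) QFX5100 with PyEZ and pull device facts.
from jnpr.junos import Device

# Placeholder hostname and credentials for a lab device
with Device(host="qfx5100-lab.example.com", user="lab", password="lab123") as dev:
    facts = dev.facts                      # dict-like collection of device facts
    print(facts["model"], facts["version"])  # e.g. hardware model and Junos version
    print(facts.get("serialnumber"))         # serial number, if reported
```

From there, the same connection object can be used to drive RPCs and configuration changes, which is the general style of workflow that tools like PyEZ, Ansible, Puppet and Chef build on.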

The astute reader familiar with Juniper’s website will recognize that a few of the chapters comprising this book are borrowed from various white papers that Doug has authored. All in all, however, there is quite a bit more information in this book than can be gleaned from Juniper’s public-facing collateral. There were a few minor grammatical and technical inconsistencies (largely text that didn’t match diagrams), but these did not detract from the overall value of the book, and I can only ascribe them to the fact that Doug did not use me as a technical editor on this book. <hint hint>

Last but not least, although it is not specifically mentioned, I do believe this book, as well as his QFX10000 book, will prove to be an invaluable resource for anyone preparing for the JNCDS-DC, JNCIP-DC, or the upcoming JNCIE-DC exams, as I strongly believe that technical content from all three exams will likely be covered here.

All in all, an excellent resource, and one that I am sure to reference regularly in my day-to-day engagements working with customers building out next-generation data center networks. I thoroughly enjoyed reading this book and am looking forward to reading his recent book on the QFX10000 series.

 

Is DDoS Mitigation as-a-Service Becoming a Defacto Offering for Providers?

Republished from Corero DDoS Blog:

It’s well known in the industry that DDoS attacks are becoming more frequent and increasingly debilitating, turning DDoS mitigation into a mission-critical initiative. From the largest of carriers to small and mid-level enterprises, more and more Internet-connected businesses are becoming targets of DDoS attacks. What was once a problem that only a select few dealt with is now a regularly occurring burden faced by network operators.

In my daily engagements with customers of all shapes and sizes, it’s truly interesting to see how the approach to DDoS mitigation is changing. Much of this is the result of DDoS mitigation shifting from a “nice to have” technology to a “must-have”, essential to maintaining business continuity and availability.

When I built DDoS mitigation and detection services for Verizon back in 2004, the intent was to provide value-add, revenue-producing services to subscribers, in an effort to build out our security offerings. For many years, this was the concept pretty much every provider I worked with was pursuing: build a service with the intent of generating new revenue from customers as traditional avenues, such as simple connectivity and bandwidth offerings, were contracting.

However, in the past several months, as I interact with everyone from large-scale carriers to data center hosting providers, I am seeing a common thread emerge: attracting new customers and retaining existing ones is becoming more difficult in the absence of differentiated value. Compounding this issue, customers are starting to expect some of these services as part of their connectivity fees. What I’m seeing is more and more providers investigating the option of offering DDoS mitigation services to their customers simply by virtue of being connected to them, in an effort to attract customers away from other providers with limited service offerings and capabilities.

Could it be that DDoS mitigation services will become a standard offering on a provider’s network? Is it feasible that at some point in the future DDoS mitigation will become an inherent capability provided by service providers?

In order for this approach to become a reality, the economics of the game have to change. Inserting DDoS mitigation elements into the network needs to be reasonably inexpensive in order for carriers and hosting providers to justify the cost. The technology also needs to be simple and as close to automatic as possible, since an inherent service offering cannot justify the huge expense and uplift of a team of operations personnel managing the service. Attacks need to be mitigated dynamically and quickly, without manual intervention or the need to pick up a phone to get assistance. And lastly, whatever mechanisms are in place need to take a “do no harm” approach and ensure there is no collateral damage to good traffic.

At Corero, we believe that we are doing just that: changing not only the economics of the game, but also fundamentally looking at the problem in a different way. Corero enables real-time, algorithmic identification of network anomalies and subsequent mitigation of the attack traffic, eliminating the DDoS challenge before attacks transit the network and ultimately impact downstream customers.
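To make the idea of algorithmic detection and automatic mitigation a bit more concrete, here is a deliberately simplified toy sketch. It is not Corero’s implementation (real appliances inspect live packet streams at line rate, in hardware); the threshold, addresses and blocklist mechanism are invented purely for illustration:

```python
# Toy illustration of threshold-based anomaly detection and automatic mitigation.
# The threshold, addresses and blocklist mechanism are invented for this sketch;
# real mitigation appliances operate on live packet streams, not static dicts.

PPS_THRESHOLD = 100_000  # packets per second per source considered anomalous (arbitrary)

def detect_anomalies(per_source_pps):
    """Return the set of source IPs whose observed packet rate exceeds the threshold."""
    return {src for src, pps in per_source_pps.items() if pps > PPS_THRESHOLD}

def mitigate(blocklist, anomalous_sources):
    """Block only the offending sources, leaving all other ('good') traffic untouched."""
    blocklist.update(anomalous_sources)
    return blocklist

# Example: one source floods while normal clients stay well under the threshold
observed = {"192.0.2.10": 250_000, "198.51.100.7": 800, "203.0.113.42": 1_200}
print(mitigate(set(), detect_anomalies(observed)))  # {'192.0.2.10'}
```

The point of the sketch is simply that the decision loop is automatic and surgical: only the sources identified as anomalous are acted upon, which is the essence of the “do no harm” requirement described above.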

This concept is realized through dynamic mitigation bandwidth licensing, a new economic model built around highly scalable DDoS mitigation technology. Modernizing DDoS protection by taking advantage of always-on mitigation through emerging and proven deployment models, such as dedicated in-line deployment of appliance-based DDoS mitigation at peering and transit points, is becoming a more common practice with the help of Corero Network Security.

Tomorrow’s Data Center – Realizing the Potential of Virtualization

I’m not sure what the Data Center of tomorrow will look like, but one thing is for sure: it will most likely look a whole lot different than it does today.

We’ve seen the benefits that server virtualization brings by creating a level of abstraction between the software and the underlying hardware. But in order for the true vision of virtualization to take hold, the physical resources themselves will need to become decoupled from the physical chassis we think of today as a traditional server, be it a rack-mountable server, a blade chassis, or some other form factor. We need to stop thinking of resources in the manner we’ve become accustomed to, as physically constrained devices with finite boundaries, and start thinking beyond, towards pools of resources.

Instead of the server form factor consisting of memory, CPU and I/O interfaces for network and storage, all of these components will need to be removed and something altogether different will need to take their place. We need to start thinking in terms of blades or insertable modules that consist solely of a given set of resources (memory, compute, storage I/O, etc.). We can then pool these resources together to create individual compute farms, memory farms, or farms of any other type of resource to which we want the fluid ability to add capacity on the fly. Additional compute resources could be gained simply by adding another compute module to the compute farm. More memory could be added to the system simply by adding another memory blade to the memory farm. Additional bandwidth could be added simply by adding another blade of high-density Ethernet ports to the network farm. Not only will this enable greater agility in our data centers, it will also allow each resource to be scaled independently of the others, something that is difficult today given the fixed-chassis server architecture where most of these resources currently live. And even though we are creating individual farms of resources, the virtualization capabilities present in the Data Center of tomorrow will allow the administrator to pick and choose which subsets of these resources to combine and present to individual VMs.
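To illustrate the idea of disaggregated resource farms, here is a small hypothetical sketch. The classes, units and capacities are invented for illustration only; they simply model pools that grow as modules are inserted, and a VM that is composed from slices of each pool rather than from one fixed-chassis server:

```python
# Hypothetical sketch of disaggregated resource "farms": capacity grows by inserting
# modules into a pool, and a VM is composed from slices of each pool rather than
# from the fixed resources of a single server chassis.

class ResourcePool:
    def __init__(self, name):
        self.name = name
        self.capacity = 0   # total units contributed by installed modules
        self.allocated = 0  # units currently carved out for VMs

    def add_module(self, units):
        """Insert another blade/module and grow the farm's capacity on the fly."""
        self.capacity += units

    def allocate(self, units):
        """Carve a slice of the pool out for a VM, if the farm has room."""
        if self.allocated + units > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += units

compute = ResourcePool("compute")   # units: vCPUs
memory = ResourcePool("memory")     # units: GB
network = ResourcePool("network")   # units: Gbps

compute.add_module(64)   # add a compute blade to the compute farm
memory.add_module(512)   # add a memory blade to the memory farm
network.add_module(40)   # add a high-density Ethernet blade to the network farm

# Compose a VM from independent slices of each farm
for pool, units in ((compute, 8), (memory, 32), (network, 10)):
    pool.allocate(units)
print(compute.allocated, memory.allocated, network.allocated)  # 8 32 10
```

The design point the sketch tries to capture is independence: each farm scales by adding modules of just that resource, without touching the others.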

Another area where we might see considerable change is the traditional rack itself. Today the rack doesn’t offer much in the way of intelligence; it’s really just a mechanism we use to organize and situate our resources. Is it possible that the rack itself might evolve to become an underlying fabric that ties together all of these different resource pools and presents them to the resource virtualization layer in a unified fashion? Perhaps the fabric incorporated into the rack will provide the necessary connectivity between all of the elements, in a fashion similar to what a traditional motherboard provides today. Developments such as Intel’s Light Peak technology might ultimately provide the high-speed connectivity required in such a design. It’s possible that even good old Ethernet might provide that connectivity, as we can see happening with FCoE and Remote Direct Memory Access over Converged Enhanced Ethernet.

And finally, something that might look a whole lot different tomorrow is the physical alignment of the rack itself, with racks being turned on their sides in a horizontal fashion to assist with cable management and ease of access to resources. The cooling of the equipment might be done by liquid rather than traditional air currents, as is being developed by companies like Green Revolution Cooling. Their GreenDEF coolant is essentially a non-toxic mineral oil and, unlike water, is not electrically conductive. Green Revolution Cooling’s promise is that their cooling solutions offer dramatically lower up-front costs for greenfield deployments, lower energy use, and higher performance than traditional cooling systems, with up to 45% less power and up to 95% less traditional cooling required.

Ultimately, in order for the vision of virtualization to realize its true potential, the physical mold we have become accustomed to will have to be broken. Resources will need to be pooled together, and that just isn’t effective or scalable within the confines of the traditional server chassis as we know it today. On top of that, green computing and the desire to save money and reduce carbon footprints, coupled with the efficiencies that might be gained by changing the alignment of racks and resources, are going to result in a Data Center that looks fundamentally different than it does today.