ShortestPathFirst: Network Architecture and Design, and Information Security Best Practices


Is DDoS Mitigation as-a-Service Becoming a De Facto Offering for Providers?

Written by Stefan Fouant

Republished from the Corero DDoS Blog.

It’s well known in the industry that DDoS attacks are becoming more frequent and increasingly debilitating, turning DDoS mitigation into a mission-critical initiative. From the largest carriers to small and mid-sized enterprises, more and more Internet-connected businesses are becoming targets of DDoS attacks. What was once a problem that only a select few dealt with is now a regularly occurring burden faced by network operators.

In my daily engagements with customers of all shapes and sizes, it’s truly interesting to see how the approach to DDoS mitigation is changing. Much of this is the result of DDoS mitigation shifting from a “nice-to-have” technology to a “must-have”, essential for maintaining business continuity and availability.

When I built DDoS mitigation and detection services for Verizon back in 2004, the intent was to create value-added, revenue-producing services for subscribers, in an effort to build out our security offerings. For many years, this was the concept pretty much every provider I worked with was pursuing: build a service with the intent of generating new revenue from customers at a time when traditional avenues such as simple connectivity and bandwidth offerings were contracting.

However, in the past several months, as I interact with everyone from large-scale carriers to data center hosting providers, I am seeing a common thread emerge: attracting new customers and retaining existing ones is becoming more difficult in the absence of differentiated value. Compounding this issue, customers are starting to expect some of these services as part of their connectivity fees. What I’m seeing is more and more providers investigating the option of offering DDoS mitigation services to their customers by virtue of being connected to them, in an effort to attract customers away from other providers with limited service offerings and capabilities.

Could it be that DDoS mitigation services will become a standard offering on a provider’s network? Is it feasible that at some point in the future DDoS mitigation will become an inherent capability provided by service providers?

In order for this approach to become a reality, the economics of the game have to change. Inserting DDoS mitigation elements into the network needs to be reasonably inexpensive in order for carriers and hosting providers to justify the cost. The technology also needs to be simple and as close to automatic as possible, as an inherent service offering cannot justify the huge expense and uplift of a team of operations personnel managing the service. Attacks need to be mitigated dynamically and quickly, without manual intervention or the need to pick up a phone to get assistance. And lastly, whatever mechanisms are in place need to take a “do no harm” approach, ensuring there is no collateral damage to good traffic.

At Corero, we believe we are doing just that: changing not only the economics of the game, but also fundamentally looking at the problem in a different way. Corero enables real-time, algorithmic identification of network anomalies and subsequent mitigation of the attack traffic, eliminating the DDoS challenge before attacks transit the network and ultimately impact downstream customers.
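To make the notion of real-time, algorithmic anomaly detection a bit more concrete, below is a minimal sketch in Python of one common approach: tracking a traffic baseline with an exponentially weighted moving average (EWMA) and flagging rates that deviate far from it. This is purely illustrative; the class names, thresholds, and the EWMA technique itself are my own assumptions, not a description of Corero's implementation.

    # Minimal sketch of rate-based anomaly detection using an exponentially
    # weighted moving average (EWMA). Illustrative only; thresholds and
    # window sizes are invented, not vendor defaults.

    class EwmaDetector:
        def __init__(self, alpha=0.125, threshold_factor=4.0):
            self.alpha = alpha                        # smoothing factor for the EWMA
            self.threshold_factor = threshold_factor  # how far above baseline counts as anomalous
            self.baseline = None                      # learned packets-per-second baseline

        def observe(self, pps):
            """Return True if the observed packet rate looks anomalous."""
            if self.baseline is None:
                self.baseline = pps
                return False
            anomalous = pps > self.baseline * self.threshold_factor
            # Only fold "normal" samples into the baseline, so an ongoing
            # attack does not teach the detector that the attack is normal.
            if not anomalous:
                self.baseline = (1 - self.alpha) * self.baseline + self.alpha * pps
            return anomalous

    def mitigate(samples, detector):
        """Yield (timestamp, action) pairs: 'forward' for clean traffic, 'drop' otherwise."""
        for ts, pps in samples:
            yield ts, "drop" if detector.observe(pps) else "forward"

    if __name__ == "__main__":
        # Simulated per-second packet rates: steady ~10k pps, then a flood.
        samples = [(t, 10_000) for t in range(5)] + [(t, 500_000) for t in range(5, 8)]
        for ts, action in mitigate(samples, EwmaDetector()):
            print(ts, action)

In practice the mitigation action would be a rate-limit or filter pushed to an in-line appliance rather than a simple drop, but the point stands: detection and response happen algorithmically, without anyone picking up a phone.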

This concept is realized through dynamic mitigation bandwidth licensing, a new economic model that reflects the use of highly scalable DDoS mitigation technology. Modernizing DDoS protection, specifically by taking advantage of always-on DDoS mitigation through emerging and proven deployment models such as dedicated in-line deployment of appliance-based DDoS mitigation at peering and transit points, is becoming a more common practice with the help of Corero Network Security.


Tomorrow’s Data Center – Realizing the Potential of Virtualization

Written by Stefan Fouant

I'm not sure what the Data Center of tomorrow will look like, but one thing is for sure: it will most likely look a whole lot different than it does today.

We've seen the benefits that server virtualization brings by creating a level of abstraction between the software and the underlying hardware. But in order for the true vision of virtualization to take hold, the physical resources will themselves need to become untethered from the physical chassis we think of today when we picture a traditional server - be it a rack-mountable server, a blade chassis, or some other form factor. We need to stop thinking of resources in the manner we've become accustomed to - as physically constrained devices with finite boundaries - and start thinking beyond, towards pools of resources.

Instead of the form factor of a server consisting of memory, CPU, and I/O interfaces for network and storage, all of these components will need to be removed and something altogether different will need to take their place. We need to start thinking in terms of blades or insertable modules that consist solely of a given set of resources (memory, compute, storage I/O, etc.). We can then pool these resources together to create individual compute farms, memory farms, or farms of any other type of resource to which we want the fluid ability to add capacity on the fly. Additional compute resources could be gained simply by adding another compute module to the compute farm. More memory could be added to the system simply by adding another memory blade to the memory farm. Additional bandwidth could be added simply by adding another blade of high-density Ethernet ports to the network farm. Not only will this enable greater agility in our data centers, it will also enable each resource to be scaled independently of the others - something that is difficult today given the fixed-constraint server architecture where most of these resources currently live. And yet, even though we are creating individual farms of resources, the virtualization capabilities present in the Data Center of tomorrow will allow the administrator to pick and choose which subsets of these resources to combine and present to individual VMs.
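To illustrate the idea of independently scalable resource farms, here is a small, hypothetical sketch in Python. The pool and field names are invented for illustration only; the point is simply that capacity grows by inserting another module into a farm, and a VM is composed from slices drawn out of several independent pools.

    # Illustrative model of disaggregated resource pools. Names and numbers
    # are hypothetical; this is a sketch of the concept, not any real system.

    from dataclasses import dataclass

    @dataclass
    class ResourcePool:
        name: str
        unit: str
        capacity: int = 0      # total capacity contributed by installed modules
        allocated: int = 0     # capacity currently handed out to VMs

        def add_module(self, amount):
            """Scale the pool independently by inserting another blade/module."""
            self.capacity += amount

        def allocate(self, amount):
            if self.allocated + amount > self.capacity:
                raise RuntimeError(f"{self.name} pool exhausted")
            self.allocated += amount
            return amount

    # Independent farms of compute, memory, and network capacity.
    compute = ResourcePool("compute", "cores")
    memory = ResourcePool("memory", "GB")
    network = ResourcePool("network", "Gbps")

    compute.add_module(64)    # one compute blade
    memory.add_module(512)    # one memory blade
    network.add_module(100)   # one high-density Ethernet blade

    # A VM is just a combination of slices drawn from each pool.
    vm = {
        "cores": compute.allocate(8),
        "ram_gb": memory.allocate(32),
        "bandwidth_gbps": network.allocate(10),
    }
    print(vm)

Scaling any one dimension - say, memory - then becomes a matter of calling add_module on that pool alone, without touching compute or network capacity, which is exactly the independence that fixed server chassis make difficult today.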

Another area where we might see considerable change is the traditional rack itself. Today the rack doesn't offer much in the way of intelligence; it's really just a mechanism we use to organize and situate our resources. Is it possible that the rack itself might evolve to become an underlying fabric which ties together all of these different resource pools and presents them to the resource virtualization layer in a unified fashion? Perhaps the fabric incorporated into the rack will provide the necessary connectivity between all the elements, in a fashion similar to what a traditional motherboard provides today. Developments by Intel with their Light Peak technology might ultimately provide the high-speed connectivity required in such a design. It's possible that even good old Ethernet might provide that connectivity, as we can see happening with FCoE and Remote Direct Memory Access over Converged Enhanced Ethernet.

And finally, something that might look a whole lot different tomorrow is the physical alignment of the rack itself - with racks being turned on their sides in a horizontal fashion to assist with cable management and ease of access to resources. The cooling of the equipment might be done by liquid rather than traditional air currents, as is being developed by companies like Green Revolution Cooling. Their GreenDEF coolant is essentially made from non-toxic mineral oil and, unlike water, is not electrically conductive. Green Revolution Cooling's promise is that their cooling solutions offer dramatically lower up-front costs for greenfield deployments, lower energy use, and higher performance than traditional cooling systems, with up to 45% less power and up to 95% less traditional cooling required.

Ultimately, in order for the vision of virtualization to realize its true potential, the physical mold we have become accustomed to will have to be broken. Resources will need to be pooled together, and that just isn't effective or scalable within the confines of the traditional server chassis as we know it today. On top of that, green computing and the desire to save money and reduce carbon footprints, coupled with efficiencies that might be gained by changing the alignment of racks and resources, are going to result in a Data Center that looks fundamentally different than it does today.