Interview with Juniper Networks Ambassador Dan Hearty

First of all, let me apologize for the long delay in getting this next Juniper Ambassador interview published. I had a death in the family in December and it consumed a good portion of my time these last two months, so I’ve had little time for social media activities.

Nonetheless, I’m very excited about this next interview – I spent time with one of my favorite Juniper Ambassadors, Dan Hearty (JNCIE-DC #190), at the Juniper NXTWORK 2018 conference in Las Vegas. Dan is just an awesome guy all around, a brilliant technologist, and one of our most active Ambassadors.

We discussed his contribution to the 2018 Juniper Ambassador’s Cookbook, Contrail Enterprise Multicloud, his interest in VMware NSX integration with Juniper’s Data Center products, emerging trends in the Data Center space (specifically network virtualization, overlays, and abstraction), and his life as an Ambassador.

Dan’s Twitter – https://twitter.com/danhearty

Dan’s LinkedIn – https://www.linkedin.com/in/daniel-hearty-b6a38218/

Dan’s Blog – http://danhearty.net

Juniper Data Center Cohesion Wrap Up

I’m finally getting caught up with work since my time in Cali, and although this post is a little late, I wanted to take a moment to capture my thoughts on Juniper Networks’ Data Center Cohesion event, which took place at Juniper’s Aspiration Dome in Sunnyvale last week. I had an awesome time, and the sessions were packed with useful information geared towards all things Data Center. Apparently this was the first time that Juniper opened the conference to outside partners — roughly 16 partners showed up, so to say that I was honored to be there is quite an understatement.

Continue reading “Juniper Data Center Cohesion Wrap Up”

Understanding the Federal Government’s Rapid Adoption of Cloud

One of the biggest trends in the IT industry at the moment is enterprise adoption of the public cloud. Adoption has clearly been accelerating over recent years, but perhaps more importantly, Federal agencies are beginning to take a serious look at the cloud in an effort to reform IT and achieve greater agility and operational efficiencies. In fact, according to recent data from various integrators and tech companies, U.S. government agencies are moving to the cloud and away from their own in-house data centers faster than private corporations.

According to Gartner and IDC, approximately $392 billion in total IT spend will go to the cloud in 2017. That’s a massive number, and Federal agencies are starting to represent a big proportion of it:

Continue reading “Understanding the Federal Government’s Rapid Adoption of Cloud”

An Inside Look at Juniper Networks’ Forthcoming JNCIE-DC Exam

Data Centers and the Cloud are all the rage right now, and Juniper has been at the forefront of the Data Center revolution from the very beginning: early on with the introduction of the QFX and the much-maligned QFabric, and more recently with the addition of Virtual Chassis Fabric (VCF), various open architectures for creating IP Clos fabrics, and even advanced features such as Junos Fusion for the Data Center, which collapses and simplifies the deployment and management of a large number of Ethernet switches.

The JNCIP-DC is currently rated as the fifth hottest Data Center certification by Tom’s IT Pro, an online resource tracking the demand for various industry certifications.

The folks at the Juniper Networks Technical Certification Program (JNTCP) have not been far behind, creating a Data Center track and releasing a new certification, the Juniper Networks Certified Professional Data Center (JNCIP-DC). I’ve been following the developments within the Data Center track for a while now, and you can imagine my delight when I saw the following a few months back on Juniper’s Certification portal:

Continue reading “An Inside Look at Juniper Networks’ Forthcoming JNCIE-DC Exam”

Is DDoS Mitigation as-a-Service Becoming a De Facto Offering for Providers?

Republished from Corero DDoS Blog:

It’s well known in the industry that DDoS attacks are becoming more frequent and increasingly debilitating, turning DDoS mitigation into a mission-critical initiative. From the largest carriers to small and mid-size enterprises, more and more Internet-connected businesses are becoming targets of DDoS attacks. What was once a problem that only a select few dealt with is now a regularly occurring burden faced by network operators.

Continue reading “Is DDoS Mitigation as-a-Service Becoming a De Facto Offering for Providers?”

Tomorrow’s Data Center – Realizing the Potential of Virtualization

I’m not sure exactly what the Data Center of tomorrow will look like, but one thing is for sure: it will look a whole lot different than it does today.

We’ve seen the benefits that server virtualization brings by creating a level of abstraction between the software and the underlying hardware. But in order for the true vision of virtualization to take hold, the physical resources will themselves need to be decoupled from the physical chassis we picture when we think of a traditional server, be it a rack-mountable server, a blade chassis, or some other form factor. We need to stop thinking of resources in the manner to which we’ve become accustomed, as physically constrained devices with finite boundaries, and start thinking beyond, towards pools of resources.

Instead of the server form factor we know today, which bundles memory, CPU, and I/O interfaces for network and storage into one box, all of these components will need to be removed and something altogether different will need to take their place. We need to start thinking in terms of blades or insertable modules which consist solely of a given set of resources (memory, compute, storage I/O, etc.). We can then pool all these resources together to create individual compute farms, memory farms, or farms of any other type of resource to which we want the fluid ability to add capacity on the fly. Additional compute resources could be gained simply by adding another compute module to the compute farm. More memory could be added to the system by adding another memory blade to the memory farm. Additional bandwidth could be added by inserting another blade of high-density Ethernet ports into the network farm. Not only will this enable greater agility in our data centers, it will also enable each resource to be scaled independently of the others, something that is difficult today given the fixed-constraint server architecture where most of these resources currently live. And even though we are creating individual farms of resources, the virtualization capabilities present in the Data Center of tomorrow will allow the administrator to pick and choose which subsets of these resources to combine and present to individual VMs.
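To make the idea concrete, here’s a minimal Python sketch of that model. Everything in it is invented for illustration (the Farm class, the capacities, the allocation logic), not any vendor’s API:

```python
from dataclasses import dataclass

# Toy model of disaggregated resource "farms": capacity grows by
# inserting modules, and a "VM" is composed from slices of each pool
# rather than carved out of a fixed server chassis.

@dataclass
class Farm:
    name: str
    unit: str
    capacity: int = 0  # total capacity contributed by all modules

    def add_module(self, amount: int) -> None:
        """Scale this pool independently, e.g. slot in another blade."""
        self.capacity += amount

    def allocate(self, amount: int) -> int:
        if amount > self.capacity:
            raise RuntimeError(f"{self.name} farm exhausted")
        self.capacity -= amount
        return amount

compute = Farm("compute", "cores")
memory = Farm("memory", "GB")
network = Farm("network", "Gbps")

# Each farm scales on its own: add a module only where capacity is needed.
compute.add_module(64)   # one compute blade
memory.add_module(512)   # one memory blade
network.add_module(100)  # one high-density Ethernet blade

# A "VM" is just a chosen subset of each pool, presented as one machine.
vm = {
    "cores": compute.allocate(8),
    "ram_gb": memory.allocate(32),
    "bw_gbps": network.allocate(10),
}
print(vm)  # {'cores': 8, 'ram_gb': 32, 'bw_gbps': 10}
```

The point of the sketch is the independence: adding memory never forces you to buy more CPU, which is exactly what the fixed server chassis forces on us today.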

Another area where we might see considerable change is the traditional rack itself. Today the rack doesn’t offer much in the way of intelligence; it’s really just a mechanism we use to organize and situate our resources. Is it possible that the rack itself might evolve to become an underlying fabric which ties together all of these different resource pools and presents them to the resource virtualization layer in a unified fashion? Perhaps the fabric incorporated into the rack will provide the necessary connectivity between all the elements, much as a traditional motherboard does today. Developments like Intel’s Light Peak technology might ultimately provide the high-speed connectivity required in such a design. It’s possible that even good old Ethernet might provide that connectivity, as we can already see happening with FCoE and Remote Direct Memory Access (RDMA) over Converged Enhanced Ethernet.

And finally, something that might look a whole lot different tomorrow is the physical alignment of the rack itself, with racks turned on their sides in a horizontal fashion to assist with cable management and ease of resource access. The equipment might be cooled by liquid rather than traditional air currents, as is being developed by companies like Green Revolution Cooling. Their GreenDEF coolant is essentially non-toxic mineral oil, and unlike water, it is not electrically conductive. Green Revolution Cooling’s promise is that their cooling solutions offer dramatically lower up-front costs for greenfield deployments, lower energy use, and higher performance than traditional cooling systems, with up to 45% less power and up to 95% less traditional cooling required.
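To put that 45% figure in perspective, here’s a rough back-of-the-envelope calculation. The PUE values below are assumptions chosen for the sake of arithmetic, not measurements from Green Revolution Cooling:

```python
# Back-of-the-envelope sketch; the PUE values are assumed, not vendor data.
# Total facility power = IT load x PUE (Power Usage Effectiveness).

it_load_kw = 100.0       # assumed IT equipment load for one pod
pue_air = 1.8            # assumed PUE with traditional air cooling
pue_immersion = 1.05     # assumed PUE with liquid immersion cooling

facility_air_kw = it_load_kw * pue_air              # 180.0 kW
facility_immersion_kw = it_load_kw * pue_immersion  # 105.0 kW

savings = 1 - facility_immersion_kw / facility_air_kw
print(f"Facility power reduced by roughly {savings:.0%}")  # ~42%
```

Under those assumed numbers the savings land in the same ballpark as the vendor’s “up to 45%” claim, since nearly all of the traditional cooling overhead disappears.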

Ultimately, in order for the vision of virtualization to realize its true potential, the physical mold we have become accustomed to will have to be broken. Resources will need to be pooled together, and that just isn’t effective or scalable within the confines of the traditional server chassis as we know it today. On top of that, green computing, the desire to save money and reduce carbon footprints, and the efficiencies that might be gained by changing the alignment of racks and resources are going to result in a Data Center that looks fundamentally different than it does today.