Interview with John Kindervag, the Godfather of Zero Trust Networking

Last month, I had the pleasure of spending a few minutes with John Kindervag, the industry-described “Godfather” and thought leader behind Zero Trust Networking. John developed these concepts during his tenure as Vice President and Principal Analyst at Forrester Research.

Zero Trust, rooted in the principle of “never trust, always verify,” is primarily designed to address the threat of lateral movement within the network by utilizing micro-segmentation and by redefining the perimeter around user, data, and location.

We are at an inflection point in the industry where Zero Trust principles are starting to take hold, and many organizations are adopting these principles in order to achieve a stronger security posture. John eloquently describes these principles in this interview.

John’s Twitter – https://twitter.com/Kindervag

John’s LinkedIn – https://www.linkedin.com/in/john-kindervag-40572b1/

More background on Zero Trust architecture – https://www.paloaltonetworks.com/cyberpedia/what-is-a-zero-trust-architecture

Juniper SRX Tips :: Altering Default-Deny Behavior

In our previous article, we looked at using apply-groups to alter all the security policies uniformly on an SRX device such that they would all have an implicit logging statement. While this works for all existing policies, it doesn’t log traffic which doesn’t match any explicitly defined security policy.

The reason is that in Junos, traffic which doesn’t match an explicitly defined security policy falls through to the default-deny policy.  However, because the default-deny policy is implicitly defined, apply-group configurations are of little benefit here, since apply-groups can only be inherited by elements which have been explicitly defined. Continue reading “Juniper SRX Tips :: Altering Default-Deny Behavior”
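
As a rough sketch of the idea (the zone and policy names below are hypothetical, and the full article covers the actual approach), one common workaround is to define an explicit catch-all deny policy, which, unlike the implicit default-deny, can carry a log statement:

security {
    policies {
        from-zone trust to-zone untrust {
            policy explicit-deny-all {
                match {
                    source-address any;
                    destination-address any;
                    application any;
                }
                then {
                    deny;
                    log {
                        session-init;
                    }
                }
            }
        }
    }
}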

Juniper SRX Tips :: Uniform Security Policy Modification

Often there are instances where we want to affect all security policies configured on an SRX device.  For example, let’s say that we have thousands of policies configured on our firewall, and we want to enable logging for every single policy.  Obviously this would take some time if we were to do this manually on each and every individual policy, so an easier way is desired.

Continue reading “Juniper SRX Tips :: Uniform Security Policy Modification”
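
For a flavor of the technique (a minimal sketch only; the group name and the choice of session-close logging are assumptions rather than necessarily what the full article uses), a wildcarded apply-group can inject a logging statement into every explicitly defined policy:

groups {
    log-all-policies {
        security {
            policies {
                from-zone <*> to-zone <*> {
                    policy <*> {
                        then {
                            log {
                                session-close;
                            }
                        }
                    }
                }
            }
        }
    }
}
apply-groups log-all-policies;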

Reality Check: Traditional Perimeter Security is Dead!

Recently I came across a marketing event promoted by a network integrator which touted industry-leading solutions to assist customers in determining “what was lurking outside their network”, as can be seen in the screenshot below. Please note all references to the company have been removed to protect the not-so-innocent.

[Screenshot of the marketing promotion, company references redacted]

In this day and age, it still surprises me when supposedly network-savvy folks think of network security in terms of a traditional perimeter made up of firewalls or IPS devices. The truth of the matter is that the traditional perimeter vanished quite a few years ago.

Only looking at the perimeter gives the end-user a false sense of protection. It completely fails to recognize the dangers of mobility in today’s workplace environment. Users roam. They might bring viruses or other Trojans INSIDE your network, where they are free to spread unencumbered. In the worst of these cases, the perimeter is only secured in one direction, giving outbound traffic unfettered access and completely ignoring data which might be leaked from hosts inside your network to hosts outside it, as might be the case with keyloggers or other similar types of rogue programs.

Furthermore, in today’s environment of virtualized machines, the line gets even blurrier, which is why we are starting to see solutions from startup vendors such as Altor Networks. It’s one thing when we are dealing with physical hosts in the traditional sense, but what about when you are dealing with a multitude of virtual machines on the same physical host which must talk to each other?

When you take a data-focused approach instead of a technology-focused approach, the problem and its solutions start to make more sense.   The perimeter should be viewed as the demarcation between the data and any I/O fabric providing connectivity between that data and some external entity. This is the domain of things like Data Loss Prevention (DLP), Network Access Control (NAC), and Virtual Hypervisor Firewalls in addition to that of traditional security devices.

To deal with the realities of today, we must start to think of network security in terms of Hotels vs. Castles. In the Castle model, we have a big wall around our infrastructure. We might have a moat and some alligators, and perhaps we only lower our drawbridge for very special visitors. This model tends to keep a good majority of the enemies at bay, but it completely ignores that which might already be inside your network (think in terms of the Trojan horse as told in Virgil’s epic poem ‘The Aeneid’).

What is more commonly being employed today is the Hotel model.  Initially, to gain entrance into the hotel itself, we must check in with the concierge and get our room key.  Once we have our room key, we have limited access to our own room, and perhaps some shared facilities like the pool or the gym.  In this model, we are unable to enter a room to which we do not have access.  The key word here is LIMITED access.

An all-inclusive security posture looks at the network from a holistic point of view.  The principles of Defense-in-Depth will make evident the failings of the traditional perimeter model.  The traditional perimeter is dead.  The perimeter is wherever the data is.

Facilitating Firewall Filter Configuration in JUNOS using ‘apply-path’

Undoubtedly, one of the coolest features in JUNOS is the apply-path statement. Using apply-path, an operator can configure a prefix-list whose contents are automatically derived from the IP prefixes found at a specified path within the JUNOS configuration. This facilitates tasks like configuring firewall filters to allow traffic from configured BGP neighbors, making those filters highly dynamic.

Continue reading “Facilitating Firewall Filter Configuration in JUNOS using ‘apply-path’”
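
As a brief illustration (a hedged sketch; the prefix-list and filter names are hypothetical), apply-path can keep a prefix-list in sync with every configured BGP neighbor, which a firewall filter can then reference:

policy-options {
    prefix-list bgp-neighbors {
        apply-path "protocols bgp group <*> neighbor <*>";
    }
}
firewall {
    family inet {
        filter protect-re {
            term allow-bgp {
                from {
                    source-prefix-list {
                        bgp-neighbors;
                    }
                    protocol tcp;
                    port bgp;
                }
                then accept;
            }
            term allow-everything-else {
                then accept;
            }
        }
    }
}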

Preventing DNS Fragmentation and Large DNS Packet Attacks

Oftentimes, attackers will attempt very rudimentary attacks against DNS resolvers in an attempt to cause a Denial of Service. It is not uncommon to see attackers crafting a DoS attack composed mostly of UDP packets destined to port 53 with invalid payloads and the ‘more-fragments’ bit set. In other cases, the packets may arrive without the ‘more-fragments’ bit set but with specific lengths, typically larger than the average size of a normal DNS packet.

Many flow analysis tools and IDP products have the ability to look for IP fragmentation misuse based on parameters that the operator may set; these tools are invaluable as an early warning system to alert the network administrator that their infrastructure is under attack.

To mitigate these types of attacks, a few simple approaches can be utilized by a network administrator to filter this type of traffic using simple ACLs or firewall filters on routers or other equipment capable of filtering at Layers 3 and 4. It is not typical to see DNS queries which are fragmented, therefore the following Juniper firewall filter term should effectively filter the fragmented packets:

term DNS-Fragments {
    from {
        destination-prefix-list {
            dns-server-prefixes;
        }
        fragment-flags more-fragments;
        destination-port 53;
    }
    then {
        count dns-fragments;
        log;
        discard;
    }
}

For packets without the ‘more-fragments’ bit set, where all packets within the attack flows share a common packet size (typically large, on the order of 540 bytes or more), a network administrator can easily filter those on the router as well. Normally we should not expect to see queries of such a large size, so the following could be used effectively to filter these types of attacks as well.  In this example we are filtering UDP or TCP packets destined to port 53 with a size of either 540 or 1480 bytes:

term DNS-InvalidSize {
    from {
        destination-prefix-list {
            dns-server-prefixes;
        }
        packet-length [ 540 1480 ];
        destination-port 53;
    }
    then {
        count dns-InvalidSize;
        log;
        discard;
    }
}

NOTE: A network administrator will more than likely need to adjust the above packet sizes after analyzing the attack traffic with whatever flow reporting or network visibility tools are in use, since it is unlikely an attacker would use the exact packet sizes listed in the example above.
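
For completeness, terms like the ones above only take effect once they are part of a filter applied to an interface; a minimal sketch (the filter and interface names are hypothetical, and the final accept term is needed to avoid dropping legitimate traffic) might look like this:

firewall {
    family inet {
        filter protect-dns {
            /* DNS-Fragments and DNS-InvalidSize terms as shown above */
            term accept-everything-else {
                then accept;
            }
        }
    }
}
interfaces {
    ge-0/0/0 {
        unit 0 {
            family inet {
                filter {
                    input protect-dns;
                }
            }
        }
    }
}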

Book Review :: Configuring NetScreen Firewalls

Configuring NetScreen Firewalls
by Rob Cameron
Paperback: 600 pages
Publisher: Syngress
ISBN-13: 978-1932266399

2 stars :: Better off waiting for a Second Edition…

I read a lot of books, and while I don’t review all of them, I am often compelled to write a review when a book stands out, either for its clear leadership and technical distinction in the marketplace, or for its extreme lack thereof. In this case, I was compelled to write the review based on the latter.

Seeing as this is the only NetScreen book on the market, I had high expectations for it. When one looks at the credentials of the numerous authors, it reads like a veritable list of leaders in the security industry. As such, I was rather excited when I picked up this book. As I began reading, I quickly realized that it was not going to meet my expectations. Clearly this book was rushed to market, another sign that the primary concern of many publishers is not producing quality, but rather quantity. This book suffers from many of the same problems I see with other books with multiple contributing authors, namely that the voice isn’t consistent throughout. Some chapters have diagrams, screenshots, or CLI commands outlining various procedural steps, whereas these details are noticeably absent in others.

In addition, this book is littered with errors throughout, both typographical and technical. In some cases, as other reviewers point out, sentences simply stop abruptly, mid-thought. The text often refers to diagrams which don’t exist, and there are numerous references to additional information in other chapters which are non-existent.

With regard to technical content, the authors certainly could have added more detail, especially considering the number of authors who contributed to this text. For example, the chapter on Routing does a good job of telling the reader how to enable BGP, but provides no details on how to actually configure a BGP neighbor. Another example is URL filtering, which is discussed in the chapter on Attack Detection and Defense. While the authors do a good job of describing the various modes of URL filtering (redirect vs. integrated), there is no explanation of how redirection actually takes place and no diagrams to provide a comprehensive understanding of the subject matter.

I can’t blame the authors entirely for the many flaws in this book, as any decent technical editor should have been able to spot many of these errors prior to publication. One wonders whether the technical editors even read the book, as many of the errors are so blatant that it’s inconceivable so many managed to slip through. I’m disappointed in Syngress for publishing a book with so many errors; it has led me to believe that Syngress is not interested in maintaining a leadership position by publishing technical content of the highest caliber, but rather is only concerned with being first to market with a particular product.

I will give this book 2 stars in that it is indeed a noble attempt at covering a wide array of topics, and for being the only book in the industry which covers this subject matter. I suggest that the authors examine the possibility of releasing a second edition which fixes these blatant errors, as well as hiring some decent technical editors.