Track.Health : Kubernetes Network Policies — Going beyond the basics


Kubernetes is hot in the DevOps space and one of the most wanted platforms among developers. It has already become the de facto deployment and orchestration tool for DevOps engineers. Container adoption has surged in recent years, with the 2019 Cloud Native Computing Foundation survey reporting that 84% of respondents use some type of containerization in production. The same survey found that 78% of respondents use Kubernetes in production, making it a market leader.


“Kubernetes is an open-source system for automating deployment, scaling, and management of containerised applications”

In this post, we will focus on network policies, one of the under-utilised features in recent versions of Kubernetes. At Track.Health we use the AWS managed service EKS as our Kubernetes platform.

In a Kubernetes cluster configured with default settings, all pods can discover and communicate with each other without any restrictions. We assume you are familiar with Kubernetes resources such as Pods, Deployments, ReplicaSets and Services; they are the workloads in a Kubernetes ecosystem.

Now let's see what NetworkPolicies can be used for. The Kubernetes object type NetworkPolicy lets you allow and block traffic to pods. NetworkPolicies are an application-centric construct that lets you specify how a pod is allowed to communicate with various network “entities” (we use the word “entity” here to avoid overloading the more common terms such as “endpoints” and “services”, which have specific Kubernetes connotations) over the network.

These policies are firewall rules that specify permissible types of traffic to, from and between pods. If requested, Kubernetes blocks all traffic that is not explicitly allowed. Policies are applied to groups of pods identified by common labels. Labels can then be used to mimic traditional segmented networks often used to isolate layers in a multi-tier application: You might identify your front-end and back-end pods by a specific “segment” label, for example. Policies control traffic between those segments and even traffic to or from external sources.

If you’re running multiple applications in a Kubernetes cluster or sharing a cluster among multiple teams, it’s a security best practice to create firewalls that permit pods to talk to each other while blocking other network traffic. Network policies correspond to the Security Groups concept in the virtual machine world.

Network policies are implemented by networking plugins. These plugins typically install an overlay network in your cluster to enforce the configured network policies. A number of networking plugins, including Calico, Romana and Weave Net, support network policies.

At Track.Health, we installed Calico as the network plugin for network policy enforcement. Calico is a network policy engine for Kubernetes. With Calico network policy enforcement, you can implement network segmentation and tenant isolation. This is useful in multi-tenant environments where you must isolate tenants from each other, or when you want to create separate environments for development, staging, and production.

The AWS documentation is the best reference for installing Calico on EKS.

Once you install the plugin, you are ready to configure network policies in your cluster. You can still create a NetworkPolicy resource without a supported network plugin; however, without a plugin the resource will have no effect.

With the plugin in place, let's look at how to configure a network policy.

In our environment, we have multiple clusters comprising Prod and Non-Prod environments. Each environment has around 12 custom namespaces, and each of them contains more than a dozen pods. The namespaces are organised so that each contains the microservices related to a specific business function. As in any other microservice architecture, most of these pods need to communicate with each other within a namespace, and a few pods in each namespace need to communicate outside its namespace boundaries.

We also have a namespace created for tools that need external access, such as Nginx and Ingress. Every namespace has a Web pod which needs access from and to the Nginx pod hosted in the tools namespace.

The first step is to deny all ingress traffic by default. Create a manifest with the content below and apply it to the cluster.
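A minimal default-deny-ingress manifest might look like the following sketch (the policy name is illustrative; ns-1 is the sample namespace used in this post):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ns-1
spec:
  # An empty podSelector matches every pod in the namespace.
  podSelector: {}
  # Only Ingress is listed, so all incoming traffic is denied
  # while outgoing traffic remains unrestricted.
  policyTypes:
  - Ingress
```

Apply it with `kubectl apply -f default-deny-ingress.yaml`.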

This denies all incoming traffic to the resources in namespace ns-1 (a sample name). If you look at policyTypes, you can see it is of type Ingress, which means it applies to all incoming requests. The policy type can be changed to Egress if you want to deny all outgoing requests instead.
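For completeness, the egress equivalent is a near-identical sketch (again with an illustrative name):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: ns-1
spec:
  podSelector: {}
  # Listing only Egress denies all outgoing traffic from
  # every pod in the namespace.
  policyTypes:
  - Egress
```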

In the second step, we will apply a policy that allows a pod hosted in namespace ns-1 to receive requests only from another pod hosted in namespace ns-2.
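A sketch of such a policy follows; the pod labels (app: api, app: web) and the namespace label (name: ns-2) are illustrative and assume the ns-2 namespace has been labelled accordingly, e.g. with `kubectl label namespace ns-2 name=ns-2`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ns-2
  namespace: ns-1
spec:
  # The target pod in ns-1 that should receive the traffic.
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Both selectors in a single entry: only pods labelled
    # app=web in namespaces labelled name=ns-2 are allowed.
    - namespaceSelector:
        matchLabels:
          name: ns-2
      podSelector:
        matchLabels:
          app: web
```

Note that namespaceSelector matches labels on the Namespace object itself, not the namespace name, so the label must exist for the policy to take effect.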

You can even configure incoming IP blocks as CIDR ranges in place of podSelector, something like this:
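A sketch of an ingress rule using ipBlock (the CIDR ranges here are illustrative):

```yaml
ingress:
- from:
  - ipBlock:
      # Allow traffic from this range...
      cidr: 172.17.0.0/16
      # ...except this subnet within it.
      except:
      - 172.17.1.0/24
```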

There are four kinds of selectors that can be specified in an ingress from section or egress to section:

podSelector: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations.

namespaceSelector: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations.

namespaceSelector and podSelector: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular namespaces.

ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
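The difference between combining namespaceSelector and podSelector in one entry and listing them as separate entries is subtle but important; the labels in this sketch are illustrative. Combined in a single entry, both conditions must match:

```yaml
ingress:
- from:
  # One entry with both selectors: pods labelled role=client
  # AND located in namespaces labelled team=ops.
  - namespaceSelector:
      matchLabels:
        team: ops
    podSelector:
      matchLabels:
        role: client
```

Listed as two entries (note the extra dash), either condition is enough:

```yaml
ingress:
- from:
  # Two separate entries: any pod in namespaces labelled
  # team=ops, OR pods labelled role=client in the policy's
  # own namespace.
  - namespaceSelector:
      matchLabels:
        team: ops
  - podSelector:
      matchLabels:
        role: client
```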

In this post we covered only ingress network policies; the same principles apply to egress policies. We recommend focusing on ingress policies first and adding egress policies as you progress. We will cover them in subsequent posts.