Hey there, welcome to yet another installment in the wonderful networking world of Kubernetes. Today I will explore some cool new features that recently came to Azure Kubernetes Service (AKS), plus one topic I had not covered in previous posts.
First things first: this is a blog series, and you can find the other installments here:
- Part 1: deep dive into AKS with Azure CNI in your own vnet
- Part 2: deep dive into AKS with kubenet in your own vnet, and ingress controllers
- Part 3: outbound connectivity from AKS pods
- Part 4 (this one): NSGs with Azure CNI clusters
- Part 5: Virtual Node
- Part 6: Network Policy with Azure CNI
Since network policy and virtual node are only supported with the Azure CNI plugin, that is where I will focus in this post. Before you can deploy your AKS cluster, you need a Resource Group and a Virtual Network with at least one subnet.
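In case you do not have those yet, a minimal sketch of the prerequisite setup could look like this (the VNet name, subnet name, and address prefixes are my own assumptions, adapt them to your environment):

rg=akstest
location=westeurope
vnet=aksVnet
subnet_azure=azurecni
# Create the Resource Group and the Virtual Network with one subnet for AKS
az group create -n $rg -l $location
az network vnet create -g $rg -n $vnet --address-prefixes 10.13.0.0/16 \
    --subnet-name $subnet_azure --subnet-prefixes 10.13.76.0/24
# Capture the subnet ID, to be passed to the cluster creation command
subnetid=$(az network vnet subnet show -g $rg --vnet-name $vnet -n $subnet_azure --query id -o tsv)

With those prerequisites in place, this is the command I used to create the cluster: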
az aks create -g $rg -n $aksname_azure -c 1 -s $vmsize -k $k8sversion \
    --service-principal $appid --client-secret $appsecret \
    --admin-username $adminuser --ssh-key-value "$sshpublickey" \
    --network-plugin azure --vnet-subnet-id $subnetid \
    --enable-addons monitoring --workspace-resource-id $wsid \
    --network-policy $nwpolicy \
    --no-wait
BTW, I store some of the variables I use above (such as the SP credentials or my SSH public key) in my Azure Key Vault.
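For example, this is how you could pull one of those values out of Key Vault into a shell variable (the vault and secret names here are placeholders I made up):

# Retrieve the Service Principal secret from Azure Key Vault
appsecret=$(az keyvault secret show --vault-name $keyvaultname --name aks-sp-secret --query value -o tsv)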
After provisioning the cluster, you can enable the virtual node add-on. You will probably need the corresponding CLI extension; see the AKS documentation for more information. Note that virtual node requires its own dedicated subnet in the same VNet (the variable $subnet_aci below):
az aks enable-addons -g $rg -n $aksname_azure --addons virtual-node --subnet-name $subnet_aci
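Once the add-on is enabled, you can verify that the virtual node has joined the cluster with kubectl; in AKS it typically registers as a node named virtual-node-aci-linux alongside your regular agent nodes (exact names will vary per deployment):

$ kubectl get nodes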
The NSG
In my deployment, the subnet where I deployed AKS (the variable $subnetid in my example above) is not associated with any NSG, but if you have a look at the Resource Group where the actual nodes are deployed, there is an NSG there:
noderg_azure=$(az aks show -g $rg -n $aksname_azure --query nodeResourceGroup -o tsv)
az network nsg list -g $noderg_azure -o table
You can verify that it is not attached to the AKS subnet:
$ az network vnet subnet show -g $rg --vnet-name $vnet -n $subnet_azure --query networkSecurityGroup
$
But it is indeed attached to the individual NICs of the AKS nodes (in this case I only have one node):
$ az network nic list -g $noderg_azure --query [].[name,networkSecurityGroup.id] -o tsv
aks-nodepool1-26711606-nic-0    /subscriptions/e7da9914-9b05-4891-893c-546cb7b0422e/resourceGroups/MC_akstest_azurecnicluster_westeurope/providers/Microsoft.Network/networkSecurityGroups/aks-agentpool-26711606-nsg
Let’s have a look at the rules in the NSG:
$ nsgname_azure=$(az network nsg list -g $noderg_azure --query [0].name -o tsv)
$ az network nsg rule list -g $noderg_azure --nsg-name $nsgname_azure
[]
Oh, it looks very empty! Actually, there are some default rules included, visible when using the --include-default flag:
$ az network nsg rule list -g $noderg_azure --nsg-name $nsgname_azure --include-default -o table
Name                           ResourceGroup                          Priority  SourcePortRanges  SourceAddressPrefixes  SourceASG  Access  Protocol  Direction  DestinationPortRanges  DestinationAddressPrefixes  DestinationASG
-----------------------------  -------------------------------------  --------  ----------------  ---------------------  ---------  ------  --------  ---------  ---------------------  --------------------------  --------------
AllowVnetInBound               MC_akstest_azurecnicluster_westeurope  65000     *                 VirtualNetwork         None       Allow   *         Inbound    *                      VirtualNetwork              None
AllowAzureLoadBalancerInBound  MC_akstest_azurecnicluster_westeurope  65001     *                 AzureLoadBalancer      None       Allow   *         Inbound    *                      *                           None
DenyAllInBound                 MC_akstest_azurecnicluster_westeurope  65500     *                 *                      None       Deny    *         Inbound    *                      *                           None
AllowVnetOutBound              MC_akstest_azurecnicluster_westeurope  65000     *                 VirtualNetwork         None       Allow   *         Outbound   *                      VirtualNetwork              None
AllowInternetOutBound          MC_akstest_azurecnicluster_westeurope  65001     *                 *                      None       Allow   *         Outbound   *                      Internet                    None
DenyAllOutBound                MC_akstest_azurecnicluster_westeurope  65500     *                 *                      None       Deny    *         Outbound   *                      *                           None
If you look at the previous rules closely, you will notice two things:
- Outbound connectivity to the public Internet is open (rule “AllowInternetOutBound”): this is required for multiple communication patterns, such as pulling images from container registries, talking to the different Azure APIs, or reaching the master nodes. Closing outbound connectivity from an AKS cluster is likely to break something unless you are 100% sure of what you are doing.
- Inbound connectivity from the public Internet is completely closed (rule “DenyAllInBound”): this substantially improves the security posture of our cluster. Inbound rules will be automatically added when deploying services, as we will see later (and see below for how to add a rule manually).
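By the way, if you ever need to open additional inbound access yourself, for example SSH to the nodes from a management network, you can add a rule to this NSG manually. A minimal sketch, assuming a hypothetical management prefix of 10.13.1.0/24 (bear in mind that AKS manages this NSG, so manual changes might be reconciled away at some point):

# Allow SSH from a management subnet to the AKS nodes
az network nsg rule create -g $noderg_azure --nsg-name $nsgname_azure \
    -n AllowMgmtSSHInBound --priority 1000 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes 10.13.1.0/24 \
    --destination-port-ranges 22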
Deploying some pods
Let us deploy our first set of pods. This is the manifest I am using. As the container image I will use the test app from the Kubernetes Up and Running book (see the kuar-demo GitHub repo for more information). My most sincere thanks to that team for this!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard-vnode
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kuard-vnode
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: kuard-vnode
    spec:
      containers:
      - name: kuard-vnode
        image: gcr.io/kuar-demo/kuard-amd64:blue
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "0.5"
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - effect: NoSchedule
        key: azure.com/aci
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
---
apiVersion: v1
kind: Service
metadata:
  name: kuard-vnode
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: kuard-vnode
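Assuming you saved the manifest above as kuard-vnode.yaml (the file name is my own choice), deploying it is a single command:

$ kubectl apply -f kuard-vnode.yaml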
We will go over the tolerations and node affinities in later posts in this blog series. For now, let’s focus on the service. After a while, you should see a LoadBalancer-type service with a public external IP address, which automatically configures a rule in our NSG:
$ k get svc
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
kuard-vnode   LoadBalancer   10.0.236.245   13.80.27.137   8080:30610/TCP   10m
kubernetes    ClusterIP      10.0.0.1       <none>         443/TCP          8h
$ az network nsg rule list -g $noderg_azure --nsg-name $nsgname_azure -o table
Name                                                ResourceGroup                          Priority  SourcePortRanges  SourceAddressPrefixes  SourceASG  Access  Protocol  Direction  DestinationPortRanges  DestinationAddressPrefixes  DestinationASG
--------------------------------------------------  -------------------------------------  --------  ----------------  ---------------------  ---------  ------  --------  ---------  ---------------------  --------------------------  --------------
a094204ca555d11e9b1619a6af760136-TCP-8080-Internet  MC_akstest_azurecnicluster_westeurope  500       *                 Internet               None       Allow   Tcp       Inbound    8080                   13.80.27.137                None
If you check connectivity to that IP, it should work on port 8080 (or whatever other port you configured in your LoadBalancer service), traversing the Network Security Group attached to the NIC of the AKS nodes.
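For example, a quick test with curl against the external IP from my environment (yours will obviously be different):

$ curl -s -o /dev/null -w "%{http_code}\n" http://13.80.27.137:8080/

If everything is wired up correctly, this should print 200, and opening the same URL in a browser shows the kuard test page.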