Application Gateway for Containers: Istio integration (6)

This post explores the functionality in Azure Application Gateway for Containers (AGC) to integrate with an Istio service mesh in Kubernetes. This blog is part of a series.

If you are here, you probably know what Application Gateway for Containers (AGC) is; if not, please refer to the previous posts in this series. You might not be aware of Istio though: it is a Kubernetes service mesh. Service meshes are constructs that manipulate traffic inside Kubernetes clusters to achieve objectives such as better security, observability, or resiliency. In this post we focus on security; more concretely, on the capability of service meshes to encrypt traffic even if your application has no encryption functionality of its own. Specifically for Istio, integration between AGC and the Istio service mesh has recently been released in preview: https://aka.ms/agc/istio.

As in the other posts in this series, I am testing with the API component of the YADA application (Yet Another Demo App). This application has no TLS support, so I will rely on Istio to provide end-to-end encryption.

What are we trying to achieve?

Essentially, to encrypt traffic between the AGC and the application pod with TLS. This would also be possible by enhancing the application to support TLS, configuring backend TLS policies in AGC and creating Kubernetes secrets containing the digital certificates. However, let’s assume that we are lazy today (in my case always a fair assumption) and that we want to do it the easy way. Or in other words, let’s leave the heavy lifting to Istio.

Instead of enhancing the application with TLS support, Istio will inject a “sidecar” container in the application pod that takes care of TLS termination. Istio will also generate digital certificates for encryption, and an extra component of AGC called the “Istio extension” will take care of configuring AGC accordingly. This diagram summarizes the traffic flow we want to have:

Sidecar injection

Service meshes have traditionally worked by injecting extra containers called “sidecars” into application pods. Remember that even if many people make the mental association “pod = container”, a pod can host multiple containers. Newer service meshes can work “sidecar-less”, and Istio also has what is called “ambient mode”, which doesn’t require sidecar containers. However, AGC integrates with the sidecar mode of the Istio community edition, so let’s focus on that. Note that the integration does NOT work at this point with the AKS Istio add-on.

Deploying Istio in your AKS cluster is extremely easy, as the Istio installation guide shows. It is literally three commands (my istio_version variable is initialized to 1.27.1, which is the most recent Istio stable version at the time of this writing):

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=$istio_version sh -
export PATH=$PATH:$PWD/istio-$istio_version/bin
istioctl install --set profile=default -y

A fourth command will label the application namespace to enable sidecar injection:

kubectl label namespace $app_ns istio-injection=enabled

Just by labeling the namespace like that, you should see that there are now two containers in the application pods (the 2/2 in the output below). You might have to restart the deployment if the pods already existed before you applied the label to the namespace:

❯ kubectl get pod -n $app_ns
NAME                       READY   STATUS    RESTARTS   AGE
yadaapi-78c786bb9b-s2vwc   2/2     Running   0          8m47s
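If your pods were already running before you labeled the namespace, the quickest way to get the sidecar injected is to restart the deployment. A minimal sketch, assuming the deployment is called yadaapi as in the output above:

```shell
# Restart the deployment so that the new pods get the Istio sidecar injected
# (the label only affects pods created AFTER it was applied):
kubectl rollout restart deployment yadaapi -n $app_ns

# Wait until the restarted pods are ready (you should now see READY 2/2):
kubectl rollout status deployment yadaapi -n $app_ns
```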

Configuring a strict TLS policy

We can now configure our application to accept TLS traffic. “Strict” in this case means that non-TLS traffic will be rejected. You do this with an Istio resource called PeerAuthentication that can be deployed at different scopes. In our case, I will deploy it at the namespace level to keep things easy:

❯ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: $app_ns
spec:
  mtls:
    mode: STRICT
EOF
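As a quick sanity check (not part of the official docs), you can read the mode back out of the PeerAuthentication resource you just created:

```shell
# Print the mTLS mode of the "default" PeerAuthentication in the app namespace;
# with the policy above this should return STRICT:
kubectl get peerauthentication default -n $app_ns \
    -o jsonpath='{.spec.mtls.mode}'
```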

Installing the AGC extension

Installing the AGC extension is similar to installing the AGC controller: you do it via a Helm chart. You should use the same version as for the AGC controller (the first version with official Istio support is 1.8.12, as documented in https://aka.ms/agc/istio):

echo "Installing service mesh extension for ALB via Helm..."
helm install alb-controller-servicemesh-extension oci://mcr.microsoft.com/application-lb/charts/alb-controller-servicemesh-extension \
     --namespace $alb_deployment_ns \
     --version $alb_controller_version

The extension will create two pods in another namespace, which is also labeled for Istio sidecar injection:

kubectl get pod -n alb
NAME                                              READY   STATUS    RESTARTS   AGE
alb-controller-istio-extension-595b66cf7f-4cccf   2/2     Running   0          8h
alb-controller-istio-extension-595b66cf7f-ld6x2   2/2     Running   0          8h

Are we done yet?

Actually, yes! Let’s do a quick test with the headers endpoint of the API:

curl -k "https://${fqdn}/api/headers"
{
  "Accept": "*/*",
  "Host": "gyb8gbddf6hqbud9.fz50.alb.azure.com",
  "User-Agent": "curl/7.68.0",
  "X-Agc-Expected-Rq-Timeout-Ms": "60000",
  "X-Forwarded-For": "93.104.175.37",
  "X-Forwarded-Proto": "https",
  "X-Request-Id": "9afaae4c-a506-401b-baca-8be40bdb1bae"
}

Alright, the application is working. But how do we make sure that the traffic is encrypted as we intended? We can have a look at the logs in the AGC Istio extension pods to make sure that there are no error messages:

$ istio_extension_ns=alb
$ istio_extension_pods=$(kubectl get pods -n $istio_extension_ns --no-headers | awk '{print $1}')
$ echo "$istio_extension_pods" | while IFS= read -r pod; do kubectl logs $pod -n $istio_extension_ns | jq -r '[.Timestamp, .message] | @tsv'; done
2025-10-14T06:49:38.072900901Z  Starting ALB Istio Extension 1.8.12
2025-10-14T06:49:38.073509012Z  attempting to acquire leader lease alb/alb-istio-extension-leader-election...
2025-10-14T06:49:38.074268625Z  error retrieving resource lock alb/alb-istio-extension-leader-election: Get "https://10.0.0.1:443/apis/coordination.k8s.io/v1/namespaces/alb/leases/alb-istio-extension-leader-election?timeout=7.5s": dial tcp 10.0.0.1:443: connect: connection refused
2025-10-14T06:49:41.092272469Z  successfully acquired lease alb/alb-istio-extension-leader-election
2025-10-14T06:49:41.092503473Z  alb-controller-istio-extension-595b66cf7f-ld6x2_8aae260e-8114-4ac5-812d-184aa09284a3 became leader
2025-10-14T06:49:41.092964881Z  Starting config watcher
2025-10-14T06:49:41.093237186Z  Starting EventSource
2025-10-14T06:49:41.546139921Z  Starting Controller
2025-10-14T06:49:41.546163521Z  Starting workers
2025-10-14T06:49:41.546460027Z  Reconciling
2025-10-14T06:49:41.546475727Z  Reconciling configmap: istio-system/istio
2025-10-14T06:49:41.546478427Z  Reconciling configmap: istio-system/istio
2025-10-14T06:49:41.546683431Z  Updated secret with trust domain: cluster.local and aliases: []
2025-10-14T06:49:41.547027937Z  Request Body
2025-10-14T06:49:41.557461822Z  Response Body
2025-10-14T06:49:41.557733227Z  Successfully updated secret with new trust domain: cluster.local
2025-10-14T06:49:41.557746027Z  Reconcile successful
2025-10-14T06:49:42.618032438Z  Updated secret: alb/alb-gateway-client-certificate-istio

The error message “error retrieving resource lock” is normal: it is part of the leader election process between the two pods.
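If you want to see the election outcome yourself, leader election in Kubernetes is backed by a Lease object; a quick way to check who currently holds it (the namespace and lease name are taken from the log lines above):

```shell
# Show which extension pod currently holds the leader-election lease:
kubectl get lease alb-istio-extension-leader-election -n alb \
    -o jsonpath='{.spec.holderIdentity}'
```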

Alright, so all looks normal and no errors, but how can we REALLY tell whether traffic is encrypted or not? VNet Flow Logs I hear you say? Let’s have a look at the traffic between the AGC and our application pod:

Only port 8080, even if you might be expecting something like 443. The traffic could still be encrypted, but unfortunately VNet Flow Logs do not look into the packet payload, so there is nothing else to see here. Move along…

Let’s go for the heavy troubleshooting machinery and take a tcpdump inside of the application pod (the YADA API container is based on an Ubuntu image, which makes it easy to install additional tooling). We will start looking for GET messages, regardless of the port or the source or destination IP addresses. Tcpdump can be used for that, provided you tell it exactly where to look:

root@yadaapi-78c786bb9b-9j7t6:/app# tcpdump -n -i any -s 0 -A 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'
13:28:28.119751 IP 127.0.0.6.38665 > 10.244.0.6.8080: Flags [P.], seq 3948249403:3948249503, ack 332215499, win 512, options [nop,nop,TS val 3698517783 ecr 212115265], length 100: HTTP: GET /api/healthcheck HTTP/1.1                     
E....y@.@.#.....
....    ...U.;..4............
.r.....AGET /api/healthcheck HTTP/1.1
host: contoso.com

Here is what the flags in the tcpdump command mean:

  • -n: do not translate IP addresses.
  • -i any: capture in all interfaces.
  • -s 0: capture all packet payload (actually the first 262,144 bytes, to be pedantic).
  • tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420: look for the 4-byte sequence 47455420 (the ASCII codes of the string “GET ”) at the start of the TCP payload.
    If you are interested in how this expression works, (tcp[12:1] & 0xf0) >> 2 extracts the data offset from the TCP header: the first 4 bits of the 13th header byte indicate where the payload starts, measured in 32-bit words, and shifting right by 2 instead of 4 converts that value directly into bytes. From that offset, 4 bytes are taken and compared.
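You can verify the magic number yourself: 0x47455420 is simply the ASCII encoding of the four characters G, E, T and a trailing space:

```shell
# Dump the ASCII bytes of the string "GET " (including the trailing space):
printf 'GET ' | od -An -tx1 | tr -d ' \n'
# Prints: 47455420
```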

As interesting as the tcpdump flags may be, this is not why we are here. I would like to draw your attention to the source of this packet: 127.0.0.6. This is not a packet coming from the AGC, but actually a packet coming from a loopback IP address in the pod itself! What you are seeing here is the decrypted packet that the sidecar container is sending to the application container.

This is the first sign that the sidecar setup is doing its job. To be completely sure, we can take a capture of the actual traffic coming from the AGC, which in my lab is deployed in the subnet 13.10.100.0/24, and decode it with Wireshark. I captured traffic with tcpdump, exported to a local file in the pod, and then transferred it via SCP to a machine outside of the cluster.
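That capture-and-export workflow looks roughly like the following sketch. The subnet is from my lab and the pod name matches the earlier output; the file paths are arbitrary, and I show kubectl cp as an alternative to the SCP transfer I actually used:

```shell
# 1. Inside the application pod: capture only traffic to/from the AGC subnet
#    into a pcap file (13.10.100.0/24 is the AGC subnet in my lab):
tcpdump -n -i any -s 0 -w /tmp/agc.pcap 'net 13.10.100.0/24 and tcp port 8080'

# 2. From a machine with cluster access: copy the capture out of the pod
#    (kubectl cp works if you do not have SSH/SCP access into the pod):
kubectl cp "$app_ns/yadaapi-78c786bb9b-9j7t6:/tmp/agc.pcap" ./agc.pcap

# 3. Open agc.pcap in Wireshark and use "Decode As..." -> TLS on port 8080.
```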

Using a packet analyzer such as Wireshark, you can see that the TCP payload is now gibberish (encrypted). You can use the “Decode As” function to instruct Wireshark to treat traffic on port 8080 as TLS, which shows that the packets between AGC and the application pod are actually TLS 1.2:

So that’s it! AGC encapsulates packets in TLS using the same destination port as the unencrypted application, and we didn’t have to create TLS certificates, a backend TLS policy, or anything of the sort: Istio and AGC did it all for us!

Conclusion

If your application doesn’t support HTTPS natively, using Istio sidecar mode and Application Gateway for Containers can be an easy way to enable end-to-end encryption and substantially improve the application’s security posture.
